use anyhow::Context;
use formatting::format_serror;
use komodo_client::{
  api::execute::*,
  entities::{
    all_logs_success,
    permission::PermissionLevel,
    server::Server,
    update::{Log, Update},
    user::User,
  },
};
use periphery_client::api;
use resolver_api::Resolve;

use crate::{
  helpers::{periphery_client, update::update_update},
  monitor::update_cache_for_server,
  resource,
  state::{action_states, State},
};

impl Resolve<StartContainer, (User, Update)> for State {
  #[instrument(name = "StartContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    StartContainer { server, container }: StartContainer,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard = action_state
      .update(|state| state.starting_containers = true)?;

    // Send update after setting action state, this way frontend gets correct state.
    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::container::StartContainer { name: container })
      .await
    {
      Ok(log) => log,
      Err(e) => Log::error(
        "start container",
        format_serror(&e.context("failed to start container").into()),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<RestartContainer, (User, Update)> for State {
  #[instrument(name = "RestartContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    RestartContainer { server, container }: RestartContainer,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard = action_state
      .update(|state| state.restarting_containers = true)?;

    // Send update after setting action state, this way frontend gets correct state.
    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::container::RestartContainer { name: container })
      .await
    {
      Ok(log) => log,
      Err(e) => Log::error(
        "restart container",
        format_serror(
          &e.context("failed to restart container").into(),
        ),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<PauseContainer, (User, Update)> for State {
  #[instrument(name = "PauseContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    PauseContainer { server, container }: PauseContainer,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard =
      action_state.update(|state| state.pausing_containers = true)?;

    // Send update after setting action state, this way frontend gets correct state.
    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::container::PauseContainer { name: container })
      .await
    {
      Ok(log) => log,
      Err(e) => Log::error(
        "pause container",
        format_serror(&e.context("failed to pause container").into()),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<UnpauseContainer, (User, Update)> for State {
  #[instrument(name = "UnpauseContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    UnpauseContainer { server, container }: UnpauseContainer,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard = action_state
      .update(|state| state.unpausing_containers = true)?;

    // Send update after setting action state, this way frontend gets correct state.
    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::container::UnpauseContainer { name: container })
      .await
    {
      Ok(log) => log,
      Err(e) => Log::error(
        "unpause container",
        format_serror(
          &e.context("failed to unpause container").into(),
        ),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<StopContainer, (User, Update)> for State {
  #[instrument(name = "StopContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    StopContainer {
      server,
      container,
      signal,
      time,
    }: StopContainer,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard = action_state
      .update(|state| state.stopping_containers = true)?;

    // Send update after setting action state, this way frontend gets correct state.
    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::container::StopContainer {
        name: container,
        signal,
        time,
      })
      .await
    {
      Ok(log) => log,
      Err(e) => Log::error(
        "stop container",
        format_serror(&e.context("failed to stop container").into()),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<DestroyContainer, (User, Update)> for State {
  #[instrument(name = "DestroyContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    DestroyContainer {
      server,
      container,
      signal,
      time,
    }: DestroyContainer,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard =
      action_state.update(|state| state.pruning_containers = true)?;

    // Send update after setting action state, this way frontend gets correct state.
    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::container::RemoveContainer {
        name: container,
        signal,
        time,
      })
      .await
    {
      Ok(log) => log,
      Err(e) => Log::error(
        "destroy container",
        format_serror(&e.context("failed to destroy container").into()),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<StartAllContainers, (User, Update)> for State {
  #[instrument(name = "StartAllContainers", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    StartAllContainers { server }: StartAllContainers,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard = action_state
      .update(|state| state.starting_containers = true)?;

    update_update(update.clone()).await?;

    let logs = periphery_client(&server)?
      .request(api::container::StartAllContainers {})
      .await
      .context("failed to start all containers on host")?;

    update.logs.extend(logs);

    if all_logs_success(&update.logs) {
      update.push_simple_log(
        "start all containers",
        String::from("All containers have been started on the host."),
      );
    }

    update_cache_for_server(&server).await;
    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<RestartAllContainers, (User, Update)> for State {
  #[instrument(name = "RestartAllContainers", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    RestartAllContainers { server }: RestartAllContainers,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard = action_state
      .update(|state| state.restarting_containers = true)?;

    update_update(update.clone()).await?;

    let logs = periphery_client(&server)?
      .request(api::container::RestartAllContainers {})
      .await
      .context("failed to restart all containers on host")?;

    update.logs.extend(logs);

    if all_logs_success(&update.logs) {
      update.push_simple_log(
        "restart all containers",
        String::from(
          "All containers have been restarted on the host.",
        ),
      );
    }

    update_cache_for_server(&server).await;
    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<PauseAllContainers, (User, Update)> for State {
  #[instrument(name = "PauseAllContainers", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    PauseAllContainers { server }: PauseAllContainers,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard =
      action_state.update(|state| state.pausing_containers = true)?;

    update_update(update.clone()).await?;

    let logs = periphery_client(&server)?
      .request(api::container::PauseAllContainers {})
      .await
      .context("failed to pause all containers on host")?;

    update.logs.extend(logs);

    if all_logs_success(&update.logs) {
      update.push_simple_log(
        "pause all containers",
        String::from("All containers have been paused on the host."),
      );
    }

    update_cache_for_server(&server).await;
    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<UnpauseAllContainers, (User, Update)> for State {
  #[instrument(name = "UnpauseAllContainers", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    UnpauseAllContainers { server }: UnpauseAllContainers,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard = action_state
      .update(|state| state.unpausing_containers = true)?;

    update_update(update.clone()).await?;

    let logs = periphery_client(&server)?
      .request(api::container::UnpauseAllContainers {})
      .await
      .context("failed to unpause all containers on host")?;

    update.logs.extend(logs);

    if all_logs_success(&update.logs) {
      update.push_simple_log(
        "unpause all containers",
        String::from(
          "All containers have been unpaused on the host.",
        ),
      );
    }

    update_cache_for_server(&server).await;
    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<StopAllContainers, (User, Update)> for State {
  #[instrument(name = "StopAllContainers", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    StopAllContainers { server }: StopAllContainers,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard = action_state
      .update(|state| state.stopping_containers = true)?;

    update_update(update.clone()).await?;

    let logs = periphery_client(&server)?
      .request(api::container::StopAllContainers {})
      .await
      .context("failed to stop all containers on host")?;

    update.logs.extend(logs);

    if all_logs_success(&update.logs) {
      update.push_simple_log(
        "stop all containers",
        String::from("All containers have been stopped on the host."),
      );
    }

    update_cache_for_server(&server).await;
    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<PruneContainers, (User, Update)> for State {
  #[instrument(name = "PruneContainers", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    PruneContainers { server }: PruneContainers,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard =
      action_state.update(|state| state.pruning_containers = true)?;

    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::container::PruneContainers {})
      .await
      .context(format!(
        "failed to prune containers on server {}",
        server.name
      )) {
      Ok(log) => log,
      Err(e) => Log::error(
        "prune containers",
        format_serror(
          &e.context("failed to prune containers").into(),
        ),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<DeleteNetwork, (User, Update)> for State {
  #[instrument(name = "DeleteNetwork", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    DeleteNetwork { server, name }: DeleteNetwork,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::network::DeleteNetwork { name: name.clone() })
      .await
      .context(format!(
        "failed to delete network {name} on server {}",
        server.name
      )) {
      Ok(log) => log,
      Err(e) => Log::error(
        "delete network",
        format_serror(
          &e.context(format!("failed to delete network {name}"))
            .into(),
        ),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<PruneNetworks, (User, Update)> for State {
  #[instrument(name = "PruneNetworks", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    PruneNetworks { server }: PruneNetworks,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard =
      action_state.update(|state| state.pruning_networks = true)?;

    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::network::PruneNetworks {})
      .await
      .context(format!(
        "failed to prune networks on server {}",
        server.name
      )) {
      Ok(log) => log,
      Err(e) => Log::error(
        "prune networks",
        format_serror(&e.context("failed to prune networks").into()),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<DeleteImage, (User, Update)> for State {
  #[instrument(name = "DeleteImage", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    DeleteImage { server, name }: DeleteImage,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::image::DeleteImage { name: name.clone() })
      .await
      .context(format!(
        "failed to delete image {name} on server {}",
        server.name
      )) {
      Ok(log) => log,
      Err(e) => Log::error(
        "delete image",
        format_serror(
          &e.context(format!("failed to delete image {name}")).into(),
        ),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<PruneImages, (User, Update)> for State {
  #[instrument(name = "PruneImages", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    PruneImages { server }: PruneImages,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard =
      action_state.update(|state| state.pruning_images = true)?;

    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log =
      match periphery.request(api::image::PruneImages {}).await {
        Ok(log) => log,
        Err(e) => Log::error(
          "prune images",
          format!(
            "failed to prune images on server {} | {e:#?}",
            server.name
          ),
        ),
      };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<DeleteVolume, (User, Update)> for State {
  #[instrument(name = "DeleteVolume", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    DeleteVolume { server, name }: DeleteVolume,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery
      .request(api::volume::DeleteVolume { name: name.clone() })
      .await
      .context(format!(
        "failed to delete volume {name} on server {}",
        server.name
      )) {
      Ok(log) => log,
      Err(e) => Log::error(
        "delete volume",
        format_serror(
          &e.context(format!("failed to delete volume {name}"))
            .into(),
        ),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<PruneVolumes, (User, Update)> for State {
  #[instrument(name = "PruneVolumes", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    PruneVolumes { server }: PruneVolumes,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard =
      action_state.update(|state| state.pruning_volumes = true)?;

    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log =
      match periphery.request(api::volume::PruneVolumes {}).await {
        Ok(log) => log,
        Err(e) => Log::error(
          "prune volumes",
          format!(
            "failed to prune volumes on server {} | {e:#?}",
            server.name
          ),
        ),
      };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<PruneDockerBuilders, (User, Update)> for State {
  #[instrument(name = "PruneDockerBuilders", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    PruneDockerBuilders { server }: PruneDockerBuilders,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard =
      action_state.update(|state| state.pruning_builders = true)?;

    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log =
      match periphery.request(api::build::PruneBuilders {}).await {
        Ok(log) => log,
        Err(e) => Log::error(
          "prune builders",
          format!(
            "failed to docker builder prune on server {} | {e:#?}",
            server.name
          ),
        ),
      };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<PruneBuildx, (User, Update)> for State {
  #[instrument(name = "PruneBuildx", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    PruneBuildx { server }: PruneBuildx,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard =
      action_state.update(|state| state.pruning_buildx = true)?;

    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log =
      match periphery.request(api::build::PruneBuildx {}).await {
        Ok(log) => log,
        Err(e) => Log::error(
          "prune buildx",
          format!(
            "failed to docker buildx prune on server {} | {e:#?}",
            server.name
          ),
        ),
      };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}

impl Resolve<PruneSystem, (User, Update)> for State {
  #[instrument(name = "PruneSystem", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    &self,
    PruneSystem { server }: PruneSystem,
    (user, mut update): (User, Update),
  ) -> anyhow::Result<Update> {
    let server = resource::get_check_permissions::<Server>(
      &server,
      &user,
      PermissionLevel::Execute,
    )
    .await?;

    // get the action state for the server (or insert default).
    let action_state = action_states()
      .server
      .get_or_insert_default(&server.id)
      .await;

    // Will check to ensure server not already busy before updating, and return Err if so.
    // The returned guard will set the action state back to default when dropped.
    let _action_guard =
      action_state.update(|state| state.pruning_system = true)?;

    update_update(update.clone()).await?;

    let periphery = periphery_client(&server)?;

    let log = match periphery.request(api::PruneSystem {}).await {
      Ok(log) => log,
      Err(e) => Log::error(
        "prune system",
        format!(
          "failed to docker system prune on server {} | {e:#?}",
          server.name
        ),
      ),
    };

    update.logs.push(log);
    update_cache_for_server(&server).await;

    update.finalize();
    update_update(update.clone()).await?;

    Ok(update)
  }
}