Compare commits


216 Commits

Author SHA1 Message Date
mbecker20
973aae82fd improve 2fa ux 2025-12-10 21:39:56 -08:00
mbecker20
c3626e0949 cli support clearing users 2fa methods 2025-12-10 21:39:56 -08:00
mbecker20
5ba96a58e4 binaries use bookworm 2025-12-10 21:39:56 -08:00
mbecker20
c6a1cf2867 deploy 2.0.0-dev-98 2025-12-10 21:39:56 -08:00
mbecker20
bd39aec1fd support passkey 2fa 2025-12-10 21:39:56 -08:00
mbecker20
079895a64d fix imports in permissions module 2025-12-10 21:39:56 -08:00
mbecker20
5af1c646a5 deploy 2.0.0-dev-97 2025-12-10 21:39:56 -08:00
mbecker20
747e03b5c4 Fix incorrect permissions query due to resource RBAC update query filter. Align logic with Alert resource RBAC filter. Re #1011 2025-12-10 21:39:56 -08:00
mbecker20
78c6761eca add swarm specific permission config 2025-12-10 21:39:56 -08:00
mbecker20
a949891d26 apply login credential lock (for demo users) to 2FA 2025-12-10 21:39:56 -08:00
mbecker20
9d256e420a improve totp enrollment dialog closing, fix redirect away from /login when using two factor 2025-12-10 21:39:56 -08:00
mbecker20
b460504b5a deploy 2.0.0-dev-96 2025-12-10 21:39:56 -08:00
mbecker20
e60316f2c1 TOTP 2FA with third party logins 2025-12-10 21:39:56 -08:00
mbecker20
c45e3eab16 deploy 2.0.0-dev-95 2025-12-10 21:39:56 -08:00
mbecker20
ec12f63272 double recovery code length 2025-12-10 21:39:56 -08:00
mbecker20
f2f77b4eb6 totp unenroll 2025-12-10 21:39:56 -08:00
mbecker20
677a65f8b0 local login TOTP 2fa support 2025-12-10 21:39:56 -08:00
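
The TOTP commits above ultimately rest on RFC 4226/6238 dynamic truncation. Below is a std-only sketch of just the truncation step, fed with the published RFC 4226 Appendix D test digest rather than anything from Komodo's codebase:

```rust
/// RFC 4226 dynamic truncation: reduce a 20-byte HMAC-SHA1 digest
/// to a 6-digit one-time code. TOTP (RFC 6238) applies the same step
/// to HMAC(secret, unix_time / 30).
fn hotp_truncate(digest: &[u8; 20]) -> u32 {
    // The low 4 bits of the last byte select the offset.
    let offset = (digest[19] & 0x0f) as usize;
    // Take 31 bits starting at the offset (mask the sign bit).
    let bin_code = ((digest[offset] as u32 & 0x7f) << 24)
        | ((digest[offset + 1] as u32) << 16)
        | ((digest[offset + 2] as u32) << 8)
        | (digest[offset + 3] as u32);
    bin_code % 1_000_000
}

fn main() {
    // HMAC-SHA1("12345678901234567890", counter = 0),
    // the RFC 4226 Appendix D test vector.
    let digest: [u8; 20] = [
        0xcc, 0x93, 0xcf, 0x18, 0x50, 0x8d, 0x94, 0x93, 0x4c, 0x64,
        0xb6, 0x5d, 0x8b, 0xa7, 0x66, 0x7f, 0xb7, 0xcd, 0xe4, 0xb0,
    ];
    println!("{:06}", hotp_truncate(&digest)); // 755224, matching the RFC
}
```

In practice the HMAC itself comes from a crate such as the `totp-rs` dependency this changeset adds to the workspace.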
mbecker20
a5a1090177 deploy 2.0.0-dev-94 2025-12-10 21:39:56 -08:00
mbecker20
ccb18fc88a fix server auto rotate keys default for sync 2025-12-10 21:39:56 -08:00
mbecker20
0ca583fe5e deploy Deployments to swarms 2025-12-10 21:39:56 -08:00
mbecker20
4013cd3f1e dev-93 2025-12-10 21:39:56 -08:00
mbecker20
5854a7de99 Deployment as swarm service 2025-12-10 21:39:56 -08:00
mbecker20
8c52232852 swarm stack deploy and destroy 2025-12-10 21:39:56 -08:00
mbecker20
fb9e5b1860 ts types 2025-12-10 21:39:56 -08:00
mbecker20
a1c00a8d32 implement the swarm resource associations for stack / deployment targeting swarm 2025-12-10 21:39:56 -08:00
mbecker20
775f5bf703 more stack / deployment deploy on swarm 2025-12-10 21:39:56 -08:00
mbecker20
effe737ffe periphery swarm stack deploy 2025-12-10 21:39:56 -08:00
mbecker20
436959cbcd docker service create 2025-12-10 21:39:56 -08:00
mbecker20
9957287443 remove buttons for swarm resources 2025-12-10 21:39:56 -08:00
mbecker20
c77de8683c swarm stack logs and Remove 2025-12-10 21:39:56 -08:00
mbecker20
087bcb7044 improve swarm stack page 2025-12-10 21:39:56 -08:00
mbecker20
0dfcaaf06a add swarm remove executions to procedure UI excluded types 2025-12-10 21:39:56 -08:00
mbecker20
6624a821a5 deploy 2.0.0-dev-92 2025-12-10 21:39:56 -08:00
mbecker20
c32af84e02 remove swarm entities apis 2025-12-10 21:39:56 -08:00
mbecker20
9d9add7b34 Uppercase server docker ops logs 2025-12-10 21:39:56 -08:00
mbecker20
db099dbe1e improve config with tall linkto sidebars 2025-12-10 21:39:56 -08:00
mbecker20
413a8abc84 default rate limit attempts to 5 per 15 sec 2025-12-10 21:34:30 -08:00
mbecker20
36243e83c1 KL-4 must fallback to axum extracted IP for cases not using reverse proxy 2025-12-10 21:34:30 -08:00
mbecker20
076113a1de KL-4: Align log consistency 2025-12-10 21:34:30 -08:00
mbecker20
c998edaa73 KL-1: core_allowed_origins example config should be empty 2025-12-10 21:34:30 -08:00
mbecker20
4de32b08e5 KL-8 modify action state internal behavior comments 2025-12-10 21:34:30 -08:00
mbecker20
262999c58f KL-7 Improve typescript safety: disable allow any 2025-12-10 21:34:30 -08:00
mbecker20
be57f52e6e KL-6 Improve error handling in startup paths 2025-12-10 21:34:30 -08:00
mbecker20
3591789316 KL-6 Improve alert cache error handling 2025-12-10 21:34:30 -08:00
mbecker20
83f2bd65fe KL-6 Remove monitoring_interval panic 2025-12-10 21:34:30 -08:00
mbecker20
5e5cdb81f8 KL-5 ext: Tighten the skew tolerance re https://curity.io/resources/learn/jwt-best-practices/?utm_source=chatgpt.com#9-dealing-with-time-based-claims 2025-12-10 21:34:30 -08:00
mbecker20
721038a1df KL-5 JWT clock skew tolerance 2025-12-10 21:34:30 -08:00
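
A JWT clock-skew tolerance like the KL-5 change amounts to accepting time-based claims within a small leeway window. An illustrative std-only sketch (claim names follow RFC 7519; the function and knob names are hypothetical, not Komodo's API):

```rust
/// Time-based JWT claim validation with a clock-skew allowance.
/// Illustrative only: claim names follow RFC 7519 (`exp`, `nbf`);
/// `leeway_secs` is a hypothetical knob, not Komodo's actual config.
struct Claims {
    /// Expiration time, seconds since the unix epoch.
    exp: u64,
    /// Not-before time, seconds since the unix epoch.
    nbf: u64,
}

fn time_claims_valid(claims: &Claims, now: u64, leeway_secs: u64) -> bool {
    // Accept tokens expired at most `leeway_secs` ago, and tokens
    // becoming valid at most `leeway_secs` from now.
    now <= claims.exp + leeway_secs && now + leeway_secs >= claims.nbf
}

fn main() {
    let claims = Claims { exp: 1_000, nbf: 900 };
    assert!(time_claims_valid(&claims, 1_004, 5)); // expired 4s ago: within skew
    assert!(!time_claims_valid(&claims, 1_006, 5)); // expired 6s ago: rejected
    assert!(!time_claims_valid(&claims, 890, 5)); // 10s before nbf: rejected
    println!("ok");
}
```

The `jsonwebtoken` crate used in this workspace exposes this as a `leeway` field (in seconds) on its `Validation` struct, which is presumably what the "tighten the skew tolerance" commit adjusts.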
mbecker20
7de98ad519 cargo clippy and bump to rust 1.91.1 2025-12-10 21:34:30 -08:00
mbecker20
8c62f2b5c5 KL-4 ext 2: Improve rate limiting / attempt state conveyance with response 2025-12-10 21:34:30 -08:00
mbecker20
85787781ee KL-4 ext: Remove brute-force compromising credential failure reasons to improve auth rate limiter effectiveness 2025-12-10 21:34:30 -08:00
mbecker20
25e2d4e926 KL-4 Authentication rate limiting 2025-12-10 21:34:30 -08:00
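
Authentication rate limiting of the kind KL-4 introduces (for example the 5-attempts-per-15-seconds default set in a later commit) can be sketched as a fixed-window counter keyed by client IP. This is an assumption-laden sketch of the idea, not Komodo's actual `rate_limit` lib:

```rust
use std::collections::HashMap;

/// Fixed-window login rate limiter: at most `max` attempts per
/// `window_secs` for each key (e.g. client IP). A hypothetical sketch;
/// Komodo's actual limiter may track state differently.
struct RateLimiter {
    max: u32,
    window_secs: u64,
    /// key -> (window start in unix seconds, attempts in that window)
    state: HashMap<String, (u64, u32)>,
}

impl RateLimiter {
    fn new(max: u32, window_secs: u64) -> Self {
        Self { max, window_secs, state: HashMap::new() }
    }

    /// Record an attempt at `now` (unix seconds); true if still allowed.
    fn check(&mut self, key: &str, now: u64) -> bool {
        let entry = self.state.entry(key.to_string()).or_insert((now, 0));
        if now.saturating_sub(entry.0) >= self.window_secs {
            // The window elapsed: start a fresh one.
            *entry = (now, 0);
        }
        entry.1 += 1;
        entry.1 <= self.max
    }
}

fn main() {
    let mut limiter = RateLimiter::new(5, 15);
    for _ in 0..5 {
        assert!(limiter.check("10.0.0.1", 100)); // first 5 attempts pass
    }
    assert!(!limiter.check("10.0.0.1", 105)); // 6th within the window: blocked
    assert!(limiter.check("10.0.0.1", 116)); // window rolled over: allowed
    println!("ok");
}
```

The related "KL-4 ext" commits pair this with IP extraction (reverse-proxy header with an axum fallback) and with removing credential-specific failure reasons so the limiter cannot be used as a brute-force oracle.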
mbecker20
b9b8d45cbc KL-2/3 Input validation for local auth, service users, api keys, and variables 2025-12-10 21:34:30 -08:00
mbecker20
2f2e863dbf KL-1 Configurable CORS support 2025-12-10 21:34:30 -08:00
mbecker20
a3a01f1625 add schemas for swarm commands 2025-11-27 23:47:22 -08:00
mbecker20
aec8fa2bf1 task view page with logs 2025-11-23 23:45:28 -08:00
mbecker20
7ff2dba82f swarm service page with logs 2025-11-23 12:48:23 -08:00
mbecker20
9c86b2d239 swarm service logs api 2025-11-23 00:43:59 -08:00
mbecker20
b1fec7c663 docker swarm stack read apis 2025-11-22 16:05:51 -08:00
mbecker20
8341c6b802 sward resources use list items 2025-11-22 02:41:49 -08:00
mbecker20
73285c4374 move swarm to list item + inspect on pages 2025-11-21 14:34:20 -08:00
mbecker20
32d48cdb02 swarm configs view 2025-11-21 13:21:14 -08:00
mbecker20
8e081fd09c swarm config apis 2025-11-21 12:55:37 -08:00
mbecker20
04531f1dea Explore swarm info 2025-11-20 13:00:29 -08:00
mbecker20
80f439d472 basic swarm 2025-11-19 18:02:50 -08:00
mbecker20
d5e03d6d16 prog on swarm 2025-11-19 01:48:43 -08:00
mbecker20
a9f55bb8e6 fix Terminals permissions when no perm on Server 2025-11-17 15:44:14 -08:00
mbecker20
9e765f93f5 bump deps 2025-11-17 15:43:56 -08:00
mbecker20
b3aa8e906f add context to onboarding login error and error messages sent over the communication websocket in general 2025-11-17 13:23:44 -08:00
mbecker20
03fe442aa0 in prog 2025-11-17 12:59:08 -08:00
ChanningHe
d268009a6a validate compose_cmd_wrapper for required placeholder and add interpolation support (#977) 2025-11-12 10:09:15 -08:00
mbecker20
f0697e812a shift + N open new variable dialog 2025-11-11 14:22:56 -08:00
mbecker20
78766463d6 create variable Enter submit 2025-11-11 14:18:28 -08:00
mbecker20
0fa1edba2c deploy 2.0.0-dev-90 2025-11-11 14:13:55 -08:00
mbecker20
bbd968cac3 bump toml pretty with fix syncing procedure executions with multiline batch patterns 2025-11-11 14:13:25 -08:00
mbecker20
5f24fc1be3 deploy 2.0.0-dev-89 2025-11-11 00:44:49 -08:00
mbecker20
7ecd2b0b0b improve cmd wrapper with comment removal support 2025-11-11 00:43:54 -08:00
mbecker20
7bf44d2e04 fix some broken tabs 2025-11-11 00:35:24 -08:00
mbecker20
24e0672384 dashboard resets page title 2025-11-11 00:18:16 -08:00
mbecker20
04f081631f deploy 2.0.0-dev-88 2025-11-11 00:16:07 -08:00
mbecker20
b1af956b63 fix dashboard pie chart code splitting issue 2025-11-11 00:05:32 -08:00
mbecker20
370712b29f gen served client types 2025-11-11 00:05:02 -08:00
mbecker20
2b6c552964 canius lite update 2025-11-11 00:04:52 -08:00
mbecker20
434a1d8ea9 clippy lint 2025-11-11 00:04:39 -08:00
ChanningHe
0b7f28360f Add optional command wrapper for Docker Compose in StackConfig (#973) 2025-11-10 23:59:09 -08:00
ChanningHe
3c8ef0ab29 Add track option for Additional Env Files (#955) 2025-11-10 23:47:07 -08:00
mbecker20
930b2423c3 deploy 2.0.0-dev-87 2025-11-07 10:33:23 -08:00
mbecker20
546747b5f2 add timeout to dns ip resolve, only use ipv4 2025-11-07 10:32:55 -08:00
mbecker20
c6df866755 better aws builder config organization 2025-11-07 10:04:45 -08:00
mbecker20
ea5e684915 better useUserTargetPermissions 2025-11-06 22:18:31 -08:00
mbecker20
64db8933de RefreshBuildCache after build 2025-11-04 00:27:34 -08:00
mbecker20
7a5580de57 builder uppercase login 2025-11-04 00:06:46 -08:00
mbecker20
b1656bb174 log about enabling user linger 2025-11-03 10:29:50 -08:00
Badal Singh
559ce103da Update setup-periphery.py (#958) 2025-11-03 09:57:49 -08:00
mbecker20
75e278a57b builder fix partial_default 2025-10-30 00:41:27 -07:00
mbecker20
430f3ddc34 fix omni search container double select on same name 2025-10-29 00:02:32 -07:00
mbecker20
6c30c202e9 add Terminals to omni search 2025-10-28 23:59:41 -07:00
mbecker20
c5401de1c5 tweak user level tab view 2025-10-28 11:42:29 -07:00
mbecker20
7a3d9e0ef6 tweak description 2025-10-28 00:32:39 -07:00
mbecker20
595e3ece42 deploy 2.0.0-dev-86 2025-10-27 21:05:13 -07:00
mbecker20
a3bc895755 fix terminal disconnect 2025-10-27 21:04:46 -07:00
mbecker20
3e3def03ec terminal init properly lexes init command 2025-10-27 21:01:15 -07:00
mbecker20
bc672d9649 deploy 2.0.0-dev-85 2025-10-27 20:01:18 -07:00
mbecker20
ea6dee4d51 clippy lint 2025-10-27 19:13:43 -07:00
mbecker20
b985f18c74 deploy 2.0.0-dev-84 2025-10-27 19:12:54 -07:00
mbecker20
45909b2f04 pid1 reaper doesn't work, init: true should be required in compose 2025-10-27 19:06:50 -07:00
mbecker20
2b5a54ce89 deploy 2.0.0-dev-83 2025-10-27 18:31:56 -07:00
mbecker20
a18f33b95e formalize the terminal message variants 2025-10-27 18:31:30 -07:00
mbecker20
f35b00ea95 bump clap dependency 2025-10-27 16:18:30 -07:00
mbecker20
70fab08520 clean up terminal modules 2025-10-27 16:17:20 -07:00
mbecker20
0331780a5f rename variables shell -> command 2025-10-27 11:08:57 -07:00
mbecker20
06cdfd2bbc Terminal -> Terminals tabs 2025-10-27 02:53:06 -07:00
mbecker20
1555202569 Create Terminal don't auto set request after changed 2025-10-27 02:42:06 -07:00
mbecker20
5139622aad deploy 2.0.0-dev-82 2025-10-27 02:28:48 -07:00
mbecker20
61ce2ee3db improve new terminal 2025-10-27 02:04:15 -07:00
mbecker20
3171c14f2b comment on spawn process reaper 2025-10-27 01:41:06 -07:00
mbecker20
521db748d8 deploy 2.0.0-dev-81 2025-10-27 01:27:42 -07:00
mbecker20
35bf224080 deploy 2.0.0-dev-80 2025-10-27 01:21:44 -07:00
mbecker20
e0b31cfe51 CreateTerminal only shows resources which are actually available to connect to 2025-10-27 00:44:56 -07:00
mbecker20
0a890078b0 deploy 2.0.0-dev-79 2025-10-27 00:38:08 -07:00
mbecker20
df97ced7a4 deploy 2.0.0-dev-78 2025-10-27 00:03:26 -07:00
mbecker20
d4e5e2e6d8 add execute_<>_terminal convenience methods 2025-10-26 23:35:17 -07:00
mbecker20
19aa60dcb5 deploy 2.0.0-dev-77 2025-10-26 23:21:15 -07:00
mbecker20
fc19c53e6f deploy 2.0.0-dev-76 2025-10-26 23:00:59 -07:00
mbecker20
4f0af960db Big Terminal refactor + most commands run directly / bypass 'sh -c "..."' 2025-10-26 23:00:35 -07:00
mbecker20
e2ec5258fb add "New" kb shortcut 2025-10-23 23:55:24 -07:00
mbecker20
49b6545a02 reorder cli command list 2025-10-23 23:53:10 -07:00
mbecker20
0aabaa9e62 deploy 2.0.0-dev-75 2025-10-23 12:23:10 -07:00
mbecker20
dc65986eab binaries still built with bullseye for compat, but final images use trixie 2025-10-23 12:22:50 -07:00
mbecker20
1d8f28437d km attach <CONTAINER> 2025-10-23 12:22:02 -07:00
mbecker20
c1502e89c2 deploy 2.0.0-dev-74 2025-10-23 11:51:40 -07:00
mbecker20
0bd15fc442 ResourceQuery.names supports names or ids 2025-10-23 11:23:37 -07:00
mbecker20
5a3621b02e km exec 2025-10-23 01:55:50 -07:00
mbecker20
38192e2dac deploy 2.0.0-dev-73 2025-10-23 00:56:15 -07:00
mbecker20
5d271d5547 use Ping timeout to handle reconnect if for some reason network cuts but ws doesn't receive Close 2025-10-23 00:55:51 -07:00
mbecker20
11fb67a35b ssh use cancel token so stdout.write_all isn't cancelled mid-write, which leads to undefined behavior 2025-10-23 00:14:17 -07:00
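
The cancellation fix above addresses a general async pitfall: cancelling a future mid-`write_all` can tear a message in half. Reduced to a synchronous std-only illustration (the real fix wraps async writes with tokio's CancellationToken), the principle is to check for cancellation only at message boundaries:

```rust
use std::io::Write;
use std::sync::atomic::{AtomicBool, Ordering};

/// Forward messages to a writer, honoring a cancel flag only *between*
/// complete writes, so cancellation never tears a message in half.
/// Synchronous sketch of the idea only.
fn forward<W: Write>(
    out: &mut W,
    messages: &[&[u8]],
    cancel: &AtomicBool,
) -> std::io::Result<usize> {
    let mut written = 0;
    for msg in messages {
        // Check for cancellation only at a message boundary.
        if cancel.load(Ordering::Relaxed) {
            break;
        }
        out.write_all(msg)?; // each message lands whole or not at all
        written += 1;
    }
    Ok(written)
}

fn main() {
    let cancel = AtomicBool::new(false);
    let mut buf = Vec::new();
    let messages: &[&[u8]] = &[b"hello ", b"world"];
    let n = forward(&mut buf, messages, &cancel).unwrap();
    assert_eq!(n, 2);
    assert_eq!(&buf[..], b"hello world");
    println!("ok");
}
```

With `tokio::select!`, by contrast, the losing branch's `write_all` is dropped wherever it happens to be, which can leave a partially written frame on the stream.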
mbecker20
a80499dcc4 improve stack config files responsive 2025-10-22 19:02:30 -07:00
mbecker20
8c76b8487f alert responsive, better Server terminal disabled 2025-10-22 13:48:08 -07:00
mbecker20
2b32d9042a deploy 2.0.0-dev-72 2025-10-22 01:00:19 -07:00
mbecker20
dc48f1f2ca deploy 2.0.0-dev-71 2025-10-22 00:50:02 -07:00
mbecker20
8e7b7bdcf1 deploy 2.0.0-dev-70 2025-10-22 00:44:54 -07:00
mbecker20
f11d64f72e add 'init' param to make 'execute_terminal' in single call possible 2025-10-22 00:44:33 -07:00
mbecker20
2ffae85180 dashboard table section headers link to resources page 2025-10-22 00:03:12 -07:00
mbecker20
bd79d0f1e0 km ssh <SERVER> [COMMAND] -n [NAME] 2025-10-21 23:55:36 -07:00
mbecker20
e890b1f675 deploy 2.0.0-dev-69 2025-10-21 23:32:18 -07:00
mbecker20
3b7de25c30 Shift + X - Terminals, Shift + N - New (Resource, Terminal) 2025-10-21 16:11:27 -07:00
mbecker20
793bb99f31 nav to terminal on create 2025-10-21 16:00:50 -07:00
mbecker20
d465c9f273 deploy 2.0.0-dev-68 2025-10-21 15:51:38 -07:00
mbecker20
ce641a8974 terminal page 2025-10-21 15:51:18 -07:00
mbecker20
1b89ceb122 deploy 2.0.0-dev-67 2025-10-21 02:50:21 -07:00
mbecker20
2dbc011d26 remove unneeded log on client terminal disconnect 2025-10-21 02:33:19 -07:00
mbecker20
246da88ae1 deploy 2.0.0-dev-66 2025-10-21 02:29:12 -07:00
mbecker20
a8c16f64b1 km ssh 2025-10-21 02:28:42 -07:00
mbecker20
a5b711a348 stack tabs localstorage increment 2025-10-20 20:35:08 -07:00
mbecker20
9666e9ad83 Fix monitoring table with proper server version component 2025-10-20 03:01:07 -07:00
mbecker20
7479640c73 add hover information for mysterious server header icons 2025-10-20 02:53:18 -07:00
mbecker20
4823825035 give websocket indicator info on hover 2025-10-20 02:35:12 -07:00
mbecker20
23897a7acf clippy 2025-10-20 02:16:52 -07:00
mbecker20
20d5588b5c deploy 2.0.0-dev-65 2025-10-20 02:15:15 -07:00
mbecker20
f7e15ccde5 progress on terminals page 2025-10-20 02:14:51 -07:00
mbecker20
cf7623b1fc combine all resources / table view into dashboard 2025-10-20 01:40:27 -07:00
mbecker20
d3c464c05d start Terminals management page 2025-10-20 00:42:45 -07:00
mbecker20
5c9d416aa4 prog on docs update 2025-10-19 23:33:41 -07:00
mbecker20
aabcd88312 update connect-servers docs 2025-10-19 23:07:50 -07:00
mbecker20
9d2624c6bc clarify root directory in periphery config file 2025-10-19 23:07:19 -07:00
mbecker20
ee11fb0b6c clean up setup script 2025-10-19 23:07:02 -07:00
mbecker20
45adfbddd0 mounting custom CA 2025-10-19 23:06:48 -07:00
mbecker20
d26d035dc6 clean up docs intro 2025-10-19 22:03:17 -07:00
mbecker20
e673ba0adf deploy 2.0.0-dev-64 2025-10-19 21:48:15 -07:00
mbecker20
f876facfa7 improve git status message / failure propogation 2025-10-19 21:47:29 -07:00
mbecker20
3a47d57478 container class px-[1.2rem] 2025-10-19 20:31:40 -07:00
mbecker20
a707028277 responsive tweaks 2025-10-19 20:07:30 -07:00
mbecker20
0c6276c677 fix Resources / Containers mobile 2025-10-19 19:51:28 -07:00
mbecker20
fc9c6706f1 keep more descriptive settings header mobile 2025-10-19 13:24:44 -07:00
mbecker20
7674269ce9 fix user dropdown not showing username mobile 2025-10-19 13:11:34 -07:00
mbecker20
3b511c5adc improve server terminal mobile responsiveness 2025-10-19 13:00:30 -07:00
mbecker20
87221a10e9 fix mobile ContainerTerminal responsiveness 2025-10-19 12:56:11 -07:00
mbecker20
450cb6a148 fix stack config files mobile responsiveness 2025-10-19 12:46:51 -07:00
mbecker20
f252cefb21 responsive server docker tab 2025-10-19 12:37:26 -07:00
mbecker20
7855e9d688 run dkf 2025-10-19 12:30:59 -07:00
mbecker20
feb263c15f more type safe tabs 2025-10-19 12:27:55 -07:00
mbecker20
4f8d1c22cc rest of tabs also use mobile friendly 2025-10-19 12:11:11 -07:00
mbecker20
60bd47834e deploy 2.0.0-dev-63 2025-10-19 11:48:09 -07:00
mbecker20
4d632a6b61 improve resources mobile tabs responsiveness 2025-10-19 11:47:47 -07:00
mbecker20
381dd76723 deploy 2.0.0-dev-62 2025-10-19 01:37:10 -07:00
mbecker20
077e28a5fe fix ConfigList too wide on mobile 2025-10-19 01:36:50 -07:00
mbecker20
6b02aaed7d hide core pubkey copy if origin not https 2025-10-19 01:28:45 -07:00
mbecker20
e466944c05 improve mobile settings view 2025-10-19 01:24:41 -07:00
mbecker20
8ff94b7465 deploy 2.0.0-dev-61 2025-10-19 00:35:26 -07:00
mbecker20
b17df5ed7b show host public ip 2025-10-19 00:34:52 -07:00
mbecker20
207dc30206 cli is distroless, no shell / update-ca-certificates 2025-10-18 22:12:44 -07:00
mbecker20
c3eb386bdb fix copy entrypoint 2025-10-18 22:07:16 -07:00
mbecker20
4279e46892 deploy 2.0.0-dev-60 2025-10-18 12:59:19 -07:00
mbecker20
8d3d2fee12 use entrypoint scripts to make update-ca-certificates consistent when using custom CMD 2025-10-18 12:58:55 -07:00
mbecker20
1df36c4266 deploy 2.0.0-dev-59 2025-10-18 11:36:07 -07:00
mbecker20
36f7ad33c7 core and periphery images auto run update-ca-certificates on start, only need to mount in. 2025-10-18 11:35:45 -07:00
mbecker20
ec34b2c139 deploy 2.0.0-dev-58 2025-10-18 11:02:11 -07:00
mbecker20
d14c28d1f2 new otel instrumentation 2025-10-18 11:01:47 -07:00
mbecker20
68f7a0e9ce all info menu to top of settings 2025-10-18 00:45:59 -07:00
mbecker20
50f0376f0a Add Core title and public key to top of Settings 2025-10-18 00:01:41 -07:00
mbecker20
bbd53747ad fix km ps -h description 2025-10-17 17:17:18 -07:00
mbecker20
6a2adf1f83 tweak logs 2025-10-16 01:06:37 -07:00
mbecker20
128b15b94f deploy 2.0.0-dev-57 2025-10-16 00:59:46 -07:00
mbecker20
8d74b377b7 more otel refinements 2025-10-16 00:59:20 -07:00
mbecker20
d7e972e5c6 stack ui doesn't show project missing when deploying 2025-10-15 23:49:26 -07:00
mbecker20
e5cb4aac5a Fix: Webhook triggered checks linked repo branch for build, stack, sync 2025-10-15 18:06:43 -07:00
mbecker20
d0f62f8326 rework tracing events / improve opentelemetry output 2025-10-15 01:41:18 -07:00
mbecker20
47c4091a4b onboarding key uses recognizable key 2025-10-14 16:57:35 -07:00
mbecker20
973480e2b3 remove all the unnecessary instrument debug 2025-10-14 00:33:53 -07:00
mbecker20
b9e1cc87d2 remove instrument from validate_cancel_repo_build 2025-10-13 23:52:55 -07:00
mbecker20
05d20c8603 deploy 2.0.0-dev-56 2025-10-13 22:05:07 -07:00
mbecker20
fe2d68a001 fix config loading 2025-10-13 22:04:42 -07:00
mbecker20
26fd5b2a6d deploy 2.0.0-dev-55 2025-10-13 20:30:40 -07:00
mbecker20
76457bcb61 apply env / shell interpolation as *final* config loading stage, to include env vars. 2025-10-13 20:26:13 -07:00
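
Running interpolation as the *final* loading stage means every earlier layer (config files, then environment) is merged first, so `${VAR}` references resolve against the final values. A minimal std-only sketch of that ordering; Komodo's real implementation (its `interpolate` lib) differs in detail:

```rust
use std::collections::HashMap;

/// Substitute `${VAR}` references in `input` from `vars`.
/// Sketch of the "interpolate last" ordering: merge every config layer
/// first, then run one interpolation pass so references resolve
/// against the final merged values.
fn interpolate(input: &str, vars: &HashMap<String, String>) -> String {
    let mut out = input.to_string();
    for (key, value) in vars {
        out = out.replace(&format!("${{{key}}}"), value);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    // File layer sets HOST, env layer overrides it; only the
    // final merged value exists by the time interpolation runs.
    vars.insert("HOST".to_string(), "file.internal".to_string());
    vars.insert("HOST".to_string(), "env.internal".to_string());
    let url = interpolate("postgres://${HOST}:5432", &vars);
    assert_eq!(url, "postgres://env.internal:5432");
    println!("ok");
}
```

Interpolating before the merge would instead freeze references to whatever the earliest layer happened to contain, which is the bug this ordering avoids.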
mbecker20
ebd2c2238d bump deps 2025-10-13 19:51:05 -07:00
mbecker20
b7fc1bef7b refine default env 2025-10-13 13:53:12 -07:00
mbecker20
50b9f2e1bf deploy 2.0.0-dev-54 2025-10-13 13:06:23 -07:00
443 changed files with 36809 additions and 12486 deletions

Cargo.lock (generated, 1385 changed lines)

File diff suppressed because it is too large.


@@ -8,7 +8,7 @@ members = [
]
[workspace.package]
-version = "2.0.0-dev-53"
+version = "2.0.0-dev-98"
edition = "2024"
authors = ["mbecker20 <becker.maxh@gmail.com>"]
license = "GPL-3.0-or-later"
@@ -26,7 +26,9 @@ environment_file = { path = "lib/environment_file" }
environment = { path = "lib/environment" }
interpolate = { path = "lib/interpolate" }
secret_file = { path = "lib/secret_file" }
+validations = { path = "lib/validations" }
formatting = { path = "lib/formatting" }
+rate_limit = { path = "lib/rate_limit" }
transport = { path = "lib/transport" }
database = { path = "lib/database" }
encoding = { path = "lib/encoding" }
@@ -39,9 +41,8 @@ noise = { path = "lib/noise" }
git = { path = "lib/git" }
# MOGH
run_command = { version = "0.0.6", features = ["async_tokio"] }
serror = { version = "0.5.3", default-features = false }
-slack = { version = "1.1.0", package = "slack_client_rs", default-features = false, features = ["rustls"] }
+slack = { version = "2.0.0", package = "slack_client_rs", default-features = false, features = ["rustls"] }
derive_default_builder = "0.1.8"
derive_empty_traits = "0.1.0"
async_timing_util = "1.1.0"
@@ -49,30 +50,30 @@ partial_derive2 = "0.4.3"
derive_variants = "1.0.0"
mongo_indexed = "2.0.2"
resolver_api = "3.0.0"
-toml_pretty = "1.2.0"
+toml_pretty = "2.0.0"
mungos = "3.2.2"
svi = "1.2.0"
# ASYNC
-reqwest = { version = "0.12.23", default-features = false, features = ["json", "stream", "rustls-tls-native-roots"] }
-tokio = { version = "1.47.1", features = ["full"] }
-tokio-util = { version = "0.7.16", features = ["io", "codec"] }
+reqwest = { version = "0.12.24", default-features = false, features = ["json", "stream", "rustls-tls-native-roots"] }
+tokio = { version = "1.48.0", features = ["full"] }
+tokio-util = { version = "0.7.17", features = ["io", "codec"] }
tokio-stream = { version = "0.1.17", features = ["sync"] }
pin-project-lite = "0.2.16"
futures = "0.3.31"
futures-util = "0.3.31"
arc-swap = "1.7.1"
# SERVER
tokio-tungstenite = { version = "0.28.0", features = ["rustls-tls-native-roots"] }
-axum-extra = { version = "0.10.3", features = ["typed-header"] }
-tower-http = { version = "0.6.6", features = ["fs", "cors"] }
-axum-server = { version = "0.7.2", features = ["tls-rustls"] }
-axum = { version = "0.8.6", features = ["ws", "json", "macros"] }
+tower-http = { version = "0.6.8", features = ["fs", "cors", "set-header"] }
+axum = { version = "0.8.7", features = ["ws", "json", "macros"] }
+axum-extra = { version = "0.12.2", features = ["typed-header"] }
+axum-server = { version = "0.8.0", features = ["tls-rustls"] }
tower-sessions = "0.14.0"
# SER/DE
ipnetwork = { version = "0.21.1", features = ["serde"] }
-indexmap = { version = "2.11.4", features = ["serde"] }
+indexmap = { version = "2.12.1", features = ["serde"] }
serde = { version = "1.0.227", features = ["derive"] }
strum = { version = "0.27.2", features = ["derive"] }
bson = { version = "2.15.0" } # must keep in sync with mongodb version
@@ -89,60 +90,67 @@ thiserror = "2.0.17"
# LOGGING
opentelemetry-otlp = { version = "0.31.0", features = ["tls-roots", "reqwest-rustls"] }
opentelemetry_sdk = { version = "0.31.0", features = ["rt-tokio"] }
-tracing-subscriber = { version = "0.3.20", features = ["json"] }
+tracing-subscriber = { version = "0.3.22", features = ["json"] }
opentelemetry-semantic-conventions = "0.31.0"
tracing-opentelemetry = "0.32.0"
opentelemetry = "0.31.0"
-tracing = "0.1.41"
+tracing = "0.1.43"
# CONFIG
-clap = { version = "4.5.48", features = ["derive"] }
+clap = { version = "4.5.53", features = ["derive"] }
dotenvy = "0.15.7"
envy = "0.4.2"
# CRYPTO / AUTH
-uuid = { version = "1.18.1", features = ["v4", "fast-rng", "serde"] }
-jsonwebtoken = { version = "10.0.0", features = ["aws_lc_rs"] } # locked back with octorust
-rustls = { version = "0.23.32", features = ["aws-lc-rs"] }
-pem-rfc7468 = { version = "0.7.0", features = ["alloc"] }
-openidconnect = "4.0.1"
+webauthn-rs = { version = "0.5.4", features = ["danger-allow-state-serialisation"] }
+openidconnect = { version = "4.0.1", features = ["accept-rfc3339-timestamps"] }
+uuid = { version = "1.19.0", features = ["v4", "fast-rng", "serde"] }
+jsonwebtoken = { version = "10.2.0", features = ["aws_lc_rs"] } # locked back with octorust
+rustls = { version = "0.23.35", features = ["aws-lc-rs"] }
+pem-rfc7468 = { version = "1.0.0", features = ["alloc"] }
+totp-rs = { version = "5.7.0", features = ["qr"] }
+webauthn-rs-proto = "0.5.4"
+base64urlsafedata = "0.5.4"
data-encoding = "2.9.0"
urlencoding = "2.1.3"
bcrypt = "0.17.1"
base64 = "0.22.1"
pkcs8 = "0.10.2"
snow = "0.10.0"
hmac = "0.12.1"
sha1 = "0.10.6"
sha2 = "0.10.9"
rand = "0.9.2"
hex = "0.4.3"
spki = "0.7.3"
der = "0.7.10"
hex = "0.4.3"
# SYSTEM
hickory-resolver = "0.25.2"
portable-pty = "0.9.0"
-bollard = "0.19.3"
-shell-escape = "0.1.5"
+crossterm = "0.29.0"
+bollard = "0.19.4"
sysinfo = "0.37.1"
shlex = "1.3.0"
# CLOUD
-aws-config = "1.8.8"
-aws-sdk-ec2 = "1.172.0"
-aws-credential-types = "1.2.8"
+aws-config = "1.8.12"
+aws-sdk-ec2 = "1.196.0"
+aws-credential-types = "1.2.11"
## CRON
-english-to-cron = "0.1.6"
+english-to-cron = "0.1.7"
chrono-tz = "0.10.4"
chrono = "0.4.42"
-croner = "3.0.0"
+croner = "3.0.1"
# MISC
-async-compression = { version = "0.4.32", features = ["tokio", "gzip"] }
+async-compression = { version = "0.4.35", features = ["tokio", "gzip"] }
derive_builder = "0.20.2"
+shell-escape = "0.1.5"
comfy-table = "7.2.1"
typeshare = "1.0.4"
dashmap = "6.1.0"
wildcard = "0.3.0"
colored = "3.0.0"
-regex = "1.12.1"
-bytes = "1.10.1"
+bytes = "1.11.0"
+regex = "1.12.2"

action/deploy-fe.ts (new file, 4 lines)

@@ -0,0 +1,4 @@
const cmd = "km run -y action deploy-komodo-fe-change";
new Deno.Command("bash", {
args: ["-c", cmd],
}).spawn();


@@ -1,7 +1,7 @@
## Builds the Komodo Core, Periphery, and Util binaries
-## for a specific architecture.
+## for a specific architecture. Requires OpenSSL 3 or later.
-FROM rust:1.90.0-bullseye AS builder
+FROM rust:1.91.1-bookworm AS builder
RUN cargo install cargo-strip
WORKDIR /builder


@@ -1,9 +1,9 @@
## Builds the Komodo Core, Periphery, and Util binaries
-## for a specific architecture.
+## for a specific architecture. Requires OpenSSL 3 or later.
## Uses chef for dependency caching to help speed up back-to-back builds.
-FROM lukemathwalker/cargo-chef:latest-rust-1.90.0-bullseye AS chef
+FROM lukemathwalker/cargo-chef:latest-rust-1.90.0-bookworm AS chef
WORKDIR /builder
# Plan just the RECIPE to see if things have changed


@@ -23,7 +23,9 @@ noise.workspace = true
# external
futures-util.workspace = true
comfy-table.workspace = true
+tokio-util.workspace = true
serde_json.workspace = true
+crossterm.workspace = true
serde_qs.workspace = true
wildcard.workspace = true
tracing.workspace = true


@@ -1,4 +1,4 @@
-FROM rust:1.90.0-bullseye AS builder
+FROM rust:1.91.1-bullseye AS builder
RUN cargo install cargo-strip
WORKDIR /builder


@@ -61,7 +61,8 @@ async fn list_containers(
.map(|s| (s.id.clone(), s))
.collect::<HashMap<_, _>>())),
client.read(ListAllDockerContainers {
-servers: Default::default()
+servers: Default::default(),
+containers: Default::default(),
}),
)?;
@@ -145,7 +146,8 @@ pub async fn inspect_container(
.map(|s| (s.id.clone(), s))
.collect::<HashMap<_, _>>())),
client.read(ListAllDockerContainers {
-servers: Default::default()
+servers: Default::default(),
+containers: Default::default()
}),
)?;


@@ -221,6 +221,21 @@ pub async fn handle(
Execution::SendAlert(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
+Execution::RemoveSwarmNodes(data) => {
+println!("{}: {data:?}", "Data".dimmed())
+}
+Execution::RemoveSwarmStacks(data) => {
+println!("{}: {data:?}", "Data".dimmed())
+}
+Execution::RemoveSwarmServices(data) => {
+println!("{}: {data:?}", "Data".dimmed())
+}
+Execution::RemoveSwarmConfigs(data) => {
+println!("{}: {data:?}", "Data".dimmed())
+}
+Execution::RemoveSwarmSecrets(data) => {
+println!("{}: {data:?}", "Data".dimmed())
+}
Execution::ClearRepoCache(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
@@ -488,6 +503,26 @@ pub async fn handle(
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
+Execution::RemoveSwarmNodes(request) => client
+.execute(request)
+.await
+.map(|u| ExecutionResult::Single(u.into())),
+Execution::RemoveSwarmStacks(request) => client
+.execute(request)
+.await
+.map(|u| ExecutionResult::Single(u.into())),
+Execution::RemoveSwarmServices(request) => client
+.execute(request)
+.await
+.map(|u| ExecutionResult::Single(u.into())),
+Execution::RemoveSwarmConfigs(request) => client
+.execute(request)
+.await
+.map(|u| ExecutionResult::Single(u.into())),
+Execution::RemoveSwarmSecrets(request) => client
+.execute(request)
+.await
+.map(|u| ExecutionResult::Single(u.into())),
Execution::ClearRepoCache(request) => client
.execute(request)
.await


@@ -7,7 +7,7 @@ use komodo_client::{
api::read::{
ListActions, ListAlerters, ListBuilders, ListBuilds,
ListDeployments, ListProcedures, ListRepos, ListResourceSyncs,
-ListSchedules, ListServers, ListStacks, ListTags,
+ListSchedules, ListServers, ListStacks, ListTags, ListTerminals,
},
entities::{
ResourceTargetVariant,
@@ -35,6 +35,7 @@ use komodo_client::{
ResourceSyncListItem, ResourceSyncListItemInfo,
ResourceSyncState,
},
+terminal::Terminal,
},
};
use serde::Serialize;
@@ -74,15 +75,18 @@ pub async fn handle(list: &args::list::List) -> anyhow::Result<()> {
Some(ListCommand::Syncs(filters)) => {
list_resources::<ResourceSyncListItem>(filters, false).await
}
+Some(ListCommand::Terminals(filters)) => {
+list_terminals(filters).await
+}
+Some(ListCommand::Schedules(filters)) => {
+list_schedules(filters).await
+}
Some(ListCommand::Builders(filters)) => {
list_resources::<BuilderListItem>(filters, false).await
}
Some(ListCommand::Alerters(filters)) => {
list_resources::<AlerterListItem>(filters, false).await
}
-Some(ListCommand::Schedules(filters)) => {
-list_schedules(filters).await
-}
}
}
@@ -189,6 +193,26 @@ where
Ok(())
}
+async fn list_terminals(
+filters: &ResourceFilters,
+) -> anyhow::Result<()> {
+let client = crate::command::komodo_client().await?;
+// let query = ResourceQuery::builder()
+// .tags(filters.tags.clone())
+// .templates(TemplatesQueryBehavior::Exclude)
+// .build();
+let terminals = client
+.read(ListTerminals {
+target: None,
+use_names: true,
+})
+.await?;
+if !terminals.is_empty() {
+print_items(terminals, filters.format, filters.links)?;
+}
+Ok(())
+}
async fn list_schedules(
filters: &ResourceFilters,
) -> anyhow::Result<()> {
@@ -1134,6 +1158,28 @@ impl PrintTable for ResourceListItem<AlerterListItemInfo> {
}
}
+impl PrintTable for Terminal {
+fn header(_links: bool) -> &'static [&'static str] {
+&["Terminal", "Target", "Command", "Size", "Created"]
+}
+fn row(self, _links: bool) -> Vec<comfy_table::Cell> {
+vec![
+Cell::new(self.name).add_attribute(Attribute::Bold),
+Cell::new(format!("{:?}", self.target)),
+Cell::new(self.command),
+Cell::new(if self.stored_size_kb < 1.0 {
+format!("{:.1} KiB", self.stored_size_kb)
+} else {
+format!("{:.} KiB", self.stored_size_kb)
+}),
+Cell::new(
+format_timetamp(self.created_at)
+.unwrap_or_else(|_| String::from("Invalid created at")),
+),
+]
+}
+}
impl PrintTable for Schedule {
fn header(links: bool) -> &'static [&'static str] {
if links {
@@ -1146,7 +1192,7 @@ impl PrintTable for Schedule {
let next_run = if let Some(ts) = self.next_scheduled_run {
Cell::new(
format_timetamp(ts)
-.unwrap_or(String::from("Invalid next ts")),
+.unwrap_or_else(|_| String::from("Invalid next ts")),
)
.add_attribute(Attribute::Bold)
} else {


@@ -18,6 +18,7 @@ pub mod container;
pub mod database;
pub mod execute;
pub mod list;
+pub mod terminal;
pub mod update;
async fn komodo_client() -> anyhow::Result<&'static KomodoClient> {


@@ -0,0 +1,334 @@
use anyhow::{Context, anyhow};
use colored::Colorize;
use komodo_client::{
api::{
read::{ListAllDockerContainers, ListServers},
terminal::InitTerminal,
},
entities::{
config::cli::args::terminal::{Attach, Connect, Exec},
server::ServerQuery,
terminal::{
ContainerTerminalMode, TerminalRecreateMode,
TerminalResizeMessage, TerminalStdinMessage,
},
},
ws::terminal::TerminalWebsocket,
};
use tokio::io::{AsyncReadExt as _, AsyncWriteExt as _};
use tokio_util::sync::CancellationToken;
pub async fn handle_connect(
Connect {
server,
name,
command,
recreate,
}: &Connect,
) -> anyhow::Result<()> {
handle_terminal_forwarding(async {
super::komodo_client()
.await?
.connect_server_terminal(
server.to_string(),
Some(name.to_string()),
Some(InitTerminal {
command: command.clone(),
recreate: if *recreate {
TerminalRecreateMode::Always
} else {
TerminalRecreateMode::DifferentCommand
},
mode: None,
}),
)
.await
})
.await
}
pub async fn handle_exec(
Exec {
server,
container,
shell,
recreate,
}: &Exec,
) -> anyhow::Result<()> {
let server = get_server(server.clone(), container).await?;
handle_terminal_forwarding(async {
super::komodo_client()
.await?
.connect_container_terminal(
server,
container.to_string(),
None,
Some(InitTerminal {
command: Some(shell.to_string()),
recreate: if *recreate {
TerminalRecreateMode::Always
} else {
TerminalRecreateMode::DifferentCommand
},
mode: Some(ContainerTerminalMode::Exec),
}),
)
.await
})
.await
}
pub async fn handle_attach(
Attach {
server,
container,
recreate,
}: &Attach,
) -> anyhow::Result<()> {
let server = get_server(server.clone(), container).await?;
handle_terminal_forwarding(async {
super::komodo_client()
.await?
.connect_container_terminal(
server,
container.to_string(),
None,
Some(InitTerminal {
command: None,
recreate: if *recreate {
TerminalRecreateMode::Always
} else {
TerminalRecreateMode::DifferentCommand
},
mode: Some(ContainerTerminalMode::Attach),
}),
)
.await
})
.await
}
async fn get_server(
server: Option<String>,
container: &str,
) -> anyhow::Result<String> {
if let Some(server) = server {
return Ok(server);
}
let client = super::komodo_client().await?;
let mut containers = client
.read(ListAllDockerContainers {
servers: Default::default(),
containers: vec![container.to_string()],
})
.await?;
if containers.is_empty() {
return Err(anyhow!(
"Did not find any container matching {container}"
));
}
if containers.len() == 1 {
return containers
.pop()
.context("Shouldn't happen")?
.server_id
.context("Container doesn't have server_id");
}
let servers = containers
.into_iter()
.flat_map(|container| container.server_id)
.collect::<Vec<_>>();
let servers = client
.read(ListServers {
query: ServerQuery::builder().names(servers).build(),
})
.await?
.into_iter()
.map(|server| format!("\t- {}", server.name.bold()))
.collect::<Vec<_>>()
.join("\n");
Err(anyhow!(
"Multiple containers matching '{}' on Servers:\n{servers}",
container.bold(),
))
}
async fn handle_terminal_forwarding<
C: Future<Output = anyhow::Result<TerminalWebsocket>>,
>(
connect: C,
) -> anyhow::Result<()> {
// Need to forward multiple sources (stdin, resize events) into the ws write half
let (write_tx, mut write_rx) =
tokio::sync::mpsc::channel::<TerminalStdinMessage>(1024);
// ================
// SETUP RESIZING
// ================
// Subscribe to SIGWINCH for resize messages
let mut sigwinch = tokio::signal::unix::signal(
tokio::signal::unix::SignalKind::window_change(),
)
.context("failed to register SIGWINCH handler")?;
// Send the first resize message, bailing if the terminal size can't be read.
write_tx.send(resize_message()?).await?;
let cancel = CancellationToken::new();
let forward_resize = async {
while future_or_cancel(sigwinch.recv(), &cancel)
.await
.flatten()
.is_some()
{
if let Ok(resize_message) = resize_message()
&& write_tx.send(resize_message).await.is_err()
{
break;
}
}
cancel.cancel();
};
let forward_stdin = async {
let mut stdin = tokio::io::stdin();
let mut buf = [0u8; 8192];
while let Some(Ok(n)) =
future_or_cancel(stdin.read(&mut buf), &cancel).await
{
// EOF
if n == 0 {
break;
}
let bytes = &buf[..n];
// Check for the disconnect sequence (alt + q, the UTF-8 bytes of 'œ')
if bytes == [197, 147] {
break;
}
// Forward bytes
if write_tx
.send(TerminalStdinMessage::Forward(bytes.to_vec()))
.await
.is_err()
{
break;
};
}
cancel.cancel();
};
// =====================
// CONNECT AND FORWARD
// =====================
let (mut ws_write, mut ws_read) = connect.await?.split();
let forward_write = async {
while let Some(message) =
future_or_cancel(write_rx.recv(), &cancel).await.flatten()
{
if let Err(e) = ws_write.send_stdin_message(message).await {
cancel.cancel();
return Some(e);
};
}
cancel.cancel();
None
};
let forward_read = async {
let mut stdout = tokio::io::stdout();
while let Some(msg) =
future_or_cancel(ws_read.receive_stdout(), &cancel).await
{
let bytes = match msg {
Ok(Some(bytes)) => bytes,
Ok(None) => break,
Err(e) => {
cancel.cancel();
return Some(e.context("Websocket read error"));
}
};
if let Err(e) = stdout
.write_all(&bytes)
.await
.context("Failed to write text to stdout")
{
cancel.cancel();
return Some(e);
}
let _ = stdout.flush().await;
}
cancel.cancel();
None
};
let guard = RawModeGuard::enable_raw_mode()?;
let (_, _, write_error, read_error) = tokio::join!(
forward_resize,
forward_stdin,
forward_write,
forward_read
);
drop(guard);
if let Some(e) = write_error {
eprintln!("\nFailed to forward stdin | {e:#}");
}
if let Some(e) = read_error {
eprintln!("\nFailed to forward stdout | {e:#}");
}
println!("\n\n{} {}", "connection".bold(), "closed".red().bold());
// The process doesn't exit on its own after raw mode is restored, so exit explicitly.
std::process::exit(0)
}
fn resize_message() -> anyhow::Result<TerminalStdinMessage> {
let (cols, rows) = crossterm::terminal::size()
.context("Failed to get terminal size")?;
Ok(TerminalStdinMessage::Resize(TerminalResizeMessage {
rows,
cols,
}))
}
struct RawModeGuard;
impl RawModeGuard {
fn enable_raw_mode() -> anyhow::Result<Self> {
crossterm::terminal::enable_raw_mode()
.context("Failed to enable terminal raw mode")?;
Ok(Self)
}
}
impl Drop for RawModeGuard {
fn drop(&mut self) {
if let Err(e) = crossterm::terminal::disable_raw_mode() {
eprintln!("Failed to disable terminal raw mode | {e:?}");
}
}
}
async fn future_or_cancel<T, F: Future<Output = T>>(
fut: F,
cancel: &CancellationToken,
) -> Option<T> {
tokio::select! {
res = fut => Some(res),
_ = cancel.cancelled() => None
}
}

View File

@@ -26,6 +26,9 @@ pub async fn update(
UpdateUserCommand::SuperAdmin { enabled, yes } => {
update_super_admin(username, *enabled, *yes).await
}
UpdateUserCommand::Clear2fa { yes } => {
clear_2fa(username, *yes).await
}
}
}
@@ -120,3 +123,20 @@ async fn update_super_admin(
Ok(())
}
async fn clear_2fa(username: &str, yes: bool) -> anyhow::Result<()> {
println!("\n{}: Clear 2FA Methods\n", "Mode".dimmed());
println!(" - {}: {username}", "Username".dimmed());
crate::command::wait_for_enter("clear user 2FA methods", yes)?;
info!("Clearing 2FA methods...");
let db = database::Client::new(&cli_config().database).await?;
db.clear_user_2fa_methods(username).await?;
info!("2FA methods cleared ✅");
Ok(())
}

View File

@@ -261,12 +261,18 @@ pub fn cli_config() -> &'static CliConfig {
.komodo_cli_logging_pretty
.unwrap_or(config.cli_logging.pretty),
location: false,
ansi: env
.komodo_cli_logging_ansi
.unwrap_or(config.cli_logging.ansi),
otlp_endpoint: env
.komodo_cli_logging_otlp_endpoint
.unwrap_or(config.cli_logging.otlp_endpoint),
opentelemetry_service_name: env
.komodo_cli_logging_opentelemetry_service_name
.unwrap_or(config.cli_logging.opentelemetry_service_name),
opentelemetry_scope_name: env
.komodo_cli_logging_opentelemetry_scope_name
.unwrap_or(config.cli_logging.opentelemetry_scope_name),
},
profile: config.profile,
}

View File

@@ -2,6 +2,7 @@
extern crate tracing;
use anyhow::Context;
use colored::Colorize;
use komodo_client::entities::config::cli::args;
use crate::config::cli_config;
@@ -41,12 +42,6 @@ async fn app() -> anyhow::Result<()> {
}
Ok(())
}
args::Command::Key { command } => {
noise::key::command::handle(command).await
}
args::Command::Database { command } => {
command::database::handle(command).await
}
args::Command::Container(container) => {
command::container::handle(container).await
}
@@ -60,6 +55,21 @@ async fn app() -> anyhow::Result<()> {
args::Command::Update { command } => {
command::update::handle(command).await
}
args::Command::Connect(connect) => {
command::terminal::handle_connect(connect).await
}
args::Command::Exec(exec) => {
command::terminal::handle_exec(exec).await
}
args::Command::Attach(attach) => {
command::terminal::handle_attach(attach).await
}
args::Command::Key { command } => {
noise::key::command::handle(command).await
}
args::Command::Database { command } => {
command::database::handle(command).await
}
}
}
@@ -69,7 +79,18 @@ async fn main() -> anyhow::Result<()> {
tokio::signal::unix::SignalKind::terminate(),
)?;
tokio::select! {
res = tokio::spawn(app()) => res?,
_ = term_signal.recv() => Ok(()),
res = tokio::spawn(app()) => match res {
Ok(Err(e)) => {
eprintln!("{}: {e}", "ERROR".red());
std::process::exit(1)
}
Err(e) => {
eprintln!("{}: {e}", "ERROR".red());
std::process::exit(1)
},
Ok(_) => {}
},
_ = term_signal.recv() => {},
}
Ok(())
}

View File

@@ -20,7 +20,9 @@ periphery_client.workspace = true
environment_file.workspace = true
interpolate.workspace = true
secret_file.workspace = true
validations.workspace = true
formatting.workspace = true
rate_limit.workspace = true
transport.workspace = true
database.workspace = true
encoding.workspace = true
@@ -43,43 +45,47 @@ svi.workspace = true
# external
aws-credential-types.workspace = true
english-to-cron.workspace = true
tower-sessions.workspace = true
openidconnect.workspace = true
data-encoding.workspace = true
serde_yaml_ng.workspace = true
jsonwebtoken.workspace = true
futures-util.workspace = true
axum-server.workspace = true
urlencoding.workspace = true
aws-sdk-ec2.workspace = true
urlencoding.workspace = true
webauthn-rs.workspace = true
aws-config.workspace = true
tokio-util.workspace = true
axum-extra.workspace = true
tower-http.workspace = true
serde_json.workspace = true
serde_yaml_ng.workspace = true
typeshare.workspace = true
chrono-tz.workspace = true
indexmap.workspace = true
wildcard.workspace = true
arc-swap.workspace = true
serde_qs.workspace = true
colored.workspace = true
dashmap.workspace = true
tracing.workspace = true
reqwest.workspace = true
futures.workspace = true
dotenvy.workspace = true
totp-rs.workspace = true
anyhow.workspace = true
croner.workspace = true
chrono.workspace = true
bcrypt.workspace = true
base64.workspace = true
rustls.workspace = true
bytes.workspace = true
tokio.workspace = true
serde.workspace = true
strum.workspace = true
regex.workspace = true
axum.workspace = true
toml.workspace = true
uuid.workspace = true
envy.workspace = true
rand.workspace = true
hmac.workspace = true
sha2.workspace = true
hex.workspace = true

View File

@@ -1,7 +1,7 @@
## All in one, multi stage compile + runtime Docker build for your architecture.
# Build Core
FROM rust:1.90.0-bullseye AS core-builder
FROM rust:1.91.1-trixie AS core-builder
RUN cargo install cargo-strip
WORKDIR /builder
@@ -26,7 +26,7 @@ RUN cd client && yarn && yarn build && yarn link
RUN cd frontend && yarn link komodo_client && yarn && yarn build
# Final Image
FROM debian:bullseye-slim
FROM debian:trixie-slim
COPY ./bin/core/starship.toml /starship.toml
COPY ./bin/core/debian-deps.sh .
@@ -48,6 +48,9 @@ RUN mkdir /action-cache && \
cd /action-cache && \
deno install jsr:@std/yaml jsr:@std/toml
COPY ./bin/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Hint at the port
EXPOSE 9120
@@ -55,7 +58,7 @@ ENV KOMODO_CLI_CONFIG_PATHS="/config"
# This ensures any `komodo.cli.*` takes precedence over the Core `/config/*config.*`
ENV KOMODO_CLI_CONFIG_KEYWORDS="*config.*,*komodo.cli*.*"
CMD [ "core" ]
CMD [ "/bin/bash", "-c", "update-ca-certificates && core" ]
# Label to prevent Komodo from stopping with StopAllContainers
LABEL komodo.skip="true"

View File

@@ -13,7 +13,7 @@ FROM ${AARCH64_BINARIES} AS aarch64
FROM ${FRONTEND_IMAGE} AS frontend
# Final Image
FROM debian:bullseye-slim
FROM debian:trixie-slim
COPY ./bin/core/starship.toml /starship.toml
COPY ./bin/core/debian-deps.sh .
@@ -28,7 +28,7 @@ COPY --from=x86_64 /core /app/core/linux/amd64
COPY --from=aarch64 /core /app/core/linux/arm64
RUN mv /app/core/${TARGETPLATFORM} /usr/local/bin/core && rm -r /app/core
# Same for util
# Same for km
COPY --from=x86_64 /km /app/km/linux/amd64
COPY --from=aarch64 /km /app/km/linux/arm64
RUN mv /app/km/${TARGETPLATFORM} /usr/local/bin/km && rm -r /app/km
@@ -44,6 +44,9 @@ RUN mkdir /action-cache && \
cd /action-cache && \
deno install jsr:@std/yaml jsr:@std/toml
COPY ./bin/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Hint at the port
EXPOSE 9120
@@ -51,6 +54,7 @@ ENV KOMODO_CLI_CONFIG_PATHS="/config"
# This ensures any `komodo.cli.*` takes precedence over the Core `/config/*config.*`
ENV KOMODO_CLI_CONFIG_KEYWORDS="*config.*,*komodo.cli*.*"
ENTRYPOINT [ "entrypoint.sh" ]
CMD [ "core" ]
# Label to prevent Komodo from stopping with StopAllContainers

View File

@@ -14,7 +14,7 @@ COPY ./client/core/ts ./client
RUN cd client && yarn && yarn build && yarn link
RUN cd frontend && yarn link komodo_client && yarn && yarn build
FROM debian:bullseye-slim
FROM debian:trixie-slim
COPY ./bin/core/starship.toml /starship.toml
COPY ./bin/core/debian-deps.sh .
@@ -33,6 +33,9 @@ RUN mkdir /action-cache && \
cd /action-cache && \
deno install jsr:@std/yaml jsr:@std/toml
COPY ./bin/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Hint at the port
EXPOSE 9120
@@ -40,6 +43,7 @@ ENV KOMODO_CLI_CONFIG_PATHS="/config"
# This ensures any `komodo.cli.*` takes precedence over the Core `/config/*config.*`
ENV KOMODO_CLI_CONFIG_KEYWORDS="*config.*,*komodo.cli*.*"
ENTRYPOINT [ "entrypoint.sh" ]
CMD [ "core" ]
# Label to prevent Komodo from stopping with StopAllContainers

View File

@@ -4,7 +4,6 @@ use serde::Serialize;
use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
alert: &Alert,
@@ -233,15 +232,15 @@ pub async fn send_alert(
format!(
"{level} | {message}{}",
if details.is_empty() {
format_args!("")
String::new()
} else {
format_args!("\n{details}")
format!("\n{details}")
}
)
}
AlertData::None {} => Default::default(),
};
if content.is_empty() {
return Ok(());
}

View File

@@ -1,7 +1,7 @@
use anyhow::{Context, anyhow};
use database::mungos::{find::find_collect, mongodb::bson::doc};
use derive_variants::ExtractVariant;
use futures::future::join_all;
use futures_util::future::join_all;
use interpolate::Interpolator;
use komodo_client::entities::{
ResourceTargetVariant,
@@ -11,7 +11,6 @@ use komodo_client::entities::{
komodo_timestamp,
stack::StackState,
};
use tracing::Instrument;
use crate::helpers::query::get_variables_and_secrets;
use crate::helpers::{
@@ -24,40 +23,32 @@ mod ntfy;
mod pushover;
mod slack;
#[instrument(level = "debug")]
pub async fn send_alerts(alerts: &[Alert]) {
if alerts.is_empty() {
return;
}
let span =
info_span!("send_alerts", alerts = format!("{alerts:?}"));
async {
let Ok(alerters) = find_collect(
&db_client().alerters,
doc! { "config.enabled": true },
None,
)
.await
.inspect_err(|e| {
error!(
let Ok(alerters) = find_collect(
&db_client().alerters,
doc! { "config.enabled": true },
None,
)
.await
.inspect_err(|e| {
error!(
"ERROR sending alerts | failed to get alerters from db | {e:#}"
)
}) else {
return;
};
}) else {
return;
};
let handles = alerts
.iter()
.map(|alert| send_alert_to_alerters(&alerters, alert));
let handles = alerts
.iter()
.map(|alert| send_alert_to_alerters(&alerters, alert));
join_all(handles).await;
}
.instrument(span)
.await
join_all(handles).await;
}
#[instrument(level = "debug")]
async fn send_alert_to_alerters(alerters: &[Alerter], alert: &Alert) {
if alerters.is_empty() {
return;
@@ -161,7 +152,6 @@ pub async fn send_alert_to_alerter(
}
}
#[instrument(level = "debug")]
async fn send_custom_alert(
url: &str,
alert: &Alert,
@@ -476,9 +466,9 @@ fn standard_alert_content(alert: &Alert) -> String {
format!(
"{level} | {message}{}",
if details.is_empty() {
format_args!("")
String::new()
} else {
format_args!("\n{details}")
format!("\n{details}")
}
)
}

View File

@@ -2,7 +2,6 @@ use std::sync::OnceLock;
use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
email: Option<&str>,

View File

@@ -2,7 +2,6 @@ use std::sync::OnceLock;
use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
alert: &Alert,

View File

@@ -2,7 +2,6 @@ use ::slack::types::OwnedBlock as Block;
use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
alert: &Alert,
@@ -482,7 +481,7 @@ pub async fn send_alert(
let slack = ::slack::Client::new(url_interpolated);
slack
.send_owned_message_single(&text, blocks.as_deref())
.send_owned_message_single(&text, None, blocks.as_deref())
.await
.map_err(|e| {
let replacers = interpolator

View File

@@ -1,34 +1,61 @@
use std::{sync::OnceLock, time::Instant};
use std::{
net::{IpAddr, SocketAddr},
sync::OnceLock,
time::Instant,
};
use axum::{Router, extract::Path, http::HeaderMap, routing::post};
use anyhow::{Context, anyhow};
use axum::{
Router,
extract::{ConnectInfo, Path},
http::HeaderMap,
routing::post,
};
use data_encoding::BASE32_NOPAD;
use database::{
bson::{doc, to_bson},
mungos::by_id::update_one_by_id,
};
use derive_variants::{EnumVariants, ExtractVariant};
use komodo_client::{api::auth::*, entities::user::User};
use rate_limit::WithFailureRateLimit;
use reqwest::StatusCode;
use resolver_api::Resolve;
use response::Response;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::{AddStatusCode, Json};
use serror::{AddStatusCode, AddStatusCodeError, Json};
use tower_sessions::Session;
use typeshare::typeshare;
use uuid::Uuid;
use webauthn_rs::prelude::PasskeyAuthentication;
use crate::{
api::{
SESSION_KEY_PASSKEY_LOGIN, SESSION_KEY_TOTP_LOGIN,
SESSION_KEY_USER_ID, memory_session_layer,
},
auth::{
get_user_id_from_headers,
github::{self, client::github_oauth_client},
google::{self, client::google_oauth_client},
oidc::{self, client::oidc_client},
totp::make_totp,
},
config::core_config,
helpers::query::get_user,
state::jwt_client,
state::{auth_rate_limiter, db_client, jwt_client, webauthn},
};
use super::Variant;
#[derive(Default)]
pub struct AuthArgs {
pub headers: HeaderMap,
/// Prefer extracting the client IP from headers.
/// This IP will be the IP of the reverse proxy itself.
pub ip: IpAddr,
/// Per-client session state
pub session: Option<Session>,
}
#[typeshare]
@@ -46,6 +73,8 @@ pub enum AuthRequest {
SignUpLocalUser(SignUpLocalUser),
LoginLocalUser(LoginLocalUser),
ExchangeForJwt(ExchangeForJwt),
CompleteTotpLogin(CompleteTotpLogin),
CompletePasskeyLogin(CompletePasskeyLogin),
GetUser(GetUser),
}
@@ -73,11 +102,13 @@ pub fn router() -> Router {
router = router.nest("/oidc", oidc::router())
}
router
router.layer(memory_session_layer(60))
}
async fn variant_handler(
headers: HeaderMap,
session: Session,
info: ConnectInfo<SocketAddr>,
Path(Variant { variant }): Path<Variant>,
Json(params): Json<serde_json::Value>,
) -> serror::Result<axum::response::Response> {
@@ -85,12 +116,13 @@ async fn variant_handler(
"type": variant,
"params": params,
}))?;
handler(headers, Json(req)).await
handler(headers, session, info, Json(req)).await
}
#[instrument(name = "AuthHandler", level = "debug", skip(headers))]
async fn handler(
headers: HeaderMap,
session: Session,
ConnectInfo(info): ConnectInfo<SocketAddr>,
Json(request): Json<AuthRequest>,
) -> serror::Result<axum::response::Response> {
let timer = Instant::now();
@@ -99,7 +131,13 @@ async fn handler(
"/auth request {req_id} | METHOD: {:?}",
request.extract_variant()
);
let res = request.resolve(&AuthArgs { headers }).await;
let res = request
.resolve(&AuthArgs {
headers,
ip: info.ip(),
session: Some(session),
})
.await;
if let Err(e) = &res {
debug!("/auth request {req_id} | error: {:#}", e.error);
}
@@ -125,7 +163,6 @@ fn login_options_reponse() -> &'static GetLoginOptionsResponse {
}
impl Resolve<AuthArgs> for GetLoginOptions {
#[instrument(name = "GetLoginOptions", level = "debug", skip(self))]
async fn resolve(
self,
_: &AuthArgs,
@@ -135,29 +172,191 @@ impl Resolve<AuthArgs> for GetLoginOptions {
}
impl Resolve<AuthArgs> for ExchangeForJwt {
#[instrument(name = "ExchangeForJwt", level = "debug", skip(self))]
async fn resolve(
self,
_: &AuthArgs,
AuthArgs {
headers,
ip,
session,
}: &AuthArgs,
) -> serror::Result<ExchangeForJwtResponse> {
jwt_client()
.redeem_exchange_token(&self.token)
async {
let session = session.as_ref().context(
"Method called in invalid context. This should not happen",
)?;
let user_id = session
.remove::<String>(SESSION_KEY_USER_ID)
.await
.map_err(Into::into)
.context("Internal session type error")?
.context("Authentication steps must be completed before JWT can be retrieved")?;
jwt_client().encode(user_id).map_err(Into::into)
}
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
headers,
Some(*ip),
)
.await
}
}
impl Resolve<AuthArgs> for CompleteTotpLogin {
async fn resolve(
self,
AuthArgs {
headers,
ip,
session,
}: &AuthArgs,
) -> serror::Result<CompleteTotpLoginResponse> {
async {
let session = session.as_ref().context(
"Method called in invalid context. This should not happen",
)?;
let user_id = session
.get::<String>(SESSION_KEY_TOTP_LOGIN)
.await
.context("Internal session type error")?
.context(
"Totp login has not been initiated for this session",
)?;
let user = get_user(&user_id)
.await
.status_code(StatusCode::UNAUTHORIZED)?;
if user.totp.secret.is_empty() {
return Err(
anyhow!("User is not enrolled in totp")
.status_code(StatusCode::BAD_REQUEST),
);
}
let secret_bytes = BASE32_NOPAD
.decode(user.totp.secret.as_bytes())
.context("Failed to decode totp secret to bytes")?;
let totp = make_totp(secret_bytes, None)?;
let valid = totp
.check_current(&self.code)
.context("Failed to check TOTP code validity")?;
if !valid {
return Err(
anyhow!("Invalid totp code")
.status_code(StatusCode::UNAUTHORIZED),
);
}
jwt_client().encode(user_id).map_err(Into::into)
}
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
headers,
Some(*ip),
)
.await
}
}
impl Resolve<AuthArgs> for CompletePasskeyLogin {
async fn resolve(
self,
AuthArgs {
headers,
ip,
session,
}: &AuthArgs,
) -> serror::Result<CompletePasskeyLoginResponse> {
async {
let session = session.as_ref().context(
"Method called in invalid context. This should not happen",
)?;
let webauthn = webauthn().context(
"No webauthn provider available, invalid KOMODO_HOST config",
)?;
let (user_id, server_state) = session
.get::<(String, PasskeyAuthentication)>(
SESSION_KEY_PASSKEY_LOGIN,
)
.await
.context("Internal session type error")?
.context(
"Passkey login has not been initiated for this session",
)?;
// The result of this call must be used to
// update the stored passkey info in the database.
let update = webauthn
.finish_passkey_authentication(
&self.credential,
&server_state,
)
.context("Failed to validate passkey")?;
let mut passkey = get_user(&user_id)
.await?
.passkey
.passkey
.context("Could not find passkey on database.")?;
passkey.update_credential(&update);
let passkey = to_bson(&passkey)
.context("Failed to serialize passkey to BSON")?;
let update = doc! { "$set": { "passkey.passkey": passkey } };
let _ =
update_one_by_id(&db_client().users, &user_id, update, None)
.await
.context(
"Failed to update user passkey on database after login",
)
.inspect_err(|e| warn!("{e:#}"));
jwt_client().encode(user_id).map_err(Into::into)
}
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
headers,
Some(*ip),
)
.await
}
}
impl Resolve<AuthArgs> for GetUser {
#[instrument(name = "GetUser", level = "debug", skip(self))]
async fn resolve(
self,
AuthArgs { headers }: &AuthArgs,
AuthArgs {
headers,
ip,
session: _,
}: &AuthArgs,
) -> serror::Result<User> {
let user_id = get_user_id_from_headers(headers)
.await
.status_code(StatusCode::UNAUTHORIZED)?;
get_user(&user_id)
.await
.status_code(StatusCode::UNAUTHORIZED)
async {
let user_id = get_user_id_from_headers(headers)
.await
.status_code(StatusCode::UNAUTHORIZED)?;
let mut user = get_user(&user_id)
.await
.status_code(StatusCode::UNAUTHORIZED)?;
// Sanitize before sending to client.
user.sanitize();
Ok(user)
}
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
headers,
Some(*ip),
)
.await
}
}

View File

@@ -1,12 +1,11 @@
use std::{
collections::HashSet,
path::{Path, PathBuf},
str::FromStr,
sync::OnceLock,
};
use anyhow::Context;
use command::run_komodo_command;
use command::run_komodo_standard_command;
use config::merge_objects;
use database::mungos::{
by_id::update_one_by_id, mongodb::bson::to_document,
@@ -24,6 +23,7 @@ use komodo_client::{
config::core::CoreConfig,
komodo_timestamp,
permission::PermissionLevel,
random_string,
update::Update,
user::action_user,
},
@@ -38,7 +38,6 @@ use crate::{
config::core_config,
helpers::{
query::{VariablesAndSecrets, get_variables_and_secrets},
random_string,
update::update_update,
},
permission::get_check_permissions,
@@ -59,10 +58,18 @@ impl super::BatchExecute for BatchRunAction {
}
impl Resolve<ExecuteArgs> for BatchRunAction {
#[instrument(name = "BatchRunAction", skip(self, user), fields(user_id = user.id))]
#[instrument(
"BatchRunAction",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchRunAction>(&self.pattern, user)
@@ -72,10 +79,19 @@ impl Resolve<ExecuteArgs> for BatchRunAction {
}
impl Resolve<ExecuteArgs> for RunAction {
#[instrument(name = "RunAction", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"RunAction",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
action = self.action,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut action = get_check_permissions::<Action>(
&self.action,
@@ -125,6 +141,7 @@ impl Resolve<ExecuteArgs> for RunAction {
}
.resolve(&UserArgs {
user: action_user().to_owned(),
session: None,
})
.await?;
@@ -162,7 +179,7 @@ impl Resolve<ExecuteArgs> for RunAction {
""
};
let mut res = run_komodo_command(
let mut res = run_komodo_standard_command(
// Keep this stage name as is, the UI will find the latest update log by matching the stage name
"Execute Action",
None,
@@ -183,6 +200,7 @@ impl Resolve<ExecuteArgs> for RunAction {
if let Err(e) = (DeleteApiKey { key })
.resolve(&UserArgs {
user: action_user().to_owned(),
session: None,
})
.await
{
@@ -213,7 +231,6 @@ impl Resolve<ExecuteArgs> for RunAction {
update_update(update.clone()).await?;
if !update.success && action.config.failure_alert {
warn!("action unsuccessful, alerting...");
let target = update.target.clone();
tokio::spawn(async move {
let alert = Alert {
@@ -236,6 +253,7 @@ impl Resolve<ExecuteArgs> for RunAction {
}
}
#[instrument("Interpolate", skip(contents, update, secret))]
async fn interpolate(
contents: &mut String,
update: &mut Update,
@@ -321,6 +339,7 @@ main()
/// Cleans up the file at the given path.
/// ALSO, if $DENO_DIR is set,
/// will clean up the generated file matching "file"
#[instrument("CleanupRun")]
async fn cleanup_run(file: String, path: &Path) {
if let Err(e) = fs::remove_file(path).await {
warn!(
@@ -340,7 +359,7 @@ fn deno_dir() -> Option<&'static Path> {
DENO_DIR
.get_or_init(|| {
let deno_dir = std::env::var("DENO_DIR").ok()?;
PathBuf::from_str(&deno_dir).ok()
Some(PathBuf::from(&deno_dir))
})
.as_deref()
}

View File

@@ -1,6 +1,8 @@
use anyhow::{Context, anyhow};
use formatting::format_serror;
use futures::{TryStreamExt, stream::FuturesUnordered};
use futures_util::{
StreamExt, TryStreamExt, stream::FuturesUnordered,
};
use komodo_client::{
api::execute::{SendAlert, TestAlerter},
entities::{
@@ -22,10 +24,19 @@ use crate::{
use super::ExecuteArgs;
impl Resolve<ExecuteArgs> for TestAlerter {
#[instrument(name = "TestAlerter", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"TestAlerter",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
alerter = self.alerter,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
let alerter = get_check_permissions::<Alerter>(
&self.alerter,
@@ -79,15 +90,24 @@ impl Resolve<ExecuteArgs> for TestAlerter {
//
impl Resolve<ExecuteArgs> for SendAlert {
#[instrument(name = "SendAlert", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"SendAlert",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
request = format!("{self:?}"),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
let alerters = list_full_for_user::<Alerter>(
Default::default(),
user,
PermissionLevel::Execute.into(),
PermissionLevel::Read.into(),
&[],
)
.await?
@@ -102,6 +122,28 @@ impl Resolve<ExecuteArgs> for SendAlert {
})
.collect::<Vec<_>>();
let alerters = if user.admin {
alerters
} else {
// Only keep alerters with execute permissions
alerters
.into_iter()
.map(|alerter| async move {
get_check_permissions::<Alerter>(
&alerter.id,
user,
PermissionLevel::Execute.into(),
)
.await
})
.collect::<FuturesUnordered<_>>()
.collect::<Vec<_>>()
.await
.into_iter()
.flatten()
.collect()
};
if alerters.is_empty() {
return Err(anyhow!(
"Could not find any valid alerters to send to, this requires Execute permissions on the Alerter"

View File

@@ -14,12 +14,15 @@ use database::mungos::{
},
};
use formatting::format_serror;
use futures::future::join_all;
use futures_util::future::join_all;
use interpolate::Interpolator;
use komodo_client::{
api::execute::{
BatchExecutionResponse, BatchRunBuild, CancelBuild, Deploy,
RunBuild,
api::{
execute::{
BatchExecutionResponse, BatchRunBuild, CancelBuild, Deploy,
RunBuild,
},
write::RefreshBuildCache,
},
entities::{
alert::{Alert, AlertData, SeverityLevel},
@@ -37,12 +40,14 @@ use komodo_client::{
use periphery_client::api;
use resolver_api::Resolve;
use tokio_util::sync::CancellationToken;
use uuid::Uuid;
use crate::{
alert::send_alerts,
api::write::WriteArgs,
helpers::{
build_git_token,
builder::{cleanup_builder_instance, get_builder_periphery},
builder::{cleanup_builder_instance, connect_builder_periphery},
channel::build_cancel_channel,
query::{
VariablesAndSecrets, get_deployment_state,
@@ -66,10 +71,18 @@ impl super::BatchExecute for BatchRunBuild {
}
impl Resolve<ExecuteArgs> for BatchRunBuild {
#[instrument(name = "BatchRunBuild", skip(user), fields(user_id = user.id))]
#[instrument(
"BatchRunBuild",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchRunBuild>(&self.pattern, user)
@@ -79,10 +92,19 @@ impl Resolve<ExecuteArgs> for BatchRunBuild {
}
impl Resolve<ExecuteArgs> for RunBuild {
#[instrument(name = "RunBuild", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"RunBuild",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
build = self.build,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut build = get_check_permissions::<Build>(
&self.build,
@@ -168,7 +190,7 @@ impl Resolve<ExecuteArgs> for RunBuild {
update.finalize();
let id = update.id.clone();
if let Err(e) = update_update(update).await {
warn!("failed to modify Update {id} on db | {e:#}");
warn!("Failed to modify Update {id} on db | {e:#}");
}
if !is_server_builder {
cancel_clone.cancel();
@@ -186,7 +208,7 @@ impl Resolve<ExecuteArgs> for RunBuild {
});
// GET BUILDER PERIPHERY
let (periphery, cleanup_data) = match get_builder_periphery(
let (periphery, cleanup_data) = match connect_builder_periphery(
build.name.clone(),
Some(build.config.version),
builder,
@@ -197,12 +219,12 @@ impl Resolve<ExecuteArgs> for RunBuild {
Ok(builder) => builder,
Err(e) => {
warn!(
"failed to get builder for build {} | {e:#}",
"Failed to get Builder for Build {} | {e:#}",
build.name
);
update.logs.push(Log::error(
"get builder",
format_serror(&e.context("failed to get builder").into()),
"Get Builder",
format_serror(&e.context("Failed to get Builder").into()),
));
return handle_early_return(
update, build.id, build.name, false,
@@ -247,18 +269,18 @@ impl Resolve<ExecuteArgs> for RunBuild {
replacers: Default::default(),
}) => res,
_ = cancel.cancelled() => {
debug!("build cancelled during clone, cleaning up builder");
update.push_error_log("build cancelled", String::from("user cancelled build during repo clone"));
debug!("Build cancelled during clone, cleaning up builder");
update.push_error_log("Build cancelled", String::from("user cancelled build during repo clone"));
cleanup_builder_instance(periphery, cleanup_data, &mut update)
.await;
info!("builder cleaned up");
info!("Builder cleaned up");
return handle_early_return(update, build.id, build.name, true).await
},
};
let commit_message = match res {
Ok(res) => {
debug!("finished repo clone");
debug!("Finished repo clone");
update.logs.extend(res.res.logs);
update.commit_hash =
res.res.commit_hash.unwrap_or_default().to_string();
@@ -294,10 +316,10 @@ impl Resolve<ExecuteArgs> for RunBuild {
commit_hash: optional_string(&update.commit_hash),
// Unused for now
additional_tags: Default::default(),
}) => res.context("failed at call to periphery to build"),
}) => res.context("Failed at call to Periphery to build"),
_ = cancel.cancelled() => {
info!("build cancelled during build, cleaning up builder");
update.push_error_log("build cancelled", String::from("user cancelled build during docker build"));
info!("Build cancelled during build, cleaning up builder");
update.push_error_log("Build cancelled", String::from("User cancelled build during docker build"));
cleanup_builder_instance(periphery, cleanup_data, &mut update)
.await;
return handle_early_return(update, build.id, build.name, true).await
@@ -310,10 +332,10 @@ impl Resolve<ExecuteArgs> for RunBuild {
update.logs.extend(logs);
}
Err(e) => {
warn!("error in build | {e:#}");
warn!("Error in build | {e:#}");
update.push_error_log(
"build",
format_serror(&e.context("failed to build").into()),
"Build Error",
format_serror(&e.context("Failed to build").into()),
)
}
};
@@ -364,13 +386,15 @@ impl Resolve<ExecuteArgs> for RunBuild {
update_update(update.clone()).await?;
let Build { id, name, .. } = build;
if update.success {
// don't hold response up for user
tokio::spawn(async move {
handle_post_build_redeploy(&build.id).await;
handle_post_build_redeploy(&id).await;
});
} else {
warn!("build unsuccessful, alerting...");
let name = name.clone();
let target = update.target.clone();
let version = update.version;
tokio::spawn(async move {
@@ -381,21 +405,27 @@ impl Resolve<ExecuteArgs> for RunBuild {
resolved_ts: Some(komodo_timestamp()),
resolved: true,
level: SeverityLevel::Warning,
data: AlertData::BuildFailed {
id: build.id,
name: build.name,
version,
},
data: AlertData::BuildFailed { id, name, version },
};
send_alerts(&[alert]).await
});
}
if let Err(e) = (RefreshBuildCache { build: name })
.resolve(&WriteArgs { user: user.clone() })
.await
{
update.push_error_log(
"Refresh build cache",
format_serror(&e.error.into()),
);
}
Ok(update.clone())
}
}
#[instrument(skip(update))]
#[instrument("HandleEarlyReturn", skip(update))]
async fn handle_early_return(
mut update: Update,
build_id: String,
@@ -419,7 +449,6 @@ async fn handle_early_return(
}
update_update(update.clone()).await?;
if !update.success && !is_cancel {
warn!("build unsuccessful, alerting...");
let target = update.target.clone();
let version = update.version;
tokio::spawn(async move {
@@ -489,10 +518,19 @@ pub async fn validate_cancel_build(
}
impl Resolve<ExecuteArgs> for CancelBuild {
#[instrument(name = "CancelBuild", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"CancelBuild",
skip(user, update),
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
build = self.build,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let build = get_check_permissions::<Build>(
&self.build,
@@ -540,7 +578,7 @@ impl Resolve<ExecuteArgs> for CancelBuild {
.await
{
warn!(
"failed to set CancelBuild Update status Complete after timeout | {e:#}"
"Failed to set CancelBuild Update status Complete after timeout | {e:#}"
)
}
});
@@ -549,7 +587,7 @@ impl Resolve<ExecuteArgs> for CancelBuild {
}
}
#[instrument]
#[instrument("PostBuildRedeploy")]
async fn handle_post_build_redeploy(build_id: &str) {
let Ok(redeploy_deployments) = find_collect(
&db_client().deployments,
@@ -585,7 +623,11 @@ async fn handle_post_build_redeploy(build_id: &str) {
stop_signal: None,
stop_time: None,
}
.resolve(&ExecuteArgs { user, update })
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.await
}
.await;
@@ -611,6 +653,7 @@ async fn handle_post_build_redeploy(build_id: &str) {
/// This will make sure that a build with non-none image registry has an account attached,
/// and will check the core config for a token matching requirements.
/// Otherwise it is left to periphery.
#[instrument("ValidateRegistryTokens")]
async fn validate_account_extract_registry_tokens(
Build {
config: BuildConfig { image_registry, .. },

View File
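The RunBuild changes above race the long-running clone and build against a cancel signal (a `tokio::select!` over `cancel.cancelled()` in the real code), cleaning up the builder when the user cancels. A minimal std-only sketch of the same shape, using a hypothetical `AtomicBool` cancel flag in place of the CancellationToken:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

/// Outcome of a cancellable step, mirroring the two select! arms.
enum StepResult {
    Finished(String),
    Cancelled,
}

/// Run the work, but poll the cancel flag between chunks of progress.
fn run_cancellable(cancel: Arc<AtomicBool>) -> StepResult {
    for _chunk in 0..10 {
        if cancel.load(Ordering::SeqCst) {
            // Equivalent of the `cancel.cancelled()` arm: stop early
            // so the caller can clean up the builder instance.
            return StepResult::Cancelled;
        }
        // Stand-in for one unit of clone / docker build work.
        thread::sleep(Duration::from_millis(1));
    }
    StepResult::Finished("built".to_string())
}

fn main() {
    let cancel = Arc::new(AtomicBool::new(false));
    cancel.store(true, Ordering::SeqCst);
    match run_cancellable(cancel) {
        StepResult::Cancelled => println!("cancelled, cleaning up builder"),
        StepResult::Finished(v) => println!("finished: {v}"),
    }
}
```

The polling flag is only an illustration of the control flow; the actual code awaits the token and the periphery request concurrently rather than polling.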

@@ -7,7 +7,7 @@ use interpolate::Interpolator;
use komodo_client::{
api::execute::*,
entities::{
Version,
SwarmOrServer, Version,
build::{Build, ImageRegistryConfig},
deployment::{
Deployment, DeploymentImage, extract_registry_domain,
@@ -16,22 +16,23 @@ use komodo_client::{
permission::PermissionLevel,
server::Server,
update::{Log, Update},
user::User,
},
};
use periphery_client::api;
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::{
helpers::{
periphery_client,
query::{VariablesAndSecrets, get_variables_and_secrets},
registry_token,
swarm::swarm_request,
update::update_update,
},
monitor::update_cache_for_server,
permission::get_check_permissions,
resource,
monitor::{update_cache_for_server, update_cache_for_swarm},
resource::{self, setup_deployment_execution},
state::action_states,
};
@@ -49,10 +50,18 @@ impl super::BatchExecute for BatchDeploy {
}
impl Resolve<ExecuteArgs> for BatchDeploy {
#[instrument(name = "BatchDeploy", skip(user), fields(user_id = user.id))]
#[instrument(
"BatchDeploy",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchDeploy>(&self.pattern, user)
@@ -61,39 +70,30 @@ impl Resolve<ExecuteArgs> for BatchDeploy {
}
}
async fn setup_deployment_execution(
deployment: &str,
user: &User,
) -> anyhow::Result<(Deployment, Server)> {
let deployment = get_check_permissions::<Deployment>(
deployment,
user,
PermissionLevel::Execute.into(),
)
.await?;
if deployment.config.server_id.is_empty() {
return Err(anyhow!("Deployment has no Server configured"));
}
let server =
resource::get::<Server>(&deployment.config.server_id).await?;
if !server.config.enabled {
return Err(anyhow!("Attached Server is not enabled"));
}
Ok((deployment, server))
}
impl Resolve<ExecuteArgs> for Deploy {
#[instrument(name = "Deploy", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"Deploy",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
stop_signal = format!("{:?}", self.stop_signal),
stop_time = self.stop_time,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (mut deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
let (mut deployment, swarm_or_server) =
setup_deployment_execution(
&self.deployment,
user,
PermissionLevel::Execute.into(),
)
.await?;
// get the action state for the deployment (or insert default).
let action_state = action_states()
@@ -203,27 +203,55 @@ impl Resolve<ExecuteArgs> for Deploy {
update.version = version;
update_update(update.clone()).await?;
match periphery_client(&server)
.await?
.request(api::container::Deploy {
deployment,
stop_signal: self.stop_signal,
stop_time: self.stop_time,
registry_token,
replacers: secret_replacers.into_iter().collect(),
})
.await
{
Ok(log) => update.logs.push(log),
Err(e) => {
update.push_error_log(
"Deploy Container",
format_serror(&e.into()),
);
match swarm_or_server {
SwarmOrServer::Swarm(swarm) => {
match swarm_request(
&swarm.config.server_ids,
api::swarm::CreateSwarmService {
deployment,
registry_token,
replacers: secret_replacers.into_iter().collect(),
},
)
.await
{
Ok(logs) => {
update_cache_for_swarm(&swarm, true).await;
update.logs.extend(logs)
}
Err(e) => {
update.push_error_log(
"Create Swarm Service",
format_serror(&e.into()),
);
}
};
}
};
update_cache_for_server(&server, true).await;
SwarmOrServer::Server(server) => {
match periphery_client(&server)
.await?
.request(api::container::RunContainer {
deployment,
stop_signal: self.stop_signal,
stop_time: self.stop_time,
registry_token,
replacers: secret_replacers.into_iter().collect(),
})
.await
{
Ok(log) => {
update_cache_for_server(&server, true).await;
update.logs.push(log)
}
Err(e) => {
update.push_error_log(
"Deploy Container",
format_serror(&e.into()),
);
}
};
}
}
update.finalize();
update_update(update.clone()).await?;
@@ -243,6 +271,14 @@ fn pull_cache() -> &'static PullCache {
PULL_CACHE.get_or_init(Default::default)
}
#[instrument(
"PullDeploymentInner",
skip_all,
fields(
deployment = deployment.id,
server = server.id
)
)]
pub async fn pull_deployment_inner(
deployment: Deployment,
server: &Server,
@@ -358,13 +394,33 @@ pub async fn pull_deployment_inner(
}
impl Resolve<ExecuteArgs> for PullDeployment {
#[instrument(name = "PullDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PullDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
let (deployment, swarm_or_server) = setup_deployment_execution(
&self.deployment,
user,
PermissionLevel::Execute.into(),
)
.await?;
let SwarmOrServer::Server(server) = swarm_or_server else {
return Err(
anyhow!("PullDeployment should not be called for Deployment in Swarm Mode")
.status_code(StatusCode::BAD_REQUEST),
);
};
// get the action state for the deployment (or insert default).
let action_state = action_states()
@@ -392,13 +448,33 @@ impl Resolve<ExecuteArgs> for PullDeployment {
}
impl Resolve<ExecuteArgs> for StartDeployment {
#[instrument(name = "StartDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"StartDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
let (deployment, swarm_or_server) = setup_deployment_execution(
&self.deployment,
user,
PermissionLevel::Execute.into(),
)
.await?;
let SwarmOrServer::Server(server) = swarm_or_server else {
return Err(
anyhow!("StartDeployment should not be called for Deployment in Swarm Mode")
.status_code(StatusCode::BAD_REQUEST),
);
};
// get the action state for the deployment (or insert default).
let action_state = action_states()
@@ -440,13 +516,33 @@ impl Resolve<ExecuteArgs> for StartDeployment {
}
impl Resolve<ExecuteArgs> for RestartDeployment {
#[instrument(name = "RestartDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"RestartDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
let (deployment, swarm_or_server) = setup_deployment_execution(
&self.deployment,
user,
PermissionLevel::Execute.into(),
)
.await?;
let SwarmOrServer::Server(server) = swarm_or_server else {
return Err(
anyhow!("RestartDeployment should not be called for Deployment in Swarm Mode")
.status_code(StatusCode::BAD_REQUEST),
);
};
// get the action state for the deployment (or insert default).
let action_state = action_states()
@@ -490,13 +586,33 @@ impl Resolve<ExecuteArgs> for RestartDeployment {
}
impl Resolve<ExecuteArgs> for PauseDeployment {
#[instrument(name = "PauseDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PauseDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
let (deployment, swarm_or_server) = setup_deployment_execution(
&self.deployment,
user,
PermissionLevel::Execute.into(),
)
.await?;
let SwarmOrServer::Server(server) = swarm_or_server else {
return Err(
anyhow!("PauseDeployment should not be called for Deployment in Swarm Mode")
.status_code(StatusCode::BAD_REQUEST),
);
};
// get the action state for the deployment (or insert default).
let action_state = action_states()
@@ -538,13 +654,33 @@ impl Resolve<ExecuteArgs> for PauseDeployment {
}
impl Resolve<ExecuteArgs> for UnpauseDeployment {
#[instrument(name = "UnpauseDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"UnpauseDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
let (deployment, swarm_or_server) = setup_deployment_execution(
&self.deployment,
user,
PermissionLevel::Execute.into(),
)
.await?;
let SwarmOrServer::Server(server) = swarm_or_server else {
return Err(
anyhow!("UnpauseDeployment should not be called for Deployment in Swarm Mode")
.status_code(StatusCode::BAD_REQUEST),
);
};
// get the action state for the deployment (or insert default).
let action_state = action_states()
@@ -588,13 +724,35 @@ impl Resolve<ExecuteArgs> for UnpauseDeployment {
}
impl Resolve<ExecuteArgs> for StopDeployment {
#[instrument(name = "StopDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"StopDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
signal = format!("{:?}", self.signal),
time = self.time,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
let (deployment, swarm_or_server) = setup_deployment_execution(
&self.deployment,
user,
PermissionLevel::Execute.into(),
)
.await?;
let SwarmOrServer::Server(server) = swarm_or_server else {
return Err(
anyhow!("StopDeployment should not be called for Deployment in Swarm Mode")
.status_code(StatusCode::BAD_REQUEST),
);
};
// get the action state for the deployment (or insert default).
let action_state = action_states()
@@ -655,10 +813,18 @@ impl super::BatchExecute for BatchDestroyDeployment {
}
impl Resolve<ExecuteArgs> for BatchDestroyDeployment {
#[instrument(name = "BatchDestroyDeployment", skip(user), fields(user_id = user.id))]
#[instrument(
"BatchDestroyDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchDestroyDeployment>(
@@ -671,13 +837,28 @@ impl Resolve<ExecuteArgs> for BatchDestroyDeployment {
}
impl Resolve<ExecuteArgs> for DestroyDeployment {
#[instrument(name = "DestroyDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"DestroyDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
signal = format!("{:?}", self.signal),
time = self.time,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
let (deployment, swarm_or_server) = setup_deployment_execution(
&self.deployment,
user,
PermissionLevel::Execute.into(),
)
.await?;
// get the action state for the deployment (or insert default).
let action_state = action_states()
@@ -695,31 +876,61 @@ impl Resolve<ExecuteArgs> for DestroyDeployment {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let log = match periphery_client(&server)
.await?
.request(api::container::RemoveContainer {
name: deployment.name,
signal: self
.signal
.unwrap_or(deployment.config.termination_signal)
.into(),
time: self
.time
.unwrap_or(deployment.config.termination_timeout)
.into(),
})
.await
{
Ok(log) => log,
Err(e) => Log::error(
"stop container",
format_serror(&e.context("failed to stop container").into()),
),
let log = match swarm_or_server {
SwarmOrServer::Swarm(swarm) => {
match swarm_request(
&swarm.config.server_ids,
api::swarm::RemoveSwarmServices {
services: vec![deployment.name],
},
)
.await
{
Ok(log) => {
update_cache_for_swarm(&swarm, true).await;
log
}
Err(e) => Log::error(
"Remove Swarm Service",
format_serror(
&e.context("Failed to remove swarm service").into(),
),
),
}
}
SwarmOrServer::Server(server) => {
match periphery_client(&server)
.await?
.request(api::container::RemoveContainer {
name: deployment.name,
signal: self
.signal
.unwrap_or(deployment.config.termination_signal)
.into(),
time: self
.time
.unwrap_or(deployment.config.termination_timeout)
.into(),
})
.await
{
Ok(log) => {
update_cache_for_server(&server, true).await;
log
}
Err(e) => Log::error(
"Destroy Container",
format_serror(
&e.context("Failed to destroy container").into(),
),
),
}
}
};
update.logs.push(log);
update.finalize();
update_cache_for_server(&server, true).await;
update_update(update.clone()).await?;
Ok(update)

View File
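The deployment changes above dispatch on whether a Deployment targets a Swarm or a single Server, while the lifecycle handlers (Start/Restart/Pause/Stop/Pull) reject swarm-mode deployments with a `let`-`else` guard. A self-contained sketch of that shape, with simplified stand-in types rather than the komodo_client definitions:

```rust
/// Simplified stand-in for the SwarmOrServer returned by
/// setup_deployment_execution.
enum SwarmOrServer {
    Swarm(String),  // swarm name
    Server(String), // server name
}

/// Deploy handles both targets, like the match in Deploy::resolve.
fn deploy(target: &SwarmOrServer) -> String {
    match target {
        SwarmOrServer::Swarm(name) => format!("create swarm service on {name}"),
        SwarmOrServer::Server(name) => format!("run container on {name}"),
    }
}

/// Start (and friends) only make sense against a plain Server, so a
/// let-else guard bails early for swarm-mode deployments.
fn start(target: SwarmOrServer) -> Result<String, String> {
    let SwarmOrServer::Server(server) = target else {
        return Err(
            "StartDeployment should not be called for Deployment in Swarm Mode".into(),
        );
    };
    Ok(format!("start container on {server}"))
}

fn main() {
    let swarm = SwarmOrServer::Swarm("swarm-1".into());
    println!("{}", deploy(&swarm));
    println!("{:?}", start(swarm));
}
```

In the real handlers the guard additionally attaches `StatusCode::BAD_REQUEST` via serror; the error string is returned plainly here to keep the sketch dependency-free.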

@@ -1,13 +1,13 @@
use std::{fmt::Write as _, sync::OnceLock};
use anyhow::{Context, anyhow};
use command::run_komodo_command;
use command::run_komodo_standard_command;
use database::{
bson::{Document, doc},
mungos::find::find_collect,
};
use formatting::{bold, format_serror};
use futures::{StreamExt, stream::FuturesOrdered};
use futures_util::{StreamExt, stream::FuturesOrdered};
use komodo_client::{
api::execute::{
BackupCoreDatabase, ClearRepoCache, GlobalAutoUpdate,
@@ -45,13 +45,17 @@ fn clear_repo_cache_lock() -> &'static Mutex<()> {
impl Resolve<ExecuteArgs> for ClearRepoCache {
#[instrument(
name = "ClearRepoCache",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
"ClearRepoCache",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
@@ -120,13 +124,17 @@ fn backup_database_lock() -> &'static Mutex<()> {
impl Resolve<ExecuteArgs> for BackupCoreDatabase {
#[instrument(
name = "BackupCoreDatabase",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
"BackupCoreDatabase",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
@@ -143,7 +151,7 @@ impl Resolve<ExecuteArgs> for BackupCoreDatabase {
update_update(update.clone()).await?;
let res = run_komodo_command(
let res = run_komodo_standard_command(
"Backup Core Database",
None,
"km database backup --yes",
@@ -169,13 +177,17 @@ fn global_update_lock() -> &'static Mutex<()> {
impl Resolve<ExecuteArgs> for GlobalAutoUpdate {
#[instrument(
name = "GlobalAutoUpdate",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
"GlobalAutoUpdate",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
@@ -335,13 +347,17 @@ fn global_rotate_lock() -> &'static Mutex<()> {
impl Resolve<ExecuteArgs> for RotateAllServerKeys {
#[instrument(
name = "RotateAllServerKeys",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
"RotateAllServerKeys",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
@@ -445,13 +461,18 @@ impl Resolve<ExecuteArgs> for RotateAllServerKeys {
impl Resolve<ExecuteArgs> for RotateCoreKeys {
#[instrument(
name = "RotateCoreKeys",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
"RotateCoreKeys",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
force = self.force,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(

View File
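The maintenance executions above each serialize themselves behind a lazily-initialized global lock (`clear_repo_cache_lock`, `backup_database_lock`, `global_update_lock`, `global_rotate_lock`). With std's `OnceLock` the pattern looks like the sketch below; the real code uses an async Mutex, a std `Mutex` with `try_lock` is shown here to keep the example dependency-free:

```rust
use std::sync::{Mutex, OnceLock};

/// Lazily-initialized process-wide lock, like backup_database_lock().
fn backup_lock() -> &'static Mutex<()> {
    static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
    LOCK.get_or_init(|| Mutex::new(()))
}

/// Only one backup may run at a time; a second caller bails
/// immediately instead of queueing behind the first.
fn run_backup() -> Result<&'static str, &'static str> {
    let Ok(_guard) = backup_lock().try_lock() else {
        return Err("backup already in progress");
    };
    // ... perform the backup while the guard is held ...
    Ok("backup complete")
}

fn main() {
    println!("{:?}", run_backup());
}
```

The guard drops when `run_backup` returns, so the next caller proceeds normally.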

@@ -1,4 +1,4 @@
use std::{pin::Pin, time::Instant};
use std::pin::Pin;
use anyhow::Context;
use axum::{
@@ -8,7 +8,7 @@ use axum_extra::{TypedHeader, headers::ContentType};
use database::mungos::by_id::find_one_by_id;
use derive_variants::{EnumVariants, ExtractVariant};
use formatting::format_serror;
use futures::future::join_all;
use futures_util::future::join_all;
use komodo_client::{
api::execute::*,
entities::{
@@ -23,6 +23,7 @@ use response::JsonString;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::Json;
use strum::Display;
use typeshare::typeshare;
use uuid::Uuid;
@@ -42,6 +43,7 @@ mod procedure;
mod repo;
mod server;
mod stack;
mod swarm;
mod sync;
use super::Variant;
@@ -51,6 +53,9 @@ pub use {
};
pub struct ExecuteArgs {
/// The execution id.
/// Unique for every /execute call.
pub id: Uuid,
pub user: User,
pub update: Update,
}
@@ -59,35 +64,12 @@ pub struct ExecuteArgs {
#[derive(
Serialize, Deserialize, Debug, Clone, Resolve, EnumVariants,
)]
#[variant_derive(Debug)]
#[variant_derive(Debug, Display)]
#[args(ExecuteArgs)]
#[response(JsonString)]
#[error(serror::Error)]
#[serde(tag = "type", content = "params")]
pub enum ExecuteRequest {
// ==== SERVER ====
StartContainer(StartContainer),
RestartContainer(RestartContainer),
PauseContainer(PauseContainer),
UnpauseContainer(UnpauseContainer),
StopContainer(StopContainer),
DestroyContainer(DestroyContainer),
StartAllContainers(StartAllContainers),
RestartAllContainers(RestartAllContainers),
PauseAllContainers(PauseAllContainers),
UnpauseAllContainers(UnpauseAllContainers),
StopAllContainers(StopAllContainers),
PruneContainers(PruneContainers),
DeleteNetwork(DeleteNetwork),
PruneNetworks(PruneNetworks),
DeleteImage(DeleteImage),
PruneImages(PruneImages),
DeleteVolume(DeleteVolume),
PruneVolumes(PruneVolumes),
PruneDockerBuilders(PruneDockerBuilders),
PruneBuildx(PruneBuildx),
PruneSystem(PruneSystem),
// ==== STACK ====
DeployStack(DeployStack),
BatchDeployStack(BatchDeployStack),
@@ -138,12 +120,42 @@ pub enum ExecuteRequest {
RunAction(RunAction),
BatchRunAction(BatchRunAction),
// ==== SYNC ====
RunSync(RunSync),
// ==== ALERTER ====
TestAlerter(TestAlerter),
SendAlert(SendAlert),
// ==== SYNC ====
RunSync(RunSync),
// ==== SERVER ====
StartContainer(StartContainer),
RestartContainer(RestartContainer),
PauseContainer(PauseContainer),
UnpauseContainer(UnpauseContainer),
StopContainer(StopContainer),
DestroyContainer(DestroyContainer),
StartAllContainers(StartAllContainers),
RestartAllContainers(RestartAllContainers),
PauseAllContainers(PauseAllContainers),
UnpauseAllContainers(UnpauseAllContainers),
StopAllContainers(StopAllContainers),
PruneContainers(PruneContainers),
DeleteNetwork(DeleteNetwork),
PruneNetworks(PruneNetworks),
DeleteImage(DeleteImage),
PruneImages(PruneImages),
DeleteVolume(DeleteVolume),
PruneVolumes(PruneVolumes),
PruneDockerBuilders(PruneDockerBuilders),
PruneBuildx(PruneBuildx),
PruneSystem(PruneSystem),
// ==== SWARM ====
RemoveSwarmNodes(RemoveSwarmNodes),
RemoveSwarmStacks(RemoveSwarmStacks),
RemoveSwarmServices(RemoveSwarmServices),
RemoveSwarmConfigs(RemoveSwarmConfigs),
RemoveSwarmSecrets(RemoveSwarmSecrets),
// ==== MAINTENANCE ====
ClearRepoCache(ClearRepoCache),
@@ -203,7 +215,7 @@ pub fn inner_handler(
>,
> {
Box::pin(async move {
let req_id = Uuid::new_v4();
let task_id = Uuid::new_v4();
// Need to validate no cancel is active before any update is created.
// This ensures no double update created if Cancel is called more than once for the same request.
@@ -219,14 +231,14 @@ pub fn inner_handler(
// here either.
if update.operation == Operation::None {
return Ok(ExecutionResult::Batch(
task(req_id, request, user, update).await?,
task(task_id, request, user, update).await?,
));
}
// Spawn a task for the execution which continues
// running after this method returns.
let handle =
tokio::spawn(task(req_id, request, user, update.clone()));
tokio::spawn(task(task_id, request, user, update.clone()));
// Spawns another task to monitor the first for failures,
// and add the log to Update about it (which primary task can't do because it errored out)
@@ -235,11 +247,11 @@ pub fn inner_handler(
async move {
let log = match handle.await {
Ok(Err(e)) => {
warn!("/execute request {req_id} task error: {e:#}",);
warn!("/execute request {task_id} task error: {e:#}",);
Log::error("Task Error", format_serror(&e.into()))
}
Err(e) => {
warn!("/execute request {req_id} spawn error: {e:?}",);
warn!("/execute request {task_id} spawn error: {e:?}",);
Log::error("Spawn Error", format!("{e:#?}"))
}
_ => return,
@@ -273,40 +285,33 @@ pub fn inner_handler(
})
}
#[instrument(
name = "ExecuteRequest",
skip(user, update),
fields(
user_id = user.id,
update_id = update.id,
request = format!("{:?}", request.extract_variant()))
)
]
async fn task(
req_id: Uuid,
id: Uuid,
request: ExecuteRequest,
user: User,
update: Update,
) -> anyhow::Result<String> {
info!("/execute request {req_id} | user: {}", user.username);
let timer = Instant::now();
let variant = request.extract_variant();
let res = match request.resolve(&ExecuteArgs { user, update }).await
{
Err(e) => Err(e.error),
Ok(JsonString::Err(e)) => Err(
anyhow::Error::from(e).context("failed to serialize response"),
),
Ok(JsonString::Ok(res)) => Ok(res),
};
info!(
"/execute request {id} | {variant} | user: {}",
user.username
);
let res =
match request.resolve(&ExecuteArgs { user, update, id }).await {
Err(e) => Err(e.error),
Ok(JsonString::Err(e)) => Err(
anyhow::Error::from(e)
.context("failed to serialize response"),
),
Ok(JsonString::Ok(res)) => Ok(res),
};
if let Err(e) = &res {
warn!("/execute request {req_id} error: {e:#}");
warn!("/execute request {id} error: {e:#}");
}
let elapsed = timer.elapsed();
debug!("/execute request {req_id} | resolve time: {elapsed:?}");
res
}
@@ -315,6 +320,7 @@ trait BatchExecute {
fn single_request(name: String) -> ExecuteRequest;
}
#[instrument("BatchExecute", skip(user))]
async fn batch_execute<E: BatchExecute>(
pattern: &str,
user: &User,
@@ -327,6 +333,7 @@ async fn batch_execute<E: BatchExecute>(
&[],
)
.await?;
let futures = resources.into_iter().map(|resource| {
let user = user.clone();
async move {

View File
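`inner_handler` above spawns the execution as a detached task, then spawns a second task that awaits the first's JoinHandle and records an error log the failed task could not write itself (because it already errored or panicked). A std-thread sketch of that monitor shape, with the log reduced to a returned string:

```rust
use std::thread;

/// Join the worker and turn any failure into a log line, mirroring the
/// "Task Error" / "Spawn Error" branches in the monitor task.
fn monitor(
    handle: thread::JoinHandle<Result<String, String>>,
) -> Option<String> {
    match handle.join() {
        Ok(Err(e)) => Some(format!("Task Error: {e}")),
        Err(_) => Some("Spawn Error: worker panicked".to_string()),
        Ok(Ok(_)) => None, // success: nothing to record
    }
}

fn main() {
    let ok = thread::spawn(|| Ok::<_, String>("done".to_string()));
    let err = thread::spawn(|| Err::<String, _>("boom".to_string()));
    println!("{:?}", monitor(ok));
    println!("{:?}", monitor(err));
}
```

In the real code the monitor pushes the log onto the Update in the database; returning the log here just keeps the sketch self-contained.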

@@ -38,7 +38,11 @@ impl super::BatchExecute for BatchRunProcedure {
}
impl Resolve<ExecuteArgs> for BatchRunProcedure {
#[instrument(name = "BatchRunProcedure", skip(user), fields(user_id = user.id))]
#[instrument(
"BatchRunProcedure",
skip_all,
fields(operator = user.id)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
@@ -51,10 +55,19 @@ impl Resolve<ExecuteArgs> for BatchRunProcedure {
}
impl Resolve<ExecuteArgs> for RunProcedure {
#[instrument(name = "RunProcedure", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"RunProcedure",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
procedure = self.procedure,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
Ok(
resolve_inner(self.procedure, user.clone(), update.clone())
@@ -146,7 +159,6 @@ fn resolve_inner(
update_update(update.clone()).await?;
if !update.success && procedure.config.failure_alert {
warn!("procedure unsuccessful, alerting...");
let target = update.target.clone();
tokio::spawn(async move {
let alert = Alert {

View File

@@ -30,7 +30,7 @@ use crate::{
alert::send_alerts,
api::write::WriteArgs,
helpers::{
builder::{cleanup_builder_instance, get_builder_periphery},
builder::{cleanup_builder_instance, connect_builder_periphery},
channel::repo_cancel_channel,
git_token, periphery_client,
query::{VariablesAndSecrets, get_variables_and_secrets},
@@ -51,10 +51,18 @@ impl super::BatchExecute for BatchCloneRepo {
}
impl Resolve<ExecuteArgs> for BatchCloneRepo {
#[instrument(name = "BatchCloneRepo", skip( user), fields(user_id = user.id))]
#[instrument(
"BatchCloneRepo",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchCloneRepo>(&self.pattern, user)
@@ -64,10 +72,19 @@ impl Resolve<ExecuteArgs> for BatchCloneRepo {
}
impl Resolve<ExecuteArgs> for CloneRepo {
#[instrument(name = "CloneRepo", skip( user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"CloneRepo",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
repo = self.repo,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = get_check_permissions::<Repo>(
&self.repo,
@@ -165,10 +182,18 @@ impl super::BatchExecute for BatchPullRepo {
}
impl Resolve<ExecuteArgs> for BatchPullRepo {
#[instrument(name = "BatchPullRepo", skip(user), fields(user_id = user.id))]
#[instrument(
"BatchPullRepo",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern
)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchPullRepo>(&self.pattern, user)
@@ -178,10 +203,19 @@ impl Resolve<ExecuteArgs> for BatchPullRepo {
}
impl Resolve<ExecuteArgs> for PullRepo {
#[instrument(name = "PullRepo", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PullRepo",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
repo = self.repo,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = get_check_permissions::<Repo>(
&self.repo,
@@ -275,7 +309,11 @@ impl Resolve<ExecuteArgs> for PullRepo {
}
}
#[instrument(skip_all, fields(update_id = update.id))]
#[instrument(
"HandleRepoEarlyReturn",
skip_all,
fields(update_id = update.id)
)]
async fn handle_repo_update_return(
update: Update,
) -> serror::Result<Update> {
@@ -297,7 +335,7 @@ async fn handle_repo_update_return(
Ok(update)
}
#[instrument]
#[instrument("UpdateLastPulledTime")]
async fn update_last_pulled_time(repo_name: &str) {
let res = db_client()
.repos
@@ -321,10 +359,18 @@ impl super::BatchExecute for BatchBuildRepo {
}
impl Resolve<ExecuteArgs> for BatchBuildRepo {
#[instrument(name = "BatchBuildRepo", skip(user), fields(user_id = user.id))]
#[instrument(
"BatchBuildRepo",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchBuildRepo>(&self.pattern, user)
@@ -334,10 +380,19 @@ impl Resolve<ExecuteArgs> for BatchBuildRepo {
}
impl Resolve<ExecuteArgs> for BuildRepo {
#[instrument(name = "BuildRepo", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"BuildRepo",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
repo = self.repo,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = get_check_permissions::<Repo>(
&self.repo,
@@ -419,7 +474,7 @@ impl Resolve<ExecuteArgs> for BuildRepo {
// GET BUILDER PERIPHERY
let (periphery, cleanup_data) = match get_builder_periphery(
let (periphery, cleanup_data) = match connect_builder_periphery(
repo.name.clone(),
None,
builder,
@@ -531,7 +586,6 @@ impl Resolve<ExecuteArgs> for BuildRepo {
update_update(update.clone()).await?;
if !update.success {
warn!("repo build unsuccessful, alerting...");
let target = update.target.clone();
tokio::spawn(async move {
let alert = Alert {
@@ -554,7 +608,7 @@ impl Resolve<ExecuteArgs> for BuildRepo {
}
}
#[instrument(skip(update))]
#[instrument("HandleRepoBuildEarlyReturn", skip(update))]
async fn handle_builder_early_return(
mut update: Update,
repo_id: String,
@@ -578,7 +632,6 @@ async fn handle_builder_early_return(
}
update_update(update.clone()).await?;
if !update.success && !is_cancel {
warn!("repo build unsuccessful, alerting...");
let target = update.target.clone();
tokio::spawn(async move {
let alert = Alert {
@@ -599,7 +652,6 @@ async fn handle_builder_early_return(
Ok(update)
}
#[instrument(skip_all)]
pub async fn validate_cancel_repo_build(
request: &ExecuteRequest,
) -> anyhow::Result<()> {
@@ -649,10 +701,19 @@ pub async fn validate_cancel_repo_build(
}
impl Resolve<ExecuteArgs> for CancelRepoBuild {
#[instrument(name = "CancelRepoBuild", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"CancelRepoBuild",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
repo = self.repo,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let repo = get_check_permissions::<Repo>(
&self.repo,
@@ -709,6 +770,13 @@ impl Resolve<ExecuteArgs> for CancelRepoBuild {
}
}
#[instrument(
"Interpolate",
skip_all,
fields(
skip_secret_interp = repo.config.skip_secret_interp
)
)]
async fn interpolate(
repo: &mut Repo,
update: &mut Update,

View File
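Across the handlers in the file above, the recurring change swaps `skip(user, update)` instrumentation for `skip_all` plus explicit `fields(...)`, and threads a new execution `id` through `ExecuteArgs` (`ExecuteArgs { user, update, id }`). A minimal std-only sketch of the destructuring side, using simplified stand-in types (the real `ExecuteArgs`, `User`, and `Update` live in the Komodo crates):

```rust
// Simplified stand-ins for the real Komodo types; field names mirror
// the `ExecuteArgs { user, update, id }` destructuring in the diff.
struct User { id: String }
struct Update { id: String }
struct ExecuteArgs { user: User, update: Update, id: u128 }

// What the new `fields(id = ..., operator = ..., update_id = ...)`
// span attributes record, rendered here as a plain string.
fn span_fields(args: &ExecuteArgs) -> String {
    let ExecuteArgs { user, update, id } = args;
    format!("id={id} operator={} update_id={}", user.id, update.id)
}

fn main() {
    let args = ExecuteArgs {
        user: User { id: "u1".into() },
        update: Update { id: "up1".into() },
        id: 42,
    };
    println!("{}", span_fields(&args));
    // prints: id=42 operator=u1 update_id=up1
}
```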

@@ -22,10 +22,20 @@ use crate::{
use super::ExecuteArgs;
impl Resolve<ExecuteArgs> for StartContainer {
#[instrument(name = "StartContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"StartContainer",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
container = self.container,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -60,8 +70,8 @@ impl Resolve<ExecuteArgs> for StartContainer {
{
Ok(log) => log,
Err(e) => Log::error(
"start container",
format_serror(&e.context("failed to start container").into()),
"Start Container",
format_serror(&e.context("Failed to start container").into()),
),
};
@@ -76,10 +86,20 @@ impl Resolve<ExecuteArgs> for StartContainer {
}
impl Resolve<ExecuteArgs> for RestartContainer {
#[instrument(name = "RestartContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"RestartContainer",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
container = self.container,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -114,9 +134,9 @@ impl Resolve<ExecuteArgs> for RestartContainer {
{
Ok(log) => log,
Err(e) => Log::error(
"restart container",
"Restart Container",
format_serror(
&e.context("failed to restart container").into(),
&e.context("Failed to restart container").into(),
),
),
};
@@ -132,10 +152,20 @@ impl Resolve<ExecuteArgs> for RestartContainer {
}
impl Resolve<ExecuteArgs> for PauseContainer {
#[instrument(name = "PauseContainer", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PauseContainer",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
container = self.container,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -170,8 +200,8 @@ impl Resolve<ExecuteArgs> for PauseContainer {
{
Ok(log) => log,
Err(e) => Log::error(
"pause container",
format_serror(&e.context("failed to pause container").into()),
"Pause Container",
format_serror(&e.context("Failed to pause container").into()),
),
};
@@ -186,10 +216,20 @@ impl Resolve<ExecuteArgs> for PauseContainer {
}
impl Resolve<ExecuteArgs> for UnpauseContainer {
#[instrument(name = "UnpauseContainer", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"UnpauseContainer",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
container = self.container,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -224,9 +264,9 @@ impl Resolve<ExecuteArgs> for UnpauseContainer {
{
Ok(log) => log,
Err(e) => Log::error(
"unpause container",
"Unpause Container",
format_serror(
&e.context("failed to unpause container").into(),
&e.context("Failed to unpause container").into(),
),
),
};
@@ -242,10 +282,22 @@ impl Resolve<ExecuteArgs> for UnpauseContainer {
}
impl Resolve<ExecuteArgs> for StopContainer {
#[instrument(name = "StopContainer", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"StopContainer",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
container = self.container,
signal = format!("{:?}", self.signal),
time = self.time,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -282,8 +334,8 @@ impl Resolve<ExecuteArgs> for StopContainer {
{
Ok(log) => log,
Err(e) => Log::error(
"stop container",
format_serror(&e.context("failed to stop container").into()),
"Stop Container",
format_serror(&e.context("Failed to stop container").into()),
),
};
@@ -298,10 +350,22 @@ impl Resolve<ExecuteArgs> for StopContainer {
}
impl Resolve<ExecuteArgs> for DestroyContainer {
#[instrument(name = "DestroyContainer", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"DestroyContainer",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
container = self.container,
signal = format!("{:?}", self.signal),
time = self.time,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let DestroyContainer {
server,
@@ -344,8 +408,10 @@ impl Resolve<ExecuteArgs> for DestroyContainer {
{
Ok(log) => log,
Err(e) => Log::error(
"stop container",
format_serror(&e.context("failed to stop container").into()),
"Remove Container",
format_serror(
&e.context("Failed to remove container").into(),
),
),
};
@@ -360,10 +426,19 @@ impl Resolve<ExecuteArgs> for DestroyContainer {
}
impl Resolve<ExecuteArgs> for StartAllContainers {
#[instrument(name = "StartAllContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"StartAllContainers",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -391,13 +466,13 @@ impl Resolve<ExecuteArgs> for StartAllContainers {
.await?
.request(api::container::StartAllContainers {})
.await
.context("failed to start all containers on host")?;
.context("Failed to start all containers on host")?;
update.logs.extend(logs);
if all_logs_success(&update.logs) {
update.push_simple_log(
"start all containers",
"Start All Containers",
String::from("All containers have been started on the host."),
);
}
@@ -411,10 +486,19 @@ impl Resolve<ExecuteArgs> for StartAllContainers {
}
impl Resolve<ExecuteArgs> for RestartAllContainers {
#[instrument(name = "RestartAllContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"RestartAllContainers",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -442,13 +526,13 @@ impl Resolve<ExecuteArgs> for RestartAllContainers {
.await?
.request(api::container::RestartAllContainers {})
.await
.context("failed to restart all containers on host")?;
.context("Failed to restart all containers on host")?;
update.logs.extend(logs);
if all_logs_success(&update.logs) {
update.push_simple_log(
"restart all containers",
"Restart All Containers",
String::from(
"All containers have been restarted on the host.",
),
@@ -464,10 +548,19 @@ impl Resolve<ExecuteArgs> for RestartAllContainers {
}
impl Resolve<ExecuteArgs> for PauseAllContainers {
#[instrument(name = "PauseAllContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PauseAllContainers",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -495,13 +588,13 @@ impl Resolve<ExecuteArgs> for PauseAllContainers {
.await?
.request(api::container::PauseAllContainers {})
.await
.context("failed to pause all containers on host")?;
.context("Failed to pause all containers on host")?;
update.logs.extend(logs);
if all_logs_success(&update.logs) {
update.push_simple_log(
"pause all containers",
"Pause All Containers",
String::from("All containers have been paused on the host."),
);
}
@@ -515,10 +608,19 @@ impl Resolve<ExecuteArgs> for PauseAllContainers {
}
impl Resolve<ExecuteArgs> for UnpauseAllContainers {
#[instrument(name = "UnpauseAllContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"UnpauseAllContainers",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -546,13 +648,13 @@ impl Resolve<ExecuteArgs> for UnpauseAllContainers {
.await?
.request(api::container::UnpauseAllContainers {})
.await
.context("failed to unpause all containers on host")?;
.context("Failed to unpause all containers on host")?;
update.logs.extend(logs);
if all_logs_success(&update.logs) {
update.push_simple_log(
"unpause all containers",
"Unpause All Containers",
String::from(
"All containers have been unpaused on the host.",
),
@@ -568,10 +670,19 @@ impl Resolve<ExecuteArgs> for UnpauseAllContainers {
}
impl Resolve<ExecuteArgs> for StopAllContainers {
#[instrument(name = "StopAllContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"StopAllContainers",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -599,13 +710,13 @@ impl Resolve<ExecuteArgs> for StopAllContainers {
.await?
.request(api::container::StopAllContainers {})
.await
.context("failed to stop all containers on host")?;
.context("Failed to stop all containers on host")?;
update.logs.extend(logs);
if all_logs_success(&update.logs) {
update.push_simple_log(
"stop all containers",
"Stop All Containers",
String::from("All containers have been stopped on the host."),
);
}
@@ -619,10 +730,19 @@ impl Resolve<ExecuteArgs> for StopAllContainers {
}
impl Resolve<ExecuteArgs> for PruneContainers {
#[instrument(name = "PruneContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PruneContainers",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -652,14 +772,14 @@ impl Resolve<ExecuteArgs> for PruneContainers {
.request(api::container::PruneContainers {})
.await
.context(format!(
"failed to prune containers on server {}",
"Failed to prune containers on server {}",
server.name
)) {
Ok(log) => log,
Err(e) => Log::error(
"prune containers",
"Prune Containers",
format_serror(
&e.context("failed to prune containers").into(),
&e.context("Failed to prune containers").into(),
),
),
};
@@ -675,10 +795,20 @@ impl Resolve<ExecuteArgs> for PruneContainers {
}
impl Resolve<ExecuteArgs> for DeleteNetwork {
#[instrument(name = "DeleteNetwork", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"DeleteNetwork",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
network = self.name
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -699,15 +829,15 @@ impl Resolve<ExecuteArgs> for DeleteNetwork {
})
.await
.context(format!(
"failed to delete network {} on server {}",
"Failed to delete network {} on server {}",
self.name, server.name
)) {
Ok(log) => log,
Err(e) => Log::error(
"delete network",
"Delete Network",
format_serror(
&e.context(format!(
"failed to delete network {}",
"Failed to delete network {}",
self.name
))
.into(),
@@ -726,10 +856,19 @@ impl Resolve<ExecuteArgs> for DeleteNetwork {
}
impl Resolve<ExecuteArgs> for PruneNetworks {
#[instrument(name = "PruneNetworks", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PruneNetworks",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -759,13 +898,13 @@ impl Resolve<ExecuteArgs> for PruneNetworks {
.request(api::docker::PruneNetworks {})
.await
.context(format!(
"failed to prune networks on server {}",
"Failed to prune networks on server {}",
server.name
)) {
Ok(log) => log,
Err(e) => Log::error(
"prune networks",
format_serror(&e.context("failed to prune networks").into()),
"Prune Networks",
format_serror(&e.context("Failed to prune networks").into()),
),
};
@@ -780,10 +919,20 @@ impl Resolve<ExecuteArgs> for PruneNetworks {
}
impl Resolve<ExecuteArgs> for DeleteImage {
#[instrument(name = "DeleteImage", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"DeleteImage",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
image = self.name,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -804,14 +953,14 @@ impl Resolve<ExecuteArgs> for DeleteImage {
})
.await
.context(format!(
"failed to delete image {} on server {}",
"Failed to delete image {} on server {}",
self.name, server.name
)) {
Ok(log) => log,
Err(e) => Log::error(
"delete image",
format_serror(
&e.context(format!("failed to delete image {}", self.name))
&e.context(format!("Failed to delete image {}", self.name))
.into(),
),
),
@@ -828,10 +977,19 @@ impl Resolve<ExecuteArgs> for DeleteImage {
}
impl Resolve<ExecuteArgs> for PruneImages {
#[instrument(name = "PruneImages", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PruneImages",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -861,9 +1019,9 @@ impl Resolve<ExecuteArgs> for PruneImages {
match periphery.request(api::docker::PruneImages {}).await {
Ok(log) => log,
Err(e) => Log::error(
"prune images",
"Prune Images",
format!(
"failed to prune images on server {} | {e:#?}",
"Failed to prune images on server {} | {e:#?}",
server.name
),
),
@@ -880,10 +1038,20 @@ impl Resolve<ExecuteArgs> for PruneImages {
}
impl Resolve<ExecuteArgs> for DeleteVolume {
#[instrument(name = "DeleteVolume", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"DeleteVolume",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
volume = self.name,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -904,7 +1072,7 @@ impl Resolve<ExecuteArgs> for DeleteVolume {
})
.await
.context(format!(
"failed to delete volume {} on server {}",
"Failed to delete volume {} on server {}",
self.name, server.name
)) {
Ok(log) => log,
@@ -912,7 +1080,7 @@ impl Resolve<ExecuteArgs> for DeleteVolume {
"delete volume",
format_serror(
&e.context(format!(
"failed to delete volume {}",
"Failed to delete volume {}",
self.name
))
.into(),
@@ -931,10 +1099,19 @@ impl Resolve<ExecuteArgs> for DeleteVolume {
}
impl Resolve<ExecuteArgs> for PruneVolumes {
#[instrument(name = "PruneVolumes", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PruneVolumes",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -964,9 +1141,9 @@ impl Resolve<ExecuteArgs> for PruneVolumes {
match periphery.request(api::docker::PruneVolumes {}).await {
Ok(log) => log,
Err(e) => Log::error(
"prune volumes",
"Prune Volumes",
format!(
"failed to prune volumes on server {} | {e:#?}",
"Failed to prune volumes on server {} | {e:#?}",
server.name
),
),
@@ -983,10 +1160,19 @@ impl Resolve<ExecuteArgs> for PruneVolumes {
}
impl Resolve<ExecuteArgs> for PruneDockerBuilders {
#[instrument(name = "PruneDockerBuilders", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PruneDockerBuilders",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -1016,9 +1202,9 @@ impl Resolve<ExecuteArgs> for PruneDockerBuilders {
match periphery.request(api::build::PruneBuilders {}).await {
Ok(log) => log,
Err(e) => Log::error(
"prune builders",
"Prune Builders",
format!(
"failed to docker builder prune on server {} | {e:#?}",
"Failed to docker builder prune on server {} | {e:#?}",
server.name
),
),
@@ -1035,10 +1221,19 @@ impl Resolve<ExecuteArgs> for PruneDockerBuilders {
}
impl Resolve<ExecuteArgs> for PruneBuildx {
#[instrument(name = "PruneBuildx", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PruneBuildx",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -1068,9 +1263,9 @@ impl Resolve<ExecuteArgs> for PruneBuildx {
match periphery.request(api::build::PruneBuildx {}).await {
Ok(log) => log,
Err(e) => Log::error(
"prune buildx",
"Prune Buildx",
format!(
"failed to docker buildx prune on server {} | {e:#?}",
"Failed to docker buildx prune on server {} | {e:#?}",
server.name
),
),
@@ -1087,10 +1282,19 @@ impl Resolve<ExecuteArgs> for PruneBuildx {
}
impl Resolve<ExecuteArgs> for PruneSystem {
#[instrument(name = "PruneSystem", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PruneSystem",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
server = self.server,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -1119,9 +1323,9 @@ impl Resolve<ExecuteArgs> for PruneSystem {
let log = match periphery.request(api::PruneSystem {}).await {
Ok(log) => log,
Err(e) => Log::error(
"prune system",
"Prune System",
format!(
"failed to docker system prune on server {} | {e:#?}",
"Failed to docker system prune on server {} | {e:#?}",
server.name
),
),

View File

@@ -1,6 +1,6 @@
use std::{collections::HashSet, str::FromStr};
use anyhow::Context;
use anyhow::{Context, anyhow};
use database::mungos::mongodb::bson::{
doc, oid::ObjectId, to_bson, to_document,
};
@@ -9,7 +9,7 @@ use interpolate::Interpolator;
use komodo_client::{
api::{execute::*, write::RefreshStackCache},
entities::{
FileContents,
FileContents, SwarmOrServer,
permission::PermissionLevel,
repo::Repo,
server::Server,
@@ -20,8 +20,13 @@ use komodo_client::{
user::User,
},
};
use periphery_client::api::compose::*;
use periphery_client::api::{
DeployStackResponse, compose::*, swarm::DeploySwarmStack,
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError as _;
use uuid::Uuid;
use crate::{
api::write::WriteArgs,
@@ -29,14 +34,20 @@ use crate::{
periphery_client,
query::{VariablesAndSecrets, get_variables_and_secrets},
stack_git_token,
swarm::swarm_request,
update::{
add_update_without_send, init_execution_update, update_update,
},
},
monitor::update_cache_for_server,
monitor::{update_cache_for_server, update_cache_for_swarm},
permission::get_check_permissions,
resource,
stack::{execute::execute_compose, get_stack_and_server},
stack::{
execute::{
execute_compose, execute_compose_with_stack_and_server,
},
setup_stack_execution,
},
state::{action_states, db_client},
};
@@ -54,10 +65,18 @@ impl super::BatchExecute for BatchDeployStack {
}
impl Resolve<ExecuteArgs> for BatchDeployStack {
#[instrument(name = "BatchDeployStack", skip(user), fields(user_id = user.id))]
#[instrument(
"BatchDeployStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchDeployStack>(&self.pattern, user)
@@ -67,16 +86,26 @@ impl Resolve<ExecuteArgs> for BatchDeployStack {
}
impl Resolve<ExecuteArgs> for DeployStack {
#[instrument(name = "DeployStack", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"DeployStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
stack = self.stack,
services = format!("{:?}", self.services),
stop_time = self.stop_time,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (mut stack, server) = get_stack_and_server(
let (mut stack, swarm_or_server) = setup_stack_execution(
&self.stack,
user,
PermissionLevel::Execute.into(),
true,
)
.await?;
@@ -145,27 +174,44 @@ impl Resolve<ExecuteArgs> for DeployStack {
Default::default()
};
let ComposeUpResponse {
let DeployStackResponse {
logs,
deployed,
services,
file_contents,
missing_files,
remote_errors,
compose_config,
merged_config,
commit_hash,
commit_message,
} = periphery_client(&server)
.await?
.request(ComposeUp {
stack: stack.clone(),
services: self.services,
repo,
git_token,
registry_token,
replacers: secret_replacers.into_iter().collect(),
})
.await?;
} = match &swarm_or_server {
SwarmOrServer::Swarm(swarm) => {
swarm_request(
&swarm.config.server_ids,
DeploySwarmStack {
stack: stack.clone(),
repo,
git_token,
registry_token,
replacers: secret_replacers.into_iter().collect(),
},
)
.await?
}
SwarmOrServer::Server(server) => {
periphery_client(server)
.await?
.request(ComposeUp {
stack: stack.clone(),
services: self.services,
repo,
git_token,
registry_token,
replacers: secret_replacers.into_iter().collect(),
})
.await?
}
};
update.logs.extend(logs);
@@ -199,7 +245,7 @@ impl Resolve<ExecuteArgs> for DeployStack {
})
.collect(),
),
compose_config,
merged_config,
commit_hash.clone(),
commit_message.clone(),
)
@@ -237,7 +283,7 @@ impl Resolve<ExecuteArgs> for DeployStack {
};
let info = to_document(&info)
.context("failed to serialize stack info to bson")?;
.context("Failed to serialize stack info to bson")?;
db_client()
.stacks
@@ -246,22 +292,29 @@ impl Resolve<ExecuteArgs> for DeployStack {
doc! { "$set": { "info": info } },
)
.await
.context("failed to update stack info on db")?;
.context("Failed to update stack info on db")?;
anyhow::Ok(())
};
// This will be weird with single service deploys. Come back to it.
if let Err(e) = update_info.await {
update.push_error_log(
"refresh stack info",
"Refresh Stack Info",
format_serror(
&e.context("failed to refresh stack info on db").into(),
&e.context("Failed to refresh stack info on db").into(),
),
)
}
// Ensure cached stack state up to date by updating server cache
update_cache_for_server(&server, true).await;
match swarm_or_server {
SwarmOrServer::Swarm(swarm) => {
update_cache_for_swarm(&swarm, true).await;
}
SwarmOrServer::Server(server) => {
update_cache_for_server(&server, true).await;
}
}
update.finalize();
update_update(update.clone()).await?;
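The `DeployStack` hunk above replaces the direct `periphery_client(&server)` call with a dispatch on `SwarmOrServer`, and the cache refresh at the end mirrors the same split. A minimal sketch of that dispatch shape, with hypothetical simplified variants standing in for the real swarm/server types:

```rust
// Hypothetical simplified version of the SwarmOrServer dispatch:
// swarm stacks fan out across the swarm's configured server ids,
// while plain stacks talk to a single server's periphery.
enum SwarmOrServer {
    Swarm { server_ids: Vec<String> },
    Server { name: String },
}

fn describe_deploy(target: &SwarmOrServer) -> String {
    match target {
        SwarmOrServer::Swarm { server_ids } => {
            format!("swarm deploy via {} candidate servers", server_ids.len())
        }
        SwarmOrServer::Server { name } => format!("compose up on {name}"),
    }
}

fn main() {
    let swarm = SwarmOrServer::Swarm {
        server_ids: vec!["a".into(), "b".into()],
    };
    println!("{}", describe_deploy(&swarm));
    // prints: swarm deploy via 2 candidate servers
}
```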
@@ -281,10 +334,18 @@ impl super::BatchExecute for BatchDeployStackIfChanged {
}
impl Resolve<ExecuteArgs> for BatchDeployStackIfChanged {
#[instrument(name = "BatchDeployStackIfChanged", skip(user), fields(user_id = user.id))]
#[instrument(
"BatchDeployStackIfChanged",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchDeployStackIfChanged>(
@@ -297,10 +358,20 @@ impl Resolve<ExecuteArgs> for BatchDeployStackIfChanged {
}
impl Resolve<ExecuteArgs> for DeployStackIfChanged {
#[instrument(name = "DeployStackIfChanged", skip(user, update), fields(user_id = user.id))]
#[instrument(
"DeployStackIfChanged",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
stack = self.stack,
stop_time = self.stop_time,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let stack = get_check_permissions::<Stack>(
&self.stack,
@@ -358,6 +429,7 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
.resolve(&ExecuteArgs {
user: user.clone(),
update,
id: *id,
})
.await
}
@@ -467,6 +539,14 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
}
}
#[instrument(
"DeployStackServices",
skip_all,
fields(
stack = stack,
services = format!("{services:?}")
)
)]
async fn deploy_services(
stack: String,
services: Vec<String>,
@@ -488,10 +568,19 @@ async fn deploy_services(
.resolve(&ExecuteArgs {
user: user.clone(),
update,
id: Uuid::new_v4(),
})
.await
}
#[instrument(
"RestartStackServices",
skip_all,
fields(
stack = stack,
services = format!("{services:?}")
)
)]
async fn restart_services(
stack: String,
services: Vec<String>,
@@ -510,6 +599,7 @@ async fn restart_services(
.resolve(&ExecuteArgs {
user: user.clone(),
update,
id: Uuid::new_v4(),
})
.await
}
@@ -526,6 +616,11 @@ async fn restart_services(
/// Changes to config files after restart is applied should
/// be taken as the deployed contents, otherwise next changed check
/// will restart service again for no reason.
#[instrument(
"UpdateStackDeployedContents",
skip_all,
fields(stack = id)
)]
async fn update_deployed_contents_with_latest(
id: &str,
contents: Option<Vec<StackRemoteFileContents>>,
@@ -663,10 +758,18 @@ impl super::BatchExecute for BatchPullStack {
}
impl Resolve<ExecuteArgs> for BatchPullStack {
#[instrument(name = "BatchPullStack", skip(user), fields(user_id = user.id))]
#[instrument(
"BatchPullStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchPullStack>(&self.pattern, user)
@@ -700,6 +803,14 @@ async fn maybe_pull_stack(
Ok(())
}
#[instrument(
"PullStackInner",
skip_all,
fields(
stack = stack.id,
services = format!("{services:?}"),
)
)]
pub async fn pull_stack_inner(
mut stack: Stack,
services: Vec<String>,
@@ -769,19 +880,37 @@ pub async fn pull_stack_inner(
}
impl Resolve<ExecuteArgs> for PullStack {
#[instrument(name = "PullStack", skip(user, update), fields(user_id = user.id))]
#[instrument(
"PullStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
stack = self.stack,
services = format!("{:?}", self.services),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (stack, server) = get_stack_and_server(
let (stack, swarm_or_server) = setup_stack_execution(
&self.stack,
user,
PermissionLevel::Execute.into(),
true,
)
.await?;
let SwarmOrServer::Server(server) = swarm_or_server else {
return Err(
anyhow!(
"PullStack should not be called for Stack in Swarm Mode"
)
.status_code(StatusCode::BAD_REQUEST),
);
};
let repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
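The `PullStack` hunk above adds a `let ... else` guard that rejects swarm-mode stacks with a 400 before doing any work. The same shape with plain std types (the error type and status-code handling are simplified here):

```rust
enum SwarmOrServer {
    Swarm,
    Server(String),
}

// Mirrors the `let SwarmOrServer::Server(server) = swarm_or_server else`
// guard: bail out early unless the stack targets a single server.
fn require_server(target: SwarmOrServer) -> Result<String, String> {
    let SwarmOrServer::Server(server) = target else {
        return Err(
            "PullStack should not be called for Stack in Swarm Mode".into(),
        );
    };
    Ok(server)
}

fn main() {
    assert_eq!(require_server(SwarmOrServer::Server("s1".into())), Ok("s1".into()));
    assert!(require_server(SwarmOrServer::Swarm).is_err());
    println!("ok");
}
```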
@@ -822,10 +951,20 @@ impl Resolve<ExecuteArgs> for PullStack {
}
impl Resolve<ExecuteArgs> for StartStack {
#[instrument(name = "StartStack", skip(user, update), fields(user_id = user.id))]
#[instrument(
"StartStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
stack = self.stack,
services = format!("{:?}", self.services),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<StartStack>(
&self.stack,
@@ -841,10 +980,20 @@ impl Resolve<ExecuteArgs> for StartStack {
}
impl Resolve<ExecuteArgs> for RestartStack {
#[instrument(name = "RestartStack", skip(user, update), fields(user_id = user.id))]
#[instrument(
"RestartStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
stack = self.stack,
services = format!("{:?}", self.services),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<RestartStack>(
&self.stack,
@@ -862,10 +1011,20 @@ impl Resolve<ExecuteArgs> for RestartStack {
}
impl Resolve<ExecuteArgs> for PauseStack {
#[instrument(name = "PauseStack", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"PauseStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
stack = self.stack,
services = format!("{:?}", self.services),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<PauseStack>(
&self.stack,
@@ -881,10 +1040,20 @@ impl Resolve<ExecuteArgs> for PauseStack {
}
impl Resolve<ExecuteArgs> for UnpauseStack {
#[instrument(name = "UnpauseStack", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"UnpauseStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
stack = self.stack,
services = format!("{:?}", self.services),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<UnpauseStack>(
&self.stack,
@@ -900,10 +1069,20 @@ impl Resolve<ExecuteArgs> for UnpauseStack {
}
impl Resolve<ExecuteArgs> for StopStack {
#[instrument(name = "StopStack", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"StopStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
stack = self.stack,
services = format!("{:?}", self.services),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<StopStack>(
&self.stack,
@@ -931,10 +1110,18 @@ impl super::BatchExecute for BatchDestroyStack {
}
impl Resolve<ExecuteArgs> for BatchDestroyStack {
#[instrument(name = "BatchDestroyStack", skip(user), fields(user_id = user.id))]
#[instrument(
"BatchDestroyStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
super::batch_execute::<BatchDestroyStack>(&self.pattern, user)
.await
@@ -943,38 +1130,130 @@ impl Resolve<ExecuteArgs> for BatchDestroyStack {
}
impl Resolve<ExecuteArgs> for DestroyStack {
#[instrument(name = "DestroyStack", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"DestroyStack",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
stack = self.stack,
services = format!("{:?}", self.services),
remove_orphans = self.remove_orphans,
stop_time = self.stop_time,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<DestroyStack>(
let (stack, swarm_or_server) = setup_stack_execution(
&self.stack,
self.services,
user,
|state| state.destroying = true,
update.clone(),
(self.stop_time, self.remove_orphans),
PermissionLevel::Execute.into(),
)
.await
.map_err(Into::into)
.await?;
match swarm_or_server {
SwarmOrServer::Swarm(swarm) => {
if !self.services.is_empty() {
return Err(
anyhow!("Cannot destroy specific Stack services when in Swarm mode.")
.status_code(StatusCode::BAD_REQUEST)
);
}
// get the action state for the stack (or insert default).
let action_state = action_states()
.stack
.get_or_insert_default(&stack.id)
.await;
// Will check to ensure stack not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard =
action_state.update(|state| state.destroying = true)?;
let mut update = update.clone();
// Send update here for frontend to recheck action state
update_update(update.clone()).await?;
match swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::RemoveSwarmStacks {
stacks: vec![stack.project_name(false)],
detach: false,
},
)
.await
{
Ok(log) => update.logs.push(log),
Err(e) => update.push_error_log(
"Destroy Stack",
format_serror(
&e.context("Failed to 'docker stack rm' on swarm")
.into(),
),
),
}
// Ensure cached stack state up to date by updating swarm cache
update_cache_for_swarm(&swarm, true).await;
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
SwarmOrServer::Server(server) => {
execute_compose_with_stack_and_server::<DestroyStack>(
stack,
server,
self.services,
|state| state.destroying = true,
update.clone(),
(self.stop_time, self.remove_orphans),
)
.await
.map_err(Into::into)
}
}
}
}
impl Resolve<ExecuteArgs> for RunStackService {
#[instrument(name = "RunStackService", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"RunStackService",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
stack = self.stack,
service = self.service,
request = format!("{self:?}"),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let (mut stack, server) = get_stack_and_server(
let (mut stack, swarm_or_server) = setup_stack_execution(
&self.stack,
user,
PermissionLevel::Execute.into(),
true,
)
.await?;
let SwarmOrServer::Server(server) = swarm_or_server else {
return Err(
anyhow!(
"RunStackService should not be called for Stack in Swarm Mode"
)
.status_code(StatusCode::BAD_REQUEST),
);
};
let mut repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{

@@ -0,0 +1,274 @@
use formatting::format_serror;
use komodo_client::{
api::execute::{
RemoveSwarmConfigs, RemoveSwarmNodes, RemoveSwarmSecrets,
RemoveSwarmServices, RemoveSwarmStacks,
},
entities::{permission::PermissionLevel, swarm::Swarm},
};
use resolver_api::Resolve;
use crate::{
api::execute::ExecuteArgs,
helpers::{swarm::swarm_request, update::update_update},
permission::get_check_permissions,
};
impl Resolve<ExecuteArgs> for RemoveSwarmNodes {
#[instrument(
"RemoveSwarmNodes",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
swarm = self.swarm,
nodes = serde_json::to_string(&self.nodes).unwrap_or_else(|e| e.to_string()),
force = self.force,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Execute.into(),
)
.await?;
update_update(update.clone()).await?;
let mut update = update.clone();
match swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::RemoveSwarmNodes {
nodes: self.nodes,
force: self.force,
},
)
.await
{
Ok(log) => update.logs.push(log),
Err(e) => update.push_error_log(
"Remove Swarm Nodes",
format_serror(
&e.context("Failed to remove swarm nodes").into(),
),
),
};
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}
impl Resolve<ExecuteArgs> for RemoveSwarmStacks {
#[instrument(
"RemoveSwarmStacks",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
swarm = self.swarm,
stacks = serde_json::to_string(&self.stacks).unwrap_or_else(|e| e.to_string()),
detach = self.detach,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Execute.into(),
)
.await?;
update_update(update.clone()).await?;
let mut update = update.clone();
match swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::RemoveSwarmStacks {
stacks: self.stacks,
detach: self.detach,
},
)
.await
{
Ok(log) => update.logs.push(log),
Err(e) => update.push_error_log(
"Remove Swarm Stacks",
format_serror(
&e.context("Failed to remove swarm stacks").into(),
),
),
};
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}
impl Resolve<ExecuteArgs> for RemoveSwarmServices {
#[instrument(
"RemoveSwarmServices",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
swarm = self.swarm,
services = serde_json::to_string(&self.services).unwrap_or_else(|e| e.to_string()),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Execute.into(),
)
.await?;
update_update(update.clone()).await?;
let mut update = update.clone();
match swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::RemoveSwarmServices {
services: self.services,
},
)
.await
{
Ok(log) => update.logs.push(log),
Err(e) => update.push_error_log(
"Remove Swarm Services",
format_serror(
&e.context("Failed to remove swarm services").into(),
),
),
};
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}
impl Resolve<ExecuteArgs> for RemoveSwarmConfigs {
#[instrument(
"RemoveSwarmConfigs",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
swarm = self.swarm,
configs = serde_json::to_string(&self.configs).unwrap_or_else(|e| e.to_string()),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Execute.into(),
)
.await?;
update_update(update.clone()).await?;
let mut update = update.clone();
match swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::RemoveSwarmConfigs {
configs: self.configs,
},
)
.await
{
Ok(log) => update.logs.push(log),
Err(e) => update.push_error_log(
"Remove Swarm Configs",
format_serror(
&e.context("Failed to remove swarm configs").into(),
),
),
};
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}
impl Resolve<ExecuteArgs> for RemoveSwarmSecrets {
#[instrument(
"RemoveSwarmSecrets",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
swarm = self.swarm,
secrets = serde_json::to_string(&self.secrets).unwrap_or_else(|e| e.to_string()),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Execute.into(),
)
.await?;
update_update(update.clone()).await?;
let mut update = update.clone();
match swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::RemoveSwarmSecrets {
secrets: self.secrets,
},
)
.await
{
Ok(log) => update.logs.push(log),
Err(e) => update.push_error_log(
"Remove Swarm Secrets",
format_serror(
&e.context("Failed to remove swarm secrets").into(),
),
),
};
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}

@@ -49,10 +49,21 @@ use crate::{
use super::ExecuteArgs;
impl Resolve<ExecuteArgs> for RunSync {
#[instrument(name = "RunSync", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
"RunSync",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
sync = self.sync,
resource_type = format!("{:?}", self.resource_type),
resources = format!("{:?}", self.resources),
)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> serror::Result<Update> {
let RunSync {
sync,
@@ -125,34 +136,10 @@ impl Resolve<ExecuteArgs> for RunSync {
};
match ObjectId::from_str(&name_or_id) {
Ok(_) => match resource_type {
ResourceTargetVariant::Alerter => all_resources
.alerters
ResourceTargetVariant::Swarm => all_resources
.swarms
.get(&name_or_id)
.map(|a| a.name.clone()),
ResourceTargetVariant::Build => all_resources
.builds
.get(&name_or_id)
.map(|b| b.name.clone()),
ResourceTargetVariant::Builder => all_resources
.builders
.get(&name_or_id)
.map(|b| b.name.clone()),
ResourceTargetVariant::Deployment => all_resources
.deployments
.get(&name_or_id)
.map(|d| d.name.clone()),
ResourceTargetVariant::Procedure => all_resources
.procedures
.get(&name_or_id)
.map(|p| p.name.clone()),
ResourceTargetVariant::Action => all_resources
.actions
.get(&name_or_id)
.map(|p| p.name.clone()),
ResourceTargetVariant::Repo => all_resources
.repos
.get(&name_or_id)
.map(|r| r.name.clone()),
.map(|s| s.name.clone()),
ResourceTargetVariant::Server => all_resources
.servers
.get(&name_or_id)
@@ -161,10 +148,38 @@ impl Resolve<ExecuteArgs> for RunSync {
.stacks
.get(&name_or_id)
.map(|s| s.name.clone()),
ResourceTargetVariant::Deployment => all_resources
.deployments
.get(&name_or_id)
.map(|d| d.name.clone()),
ResourceTargetVariant::Build => all_resources
.builds
.get(&name_or_id)
.map(|b| b.name.clone()),
ResourceTargetVariant::Repo => all_resources
.repos
.get(&name_or_id)
.map(|r| r.name.clone()),
ResourceTargetVariant::Procedure => all_resources
.procedures
.get(&name_or_id)
.map(|p| p.name.clone()),
ResourceTargetVariant::Action => all_resources
.actions
.get(&name_or_id)
.map(|p| p.name.clone()),
ResourceTargetVariant::ResourceSync => all_resources
.syncs
.get(&name_or_id)
.map(|s| s.name.clone()),
ResourceTargetVariant::Builder => all_resources
.builders
.get(&name_or_id)
.map(|b| b.name.clone()),
ResourceTargetVariant::Alerter => all_resources
.alerters
.get(&name_or_id)
.map(|a| a.name.clone()),
ResourceTargetVariant::System => None,
},
Err(_) => Some(name_or_id),

@@ -5,10 +5,9 @@ use hmac::{Hmac, Mac};
use serde::Deserialize;
use sha2::Sha256;
use crate::{
config::core_config,
listener::{ExtractBranch, VerifySecret},
};
use crate::config::core_config;
use super::{ExtractBranch, VerifySecret};
type HmacSha256 = Hmac<Sha256>;
@@ -18,7 +17,7 @@ pub struct Github;
impl VerifySecret for Github {
#[instrument("VerifyGithubSecret", skip_all)]
fn verify_secret(
headers: HeaderMap,
headers: &HeaderMap,
body: &str,
custom_secret: &str,
) -> anyhow::Result<()> {

@@ -1,10 +1,10 @@
use anyhow::{Context, anyhow};
use axum::http::HeaderMap;
use serde::Deserialize;
use crate::{
config::core_config,
listener::{ExtractBranch, VerifySecret},
};
use crate::config::core_config;
use super::{ExtractBranch, VerifySecret};
/// Listener implementation for Gitlab type API
pub struct Gitlab;
@@ -12,7 +12,7 @@ pub struct Gitlab;
impl VerifySecret for Gitlab {
#[instrument("VerifyGitlabSecret", skip_all)]
fn verify_secret(
headers: axum::http::HeaderMap,
headers: &HeaderMap,
_body: &str,
custom_secret: &str,
) -> anyhow::Result<()> {

@@ -0,0 +1,4 @@
pub mod github;
pub mod gitlab;
use super::{ExtractBranch, VerifySecret};

@@ -32,7 +32,7 @@ trait CustomSecret: KomodoResource {
/// Implemented on the integration struct, eg [integrations::github::Github]
trait VerifySecret {
fn verify_secret(
headers: HeaderMap,
headers: &HeaderMap,
body: &str,
custom_secret: &str,
) -> anyhow::Result<()>;

@@ -14,6 +14,7 @@ use komodo_client::{
use resolver_api::Resolve;
use serde::Deserialize;
use serde_json::json;
use uuid::Uuid;
use crate::{
api::{
@@ -21,6 +22,7 @@ use crate::{
write::WriteArgs,
},
helpers::update::init_execution_update,
resource,
};
use super::{ANY_BRANCH, ListenerLockCache};
@@ -54,7 +56,18 @@ pub async fn handle_build_webhook<B: super::ExtractBranch>(
let lock = build_locks().get_or_insert_default(&build.id).await;
let _lock = lock.lock().await;
B::verify_branch(&body, &build.config.branch)?;
// Use the correct target branch when using linked repo.
let branch = if build.config.linked_repo.is_empty() {
build.config.branch
} else {
resource::get::<Repo>(&build.config.linked_repo)
.await
.context("Failed to find 'linked_repo'")?
.config
.branch
};
B::verify_branch(&body, &branch)?;
let user = git_webhook_user().to_owned();
let req = ExecuteRequest::RunBuild(RunBuild { build: build.id });
@@ -63,7 +76,11 @@ pub async fn handle_build_webhook<B: super::ExtractBranch>(
unreachable!()
};
req
.resolve(&ExecuteArgs { user, update })
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.await
.map_err(|e| e.error)?;
Ok(())
@@ -101,7 +118,11 @@ impl RepoExecution for CloneRepo {
unreachable!()
};
req
.resolve(&ExecuteArgs { user, update })
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.await
.map_err(|e| e.error)?;
Ok(())
@@ -121,7 +142,11 @@ impl RepoExecution for PullRepo {
unreachable!()
};
req
.resolve(&ExecuteArgs { user, update })
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.await
.map_err(|e| e.error)?;
Ok(())
@@ -141,7 +166,11 @@ impl RepoExecution for BuildRepo {
unreachable!()
};
req
.resolve(&ExecuteArgs { user, update })
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.await
.map_err(|e| e.error)?;
Ok(())
@@ -240,7 +269,11 @@ impl StackExecution for DeployStack {
unreachable!()
};
req
.resolve(&ExecuteArgs { user, update })
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.await
.map_err(|e| e.error)?;
} else {
@@ -254,7 +287,11 @@ impl StackExecution for DeployStack {
unreachable!()
};
req
.resolve(&ExecuteArgs { user, update })
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.await
.map_err(|e| e.error)?;
}
@@ -303,7 +340,18 @@ pub async fn handle_stack_webhook_inner<
let lock = stack_locks().get_or_insert_default(&stack.id).await;
let _lock = lock.lock().await;
B::verify_branch(&body, &stack.config.branch)?;
// Use the correct target branch when using linked repo.
let branch = if stack.config.linked_repo.is_empty() {
stack.config.branch.clone()
} else {
resource::get::<Repo>(&stack.config.linked_repo)
.await
.context("Failed to find 'linked_repo'")?
.config
.branch
};
B::verify_branch(&body, &branch)?;
E::resolve(stack).await.map_err(|e| e.error)
}
@@ -352,7 +400,11 @@ impl SyncExecution for RunSync {
unreachable!()
};
req
.resolve(&ExecuteArgs { user, update })
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.await
.map_err(|e| e.error)?;
Ok(())
@@ -401,7 +453,18 @@ async fn handle_sync_webhook_inner<
let lock = sync_locks().get_or_insert_default(&sync.id).await;
let _lock = lock.lock().await;
B::verify_branch(&body, &sync.config.branch)?;
// Use the correct target branch when using linked repo.
let branch = if sync.config.linked_repo.is_empty() {
sync.config.branch.clone()
} else {
resource::get::<Repo>(&sync.config.linked_repo)
.await
.context("Failed to find 'linked_repo'")?
.config
.branch
};
B::verify_branch(&body, &branch)?;
E::resolve(sync).await
}
@@ -451,7 +514,11 @@ pub async fn handle_procedure_webhook<B: super::ExtractBranch>(
unreachable!()
};
req
.resolve(&ExecuteArgs { user, update })
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.await
.map_err(|e| e.error)?;
Ok(())
@@ -513,7 +580,11 @@ pub async fn handle_action_webhook<B: super::ExtractBranch>(
unreachable!()
};
req
.resolve(&ExecuteArgs { user, update })
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.await
.map_err(|e| e.error)?;
Ok(())

@@ -1,14 +1,22 @@
use axum::{Router, extract::Path, http::HeaderMap, routing::post};
use std::net::{IpAddr, SocketAddr};
use axum::{
Router,
extract::{ConnectInfo, Path},
http::HeaderMap,
routing::post,
};
use komodo_client::entities::{
action::Action, build::Build, procedure::Procedure, repo::Repo,
resource::Resource, stack::Stack, sync::ResourceSync,
};
use rate_limit::WithFailureRateLimit;
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;
use tracing::Instrument;
use crate::resource::KomodoResource;
use crate::{resource::KomodoResource, state::auth_rate_limiter};
use super::{
CustomSecret, ExtractBranch, VerifySecret,
@@ -47,9 +55,9 @@ pub fn router<P: VerifySecret + ExtractBranch>() -> Router {
.route(
"/build/{id}",
post(
|Path(Id { id }), headers: HeaderMap, body: String| async move {
|Path(Id { id }), headers: HeaderMap, ConnectInfo(info): ConnectInfo<SocketAddr>, body: String| async move {
let build =
auth_webhook::<P, Build>(&id, headers, &body).await?;
auth_webhook::<P, Build>(&id, &headers, info.ip(), &body).await?;
tokio::spawn(async move {
let span = info_span!("BuildWebhook", id);
async {
@@ -73,9 +81,9 @@ pub fn router<P: VerifySecret + ExtractBranch>() -> Router {
.route(
"/repo/{id}/{option}",
post(
|Path(IdAndOption::<RepoWebhookOption> { id, option }), headers: HeaderMap, body: String| async move {
|Path(IdAndOption::<RepoWebhookOption> { id, option }), headers: HeaderMap, ConnectInfo(info): ConnectInfo<SocketAddr>, body: String| async move {
let repo =
auth_webhook::<P, Repo>(&id, headers, &body).await?;
auth_webhook::<P, Repo>(&id, &headers, info.ip(), &body).await?;
tokio::spawn(async move {
let span = info_span!("RepoWebhook", id);
async {
@@ -99,9 +107,9 @@ pub fn router<P: VerifySecret + ExtractBranch>() -> Router {
.route(
"/stack/{id}/{option}",
post(
|Path(IdAndOption::<StackWebhookOption> { id, option }), headers: HeaderMap, body: String| async move {
|Path(IdAndOption::<StackWebhookOption> { id, option }), headers: HeaderMap, ConnectInfo(info): ConnectInfo<SocketAddr>, body: String| async move {
let stack =
auth_webhook::<P, Stack>(&id, headers, &body).await?;
auth_webhook::<P, Stack>(&id, &headers, info.ip(), &body).await?;
tokio::spawn(async move {
let span = info_span!("StackWebhook", id);
async {
@@ -125,9 +133,9 @@ pub fn router<P: VerifySecret + ExtractBranch>() -> Router {
.route(
"/sync/{id}/{option}",
post(
|Path(IdAndOption::<SyncWebhookOption> { id, option }), headers: HeaderMap, body: String| async move {
|Path(IdAndOption::<SyncWebhookOption> { id, option }), headers: HeaderMap, ConnectInfo(info): ConnectInfo<SocketAddr>, body: String| async move {
let sync =
auth_webhook::<P, ResourceSync>(&id, headers, &body).await?;
auth_webhook::<P, ResourceSync>(&id, &headers, info.ip(), &body).await?;
tokio::spawn(async move {
let span = info_span!("ResourceSyncWebhook", id);
async {
@@ -151,9 +159,9 @@ pub fn router<P: VerifySecret + ExtractBranch>() -> Router {
.route(
"/procedure/{id}/{branch}",
post(
|Path(IdAndBranch { id, branch }), headers: HeaderMap, body: String| async move {
|Path(IdAndBranch { id, branch }), headers: HeaderMap, ConnectInfo(info): ConnectInfo<SocketAddr>, body: String| async move {
let procedure =
auth_webhook::<P, Procedure>(&id, headers, &body).await?;
auth_webhook::<P, Procedure>(&id, &headers, info.ip(), &body).await?;
tokio::spawn(async move {
let span = info_span!("ProcedureWebhook", id);
async {
@@ -177,9 +185,9 @@ pub fn router<P: VerifySecret + ExtractBranch>() -> Router {
.route(
"/action/{id}/{branch}",
post(
|Path(IdAndBranch { id, branch }), headers: HeaderMap, body: String| async move {
|Path(IdAndBranch { id, branch }), headers: HeaderMap, ConnectInfo(info): ConnectInfo<SocketAddr>, body: String| async move {
let action =
auth_webhook::<P, Action>(&id, headers, &body).await?;
auth_webhook::<P, Action>(&id, &headers, info.ip(), &body).await?;
tokio::spawn(async move {
let span = info_span!("ActionWebhook", id);
async {
@@ -204,17 +212,26 @@ pub fn router<P: VerifySecret + ExtractBranch>() -> Router {
async fn auth_webhook<P, R>(
id: &str,
headers: HeaderMap,
headers: &HeaderMap,
ip: IpAddr,
body: &str,
) -> serror::Result<Resource<R::Config, R::Info>>
where
P: VerifySecret,
R: KomodoResource + CustomSecret,
{
let resource = crate::resource::get::<R>(id)
.await
.status_code(StatusCode::BAD_REQUEST)?;
P::verify_secret(headers, body, R::custom_secret(&resource))
.status_code(StatusCode::UNAUTHORIZED)?;
Ok(resource)
async {
let resource = crate::resource::get::<R>(id)
.await
.status_code(StatusCode::BAD_REQUEST)?;
P::verify_secret(headers, body, R::custom_secret(&resource))
.status_code(StatusCode::UNAUTHORIZED)?;
serror::Result::Ok(resource)
}
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
headers,
Some(ip),
)
.await
}

@@ -1,11 +1,92 @@
use axum::{
Router,
http::{HeaderName, HeaderValue},
routing::get,
};
use tower_http::{
services::{ServeDir, ServeFile},
set_header::SetResponseHeaderLayer,
};
use tower_sessions::{
Expiry, MemoryStore, SessionManagerLayer, cookie::time::Duration,
};
use crate::{
config::{core_config, core_host, cors_layer},
ts_client,
};
pub mod auth;
pub mod execute;
pub mod read;
pub mod terminal;
pub mod user;
pub mod write;
mod listener;
mod terminal;
mod ws;
#[derive(serde::Deserialize)]
struct Variant {
variant: String,
}
pub fn app() -> Router {
let config = core_config();
// Setup static frontend services
let frontend_path = &config.frontend_path;
let frontend_index =
ServeFile::new(format!("{frontend_path}/index.html"));
let serve_frontend = ServeDir::new(frontend_path)
.not_found_service(frontend_index.clone());
Router::new()
.route("/version", get(|| async { env!("CARGO_PKG_VERSION") }))
.nest("/auth", auth::router())
.nest("/user", user::router())
.nest("/read", read::router())
.nest("/write", write::router())
.nest("/execute", execute::router())
.nest("/terminal", terminal::router())
.nest("/listener", listener::router())
.nest("/ws", ws::router())
.nest("/client", ts_client::router())
.fallback_service(serve_frontend)
.layer(cors_layer())
.layer(SetResponseHeaderLayer::overriding(
HeaderName::from_static("x-content-type-options"),
HeaderValue::from_static("nosniff"),
))
.layer(SetResponseHeaderLayer::overriding(
HeaderName::from_static("x-frame-options"),
HeaderValue::from_static("DENY"),
))
.layer(SetResponseHeaderLayer::overriding(
HeaderName::from_static("x-xss-protection"),
HeaderValue::from_static("1; mode=block"),
))
.layer(SetResponseHeaderLayer::overriding(
HeaderName::from_static("referrer-policy"),
HeaderValue::from_static("strict-origin-when-cross-origin"),
))
}
fn memory_session_layer(
expiry: i64,
) -> SessionManagerLayer<MemoryStore> {
let config = core_config();
let mut layer = SessionManagerLayer::new(MemoryStore::default())
.with_expiry(Expiry::OnInactivity(Duration::seconds(expiry)))
.with_secure(config.host.starts_with("https://"));
if let Some(domain) = core_host().and_then(|url| url.domain()) {
layer = layer.with_domain(domain);
}
layer
}
pub const SESSION_KEY_USER_ID: &str = "user-id";
pub const SESSION_KEY_TOTP_LOGIN: &str = "totp-user-id";
pub const SESSION_KEY_TOTP_ENROLLMENT: &str = "totp-enrollment";
pub const SESSION_KEY_PASSKEY_LOGIN: &str = "passkey-user-id";
pub const SESSION_KEY_PASSKEY_ENROLLMENT: &str = "passkey-enrollment";

@@ -8,15 +8,15 @@ use komodo_client::{
api::read::{
GetAlert, GetAlertResponse, ListAlerts, ListAlertsResponse,
},
entities::{
deployment::Deployment, server::Server, stack::Stack,
sync::ResourceSync,
},
entities::permission::PermissionLevel,
};
use resolver_api::Resolve;
use crate::{
config::core_config, permission::get_resource_ids_for_user,
config::core_config,
permission::{
check_user_target_access, user_resource_target_query,
},
state::db_client,
};
@@ -29,25 +29,10 @@ impl Resolve<ReadArgs> for ListAlerts {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListAlertsResponse> {
let mut query = self.query.unwrap_or_default();
if !user.admin && !core_config().transparent_mode {
let server_ids =
get_resource_ids_for_user::<Server>(user).await?;
let stack_ids =
get_resource_ids_for_user::<Stack>(user).await?;
let deployment_ids =
get_resource_ids_for_user::<Deployment>(user).await?;
let sync_ids =
get_resource_ids_for_user::<ResourceSync>(user).await?;
query.extend(doc! {
"$or": [
{ "target.type": "Server", "target.id": { "$in": &server_ids } },
{ "target.type": "Stack", "target.id": { "$in": &stack_ids } },
{ "target.type": "Deployment", "target.id": { "$in": &deployment_ids } },
{ "target.type": "ResourceSync", "target.id": { "$in": &sync_ids } },
]
});
}
// Alerts
let query = user_resource_target_query(user, self.query)
.await?
.unwrap_or_default();
let alerts = find_collect(
&db_client().alerts,
@@ -76,13 +61,21 @@ impl Resolve<ReadArgs> for ListAlerts {
impl Resolve<ReadArgs> for GetAlert {
async fn resolve(
self,
_: &ReadArgs,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetAlertResponse> {
Ok(
find_one_by_id(&db_client().alerts, &self.id)
.await
.context("failed to query db for alert")?
.context("no alert found with given id")?,
let alert = find_one_by_id(&db_client().alerts, &self.id)
.await
.context("failed to query db for alert")?
.context("no alert found with given id")?;
if user.admin || core_config().transparent_mode {
return Ok(alert);
}
check_user_target_access(
&alert.target,
user,
PermissionLevel::Read.into(),
)
.await?;
Ok(alert)
}
}

@@ -11,8 +11,10 @@ use komodo_client::{
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags, permission::get_check_permissions,
resource, state::db_client,
helpers::query::get_all_tags,
permission::{get_check_permissions, list_resource_ids_for_user},
resource,
state::db_client,
};
use super::ReadArgs;
@@ -82,9 +84,11 @@ impl Resolve<ReadArgs> for GetAlertersSummary {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetAlertersSummaryResponse> {
let query = match resource::get_resource_object_ids_for_user::<
Alerter,
>(user)
let query = match list_resource_ids_for_user::<Alerter>(
None,
user,
PermissionLevel::Read.into(),
)
.await?
{
Some(ids) => doc! {

@@ -6,7 +6,7 @@ use database::mungos::{
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use futures::TryStreamExt;
use futures_util::TryStreamExt;
use komodo_client::{
api::read::*,
entities::{

@@ -11,8 +11,10 @@ use komodo_client::{
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags, permission::get_check_permissions,
resource, state::db_client,
helpers::query::get_all_tags,
permission::{get_check_permissions, list_resource_ids_for_user},
resource,
state::db_client,
};
use super::ReadArgs;
@@ -82,9 +84,11 @@ impl Resolve<ReadArgs> for GetBuildersSummary {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetBuildersSummaryResponse> {
let query = match resource::get_resource_object_ids_for_user::<
Builder,
>(user)
let query = match list_resource_ids_for_user::<Builder>(
None,
user,
PermissionLevel::Read.into(),
)
.await?
{
Some(ids) => doc! {

@@ -4,23 +4,31 @@ use anyhow::{Context, anyhow};
use komodo_client::{
api::read::*,
entities::{
SwarmOrServer,
deployment::{
Deployment, DeploymentActionState, DeploymentConfig,
DeploymentListItem, DeploymentState,
},
docker::container::{Container, ContainerStats},
docker::{
container::{Container, ContainerStats},
service::SwarmService,
},
permission::PermissionLevel,
server::{Server, ServerState},
update::Log,
},
};
use periphery_client::api::{self, container::InspectContainer};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError as _;
use crate::{
helpers::{periphery_client, query::get_all_tags},
helpers::{
periphery_client, query::get_all_tags, swarm::swarm_request,
},
permission::get_check_permissions,
resource,
resource::{self, setup_deployment_execution},
state::{
action_states, deployment_status_cache, server_status_cache,
},
@@ -131,30 +139,40 @@ impl Resolve<ReadArgs> for GetDeploymentLog {
tail,
timestamps,
} = self;
let Deployment {
name,
config: DeploymentConfig { server_id, .. },
..
} = get_check_permissions::<Deployment>(
let (deployment, swarm_or_server) = setup_deployment_execution(
&deployment,
user,
PermissionLevel::Read.logs(),
)
.await?;
if server_id.is_empty() {
return Ok(Log::default());
}
let server = resource::get::<Server>(&server_id).await?;
let res = periphery_client(&server)
.await?
.request(api::container::GetContainerLog {
name,
tail: cmp::min(tail, MAX_LOG_LENGTH),
timestamps,
})
let log = match swarm_or_server {
SwarmOrServer::Swarm(swarm) => swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::GetSwarmServiceLog {
service: deployment.name,
tail,
timestamps,
no_task_ids: false,
no_resolve: false,
details: false,
},
)
.await
.context("failed at call to periphery")?;
Ok(res)
.context("Failed to get service log from swarm")?,
SwarmOrServer::Server(server) => periphery_client(&server)
.await?
.request(api::container::GetContainerLog {
name: deployment.name,
tail: cmp::min(tail, MAX_LOG_LENGTH),
timestamps,
})
.await
.context("failed at call to periphery")?,
};
Ok(log)
}
}
@@ -170,32 +188,44 @@ impl Resolve<ReadArgs> for SearchDeploymentLog {
invert,
timestamps,
} = self;
let Deployment {
name,
config: DeploymentConfig { server_id, .. },
..
} = get_check_permissions::<Deployment>(
let (deployment, swarm_or_server) = setup_deployment_execution(
&deployment,
user,
PermissionLevel::Read.logs(),
)
.await?;
if server_id.is_empty() {
return Ok(Log::default());
}
let server = resource::get::<Server>(&server_id).await?;
let res = periphery_client(&server)
.await?
.request(api::container::GetContainerLogSearch {
name,
terms,
combinator,
invert,
timestamps,
})
let log = match swarm_or_server {
SwarmOrServer::Swarm(swarm) => swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::GetSwarmServiceLogSearch {
service: deployment.name,
terms,
combinator,
invert,
timestamps,
no_task_ids: false,
no_resolve: false,
details: false,
},
)
.await
.context("failed at call to periphery")?;
Ok(res)
.context("Failed to search service log from swarm")?,
SwarmOrServer::Server(server) => periphery_client(&server)
.await?
.request(api::container::GetContainerLogSearch {
name: deployment.name,
terms,
combinator,
invert,
timestamps,
})
.await
.context("Failed to search container log from server")?,
};
Ok(log)
}
}
@@ -205,42 +235,78 @@ impl Resolve<ReadArgs> for InspectDeploymentContainer {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Container> {
let InspectDeploymentContainer { deployment } = self;
let Deployment {
name,
config: DeploymentConfig { server_id, .. },
..
} = get_check_permissions::<Deployment>(
let (deployment, swarm_or_server) = setup_deployment_execution(
&deployment,
user,
PermissionLevel::Read.inspect(),
)
.await?;
if server_id.is_empty() {
let SwarmOrServer::Server(server) = swarm_or_server else {
return Err(
anyhow!(
"Cannot inspect deployment, not attached to any server"
"InspectDeploymentContainer should not be called for Deployment in Swarm Mode"
)
.into(),
.status_code(StatusCode::BAD_REQUEST),
);
}
let server = resource::get::<Server>(&server_id).await?;
};
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(
anyhow!(
"Cannot inspect container: server is {:?}",
"Cannot inspect container: Server is {:?}",
cache.state
)
.into(),
);
}
let res = periphery_client(&server)
periphery_client(&server)
.await?
.request(InspectContainer { name })
.await?;
Ok(res)
.request(InspectContainer {
name: deployment.name,
})
.await
.context("Failed to inspect container on server")
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for InspectDeploymentSwarmService {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<SwarmService> {
let InspectDeploymentSwarmService { deployment } = self;
let (deployment, swarm_or_server) = setup_deployment_execution(
&deployment,
user,
PermissionLevel::Read.logs(),
)
.await?;
let SwarmOrServer::Swarm(swarm) = swarm_or_server else {
return Err(
anyhow!(
"InspectDeploymentSwarmService should only be called for Deployment in Swarm Mode"
)
.status_code(StatusCode::BAD_REQUEST),
);
};
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::InspectSwarmService {
service: deployment.name,
},
)
.await
.context("Failed to inspect service on swarm")
.map_err(Into::into)
}
}
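Both `InspectDeploymentContainer` and `InspectDeploymentSwarmService` above guard on the `SwarmOrServer` variant with a `let ... else`, rejecting the wrong mode early instead of matching both arms. A hedged sketch of that dispatch pattern (stand-in types, not the Komodo entities):

```rust
// Hypothetical sketch of the SwarmOrServer guard: destructure the expected
// variant, or bail out early (the real handlers map this to HTTP 400 via
// `.status_code(StatusCode::BAD_REQUEST)`).
#[derive(Debug)]
enum SwarmOrServer {
    Swarm(String),  // stand-in for the Swarm resource
    Server(String), // stand-in for the Server resource
}

fn require_swarm(target: SwarmOrServer) -> Result<String, String> {
    let SwarmOrServer::Swarm(swarm) = target else {
        return Err("only valid for Deployments in Swarm mode".into());
    };
    Ok(swarm)
}

fn main() {
    assert!(require_swarm(SwarmOrServer::Swarm("s1".into())).is_ok());
    assert!(require_swarm(SwarmOrServer::Server("srv".into())).is_err());
    println!("ok");
}
```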

View File

@@ -49,8 +49,10 @@ mod repo;
mod schedule;
mod server;
mod stack;
mod swarm;
mod sync;
mod tag;
mod terminal;
mod toml;
mod update;
mod user;
@@ -74,36 +76,28 @@ enum ReadRequest {
ListGitProvidersFromConfig(ListGitProvidersFromConfig),
ListDockerRegistriesFromConfig(ListDockerRegistriesFromConfig),
// ==== USER ====
GetUsername(GetUsername),
GetPermission(GetPermission),
FindUser(FindUser),
ListUsers(ListUsers),
ListApiKeys(ListApiKeys),
ListApiKeysForServiceUser(ListApiKeysForServiceUser),
ListPermissions(ListPermissions),
ListUserTargetPermissions(ListUserTargetPermissions),
// ==== USER GROUP ====
GetUserGroup(GetUserGroup),
ListUserGroups(ListUserGroups),
// ==== PROCEDURE ====
GetProceduresSummary(GetProceduresSummary),
GetProcedure(GetProcedure),
GetProcedureActionState(GetProcedureActionState),
ListProcedures(ListProcedures),
ListFullProcedures(ListFullProcedures),
// ==== ACTION ====
GetActionsSummary(GetActionsSummary),
GetAction(GetAction),
GetActionActionState(GetActionActionState),
ListActions(ListActions),
ListFullActions(ListFullActions),
// ==== SCHEDULE ====
ListSchedules(ListSchedules),
// ==== SWARM ====
GetSwarmsSummary(GetSwarmsSummary),
GetSwarm(GetSwarm),
GetSwarmActionState(GetSwarmActionState),
ListSwarms(ListSwarms),
InspectSwarm(InspectSwarm),
ListFullSwarms(ListFullSwarms),
ListSwarmNodes(ListSwarmNodes),
InspectSwarmNode(InspectSwarmNode),
ListSwarmConfigs(ListSwarmConfigs),
InspectSwarmConfig(InspectSwarmConfig),
ListSwarmSecrets(ListSwarmSecrets),
InspectSwarmSecret(InspectSwarmSecret),
ListSwarmStacks(ListSwarmStacks),
InspectSwarmStack(InspectSwarmStack),
ListSwarmTasks(ListSwarmTasks),
InspectSwarmTask(InspectSwarmTask),
ListSwarmServices(ListSwarmServices),
InspectSwarmService(InspectSwarmService),
GetSwarmServiceLog(GetSwarmServiceLog),
SearchSwarmServiceLog(SearchSwarmServiceLog),
ListSwarmNetworks(ListSwarmNetworks),
// ==== SERVER ====
GetServersSummary(GetServersSummary),
@@ -111,9 +105,10 @@ enum ReadRequest {
GetServerState(GetServerState),
GetPeripheryInformation(GetPeripheryInformation),
GetServerActionState(GetServerActionState),
GetHistoricalServerStats(GetHistoricalServerStats),
ListServers(ListServers),
ListFullServers(ListFullServers),
// ==== TERMINAL ====
ListTerminals(ListTerminals),
// ==== DOCKER ====
@@ -136,6 +131,7 @@ enum ReadRequest {
// ==== SERVER STATS ====
GetSystemInformation(GetSystemInformation),
GetSystemStats(GetSystemStats),
GetHistoricalServerStats(GetHistoricalServerStats),
ListSystemProcesses(ListSystemProcesses),
// ==== STACK ====
@@ -145,6 +141,7 @@ enum ReadRequest {
GetStackLog(GetStackLog),
SearchStackLog(SearchStackLog),
InspectStackContainer(InspectStackContainer),
InspectStackSwarmService(InspectStackSwarmService),
ListStacks(ListStacks),
ListFullStacks(ListFullStacks),
ListStackServices(ListStackServices),
@@ -160,6 +157,7 @@ enum ReadRequest {
GetDeploymentLog(GetDeploymentLog),
SearchDeploymentLog(SearchDeploymentLog),
InspectDeploymentContainer(InspectDeploymentContainer),
InspectDeploymentSwarmService(InspectDeploymentSwarmService),
ListDeployments(ListDeployments),
ListFullDeployments(ListFullDeployments),
ListCommonDeploymentExtraArgs(ListCommonDeploymentExtraArgs),
@@ -181,6 +179,23 @@ enum ReadRequest {
ListRepos(ListRepos),
ListFullRepos(ListFullRepos),
// ==== PROCEDURE ====
GetProceduresSummary(GetProceduresSummary),
GetProcedure(GetProcedure),
GetProcedureActionState(GetProcedureActionState),
ListProcedures(ListProcedures),
ListFullProcedures(ListFullProcedures),
// ==== ACTION ====
GetActionsSummary(GetActionsSummary),
GetAction(GetAction),
GetActionActionState(GetActionActionState),
ListActions(ListActions),
ListFullActions(ListFullActions),
// ==== SCHEDULE ====
ListSchedules(ListSchedules),
// ==== SYNC ====
GetResourceSyncsSummary(GetResourceSyncsSummary),
GetResourceSync(GetResourceSync),
@@ -208,6 +223,20 @@ enum ReadRequest {
GetTag(GetTag),
ListTags(ListTags),
// ==== USER ====
GetUsername(GetUsername),
GetPermission(GetPermission),
FindUser(FindUser),
ListUsers(ListUsers),
ListApiKeys(ListApiKeys),
ListApiKeysForServiceUser(ListApiKeysForServiceUser),
ListPermissions(ListPermissions),
ListUserTargetPermissions(ListUserTargetPermissions),
// ==== USER GROUP ====
GetUserGroup(GetUserGroup),
ListUserGroups(ListUserGroups),
// ==== UPDATE ====
GetUpdate(GetUpdate),
ListUpdates(ListUpdates),
@@ -249,7 +278,6 @@ async fn variant_handler(
handler(user, Json(req)).await
}
#[instrument(name = "ReadHandler", level = "debug", skip(user), fields(user_id = user.id))]
async fn handler(
Extension(user): Extension<User>,
Json(request): Json<ReadRequest>,

View File

@@ -1,4 +1,4 @@
use futures::future::join_all;
use futures_util::future::join_all;
use komodo_client::{
api::read::*,
entities::{

View File

@@ -25,11 +25,10 @@ use komodo_client::{
network::Network,
volume::Volume,
},
komodo_timestamp,
permission::PermissionLevel,
server::{
Server, ServerActionState, ServerListItem, ServerState,
TerminalInfo,
Server, ServerActionState, ServerListItem, ServerQuery,
ServerState,
},
stack::{Stack, StackServiceNames},
stats::{SystemInformation, SystemProcess},
@@ -50,7 +49,7 @@ use tokio::sync::Mutex;
use crate::{
helpers::{periphery_client, query::get_all_tags},
permission::get_check_permissions,
permission::{get_check_permissions, list_resources_for_user},
resource,
stack::compose_container_match_regex,
state::{action_states, db_client, server_status_cache},
@@ -384,8 +383,8 @@ impl Resolve<ReadArgs> for ListDockerContainers {
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(containers) = &cache.containers {
Ok(containers.clone())
if let Some(docker) = &cache.docker {
Ok(docker.containers.clone())
} else {
Ok(Vec::new())
}
@@ -398,18 +397,12 @@ impl Resolve<ReadArgs> for ListAllDockerContainers {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListAllDockerContainersResponse> {
let servers = resource::list_for_user::<Server>(
Default::default(),
ServerQuery::builder().names(self.servers.clone()).build(),
user,
PermissionLevel::Read.into(),
&[],
)
.await?
.into_iter()
.filter(|server| {
self.servers.is_empty()
|| self.servers.contains(&server.id)
|| self.servers.contains(&server.name)
});
.await?;
let mut containers = Vec::<ContainerListItem>::new();
@@ -417,9 +410,18 @@ impl Resolve<ReadArgs> for ListAllDockerContainers {
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(more_containers) = &cache.containers {
containers.extend(more_containers.clone());
}
let Some(docker) = &cache.docker else {
continue;
};
let more = docker
.containers
.iter()
.filter(|container| {
self.containers.is_empty()
|| self.containers.contains(&container.name)
})
.cloned();
containers.extend(more);
}
Ok(containers)
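The rewritten `ListAllDockerContainers` above applies the same "empty list means no filter" convention twice: once for the server query and once for the cached container names. A small sketch of that convention (hypothetical helper, not the Komodo API):

```rust
// Hypothetical sketch: an empty `wanted` list keeps every item, a non-empty
// list keeps only matching items, mirroring the container filter above.
fn filter_names<'a>(
    items: &'a [String],
    wanted: &'a [String],
) -> impl Iterator<Item = &'a String> {
    items
        .iter()
        .filter(move |item| wanted.is_empty() || wanted.contains(*item))
}

fn main() {
    let items = vec!["api".to_string(), "db".to_string()];
    // An empty filter list keeps every item.
    assert_eq!(filter_names(&items, &[]).count(), 2);
    // A non-empty filter list keeps only matching items.
    let wanted = vec!["db".to_string()];
    let kept: Vec<String> =
        filter_names(&items, &wanted).cloned().collect();
    assert_eq!(kept, vec!["db".to_string()]);
    println!("ok");
}
```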
@@ -447,8 +449,8 @@ impl Resolve<ReadArgs> for GetDockerContainersSummary {
.get_or_insert_default(&server.id)
.await;
if let Some(containers) = &cache.containers {
for container in containers {
if let Some(docker) = &cache.docker {
for container in &docker.containers {
res.total += 1;
match container.state {
ContainerStateStatusEnum::Created
@@ -586,12 +588,12 @@ impl Resolve<ReadArgs> for GetResourceMatchingContainer {
}
// then check stacks
let stacks =
resource::list_full_for_user_using_document::<Stack>(
doc! { "config.server_id": &server.id },
user,
)
.await?;
let stacks = list_resources_for_user::<Stack>(
doc! { "config.server_id": &server.id },
user,
PermissionLevel::Read.into(),
)
.await?;
// check matching stack
for stack in stacks {
@@ -640,8 +642,8 @@ impl Resolve<ReadArgs> for ListDockerNetworks {
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(networks) = &cache.networks {
Ok(networks.clone())
if let Some(docker) = &cache.docker {
Ok(docker.networks.clone())
} else {
Ok(Vec::new())
}
@@ -693,8 +695,8 @@ impl Resolve<ReadArgs> for ListDockerImages {
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(images) = &cache.images {
Ok(images.clone())
if let Some(docker) = &cache.docker {
Ok(docker.images.clone())
} else {
Ok(Vec::new())
}
@@ -774,8 +776,8 @@ impl Resolve<ReadArgs> for ListDockerVolumes {
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(volumes) = &cache.volumes {
Ok(volumes.clone())
if let Some(docker) = &cache.docker {
Ok(docker.volumes.clone())
} else {
Ok(Vec::new())
}
@@ -824,76 +826,54 @@ impl Resolve<ReadArgs> for ListComposeProjects {
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(projects) = &cache.projects {
Ok(projects.clone())
if let Some(docker) = &cache.docker {
Ok(docker.projects.clone())
} else {
Ok(Vec::new())
}
}
}
#[derive(Default)]
struct TerminalCacheItem {
list: Vec<TerminalInfo>,
ttl: i64,
}
// impl Resolve<ReadArgs> for ListAllTerminals {
// async fn resolve(
// self,
// args: &ReadArgs,
// ) -> Result<Self::Response, Self::Error> {
// // match self.tar
// let mut terminals = resource::list_full_for_user::<Server>(
// self.query, &args.user, &all_tags,
// )
// .await?
// .into_iter()
// .map(|server| async move {
// (
// list_terminals_inner(&server, self.fresh).await,
// (server.id, server.name),
// )
// })
// .collect::<FuturesUnordered<_>>()
// .collect::<Vec<_>>()
// .await
// .into_iter()
// .flat_map(|(terminals, server)| {
// let terminals = terminals.ok()?;
// Some((terminals, server))
// })
// .flat_map(|(terminals, (server_id, server_name))| {
// terminals.into_iter().map(move |info| {
// TerminalInfoWithServer::from_terminal_info(
// &server_id,
// &server_name,
// info,
// )
// })
// })
// .collect::<Vec<_>>();
const TERMINAL_CACHE_TIMEOUT: i64 = 30_000;
// terminals.sort_by(|a, b| {
// a.server_name.cmp(&b.server_name).then(a.name.cmp(&b.name))
// });
#[derive(Default)]
struct TerminalCache(
std::sync::Mutex<
HashMap<String, Arc<tokio::sync::Mutex<TerminalCacheItem>>>,
>,
);
impl TerminalCache {
fn get_or_insert(
&self,
server_id: String,
) -> Arc<tokio::sync::Mutex<TerminalCacheItem>> {
if let Some(cached) =
self.0.lock().unwrap().get(&server_id).cloned()
{
return cached;
}
let to_cache =
Arc::new(tokio::sync::Mutex::new(TerminalCacheItem::default()));
self.0.lock().unwrap().insert(server_id, to_cache.clone());
to_cache
}
}
fn terminals_cache() -> &'static TerminalCache {
static TERMINALS: OnceLock<TerminalCache> = OnceLock::new();
TERMINALS.get_or_init(Default::default)
}
impl Resolve<ReadArgs> for ListTerminals {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListTerminalsResponse> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
let cache = terminals_cache().get_or_insert(server.id.clone());
let mut cache = cache.lock().await;
if self.fresh || komodo_timestamp() > cache.ttl {
cache.list = periphery_client(&server)
.await?
.request(periphery_client::api::terminal::ListTerminals {
container: None,
})
.await
.context("Failed to get fresh terminal list")?;
cache.ttl = komodo_timestamp() + TERMINAL_CACHE_TIMEOUT;
Ok(cache.list.clone())
} else {
Ok(cache.list.clone())
}
}
}
// Ok(terminals)
// }
// }
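The `ListTerminals` cache above refreshes a per-server terminal list only when `fresh` is requested or a 30-second deadline passes. A single-threaded sketch of just the TTL logic (the real code additionally wraps each entry in an async `Mutex` so only one refresh runs per server at a time):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical sketch of the TTL cache pattern above; values and keys are
// illustrative stand-ins for the terminal list and server id.
const TTL_MS: i64 = 30_000;

struct Entry {
    value: Vec<String>,
    deadline: i64,
}

struct Cache(Mutex<HashMap<String, Entry>>);

impl Cache {
    fn get(
        &self,
        key: &str,
        now: i64,
        fresh: bool,
        fetch: impl FnOnce() -> Vec<String>,
    ) -> Vec<String> {
        let mut map = self.0.lock().unwrap();
        // Refresh when forced, missing, or past the deadline.
        let stale = fresh
            || map.get(key).map(|e| now > e.deadline).unwrap_or(true);
        if stale {
            let value = fetch();
            map.insert(
                key.to_string(),
                Entry { value: value.clone(), deadline: now + TTL_MS },
            );
            return value;
        }
        map[key].value.clone()
    }
}

fn main() {
    let cache = Cache(Mutex::new(HashMap::new()));
    // First call misses the cache and fetches.
    let a = cache.get("srv", 0, false, || vec!["t1".into()]);
    assert_eq!(a, vec!["t1".to_string()]);
    // Within the TTL the cached value is returned; fetch is not called.
    let b = cache.get("srv", 10_000, false, || unreachable!());
    assert_eq!(b, vec!["t1".to_string()]);
    // After the TTL the value is re-fetched.
    let c = cache.get("srv", 40_000, false, || vec!["t2".into()]);
    assert_eq!(c, vec!["t2".to_string()]);
    println!("ok");
}
```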

View File

@@ -4,9 +4,11 @@ use anyhow::{Context, anyhow};
use komodo_client::{
api::read::*,
entities::{
docker::container::Container,
SwarmOrServer,
docker::{
container::Container, service::SwarmService, stack::SwarmStack,
},
permission::PermissionLevel,
server::{Server, ServerState},
stack::{Stack, StackActionState, StackListItem, StackState},
},
};
@@ -14,14 +16,18 @@ use periphery_client::api::{
compose::{GetComposeLog, GetComposeLogSearch},
container::InspectContainer,
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError as _;
use crate::{
helpers::{periphery_client, query::get_all_tags},
helpers::{
periphery_client, query::get_all_tags, swarm::swarm_request,
},
permission::get_check_permissions,
resource,
stack::get_stack_and_server,
state::{action_states, server_status_cache, stack_status_cache},
stack::setup_stack_execution,
state::{action_states, stack_status_cache},
};
use super::ReadArgs;
@@ -73,28 +79,53 @@ impl Resolve<ReadArgs> for GetStackLog {
) -> serror::Result<GetStackLogResponse> {
let GetStackLog {
stack,
services,
mut services,
tail,
timestamps,
} = self;
let (stack, server) = get_stack_and_server(
let (stack, swarm_or_server) = setup_stack_execution(
&stack,
user,
PermissionLevel::Read.logs(),
true,
)
.await?;
let res = periphery_client(&server)
.await?
.request(GetComposeLog {
project: stack.project_name(false),
services,
tail,
timestamps,
})
.await
.context("Failed to get stack log from periphery")?;
Ok(res)
let log = match swarm_or_server {
SwarmOrServer::Swarm(swarm) => {
let service = services.pop().context(
"Must pass single service for Swarm mode Stack logs",
)?;
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::GetSwarmServiceLog {
// The actual service name on swarm will be stackname_servicename
service: format!(
"{}_{service}",
stack.project_name(false)
),
tail,
timestamps,
no_task_ids: false,
no_resolve: false,
details: false,
},
)
.await
.context("Failed to get stack service log from swarm")?
}
SwarmOrServer::Server(server) => periphery_client(&server)
.await?
.request(GetComposeLog {
project: stack.project_name(false),
services,
tail,
timestamps,
})
.await
.context("Failed to get stack log from periphery")?,
};
Ok(log)
}
}
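The comment in the Swarm branch above notes that the on-swarm service name is `stackname_servicename`: `docker stack deploy` prefixes every compose service with the stack name. A trivial sketch of that join:

```rust
// Docker's `stack deploy` names each service `<stack>_<service>`; the project
// name here stands in for `stack.project_name(false)` in the handler above.
fn swarm_service_name(project: &str, service: &str) -> String {
    format!("{project}_{service}")
}

fn main() {
    assert_eq!(swarm_service_name("web", "api"), "web_api");
    println!("ok");
}
```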
@@ -105,32 +136,55 @@ impl Resolve<ReadArgs> for SearchStackLog {
) -> serror::Result<SearchStackLogResponse> {
let SearchStackLog {
stack,
services,
mut services,
terms,
combinator,
invert,
timestamps,
} = self;
let (stack, server) = get_stack_and_server(
let (stack, swarm_or_server) = setup_stack_execution(
&stack,
user,
PermissionLevel::Read.logs(),
true,
)
.await?;
let res = periphery_client(&server)
.await?
.request(GetComposeLogSearch {
project: stack.project_name(false),
services,
terms,
combinator,
invert,
timestamps,
})
.await
.context("Failed to search stack log from periphery")?;
Ok(res)
let log = match swarm_or_server {
SwarmOrServer::Swarm(swarm) => {
let service = services.pop().context(
"Must pass single service for Swarm mode Stack logs",
)?;
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::GetSwarmServiceLogSearch {
// The actual service name on swarm will be stackname_servicename
service: format!(
"{}_{service}",
stack.project_name(false)
),
terms,
combinator,
invert,
timestamps,
no_task_ids: false,
no_resolve: false,
details: false,
},
)
.await
.context("Failed to get stack service log from swarm")?
}
SwarmOrServer::Server(server) => periphery_client(&server)
.await?
.request(GetComposeLogSearch {
project: stack.project_name(false),
services,
terms,
combinator,
invert,
timestamps,
})
.await
.context("Failed to search stack log from periphery")?,
};
Ok(log)
}
}
@@ -140,38 +194,29 @@ impl Resolve<ReadArgs> for InspectStackContainer {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Container> {
let InspectStackContainer { stack, service } = self;
let stack = get_check_permissions::<Stack>(
let (stack, swarm_or_server) = setup_stack_execution(
&stack,
user,
PermissionLevel::Read.inspect(),
)
.await?;
if stack.config.server_id.is_empty() {
return Err(
anyhow!("Cannot inspect stack, not attached to any server")
.into(),
);
}
let server =
resource::get::<Server>(&stack.config.server_id).await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
let SwarmOrServer::Server(server) = swarm_or_server else {
return Err(
anyhow!(
"Cannot inspect container: server is {:?}",
cache.state
"InspectStackContainer should not be called for Stack in Swarm Mode"
)
.into(),
.status_code(StatusCode::BAD_REQUEST),
);
}
};
let services = &stack_status_cache()
.get(&stack.id)
.await
.unwrap_or_default()
.curr
.services;
let Some(name) = services
.iter()
.find(|s| s.service == service)
@@ -181,14 +226,101 @@ impl Resolve<ReadArgs> for InspectStackContainer {
"No service found matching '{service}'. Was the stack last deployed manually?"
).into());
};
let res = periphery_client(&server)
.await?
.request(InspectContainer { name })
.await?;
.await
.context("Failed to inspect container on server")?;
Ok(res)
}
}
impl Resolve<ReadArgs> for InspectStackSwarmService {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<SwarmService> {
let InspectStackSwarmService { stack, service } = self;
let (stack, swarm_or_server) = setup_stack_execution(
&stack,
user,
PermissionLevel::Read.inspect(),
)
.await?;
let SwarmOrServer::Swarm(swarm) = swarm_or_server else {
return Err(
anyhow!(
"InspectStackSwarmService should only be called for Stack in Swarm Mode"
)
.status_code(StatusCode::BAD_REQUEST),
);
};
let services = &stack_status_cache()
.get(&stack.id)
.await
.unwrap_or_default()
.curr
.services;
let Some(service) = services
.iter()
.find(|s| s.service == service)
.and_then(|s| {
s.swarm_service.as_ref().and_then(|c| c.name.clone())
})
else {
return Err(anyhow!(
"No service found matching '{service}'. Was the stack last deployed manually?"
).into());
};
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::InspectSwarmService { service },
)
.await
.context("Failed to inspect service on swarm")
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for InspectStackSwarmInfo {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<SwarmStack> {
let (stack, swarm_or_server) = setup_stack_execution(
&self.stack,
user,
PermissionLevel::Read.inspect(),
)
.await?;
let SwarmOrServer::Swarm(swarm) = swarm_or_server else {
return Err(
anyhow!(
"InspectStackSwarmInfo should only be called for Stack in Swarm Mode"
)
.status_code(StatusCode::BAD_REQUEST),
);
};
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::InspectSwarmStack {
stack: stack.project_name(false),
},
)
.await
.context("Failed to inspect stack info on swarm")
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for ListCommonStackExtraArgs {
async fn resolve(
self,
@@ -206,7 +338,7 @@ impl Resolve<ReadArgs> for ListCommonStackExtraArgs {
&all_tags,
)
.await
.context("failed to get resources matching query")?;
.context("Failed to get resources matching query")?;
// first collect with guaranteed uniqueness
let mut res = HashSet::<String>::new();
@@ -240,7 +372,7 @@ impl Resolve<ReadArgs> for ListCommonStackBuildExtraArgs {
&all_tags,
)
.await
.context("failed to get resources matching query")?;
.context("Failed to get resources matching query")?;
// first collect with guaranteed uniqueness
let mut res = HashSet::<String>::new();
@@ -348,7 +480,7 @@ impl Resolve<ReadArgs> for GetStacksSummary {
&[],
)
.await
.context("failed to get stacks from db")?;
.context("Failed to get stacks from database")?;
let mut res = GetStacksSummaryResponse::default();

View File

@@ -0,0 +1,519 @@
use anyhow::{Context, anyhow};
use komodo_client::{
api::read::*,
entities::{
permission::PermissionLevel,
swarm::{Swarm, SwarmActionState, SwarmListItem, SwarmState},
},
};
use resolver_api::Resolve;
use crate::{
helpers::{query::get_all_tags, swarm::swarm_request},
permission::get_check_permissions,
resource,
state::{action_states, server_status_cache, swarm_status_cache},
};
use super::ReadArgs;
impl Resolve<ReadArgs> for GetSwarm {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Swarm> {
Ok(
get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.into(),
)
.await?,
)
}
}
impl Resolve<ReadArgs> for ListSwarms {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Vec<SwarmListItem>> {
let all_tags = if self.query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Swarm>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
impl Resolve<ReadArgs> for ListFullSwarms {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListFullSwarmsResponse> {
let all_tags = if self.query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
Ok(
resource::list_full_for_user::<Swarm>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
impl Resolve<ReadArgs> for GetSwarmActionState {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<SwarmActionState> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
.swarm
.get(&swarm.id)
.await
.unwrap_or_default()
.get()?;
Ok(action_state)
}
}
impl Resolve<ReadArgs> for GetSwarmsSummary {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetSwarmsSummaryResponse> {
let swarms = resource::list_full_for_user::<Swarm>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
.context("Failed to get swarms from database")?;
let mut res = GetSwarmsSummaryResponse::default();
let cache = swarm_status_cache();
for swarm in swarms {
res.total += 1;
match cache
.get(&swarm.id)
.await
.map(|status| status.state)
.unwrap_or_default()
{
SwarmState::Unknown => {
res.unknown += 1;
}
SwarmState::Healthy => {
res.healthy += 1;
}
SwarmState::Unhealthy => {
res.unhealthy += 1;
}
}
}
Ok(res)
}
}
impl Resolve<ReadArgs> for InspectSwarm {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<InspectSwarmResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.inspect(),
)
.await?;
let cache =
swarm_status_cache().get_or_insert_default(&swarm.id).await;
let inspect = cache
.inspect
.as_ref()
.cloned()
.context("SwarmInspectInfo not available")?;
Ok(inspect)
}
}
impl Resolve<ReadArgs> for ListSwarmNodes {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListSwarmNodesResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.into(),
)
.await?;
let cache =
swarm_status_cache().get_or_insert_default(&swarm.id).await;
if let Some(lists) = &cache.lists {
Ok(lists.nodes.clone())
} else {
Ok(Vec::new())
}
}
}
impl Resolve<ReadArgs> for InspectSwarmNode {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<InspectSwarmNodeResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.inspect(),
)
.await?;
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::InspectSwarmNode {
node: self.node,
},
)
.await
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for ListSwarmServices {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListSwarmServicesResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.into(),
)
.await?;
let cache =
swarm_status_cache().get_or_insert_default(&swarm.id).await;
if let Some(lists) = &cache.lists {
Ok(lists.services.clone())
} else {
Ok(Vec::new())
}
}
}
impl Resolve<ReadArgs> for InspectSwarmService {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<InspectSwarmServiceResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.inspect(),
)
.await?;
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::InspectSwarmService {
service: self.service,
},
)
.await
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for GetSwarmServiceLog {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetSwarmServiceLogResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.logs(),
)
.await?;
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::GetSwarmServiceLog {
service: self.service,
tail: self.tail,
timestamps: self.timestamps,
no_task_ids: self.no_task_ids,
no_resolve: self.no_resolve,
details: self.details,
},
)
.await
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for SearchSwarmServiceLog {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<SearchSwarmServiceLogResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.logs(),
)
.await?;
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::GetSwarmServiceLogSearch {
service: self.service,
terms: self.terms,
combinator: self.combinator,
invert: self.invert,
timestamps: self.timestamps,
no_task_ids: self.no_task_ids,
no_resolve: self.no_resolve,
details: self.details,
},
)
.await
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for ListSwarmTasks {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListSwarmTasksResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.into(),
)
.await?;
let cache =
swarm_status_cache().get_or_insert_default(&swarm.id).await;
if let Some(lists) = &cache.lists {
Ok(lists.tasks.clone())
} else {
Ok(Vec::new())
}
}
}
impl Resolve<ReadArgs> for InspectSwarmTask {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<InspectSwarmTaskResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.inspect(),
)
.await?;
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::InspectSwarmTask {
task: self.task,
},
)
.await
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for ListSwarmSecrets {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListSwarmSecretsResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.into(),
)
.await?;
let cache =
swarm_status_cache().get_or_insert_default(&swarm.id).await;
if let Some(lists) = &cache.lists {
Ok(lists.secrets.clone())
} else {
Ok(Vec::new())
}
}
}
impl Resolve<ReadArgs> for InspectSwarmSecret {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<InspectSwarmSecretResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.inspect(),
)
.await?;
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::InspectSwarmSecret {
secret: self.secret,
},
)
.await
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for ListSwarmConfigs {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListSwarmConfigsResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.into(),
)
.await?;
let cache =
swarm_status_cache().get_or_insert_default(&swarm.id).await;
if let Some(lists) = &cache.lists {
Ok(lists.configs.clone())
} else {
Ok(Vec::new())
}
}
}
impl Resolve<ReadArgs> for InspectSwarmConfig {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<InspectSwarmConfigResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.inspect(),
)
.await?;
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::InspectSwarmConfig {
config: self.config,
},
)
.await
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for ListSwarmStacks {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListSwarmStacksResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.into(),
)
.await?;
let cache =
swarm_status_cache().get_or_insert_default(&swarm.id).await;
if let Some(lists) = &cache.lists {
Ok(lists.stacks.clone())
} else {
Ok(Vec::new())
}
}
}
impl Resolve<ReadArgs> for InspectSwarmStack {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<InspectSwarmStackResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.inspect(),
)
.await?;
swarm_request(
&swarm.config.server_ids,
periphery_client::api::swarm::InspectSwarmStack {
stack: self.stack,
},
)
.await
.map_err(Into::into)
}
}
impl Resolve<ReadArgs> for ListSwarmNetworks {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListSwarmNetworksResponse> {
let swarm = get_check_permissions::<Swarm>(
&self.swarm,
user,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache();
for server_id in swarm.config.server_ids {
let Some(status) = cache.get(&server_id).await else {
continue;
};
let Some(docker) = &status.docker else {
continue;
};
let networks = docker
.networks
.iter()
.filter(|network| {
network.driver.as_deref() == Some("overlay")
})
.cloned()
.collect::<Vec<_>>();
return Ok(networks);
}
Err(
anyhow!(
"Failed to retrieve swarm networks from any manager node."
)
.into(),
)
}
}
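The `ListSwarmNetworks` resolver above walks the swarm's manager nodes in order, skips any without cached status, returns the first usable answer, and errors only when every node fails. A std-only sketch of that fallback pattern (names here are illustrative, not Komodo APIs):

```rust
// Try each candidate source in order; skip unavailable ones and
// return the first one that yields data. Error only if all fail.
fn first_available<T: Clone>(
    candidates: &[Option<T>],
) -> Result<T, String> {
    for candidate in candidates {
        if let Some(value) = candidate {
            return Ok(value.clone());
        }
    }
    Err("Failed to retrieve data from any node.".into())
}
```

The early `return` inside the loop mirrors the `return Ok(networks)` in the resolver: the first healthy manager wins, and the trailing `Err` is only reached when the loop exhausts every server id.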


@@ -0,0 +1,247 @@
use anyhow::Context as _;
use futures_util::{
FutureExt, StreamExt as _, stream::FuturesUnordered,
};
use komodo_client::{
api::read::{ListTerminals, ListTerminalsResponse},
entities::{
deployment::Deployment,
permission::PermissionLevel,
server::Server,
stack::Stack,
terminal::{Terminal, TerminalTarget},
user::User,
},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCode;
use crate::{
helpers::periphery_client, permission::get_check_permissions,
resource,
};
use super::ReadArgs;
//
impl Resolve<ReadArgs> for ListTerminals {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListTerminalsResponse> {
let Some(target) = self.target else {
return list_all_terminals_for_user(user, self.use_names).await;
};
match &target {
TerminalTarget::Server { server } => {
let server = server
.as_ref()
.context("Must provide 'target.params.server'")
.status_code(StatusCode::BAD_REQUEST)?;
let server = get_check_permissions::<Server>(
server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
list_terminals_on_server(&server, Some(target)).await
}
TerminalTarget::Container { server, .. } => {
let server = get_check_permissions::<Server>(
server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
list_terminals_on_server(&server, Some(target)).await
}
TerminalTarget::Stack { stack, .. } => {
let server = get_check_permissions::<Stack>(
stack,
user,
PermissionLevel::Read.terminal(),
)
.await?
.config
.server_id;
let server = resource::get::<Server>(&server).await?;
list_terminals_on_server(&server, Some(target)).await
}
TerminalTarget::Deployment { deployment } => {
let server = get_check_permissions::<Deployment>(
deployment,
user,
PermissionLevel::Read.terminal(),
)
.await?
.config
.server_id;
let server = resource::get::<Server>(&server).await?;
list_terminals_on_server(&server, Some(target)).await
}
}
}
}
async fn list_all_terminals_for_user(
user: &User,
use_names: bool,
) -> serror::Result<Vec<Terminal>> {
let (mut servers, stacks, deployments) = tokio::try_join!(
resource::list_full_for_user::<Server>(
Default::default(),
user,
PermissionLevel::Read.terminal(),
&[]
)
.map(|res| res.map(|servers| servers
.into_iter()
// true denotes user actually has permission on this Server.
.map(|server| (server, true))
.collect::<Vec<_>>())),
resource::list_full_for_user::<Stack>(
Default::default(),
user,
PermissionLevel::Read.terminal(),
&[]
),
resource::list_full_for_user::<Deployment>(
Default::default(),
user,
PermissionLevel::Read.terminal(),
&[]
),
)?;
// Ensure servers referenced only by Stacks or Deployments are also present to query.
for stack in &stacks {
if !stack.config.server_id.is_empty()
&& !servers
.iter()
.any(|(server, _)| server.id == stack.config.server_id)
{
let server =
resource::get::<Server>(&stack.config.server_id).await?;
servers.push((server, false));
}
}
for deployment in &deployments {
if !deployment.config.server_id.is_empty()
&& !servers
.iter()
.any(|(server, _)| server.id == deployment.config.server_id)
{
let server =
resource::get::<Server>(&deployment.config.server_id).await?;
servers.push((server, false));
}
}
let mut terminals = servers
.into_iter()
.map(|(server, server_permission)| async move {
(
list_terminals_on_server(&server, None).await,
(server.id, server.name, server_permission),
)
})
.collect::<FuturesUnordered<_>>()
.collect::<Vec<_>>()
.await
.into_iter()
.flat_map(
|(terminals, (server_id, server_name, server_permission))| {
let terminals = terminals
.ok()?
.into_iter()
.filter_map(|mut terminal| {
// Only keep terminals with appropriate perms.
match terminal.target.clone() {
TerminalTarget::Server { .. } => server_permission
.then(|| {
terminal.target = TerminalTarget::Server {
server: Some(if use_names {
server_name.clone()
} else {
server_id.clone()
}),
};
terminal
}),
TerminalTarget::Container { container, .. } => {
server_permission.then(|| {
terminal.target = TerminalTarget::Container {
server: if use_names {
server_name.clone()
} else {
server_id.clone()
},
container,
};
terminal
})
}
TerminalTarget::Stack { stack, service } => {
stacks.iter().find(|s| s.id == stack).map(|s| {
terminal.target = TerminalTarget::Stack {
stack: if use_names {
s.name.clone()
} else {
s.id.clone()
},
service,
};
terminal
})
}
TerminalTarget::Deployment { deployment } => {
deployments.iter().find(|d| d.id == deployment).map(
|d| {
terminal.target = TerminalTarget::Deployment {
deployment: if use_names {
d.name.clone()
} else {
d.id.clone()
},
};
terminal
},
)
}
}
})
.collect::<Vec<_>>();
Some(terminals)
},
)
.flatten()
.collect::<Vec<_>>();
terminals.sort_by(|a, b| {
a.target.cmp(&b.target).then(a.name.cmp(&b.name))
});
Ok(terminals)
}
async fn list_terminals_on_server(
server: &Server,
target: Option<TerminalTarget>,
) -> serror::Result<Vec<Terminal>> {
periphery_client(server)
.await?
.request(periphery_client::api::terminal::ListTerminals {
target,
})
.await
.with_context(|| {
format!(
"Failed to get Terminal list from Server {} ({})",
server.name, server.id
)
})
.map_err(Into::into)
}
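`list_all_terminals_for_user` finishes with `a.target.cmp(&b.target).then(a.name.cmp(&b.name))`. `Ordering::then` only consults the second comparison when the first is `Equal`, which is how you get a two-key sort without a tuple allocation. A minimal sketch with illustrative data:

```rust
// Sort (target, name) pairs by target first, with name as the tie-break.
// `then` short-circuits: the name comparison only matters when targets match.
fn sort_terminals(items: &mut Vec<(&str, &str)>) {
    items.sort_by(|a, b| a.0.cmp(&b.0).then(a.1.cmp(&b.1)));
}
```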


@@ -11,7 +11,8 @@ use komodo_client::{
builder::Builder, deployment::Deployment,
permission::PermissionLevel, procedure::Procedure, repo::Repo,
resource::ResourceQuery, server::Server, stack::Stack,
sync::ResourceSync, toml::ResourcesToml, user::User,
swarm::Swarm, sync::ResourceSync, toml::ResourcesToml,
user::User,
},
};
use resolver_api::Resolve;
@@ -207,42 +208,21 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
let ReadArgs { user } = args;
for target in targets {
match target {
ResourceTarget::Alerter(id) => {
let mut alerter = get_check_permissions::<Alerter>(
ResourceTarget::Swarm(id) => {
let mut swarm = get_check_permissions::<Swarm>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Alerter::replace_ids(&mut alerter);
res.alerters.push(convert_resource::<Alerter>(
alerter,
Swarm::replace_ids(&mut swarm);
res.swarms.push(convert_resource::<Swarm>(
swarm,
false,
vec![],
&id_to_tags,
))
}
ResourceTarget::ResourceSync(id) => {
let mut sync = get_check_permissions::<ResourceSync>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
if sync.config.file_contents.is_empty()
&& (sync.config.files_on_host
|| !sync.config.repo.is_empty()
|| !sync.config.linked_repo.is_empty())
{
ResourceSync::replace_ids(&mut sync);
res.resource_syncs.push(convert_resource::<ResourceSync>(
sync,
false,
vec![],
&id_to_tags,
))
}
}
ResourceTarget::Server(id) => {
let mut server = get_check_permissions::<Server>(
&id,
@@ -258,31 +238,16 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
&id_to_tags,
))
}
ResourceTarget::Builder(id) => {
let mut builder = get_check_permissions::<Builder>(
ResourceTarget::Stack(id) => {
let mut stack = get_check_permissions::<Stack>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Builder::replace_ids(&mut builder);
res.builders.push(convert_resource::<Builder>(
builder,
false,
vec![],
&id_to_tags,
))
}
ResourceTarget::Build(id) => {
let mut build = get_check_permissions::<Build>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Build::replace_ids(&mut build);
res.builds.push(convert_resource::<Build>(
build,
Stack::replace_ids(&mut stack);
res.stacks.push(convert_resource::<Stack>(
stack,
false,
vec![],
&id_to_tags,
@@ -303,6 +268,21 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
&id_to_tags,
))
}
ResourceTarget::Build(id) => {
let mut build = get_check_permissions::<Build>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Build::replace_ids(&mut build);
res.builds.push(convert_resource::<Build>(
build,
false,
vec![],
&id_to_tags,
))
}
ResourceTarget::Repo(id) => {
let mut repo = get_check_permissions::<Repo>(
&id,
@@ -318,21 +298,6 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
&id_to_tags,
))
}
ResourceTarget::Stack(id) => {
let mut stack = get_check_permissions::<Stack>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Stack::replace_ids(&mut stack);
res.stacks.push(convert_resource::<Stack>(
stack,
false,
vec![],
&id_to_tags,
))
}
ResourceTarget::Procedure(id) => {
let mut procedure = get_check_permissions::<Procedure>(
&id,
@@ -363,6 +328,57 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
&id_to_tags,
));
}
ResourceTarget::ResourceSync(id) => {
let mut sync = get_check_permissions::<ResourceSync>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
if sync.config.file_contents.is_empty()
&& (sync.config.files_on_host
|| !sync.config.repo.is_empty()
|| !sync.config.linked_repo.is_empty())
{
ResourceSync::replace_ids(&mut sync);
res.resource_syncs.push(convert_resource::<ResourceSync>(
sync,
false,
vec![],
&id_to_tags,
))
}
}
ResourceTarget::Builder(id) => {
let mut builder = get_check_permissions::<Builder>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Builder::replace_ids(&mut builder);
res.builders.push(convert_resource::<Builder>(
builder,
false,
vec![],
&id_to_tags,
))
}
ResourceTarget::Alerter(id) => {
let mut alerter = get_check_permissions::<Alerter>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Alerter::replace_ids(&mut alerter);
res.alerters.push(convert_resource::<Alerter>(
alerter,
false,
vec![],
&id_to_tags,
))
}
ResourceTarget::System(_) => continue,
};
}


@@ -1,6 +1,6 @@
use std::collections::HashMap;
use anyhow::{Context, anyhow};
use anyhow::Context;
use database::mungos::{
by_id::find_one_by_id,
find::find_collect,
@@ -9,18 +9,7 @@ use database::mungos::{
use komodo_client::{
api::read::{GetUpdate, ListUpdates, ListUpdatesResponse},
entities::{
ResourceTarget,
action::Action,
alerter::Alerter,
build::Build,
builder::Builder,
deployment::Deployment,
permission::PermissionLevel,
procedure::Procedure,
repo::Repo,
server::Server,
stack::Stack,
sync::ResourceSync,
update::{Update, UpdateListItem},
user::User,
},
@@ -29,7 +18,9 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
permission::{get_check_permissions, get_resource_ids_for_user},
permission::{
check_user_target_access, user_resource_target_query,
},
state::db_client,
};
@@ -42,120 +33,7 @@ impl Resolve<ReadArgs> for ListUpdates {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListUpdatesResponse> {
let query = if user.admin || core_config().transparent_mode {
self.query
} else {
let server_query = get_resource_ids_for_user::<Server>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Server", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Server" });
let deployment_query =
get_resource_ids_for_user::<Deployment>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Deployment", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Deployment" });
let stack_query = get_resource_ids_for_user::<Stack>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Stack", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Stack" });
let build_query = get_resource_ids_for_user::<Build>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Build", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Build" });
let repo_query = get_resource_ids_for_user::<Repo>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Repo", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Repo" });
let procedure_query =
get_resource_ids_for_user::<Procedure>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Procedure", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Procedure" });
let action_query = get_resource_ids_for_user::<Action>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Action", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Action" });
let builder_query = get_resource_ids_for_user::<Builder>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Builder", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Builder" });
let alerter_query = get_resource_ids_for_user::<Alerter>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Alerter", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Alerter" });
let resource_sync_query = get_resource_ids_for_user::<
ResourceSync,
>(user)
.await?
.map(|ids| {
doc! {
"target.type": "ResourceSync", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "ResourceSync" });
let mut query = self.query.unwrap_or_default();
query.extend(doc! {
"$or": [
server_query,
deployment_query,
stack_query,
build_query,
repo_query,
procedure_query,
action_query,
alerter_query,
builder_query,
resource_sync_query,
]
});
query.into()
};
let query = user_resource_target_query(user, self.query).await?;
let usernames = find_collect(&db_client().users, None, None)
.await
@@ -222,93 +100,12 @@ impl Resolve<ReadArgs> for GetUpdate {
if user.admin || core_config().transparent_mode {
return Ok(update);
}
match &update.target {
ResourceTarget::System(_) => {
return Err(
anyhow!("user must be admin to view system updates").into(),
);
}
ResourceTarget::Server(id) => {
get_check_permissions::<Server>(
id,
user,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Deployment(id) => {
get_check_permissions::<Deployment>(
id,
user,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Build(id) => {
get_check_permissions::<Build>(
id,
user,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Repo(id) => {
get_check_permissions::<Repo>(
id,
user,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Builder(id) => {
get_check_permissions::<Builder>(
id,
user,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Alerter(id) => {
get_check_permissions::<Alerter>(
id,
user,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Procedure(id) => {
get_check_permissions::<Procedure>(
id,
user,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Action(id) => {
get_check_permissions::<Action>(
id,
user,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::ResourceSync(id) => {
get_check_permissions::<ResourceSync>(
id,
user,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Stack(id) => {
get_check_permissions::<Stack>(
id,
user,
PermissionLevel::Read.into(),
)
.await?;
}
}
check_user_target_access(
&update.target,
user,
PermissionLevel::Read.into(),
)
.await?;
Ok(update)
}
}
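The refactor above collapses `GetUpdate`'s eleven-arm match into a single `check_user_target_access` call. A hedged sketch of that shape — the enum, names, and rules here are illustrative stand-ins, not the real Komodo implementation:

```rust
#[derive(Debug)]
enum Target {
    System,
    Server(String),
}

// One entry point owns the per-variant permission dispatch, so callers
// like GetUpdate no longer repeat the same match for every resource type.
fn check_target_access(
    target: &Target,
    admin: bool,
) -> Result<(), String> {
    match target {
        Target::System if !admin => {
            Err("user must be admin to view system updates".into())
        }
        // The real implementation would look up per-resource
        // permissions for the user here.
        _ => Ok(()),
    }
}
```

Centralizing the dispatch also means a new resource type only needs one match arm added, instead of one per call site.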


@@ -1,27 +1,15 @@
use anyhow::Context;
use axum::{Extension, Router, middleware, routing::post};
use komodo_client::{
api::terminal::*,
entities::{
deployment::Deployment, permission::PermissionLevel,
server::Server, stack::Stack, user::User,
},
};
use komodo_client::{api::terminal::*, entities::user::User};
use serror::Json;
use uuid::Uuid;
use crate::{
auth::auth_request, helpers::periphery_client,
permission::get_check_permissions, resource::get,
state::stack_status_cache,
auth::auth_request, helpers::terminal::setup_target_for_user,
};
pub fn router() -> Router {
Router::new()
.route("/execute", post(execute_terminal))
.route("/execute/container", post(execute_container_exec))
.route("/execute/deployment", post(execute_deployment_exec))
.route("/execute/stack", post(execute_stack_exec))
.layer(middleware::from_fn(auth_request))
}
@@ -29,211 +17,34 @@ pub fn router() -> Router {
// ExecuteTerminal
// =================
async fn execute_terminal(
Extension(user): Extension<User>,
Json(request): Json<ExecuteTerminalBody>,
) -> serror::Result<axum::body::Body> {
execute_terminal_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteTerminal",
skip(user),
skip_all,
fields(
user_id = user.id,
operator = user.id,
target,
terminal,
init = format!("{init:?}")
)
)]
async fn execute_terminal_inner(
req_id: Uuid,
ExecuteTerminalBody {
server,
async fn execute_terminal(
Extension(user): Extension<User>,
Json(ExecuteTerminalBody {
target,
terminal,
command,
}: ExecuteTerminalBody,
user: User,
init,
}): Json<ExecuteTerminalBody>,
) -> serror::Result<axum::body::Body> {
info!("/terminal/execute request | user: {}", user.username);
let server = get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let stream = periphery_client(&server)
.await?
.execute_terminal(terminal, command)
.await
.context("Failed to execute command on periphery")?;
Ok(axum::body::Body::from_stream(stream))
}
// ======================
// ExecuteContainerExec
// ======================
async fn execute_container_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteContainerExecBody>,
) -> serror::Result<axum::body::Body> {
execute_container_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteContainerExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_container_exec_inner(
req_id: Uuid,
ExecuteContainerExecBody {
server,
container,
shell,
command,
recreate,
}: ExecuteContainerExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("ExecuteContainerExec request | user: {}", user.username);
let server = get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server).await?;
let (target, terminal, periphery) =
setup_target_for_user(target, terminal, init, &user).await?;
let stream = periphery
.execute_container_exec(container, shell, command, recreate)
.execute_terminal(target, terminal, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
Ok(axum::body::Body::from_stream(stream))
}
// =======================
// ExecuteDeploymentExec
// =======================
async fn execute_deployment_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteDeploymentExecBody>,
) -> serror::Result<axum::body::Body> {
execute_deployment_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteDeploymentExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_deployment_exec_inner(
req_id: Uuid,
ExecuteDeploymentExecBody {
deployment,
shell,
command,
recreate,
}: ExecuteDeploymentExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("ExecuteDeploymentExec request | user: {}", user.username);
let deployment = get_check_permissions::<Deployment>(
&deployment,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let server = get::<Server>(&deployment.config.server_id).await?;
let periphery = periphery_client(&server).await?;
let stream = periphery
.execute_container_exec(deployment.name, shell, command, recreate)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
Ok(axum::body::Body::from_stream(stream))
}
// ==================
// ExecuteStackExec
// ==================
async fn execute_stack_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteStackExecBody>,
) -> serror::Result<axum::body::Body> {
execute_stack_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteStackExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_stack_exec_inner(
req_id: Uuid,
ExecuteStackExecBody {
stack,
service,
shell,
command,
recreate,
}: ExecuteStackExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("ExecuteStackExec request | user: {}", user.username);
let stack = get_check_permissions::<Stack>(
&stack,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let server = get::<Server>(&stack.config.server_id).await?;
let container = stack_status_cache()
.get(&stack.id)
.await
.context("could not get stack status")?
.curr
.services
.iter()
.find(|s| s.service == service)
.context("could not find service")?
.container
.as_ref()
.context("could not find service container")?
.name
.clone();
let periphery = periphery_client(&server).await?;
let stream = periphery
.execute_container_exec(container, shell, command, recreate)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
.context("Failed to execute command on Terminal")?;
Ok(axum::body::Body::from_stream(stream))
}


@@ -4,32 +4,47 @@ use anyhow::{Context, anyhow};
use axum::{
Extension, Json, Router, extract::Path, middleware, routing::post,
};
use data_encoding::BASE32_NOPAD;
use database::hash_password;
use database::mongo_indexed::doc;
use database::mungos::{
by_id::update_one_by_id, mongodb::bson::to_bson,
};
use derive_variants::EnumVariants;
use komodo_client::entities::{random_bytes, random_string};
use komodo_client::{
api::user::*,
entities::{api_key::ApiKey, komodo_timestamp, user::User},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use response::Response;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::{AddStatusCode, AddStatusCodeError};
use tower_sessions::Session;
use typeshare::typeshare;
use uuid::Uuid;
use webauthn_rs::prelude::PasskeyRegistration;
use crate::api::{
SESSION_KEY_PASSKEY_ENROLLMENT, SESSION_KEY_TOTP_ENROLLMENT,
memory_session_layer,
};
use crate::auth::totp::make_totp;
use crate::config::core_config;
use crate::helpers::validations::validate_api_key_name;
use crate::state::webauthn;
use crate::{
auth::auth_request,
helpers::{query::get_user, random_string},
state::db_client,
auth::auth_request, helpers::query::get_user, state::db_client,
};
use super::Variant;
pub struct UserArgs {
pub user: User,
/// Per-client session state
pub session: Option<Session>,
}
#[typeshare]
@@ -45,16 +60,24 @@ enum UserRequest {
SetLastSeenUpdate(SetLastSeenUpdate),
CreateApiKey(CreateApiKey),
DeleteApiKey(DeleteApiKey),
BeginTotpEnrollment(BeginTotpEnrollment),
ConfirmTotpEnrollment(ConfirmTotpEnrollment),
UnenrollTotp(UnenrollTotp),
BeginPasskeyEnrollment(BeginPasskeyEnrollment),
ConfirmPasskeyEnrollment(ConfirmPasskeyEnrollment),
UnenrollPasskey(UnenrollPasskey),
}
pub fn router() -> Router {
Router::new()
.route("/", post(handler))
.route("/{variant}", post(variant_handler))
.layer(memory_session_layer(60))
.layer(middleware::from_fn(auth_request))
}
async fn variant_handler(
session: Session,
user: Extension<User>,
Path(Variant { variant }): Path<Variant>,
Json(params): Json<serde_json::Value>,
@@ -63,11 +86,11 @@ async fn variant_handler(
"type": variant,
"params": params,
}))?;
handler(user, Json(req)).await
handler(session, user, Json(req)).await
}
#[instrument(name = "UserHandler", level = "debug", skip(user))]
async fn handler(
session: Session,
Extension(user): Extension<User>,
Json(request): Json<UserRequest>,
) -> serror::Result<axum::response::Response> {
@@ -77,7 +100,12 @@ async fn handler(
"/user request {req_id} | user: {} ({})",
user.username, user.id
);
let res = request.resolve(&UserArgs { user }).await;
let res = request
.resolve(&UserArgs {
user,
session: Some(session),
})
.await;
if let Err(e) = &res {
warn!("/user request {req_id} error: {:#}", e.error);
}
@@ -89,18 +117,16 @@ async fn handler(
const RECENTLY_VIEWED_MAX: usize = 10;
impl Resolve<UserArgs> for PushRecentlyViewed {
#[instrument(
name = "PushRecentlyViewed",
level = "debug",
skip(user)
)]
async fn resolve(
self,
UserArgs { user }: &UserArgs,
UserArgs { user, .. }: &UserArgs,
) -> serror::Result<PushRecentlyViewedResponse> {
let user = get_user(&user.id).await?;
let (resource_type, id) = self.resource.extract_variant_id();
let field = format!("recents.{resource_type}");
let update = match user.recents.get(&resource_type) {
Some(recents) => {
let mut recents = recents
@@ -108,13 +134,16 @@ impl Resolve<UserArgs> for PushRecentlyViewed {
.filter(|_id| !id.eq(*_id))
.take(RECENTLY_VIEWED_MAX - 1)
.collect::<VecDeque<_>>();
recents.push_front(id);
doc! { format!("recents.{resource_type}"): to_bson(&recents)? }
doc! { &field: to_bson(&recents)? }
}
None => {
doc! { format!("recents.{resource_type}"): [id] }
doc! { &field: [id] }
}
};
update_one_by_id(
&db_client().users,
&user.id,
@@ -122,23 +151,16 @@ impl Resolve<UserArgs> for PushRecentlyViewed {
None,
)
.await
.with_context(|| {
format!("failed to update recents.{resource_type}")
})?;
.with_context(|| format!("Failed to update user '{field}'"))?;
Ok(PushRecentlyViewedResponse {})
}
}
impl Resolve<UserArgs> for SetLastSeenUpdate {
#[instrument(
name = "SetLastSeenUpdate",
level = "debug",
skip(user)
)]
async fn resolve(
self,
UserArgs { user }: &UserArgs,
UserArgs { user, .. }: &UserArgs,
) -> serror::Result<SetLastSeenUpdateResponse> {
update_one_by_id(
&db_client().users,
@@ -149,7 +171,8 @@ impl Resolve<UserArgs> for SetLastSeenUpdate {
None,
)
.await
.context("failed to update user last_update_view")?;
.context("Failed to update user 'last_update_view'")?;
Ok(SetLastSeenUpdateResponse {})
}
}
@@ -158,17 +181,24 @@ const SECRET_LENGTH: usize = 40;
const BCRYPT_COST: u32 = 10;
impl Resolve<UserArgs> for CreateApiKey {
#[instrument(name = "CreateApiKey", level = "debug", skip(user))]
#[instrument(
"CreateApiKey",
skip_all,
fields(operator = user.id)
)]
async fn resolve(
self,
UserArgs { user }: &UserArgs,
UserArgs { user, .. }: &UserArgs,
) -> serror::Result<CreateApiKeyResponse> {
let user = get_user(&user.id).await?;
validate_api_key_name(&self.name)
.status_code(StatusCode::BAD_REQUEST)?;
let key = format!("K-{}", random_string(SECRET_LENGTH));
let secret = format!("S-{}", random_string(SECRET_LENGTH));
let secret_hash = bcrypt::hash(&secret, BCRYPT_COST)
.context("failed at hashing secret string")?;
.context("Failed at hashing secret string")?;
let api_key = ApiKey {
name: self.name,
@@ -178,36 +208,316 @@ impl Resolve<UserArgs> for CreateApiKey {
created_at: komodo_timestamp(),
expires: self.expires,
};
db_client()
.api_keys
.insert_one(api_key)
.await
.context("failed to create api key on db")?;
.context("Failed to create api key on database")?;
Ok(CreateApiKeyResponse { key, secret })
}
}
impl Resolve<UserArgs> for DeleteApiKey {
#[instrument(name = "DeleteApiKey", level = "debug", skip(user))]
#[instrument(
"DeleteApiKey",
skip_all,
fields(operator = user.id)
)]
async fn resolve(
self,
UserArgs { user }: &UserArgs,
UserArgs { user, .. }: &UserArgs,
) -> serror::Result<DeleteApiKeyResponse> {
let client = db_client();
let key = client
.api_keys
.find_one(doc! { "key": &self.key })
.await
.context("failed at db query")?
.context("no api key with key found")?;
.context("Failed at database query")?
.context("No api key with key found")?;
if user.id != key.user_id {
return Err(anyhow!("api key does not belong to user").into());
return Err(
anyhow!("Api key does not belong to user")
.status_code(StatusCode::FORBIDDEN),
);
}
client
.api_keys
.delete_one(doc! { "key": key.key })
.await
.context("failed to delete api key from db")?;
.context("Failed to delete api key from database")?;
Ok(DeleteApiKeyResponse {})
}
}
const TOTP_ENROLLMENT_SECRET_LENGTH: usize = 20;
impl Resolve<UserArgs> for BeginTotpEnrollment {
#[instrument(
"BeginTotpEnrollment",
skip_all,
fields(operator = user.id)
)]
async fn resolve(
self,
UserArgs { user, session }: &UserArgs,
) -> serror::Result<BeginTotpEnrollmentResponse> {
for locked_username in &core_config().lock_login_credentials_for {
if *locked_username == user.username {
return Err(
anyhow!("User not allowed to enroll in TOTP 2FA.").into(),
);
}
}
let session = session.as_ref().context(
"Method called in invalid context. This should not happen",
)?;
let secret_bytes = random_bytes(TOTP_ENROLLMENT_SECRET_LENGTH);
let totp = make_totp(secret_bytes.clone(), user.id.clone())?;
let png = totp
.get_qr_base64()
.map_err(anyhow::Error::msg)
.context("Failed to generate QR code png")?;
session
.insert(SESSION_KEY_TOTP_ENROLLMENT, secret_bytes)
.await?;
Ok(BeginTotpEnrollmentResponse {
uri: totp.get_url(),
png,
})
}
}
impl Resolve<UserArgs> for ConfirmTotpEnrollment {
#[instrument(
"ConfirmTotpEnrollment",
skip_all,
fields(operator = user.id)
)]
async fn resolve(
self,
UserArgs { user, session }: &UserArgs,
) -> serror::Result<ConfirmTotpEnrollmentResponse> {
let session = session.as_ref().context(
"Method called in invalid context. This should not happen",
)?;
let secret_bytes = session
.remove::<Vec<u8>>(SESSION_KEY_TOTP_ENROLLMENT)
.await
.context("Totp enrollment was not initiated correctly")?
.context(
"Totp enrollment was not initiated correctly or timed out",
)?;
let encoded_secret = BASE32_NOPAD.encode(&secret_bytes);
let totp = make_totp(secret_bytes, None)?;
let valid = totp
.check_current(&self.code)
.context("Failed to check code validity")?;
if !valid {
return Err(anyhow!(
"The provided code was not valid. Please try the BeginTotpEnrollment flow again."
).status_code(StatusCode::BAD_REQUEST));
}
let recovery_codes =
(0..10).map(|_| random_string(20)).collect::<Vec<_>>();
let hashed_recovery_codes = recovery_codes
.iter()
.map(|code| hash_password(code))
.collect::<anyhow::Result<Vec<_>>>()
.context("Failed to generate valid recovery codes")?;
update_one_by_id(
&db_client().users,
&user.id,
doc! {
"$set": {
"totp.secret": encoded_secret,
"totp.confirmed_at": komodo_timestamp(),
"totp.recovery_codes": hashed_recovery_codes,
}
},
None,
)
.await
.context("Failed to update user totp fields on database")?;
Ok(ConfirmTotpEnrollmentResponse { recovery_codes })
}
}
impl Resolve<UserArgs> for UnenrollTotp {
#[instrument(
"UnenrollTotp",
skip_all,
fields(operator = user.id)
)]
async fn resolve(
self,
UserArgs { user, .. }: &UserArgs,
) -> serror::Result<UnenrollTotpResponse> {
update_one_by_id(
&db_client().users,
&user.id,
doc! {
"$set": {
"totp.secret": "",
"totp.confirmed_at": 0,
"totp.recovery_codes": [],
}
},
None,
)
.await
.context("Failed to clear user totp fields on database")?;
Ok(UnenrollTotpResponse {})
}
}
//
impl Resolve<UserArgs> for BeginPasskeyEnrollment {
#[instrument(
"BeginPasskeyEnrollment",
skip_all,
fields(operator = user.id)
)]
async fn resolve(
self,
UserArgs { user, session }: &UserArgs,
) -> serror::Result<BeginPasskeyEnrollmentResponse> {
for locked_username in &core_config().lock_login_credentials_for {
if *locked_username == user.username {
return Err(
anyhow!(
"User not allowed to enroll in Passkey authentication."
)
.into(),
);
}
}
let session = session.as_ref().context(
"Method called in invalid context. This should not happen",
)?;
let webauthn = webauthn().context(
"No webauthn provider available, invalid KOMODO_HOST config",
)?;
// This produces two parts: the first is returned to the client;
// the second must stay server-side and is used in the confirmation flow.
let (challenge, server_state) = webauthn
.start_passkey_registration(
Uuid::new_v4(),
&user.username,
&user.username,
None,
)?;
session
.insert(
SESSION_KEY_PASSKEY_ENROLLMENT,
(&user.id, server_state),
)
.await
.context(
"Failed to store passkey enrollment state in the server-side client session",
)?;
Ok(challenge.into())
}
}
//
impl Resolve<UserArgs> for ConfirmPasskeyEnrollment {
#[instrument(
"ConfirmPasskeyEnrollment",
skip_all,
fields(operator = user.id)
)]
async fn resolve(
self,
UserArgs { user, session }: &UserArgs,
) -> serror::Result<ConfirmPasskeyEnrollmentResponse> {
let session = session.as_ref().context(
"Method called in invalid context. This should not happen",
)?;
let webauthn = webauthn().context(
"No webauthn provider available, invalid KOMODO_HOST config",
)?;
let (user_id, server_state) = session
.remove::<(String, PasskeyRegistration)>(
SESSION_KEY_PASSKEY_ENROLLMENT,
)
.await
.context("Passkey enrollment was not initiated correctly")?
.context(
"Passkey enrollment was not initiated correctly or timed out",
)?;
let passkey = webauthn
.finish_passkey_registration(
&self.credential.into(),
&server_state,
)
.context("Failed to finish passkey registration")?;
let passkey = to_bson(&passkey)
.context("Failed to serialize passkey to BSON")?;
let update = doc! {
"$set": {
"passkey.passkey": passkey,
"passkey.created_at": komodo_timestamp()
}
};
update_one_by_id(&db_client().users, &user_id, update, None)
.await
.context("Failed to update user passkey options on database")?;
Ok(ConfirmPasskeyEnrollmentResponse {})
}
}
//
impl Resolve<UserArgs> for UnenrollPasskey {
#[instrument(
"UnenrollPasskey",
skip_all,
fields(operator = user.id)
)]
async fn resolve(
self,
UserArgs { user, .. }: &UserArgs,
) -> serror::Result<UnenrollPasskeyResponse> {
let update = doc! {
"$set": {
"passkey.passkey": null,
"passkey.created_at": 0
}
};
update_one_by_id(&db_client().users, &user.id, update, None)
.await
.context("Failed to update user passkey options on database")?;
Ok(UnenrollPasskeyResponse {})
}
}

View File

@@ -11,7 +11,15 @@ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateAction {
#[instrument(name = "CreateAction", skip(user))]
#[instrument(
"CreateAction",
skip_all,
fields(
operator = user.id,
action = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -22,7 +30,15 @@ impl Resolve<WriteArgs> for CreateAction {
}
impl Resolve<WriteArgs> for CopyAction {
#[instrument(name = "CopyAction", skip(user))]
#[instrument(
"CopyAction",
skip_all,
fields(
operator = user.id,
action = self.name,
copy_action = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -39,7 +55,15 @@ impl Resolve<WriteArgs> for CopyAction {
}
impl Resolve<WriteArgs> for UpdateAction {
#[instrument(name = "UpdateAction", skip(user))]
#[instrument(
"UpdateAction",
skip_all,
fields(
operator = user.id,
action = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -49,7 +73,15 @@ impl Resolve<WriteArgs> for UpdateAction {
}
impl Resolve<WriteArgs> for RenameAction {
#[instrument(name = "RenameAction", skip(user))]
#[instrument(
"RenameAction",
skip_all,
fields(
operator = user.id,
action = self.id,
new_name = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -59,8 +91,18 @@ impl Resolve<WriteArgs> for RenameAction {
}
impl Resolve<WriteArgs> for DeleteAction {
#[instrument(name = "DeleteAction", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Action> {
Ok(resource::delete::<Action>(&self.id, args).await?)
#[instrument(
"DeleteAction",
skip_all,
fields(
operator = user.id,
action = self.id
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Action> {
Ok(resource::delete::<Action>(&self.id, user).await?)
}
}

View File

@@ -10,6 +10,14 @@ use serror::AddStatusCodeError;
use crate::{api::write::WriteArgs, state::db_client};
impl Resolve<WriteArgs> for CloseAlert {
#[instrument(
"CloseAlert",
skip_all,
fields(
operator = admin.id,
alert_id = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,

View File

@@ -11,7 +11,15 @@ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateAlerter {
#[instrument(name = "CreateAlerter", skip(user))]
#[instrument(
"CreateAlerter",
skip_all,
fields(
operator = user.id,
alerter = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -22,7 +30,15 @@ impl Resolve<WriteArgs> for CreateAlerter {
}
impl Resolve<WriteArgs> for CopyAlerter {
#[instrument(name = "CopyAlerter", skip(user))]
#[instrument(
"CopyAlerter",
skip_all,
fields(
operator = user.id,
alerter = self.name,
copy_alerter = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -39,17 +55,32 @@ impl Resolve<WriteArgs> for CopyAlerter {
}
impl Resolve<WriteArgs> for DeleteAlerter {
#[instrument(name = "DeleteAlerter", skip(args))]
#[instrument(
"DeleteAlerter",
skip_all,
fields(
operator = user.id,
alerter = self.id,
)
)]
async fn resolve(
self,
args: &WriteArgs,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Alerter> {
Ok(resource::delete::<Alerter>(&self.id, args).await?)
Ok(resource::delete::<Alerter>(&self.id, user).await?)
}
}
impl Resolve<WriteArgs> for UpdateAlerter {
#[instrument(name = "UpdateAlerter", skip(user))]
#[instrument(
"UpdateAlerter",
skip_all,
fields(
operator = user.id,
alerter = self.id,
update = serde_json::to_string(&self.config).unwrap()
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -62,7 +93,15 @@ impl Resolve<WriteArgs> for UpdateAlerter {
}
impl Resolve<WriteArgs> for RenameAlerter {
#[instrument(name = "RenameAlerter", skip(user))]
#[instrument(
"RenameAlerter",
skip_all,
fields(
operator = user.id,
alerter = self.id,
new_name = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,

View File

@@ -1,4 +1,4 @@
use std::{path::PathBuf, str::FromStr, time::Duration};
use std::{path::PathBuf, time::Duration};
use anyhow::{Context, anyhow};
use database::mungos::mongodb::bson::to_document;
@@ -42,7 +42,15 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateBuild {
#[instrument(name = "CreateBuild", skip(user))]
#[instrument(
"CreateBuild",
skip_all,
fields(
operator = user.id,
build = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -53,7 +61,15 @@ impl Resolve<WriteArgs> for CreateBuild {
}
impl Resolve<WriteArgs> for CopyBuild {
#[instrument(name = "CopyBuild", skip(user))]
#[instrument(
"CopyBuild",
skip_all,
fields(
operator = user.id,
build = self.name,
copy_build = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -72,14 +88,32 @@ impl Resolve<WriteArgs> for CopyBuild {
}
impl Resolve<WriteArgs> for DeleteBuild {
#[instrument(name = "DeleteBuild", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Build> {
Ok(resource::delete::<Build>(&self.id, args).await?)
#[instrument(
"DeleteBuild",
skip_all,
fields(
operator = user.id,
build = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Build> {
Ok(resource::delete::<Build>(&self.id, user).await?)
}
}
impl Resolve<WriteArgs> for UpdateBuild {
#[instrument(name = "UpdateBuild", skip(user))]
#[instrument(
"UpdateBuild",
skip_all,
fields(
operator = user.id,
build = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -89,7 +123,15 @@ impl Resolve<WriteArgs> for UpdateBuild {
}
impl Resolve<WriteArgs> for RenameBuild {
#[instrument(name = "RenameBuild", skip(user))]
#[instrument(
"RenameBuild",
skip_all,
fields(
operator = user.id,
build = self.id,
new_name = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -99,7 +141,14 @@ impl Resolve<WriteArgs> for RenameBuild {
}
impl Resolve<WriteArgs> for WriteBuildFileContents {
#[instrument(name = "WriteBuildFileContents", skip(args))]
#[instrument(
"WriteBuildFileContents",
skip_all,
fields(
operator = args.user.id,
build = self.build,
)
)]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let build = get_check_permissions::<Build>(
&self.build,
@@ -171,6 +220,7 @@ impl Resolve<WriteArgs> for WriteBuildFileContents {
}
}
#[instrument("WriteDockerfileContentsGit", skip_all)]
async fn write_dockerfile_contents_git(
req: WriteBuildFileContents,
args: &WriteArgs,
@@ -317,11 +367,6 @@ async fn write_dockerfile_contents_git(
}
impl Resolve<WriteArgs> for RefreshBuildCache {
#[instrument(
name = "RefreshBuildCache",
level = "debug",
skip(user)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -345,23 +390,28 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
None
};
let (
remote_path,
remote_contents,
remote_error,
latest_hash,
latest_message,
) = if build.config.files_on_host {
let RemoteDockerfileContents {
path,
contents,
error,
hash,
message,
} = if build.config.files_on_host {
// =============
// FILES ON HOST
// =============
match get_on_host_dockerfile(&build).await {
Ok(FileContents { path, contents }) => {
(Some(path), Some(contents), None, None, None)
}
Err(e) => {
(None, None, Some(format_serror(&e.into())), None, None)
RemoteDockerfileContents {
path: Some(path),
contents: Some(contents),
..Default::default()
}
}
Err(e) => RemoteDockerfileContents {
error: Some(format_serror(&e.into())),
..Default::default()
},
}
} else if let Some(repo) = &repo {
let Some(res) = get_git_remote(&build, repo.into()).await?
@@ -381,7 +431,7 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
// =============
// UI BASED FILE
// =============
(None, None, None, None, None)
RemoteDockerfileContents::default()
};
let info = BuildInfo {
@@ -389,11 +439,11 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
built_hash: build.info.built_hash,
built_message: build.info.built_message,
built_contents: build.info.built_contents,
remote_path,
remote_contents,
remote_error,
latest_hash,
latest_message,
remote_path: path,
remote_contents: contents,
remote_error: error,
latest_hash: hash,
latest_message: message,
};
let info = to_document(&info)
@@ -485,15 +535,7 @@ async fn get_on_host_dockerfile(
async fn get_git_remote(
build: &Build,
mut clone_args: RepoExecutionArgs,
) -> anyhow::Result<
Option<(
Option<String>,
Option<String>,
Option<String>,
Option<String>,
Option<String>,
)>,
> {
) -> anyhow::Result<Option<RemoteDockerfileContents>> {
if clone_args.provider.is_empty() {
// Nothing to do here
return Ok(None);
@@ -520,10 +562,19 @@ async fn get_git_remote(
access_token,
)
.await
.context("failed to clone build repo")?;
.context("Failed to clone Build repo")?;
let relative_path = PathBuf::from_str(&build.config.build_path)
.context("Invalid build path")?
// Ensure the clone / pull succeeded;
// propagate the error log -> 'errored' and return.
if let Some(failure) = res.logs.iter().find(|log| !log.success) {
return Ok(Some(RemoteDockerfileContents {
path: Some(format!("Failed at: {}", failure.stage)),
error: Some(failure.combined()),
..Default::default()
}));
}
let relative_path = PathBuf::from(&build.config.build_path)
.join(&build.config.dockerfile_path);
let full_path = repo_path.join(&relative_path);
@@ -534,11 +585,20 @@ async fn get_git_remote(
Ok(contents) => (Some(contents), None),
Err(e) => (None, Some(format_serror(&e.into()))),
};
Ok(Some((
Some(relative_path.display().to_string()),
Ok(Some(RemoteDockerfileContents {
path: Some(relative_path.display().to_string()),
contents,
error,
res.commit_hash,
res.commit_message,
)))
hash: res.commit_hash,
message: res.commit_message,
}))
}
#[derive(Default)]
pub struct RemoteDockerfileContents {
pub path: Option<String>,
pub contents: Option<String>,
pub error: Option<String>,
pub hash: Option<String>,
pub message: Option<String>,
}

View File

@@ -11,7 +11,15 @@ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateBuilder {
#[instrument(name = "CreateBuilder", skip(user))]
#[instrument(
"CreateBuilder",
skip_all,
fields(
operator = user.id,
builder = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -22,7 +30,15 @@ impl Resolve<WriteArgs> for CreateBuilder {
}
impl Resolve<WriteArgs> for CopyBuilder {
#[instrument(name = "CopyBuilder", skip(user))]
#[instrument(
"CopyBuilder",
skip_all,
fields(
operator = user.id,
builder = self.name,
copy_builder = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -39,17 +55,32 @@ impl Resolve<WriteArgs> for CopyBuilder {
}
impl Resolve<WriteArgs> for DeleteBuilder {
#[instrument(name = "DeleteBuilder", skip(args))]
#[instrument(
"DeleteBuilder",
skip_all,
fields(
operator = user.id,
builder = self.id,
)
)]
async fn resolve(
self,
args: &WriteArgs,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Builder> {
Ok(resource::delete::<Builder>(&self.id, args).await?)
Ok(resource::delete::<Builder>(&self.id, user).await?)
}
}
impl Resolve<WriteArgs> for UpdateBuilder {
#[instrument(name = "UpdateBuilder", skip(user))]
#[instrument(
"UpdateBuilder",
skip_all,
fields(
operator = user.id,
builder = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -62,7 +93,15 @@ impl Resolve<WriteArgs> for UpdateBuilder {
}
impl Resolve<WriteArgs> for RenameBuilder {
#[instrument(name = "RenameBuilder", skip(user))]
#[instrument(
"RenameBuilder",
skip_all,
fields(
operator = user.id,
builder = self.id,
new_name = self.name
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,

View File

@@ -33,7 +33,15 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateDeployment {
#[instrument(name = "CreateDeployment", skip(user))]
#[instrument(
"CreateDeployment",
skip_all,
fields(
operator = user.id,
deployment = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -49,7 +57,15 @@ impl Resolve<WriteArgs> for CreateDeployment {
}
impl Resolve<WriteArgs> for CopyDeployment {
#[instrument(name = "CopyDeployment", skip(user))]
#[instrument(
"CopyDeployment",
skip_all,
fields(
operator = user.id,
deployment = self.name,
copy_deployment = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -72,7 +88,15 @@ impl Resolve<WriteArgs> for CopyDeployment {
}
impl Resolve<WriteArgs> for CreateDeploymentFromContainer {
#[instrument(name = "CreateDeploymentFromContainer", skip(user))]
#[instrument(
"CreateDeploymentFromContainer",
skip_all,
fields(
operator = user.id,
server = self.server,
deployment = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -166,17 +190,32 @@ impl Resolve<WriteArgs> for CreateDeploymentFromContainer {
}
impl Resolve<WriteArgs> for DeleteDeployment {
#[instrument(name = "DeleteDeployment", skip(args))]
#[instrument(
"DeleteDeployment",
skip_all,
fields(
operator = user.id,
deployment = self.id
)
)]
async fn resolve(
self,
args: &WriteArgs,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Deployment> {
Ok(resource::delete::<Deployment>(&self.id, args).await?)
Ok(resource::delete::<Deployment>(&self.id, user).await?)
}
}
impl Resolve<WriteArgs> for UpdateDeployment {
#[instrument(name = "UpdateDeployment", skip(user))]
#[instrument(
"UpdateDeployment",
skip_all,
fields(
operator = user.id,
deployment = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -189,7 +228,15 @@ impl Resolve<WriteArgs> for UpdateDeployment {
}
impl Resolve<WriteArgs> for RenameDeployment {
#[instrument(name = "RenameDeployment", skip(user))]
#[instrument(
"RenameDeployment",
skip_all,
fields(
operator = user.id,
deployment = self.id,
new_name = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,

View File

@@ -1,5 +1,3 @@
use std::time::Instant;
use anyhow::Context;
use axum::{
Extension, Router, extract::Path, middleware, routing::post,
@@ -11,6 +9,7 @@ use response::Response;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::Json;
use strum::Display;
use typeshare::typeshare;
use uuid::Uuid;
@@ -33,8 +32,10 @@ mod resource;
mod server;
mod service_user;
mod stack;
mod swarm;
mod sync;
mod tag;
mod terminal;
mod user;
mod user_group;
mod variable;
@@ -47,42 +48,22 @@ pub struct WriteArgs {
#[derive(
Serialize, Deserialize, Debug, Clone, Resolve, EnumVariants,
)]
#[variant_derive(Debug)]
#[variant_derive(Debug, Display)]
#[args(WriteArgs)]
#[response(Response)]
#[error(serror::Error)]
#[serde(tag = "type", content = "params")]
pub enum WriteRequest {
// ==== USER ====
CreateLocalUser(CreateLocalUser),
UpdateUserUsername(UpdateUserUsername),
UpdateUserPassword(UpdateUserPassword),
DeleteUser(DeleteUser),
// ==== SERVICE USER ====
CreateServiceUser(CreateServiceUser),
UpdateServiceUserDescription(UpdateServiceUserDescription),
CreateApiKeyForServiceUser(CreateApiKeyForServiceUser),
DeleteApiKeyForServiceUser(DeleteApiKeyForServiceUser),
// ==== USER GROUP ====
CreateUserGroup(CreateUserGroup),
RenameUserGroup(RenameUserGroup),
DeleteUserGroup(DeleteUserGroup),
AddUserToUserGroup(AddUserToUserGroup),
RemoveUserFromUserGroup(RemoveUserFromUserGroup),
SetUsersInUserGroup(SetUsersInUserGroup),
SetEveryoneUserGroup(SetEveryoneUserGroup),
// ==== PERMISSIONS ====
UpdateUserAdmin(UpdateUserAdmin),
UpdateUserBasePermissions(UpdateUserBasePermissions),
UpdatePermissionOnResourceType(UpdatePermissionOnResourceType),
UpdatePermissionOnTarget(UpdatePermissionOnTarget),
// ==== RESOURCE ====
UpdateResourceMeta(UpdateResourceMeta),
// ==== SWARM ====
CreateSwarm(CreateSwarm),
CopySwarm(CopySwarm),
DeleteSwarm(DeleteSwarm),
UpdateSwarm(UpdateSwarm),
RenameSwarm(RenameSwarm),
// ==== SERVER ====
CreateServer(CreateServer),
CopyServer(CopyServer),
@@ -90,11 +71,14 @@ pub enum WriteRequest {
UpdateServer(UpdateServer),
RenameServer(RenameServer),
CreateNetwork(CreateNetwork),
UpdateServerPublicKey(UpdateServerPublicKey),
RotateServerKeys(RotateServerKeys),
// ==== TERMINAL ====
CreateTerminal(CreateTerminal),
DeleteTerminal(DeleteTerminal),
DeleteAllTerminals(DeleteAllTerminals),
UpdateServerPublicKey(UpdateServerPublicKey),
RotateServerKeys(RotateServerKeys),
BatchDeleteAllTerminals(BatchDeleteAllTerminals),
// ==== STACK ====
CreateStack(CreateStack),
@@ -122,13 +106,6 @@ pub enum WriteRequest {
WriteBuildFileContents(WriteBuildFileContents),
RefreshBuildCache(RefreshBuildCache),
// ==== BUILDER ====
CreateBuilder(CreateBuilder),
CopyBuilder(CopyBuilder),
DeleteBuilder(DeleteBuilder),
UpdateBuilder(UpdateBuilder),
RenameBuilder(RenameBuilder),
// ==== REPO ====
CreateRepo(CreateRepo),
CopyRepo(CopyRepo),
@@ -137,13 +114,6 @@ pub enum WriteRequest {
RenameRepo(RenameRepo),
RefreshRepoCache(RefreshRepoCache),
// ==== ALERTER ====
CreateAlerter(CreateAlerter),
CopyAlerter(CopyAlerter),
DeleteAlerter(DeleteAlerter),
UpdateAlerter(UpdateAlerter),
RenameAlerter(RenameAlerter),
// ==== PROCEDURE ====
CreateProcedure(CreateProcedure),
CopyProcedure(CopyProcedure),
@@ -168,6 +138,52 @@ pub enum WriteRequest {
CommitSync(CommitSync),
RefreshResourceSyncPending(RefreshResourceSyncPending),
// ==== BUILDER ====
CreateBuilder(CreateBuilder),
CopyBuilder(CopyBuilder),
DeleteBuilder(DeleteBuilder),
UpdateBuilder(UpdateBuilder),
RenameBuilder(RenameBuilder),
// ==== ALERTER ====
CreateAlerter(CreateAlerter),
CopyAlerter(CopyAlerter),
DeleteAlerter(DeleteAlerter),
UpdateAlerter(UpdateAlerter),
RenameAlerter(RenameAlerter),
// ==== ONBOARDING KEY ====
CreateOnboardingKey(CreateOnboardingKey),
UpdateOnboardingKey(UpdateOnboardingKey),
DeleteOnboardingKey(DeleteOnboardingKey),
// ==== USER ====
CreateLocalUser(CreateLocalUser),
UpdateUserUsername(UpdateUserUsername),
UpdateUserPassword(UpdateUserPassword),
DeleteUser(DeleteUser),
// ==== SERVICE USER ====
CreateServiceUser(CreateServiceUser),
UpdateServiceUserDescription(UpdateServiceUserDescription),
CreateApiKeyForServiceUser(CreateApiKeyForServiceUser),
DeleteApiKeyForServiceUser(DeleteApiKeyForServiceUser),
// ==== USER GROUP ====
CreateUserGroup(CreateUserGroup),
RenameUserGroup(RenameUserGroup),
DeleteUserGroup(DeleteUserGroup),
AddUserToUserGroup(AddUserToUserGroup),
RemoveUserFromUserGroup(RemoveUserFromUserGroup),
SetUsersInUserGroup(SetUsersInUserGroup),
SetEveryoneUserGroup(SetEveryoneUserGroup),
// ==== PERMISSIONS ====
UpdateUserAdmin(UpdateUserAdmin),
UpdateUserBasePermissions(UpdateUserBasePermissions),
UpdatePermissionOnResourceType(UpdatePermissionOnResourceType),
UpdatePermissionOnTarget(UpdatePermissionOnTarget),
// ==== TAG ====
CreateTag(CreateTag),
DeleteTag(DeleteTag),
@@ -189,11 +205,6 @@ pub enum WriteRequest {
UpdateDockerRegistryAccount(UpdateDockerRegistryAccount),
DeleteDockerRegistryAccount(DeleteDockerRegistryAccount),
// ==== ONBOARDING KEY ====
CreateOnboardingKey(CreateOnboardingKey),
UpdateOnboardingKey(UpdateOnboardingKey),
DeleteOnboardingKey(DeleteOnboardingKey),
// ==== ALERT ====
CloseAlert(CloseAlert),
}
@@ -230,31 +241,22 @@ async fn handler(
res?
}
#[instrument(
name = "WriteRequest",
skip(user, request),
fields(
user_id = user.id,
request = format!("{:?}", request.extract_variant())
)
)]
async fn task(
req_id: Uuid,
request: WriteRequest,
user: User,
) -> serror::Result<axum::response::Response> {
info!("/write request | user: {}", user.username);
let timer = Instant::now();
let variant = request.extract_variant();
info!("/write request | {variant} | user: {}", user.username);
let res = request.resolve(&WriteArgs { user }).await;
if let Err(e) = &res {
warn!("/write request {req_id} error: {:#}", e.error);
warn!(
"/write request {req_id} | {variant} | error: {:#}",
e.error
);
}
let elapsed = timer.elapsed();
debug!("/write request {req_id} | resolve time: {elapsed:?}");
res.map(|res| res.0)
}

View File

@@ -6,7 +6,9 @@ use komodo_client::{
DeleteOnboardingKey, DeleteOnboardingKeyResponse,
UpdateOnboardingKey, UpdateOnboardingKeyResponse,
},
entities::{komodo_timestamp, onboarding_key::OnboardingKey},
entities::{
komodo_timestamp, onboarding_key::OnboardingKey, random_string,
},
};
use noise::key::EncodedKeyPair;
use reqwest::StatusCode;
@@ -18,7 +20,18 @@ use crate::{api::write::WriteArgs, state::db_client};
//
impl Resolve<WriteArgs> for CreateOnboardingKey {
#[instrument(name = "CreateServerOnboardingKey", skip(self, admin))]
#[instrument(
"CreateOnboardingKey",
skip_all,
fields(
operator = admin.id,
name = self.name,
expires = self.expires,
tags = format!("{:?}", self.tags),
copy_server = self.copy_server,
create_builder = self.create_builder,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -29,13 +42,16 @@ impl Resolve<WriteArgs> for CreateOnboardingKey {
.status_code(StatusCode::FORBIDDEN),
);
}
let keys = if let Some(private_key) = self.private_key {
EncodedKeyPair::from_private_key(&private_key)?
let private_key = if let Some(private_key) = self.private_key {
private_key
} else {
EncodedKeyPair::generate()?
format!("O-{}", random_string(30))
};
let public_key = EncodedKeyPair::from_private_key(&private_key)?
.public
.into_inner();
let onboarding_key = OnboardingKey {
public_key: keys.public.into_inner(),
public_key,
name: self.name,
enabled: true,
onboarded: Default::default(),
@@ -62,7 +78,7 @@ impl Resolve<WriteArgs> for CreateOnboardingKey {
"No Server onboarding key found on database after create",
)?;
Ok(CreateOnboardingKeyResponse {
private_key: keys.private.into_inner(),
private_key,
created,
})
}
@@ -71,6 +87,15 @@ impl Resolve<WriteArgs> for CreateOnboardingKey {
//
impl Resolve<WriteArgs> for UpdateOnboardingKey {
#[instrument(
"UpdateOnboardingKey",
skip_all,
fields(
operator = admin.id,
public_key = self.public_key,
update = format!("{:?}", self),
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -140,7 +165,14 @@ impl Resolve<WriteArgs> for UpdateOnboardingKey {
//
impl Resolve<WriteArgs> for DeleteOnboardingKey {
#[instrument(name = "DeleteServerOnboardingKey", skip(admin))]
#[instrument(
"DeleteOnboardingKey",
skip_all,
fields(
operator = admin.id,
public_key = self.public_key,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,

View File

@@ -8,6 +8,7 @@ use database::mungos::{
options::UpdateOptions,
},
};
use derive_variants::ExtractVariant as _;
use komodo_client::{
api::write::*,
entities::{
@@ -22,7 +23,15 @@ use crate::{helpers::query::get_user, state::db_client};
use super::WriteArgs;
impl Resolve<WriteArgs> for UpdateUserAdmin {
#[instrument(name = "UpdateUserAdmin", skip(super_admin))]
#[instrument(
"UpdateUserAdmin",
skip_all,
fields(
operator = super_admin.id,
target_user = self.user_id,
admin = self.admin,
)
)]
async fn resolve(
self,
WriteArgs { user: super_admin }: &WriteArgs,
@@ -60,7 +69,17 @@ impl Resolve<WriteArgs> for UpdateUserAdmin {
}
impl Resolve<WriteArgs> for UpdateUserBasePermissions {
#[instrument(name = "UpdateUserBasePermissions", skip(admin))]
#[instrument(
"UpdateUserBasePermissions",
skip_all,
fields(
operator = admin.id,
target_user = self.user_id,
enabled = self.enabled,
create_servers = self.create_servers,
create_builds = self.create_builds,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -117,7 +136,16 @@ impl Resolve<WriteArgs> for UpdateUserBasePermissions {
}
impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
#[instrument(name = "UpdatePermissionOnResourceType", skip(admin))]
#[instrument(
"UpdatePermissionOnResourceType",
skip_all,
fields(
operator = admin.id,
user_target = format!("{:?}", self.user_target),
resource_type = self.resource_type.to_string(),
permission = format!("{:?}", self.permission),
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -185,7 +213,17 @@ impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
}
impl Resolve<WriteArgs> for UpdatePermissionOnTarget {
#[instrument(name = "UpdatePermissionOnTarget", skip(admin))]
#[instrument(
"UpdatePermissionOnTarget",
skip_all,
fields(
operator = admin.id,
user_target = format!("{:?}", self.user_target),
resource_type = self.resource_target.extract_variant().to_string(),
resource_id = self.resource_target.extract_variant_id().1,
permission = format!("{:?}", self.permission),
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -269,8 +307,8 @@ async fn extract_user_target_with_validation(
.users
.find_one(filter)
.await
.context("failed to query db for users")?
.context("no matching user found")?
.context("Failed to query db for users")?
.context("No matching user found")?
.id;
Ok((UserTargetVariant::User, id))
}
@@ -283,8 +321,8 @@ async fn extract_user_target_with_validation(
.user_groups
.find_one(filter)
.await
.context("failed to query db for user_groups")?
.context("no matching user_group found")?
.context("Failed to query db for user_groups")?
.context("No matching user_group found")?
.id;
Ok((UserTargetVariant::UserGroup, id))
}
@@ -300,47 +338,19 @@ async fn extract_resource_target_with_validation(
let res = resource_target.extract_variant_id();
Ok((res.0, res.1.clone()))
}
ResourceTarget::Build(ident) => {
ResourceTarget::Swarm(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": ident },
};
let id = db_client()
.builds
.swarms
.find_one(filter)
.await
.context("failed to query db for builds")?
.context("no matching build found")?
.context("Failed to query db for swarms")?
.context("No matching swarm found")?
.id;
Ok((ResourceTargetVariant::Build, id))
}
ResourceTarget::Builder(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": ident },
};
let id = db_client()
.builders
.find_one(filter)
.await
.context("failed to query db for builders")?
.context("no matching builder found")?
.id;
Ok((ResourceTargetVariant::Builder, id))
}
ResourceTarget::Deployment(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": ident },
};
let id = db_client()
.deployments
.find_one(filter)
.await
.context("failed to query db for deployments")?
.context("no matching deployment found")?
.id;
Ok((ResourceTargetVariant::Deployment, id))
Ok((ResourceTargetVariant::Swarm, id))
}
ResourceTarget::Server(ident) => {
let filter = match ObjectId::from_str(ident) {
@@ -351,11 +361,53 @@ async fn extract_resource_target_with_validation(
.servers
.find_one(filter)
.await
.context("failed to query db for servers")?
.context("no matching server found")?
.context("Failed to query db for servers")?
.context("No matching server found")?
.id;
Ok((ResourceTargetVariant::Server, id))
}
ResourceTarget::Stack(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": ident },
};
let id = db_client()
.stacks
.find_one(filter)
.await
.context("Failed to query db for stacks")?
.context("No matching stack found")?
.id;
Ok((ResourceTargetVariant::Stack, id))
}
ResourceTarget::Deployment(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": ident },
};
let id = db_client()
.deployments
.find_one(filter)
.await
.context("Failed to query db for deployments")?
.context("No matching deployment found")?
.id;
Ok((ResourceTargetVariant::Deployment, id))
}
ResourceTarget::Build(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": ident },
};
let id = db_client()
.builds
.find_one(filter)
.await
.context("Failed to query db for builds")?
.context("No matching build found")?
.id;
Ok((ResourceTargetVariant::Build, id))
}
ResourceTarget::Repo(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
@@ -365,25 +417,11 @@ async fn extract_resource_target_with_validation(
.repos
.find_one(filter)
.await
.context("failed to query db for repos")?
.context("no matching repo found")?
.context("Failed to query db for repos")?
.context("No matching repo found")?
.id;
Ok((ResourceTargetVariant::Repo, id))
}
ResourceTarget::Alerter(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": ident },
};
let id = db_client()
.alerters
.find_one(filter)
.await
.context("failed to query db for alerters")?
.context("no matching alerter found")?
.id;
Ok((ResourceTargetVariant::Alerter, id))
}
ResourceTarget::Procedure(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
@@ -393,8 +431,8 @@ async fn extract_resource_target_with_validation(
.procedures
.find_one(filter)
.await
.context("failed to query db for procedures")?
.context("no matching procedure found")?
.context("Failed to query db for procedures")?
.context("No matching procedure found")?
.id;
Ok((ResourceTargetVariant::Procedure, id))
}
@@ -407,8 +445,8 @@ async fn extract_resource_target_with_validation(
.actions
.find_one(filter)
.await
.context("failed to query db for actions")?
.context("no matching action found")?
.context("Failed to query db for actions")?
.context("No matching action found")?
.id;
Ok((ResourceTargetVariant::Action, id))
}
@@ -421,24 +459,38 @@ async fn extract_resource_target_with_validation(
.resource_syncs
.find_one(filter)
.await
.context("failed to query db for resource syncs")?
.context("no matching resource sync found")?
.context("Failed to query db for resource syncs")?
.context("No matching resource sync found")?
.id;
Ok((ResourceTargetVariant::ResourceSync, id))
}
ResourceTarget::Stack(ident) => {
ResourceTarget::Builder(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": ident },
};
let id = db_client()
.stacks
.builders
.find_one(filter)
.await
.context("failed to query db for stacks")?
.context("no matching stack found")?
.context("Failed to query db for builders")?
.context("No matching builder found")?
.id;
Ok((ResourceTargetVariant::Stack, id))
Ok((ResourceTargetVariant::Builder, id))
}
ResourceTarget::Alerter(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": ident },
};
let id = db_client()
.alerters
.find_one(filter)
.await
.context("Failed to query db for alerters")?
.context("No matching alerter found")?
.id;
Ok((ResourceTargetVariant::Alerter, id))
}
}
}

View File

@@ -11,7 +11,15 @@ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateProcedure {
#[instrument(name = "CreateProcedure", skip(user))]
#[instrument(
"CreateProcedure",
skip_all,
fields(
operator = user.id,
procedure = self.name,
config = serde_json::to_string(&self.config).unwrap()
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -22,7 +30,15 @@ impl Resolve<WriteArgs> for CreateProcedure {
}
impl Resolve<WriteArgs> for CopyProcedure {
#[instrument(name = "CopyProcedure", skip(user))]
#[instrument(
"CopyProcedure",
skip_all,
fields(
operator = user.id,
procedure = self.name,
copy_procedure = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -45,7 +61,15 @@ impl Resolve<WriteArgs> for CopyProcedure {
}
impl Resolve<WriteArgs> for UpdateProcedure {
#[instrument(name = "UpdateProcedure", skip(user))]
#[instrument(
"UpdateProcedure",
skip_all,
fields(
operator = user.id,
procedure = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -58,7 +82,15 @@ impl Resolve<WriteArgs> for UpdateProcedure {
}
impl Resolve<WriteArgs> for RenameProcedure {
#[instrument(name = "RenameProcedure", skip(user))]
#[instrument(
"RenameProcedure",
skip_all,
fields(
operator = user.id,
procedure = self.id,
new_name = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -71,11 +103,18 @@ impl Resolve<WriteArgs> for RenameProcedure {
}
impl Resolve<WriteArgs> for DeleteProcedure {
#[instrument(name = "DeleteProcedure", skip(args))]
#[instrument(
"DeleteProcedure",
skip_all,
fields(
operator = user.id,
procedure = self.id
)
)]
async fn resolve(
self,
args: &WriteArgs,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteProcedureResponse> {
Ok(resource::delete::<Procedure>(&self.id, args).await?)
Ok(resource::delete::<Procedure>(&self.id, user).await?)
}
}


@@ -10,7 +10,9 @@ use komodo_client::{
provider::{DockerRegistryAccount, GitProviderAccount},
},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::{
helpers::update::{add_update, make_update},
@@ -20,25 +22,41 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateGitProviderAccount {
#[instrument(
"CreateGitProviderAccount",
skip_all,
fields(
operator = user.id,
domain = self.account.domain,
username = self.account.username,
https = self.account.https.unwrap_or(true),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<CreateGitProviderAccountResponse> {
if !user.admin {
return Err(
anyhow!("only admins can create git provider accounts")
.into(),
anyhow!("Only admins can create git provider accounts")
.status_code(StatusCode::FORBIDDEN),
);
}
let mut account: GitProviderAccount = self.account.into();
if account.domain.is_empty() {
return Err(anyhow!("domain cannot be empty string.").into());
return Err(
anyhow!("Domain cannot be empty string.")
.status_code(StatusCode::BAD_REQUEST),
);
}
if account.username.is_empty() {
return Err(anyhow!("username cannot be empty string.").into());
return Err(
anyhow!("Username cannot be empty string.")
.status_code(StatusCode::BAD_REQUEST),
);
}
let mut update = make_update(
@@ -51,14 +69,14 @@ impl Resolve<WriteArgs> for CreateGitProviderAccount {
.git_accounts
.insert_one(&account)
.await
.context("failed to create git provider account on db")?
.context("Failed to create git provider account on db")?
.inserted_id
.as_object_id()
.context("inserted id is not ObjectId")?
.context("Inserted id is not ObjectId")?
.to_string();
update.push_simple_log(
"create git provider account",
"Create git provider account",
format!(
"Created git provider account for {} with username {}",
account.domain, account.username
@@ -70,7 +88,7 @@ impl Resolve<WriteArgs> for CreateGitProviderAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("failed to add update for create git provider account | {e:#}")
error!("Failed to add update for create git provider account | {e:#}")
})
.ok();
@@ -79,14 +97,25 @@ impl Resolve<WriteArgs> for CreateGitProviderAccount {
}
impl Resolve<WriteArgs> for UpdateGitProviderAccount {
#[instrument(
"UpdateGitProviderAccount",
skip_all,
fields(
operator = user.id,
id = self.id,
domain = self.account.domain,
username = self.account.username,
https = self.account.https.unwrap_or(true),
)
)]
async fn resolve(
mut self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateGitProviderAccountResponse> {
if !user.admin {
return Err(
anyhow!("only admins can update git provider accounts")
.into(),
anyhow!("Only admins can update git provider accounts")
.status_code(StatusCode::FORBIDDEN),
);
}
@@ -94,8 +123,8 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
&& domain.is_empty()
{
return Err(
anyhow!("cannot update git provider with empty domain")
.into(),
anyhow!("Cannot update git provider with empty domain")
.status_code(StatusCode::BAD_REQUEST),
);
}
@@ -103,8 +132,8 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
&& username.is_empty()
{
return Err(
anyhow!("cannot update git provider with empty username")
.into(),
anyhow!("Cannot update git provider with empty username")
.status_code(StatusCode::BAD_REQUEST),
);
}
@@ -118,7 +147,7 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
);
let account = to_document(&self.account).context(
"failed to serialize partial git provider account to bson",
"Failed to serialize partial git provider account to bson",
)?;
let db = db_client();
update_one_by_id(
@@ -128,17 +157,17 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
None,
)
.await
.context("failed to update git provider account on db")?;
.context("Failed to update git provider account on db")?;
let Some(account) = find_one_by_id(&db.git_accounts, &self.id)
.await
.context("failed to query db for git accounts")?
.context("Failed to query db for git accounts")?
else {
return Err(anyhow!("no account found with given id").into());
return Err(anyhow!("No account found with given id").into());
};
update.push_simple_log(
"update git provider account",
"Update git provider account",
format!(
"Updated git provider account for {} with username {}",
account.domain, account.username
@@ -150,7 +179,7 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("failed to add update for update git provider account | {e:#}")
error!("Failed to add update for update git provider account | {e:#}")
})
.ok();
@@ -159,14 +188,22 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
}
impl Resolve<WriteArgs> for DeleteGitProviderAccount {
#[instrument(
"DeleteGitProviderAccount",
skip_all,
fields(
operator = user.id,
id = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteGitProviderAccountResponse> {
if !user.admin {
return Err(
anyhow!("only admins can delete git provider accounts")
.into(),
anyhow!("Only admins can delete git provider accounts")
.status_code(StatusCode::FORBIDDEN),
);
}
@@ -179,16 +216,19 @@ impl Resolve<WriteArgs> for DeleteGitProviderAccount {
let db = db_client();
let Some(account) = find_one_by_id(&db.git_accounts, &self.id)
.await
.context("failed to query db for git accounts")?
.context("Failed to query db for git accounts")?
else {
return Err(anyhow!("no account found with given id").into());
return Err(
anyhow!("No account found with given id")
.status_code(StatusCode::BAD_REQUEST),
);
};
delete_one_by_id(&db.git_accounts, &self.id, None)
.await
.context("failed to delete git account on db")?;
update.push_simple_log(
"delete git provider account",
"Delete git provider account",
format!(
"Deleted git provider account for {} with username {}",
account.domain, account.username
@@ -200,7 +240,7 @@ impl Resolve<WriteArgs> for DeleteGitProviderAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("failed to add update for delete git provider account | {e:#}")
error!("Failed to add update for delete git provider account | {e:#}")
})
.ok();
@@ -209,6 +249,15 @@ impl Resolve<WriteArgs> for DeleteGitProviderAccount {
}
impl Resolve<WriteArgs> for CreateDockerRegistryAccount {
#[instrument(
"CreateDockerRegistryAccount",
skip_all,
fields(
operator = user.id,
domain = self.account.domain,
username = self.account.username,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -216,20 +265,26 @@ impl Resolve<WriteArgs> for CreateDockerRegistryAccount {
if !user.admin {
return Err(
anyhow!(
"only admins can create docker registry account accounts"
"Only admins can create docker registry account accounts"
)
.into(),
.status_code(StatusCode::FORBIDDEN),
);
}
let mut account: DockerRegistryAccount = self.account.into();
if account.domain.is_empty() {
return Err(anyhow!("domain cannot be empty string.").into());
return Err(
anyhow!("Domain cannot be empty string.")
.status_code(StatusCode::BAD_REQUEST),
);
}
if account.username.is_empty() {
return Err(anyhow!("username cannot be empty string.").into());
return Err(
anyhow!("Username cannot be empty string.")
.status_code(StatusCode::BAD_REQUEST),
);
}
let mut update = make_update(
@@ -243,15 +298,15 @@ impl Resolve<WriteArgs> for CreateDockerRegistryAccount {
.insert_one(&account)
.await
.context(
"failed to create docker registry account account on db",
"Failed to create docker registry account account on db",
)?
.inserted_id
.as_object_id()
.context("inserted id is not ObjectId")?
.context("Inserted id is not ObjectId")?
.to_string();
update.push_simple_log(
"create docker registry account",
"Create docker registry account",
format!(
"Created docker registry account account for {} with username {}",
account.domain, account.username
@@ -263,7 +318,7 @@ impl Resolve<WriteArgs> for CreateDockerRegistryAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("failed to add update for create docker registry account | {e:#}")
error!("Failed to add update for create docker registry account | {e:#}")
})
.ok();
@@ -272,14 +327,24 @@ impl Resolve<WriteArgs> for CreateDockerRegistryAccount {
}
impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
#[instrument(
"UpdateDockerRegistryAccount",
skip_all,
fields(
operator = user.id,
id = self.id,
domain = self.account.domain,
username = self.account.username,
)
)]
async fn resolve(
mut self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateDockerRegistryAccountResponse> {
if !user.admin {
return Err(
anyhow!("only admins can update docker registry accounts")
.into(),
anyhow!("Only admins can update docker registry accounts")
.status_code(StatusCode::FORBIDDEN),
);
}
@@ -288,9 +353,9 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
{
return Err(
anyhow!(
"cannot update docker registry account with empty domain"
"Cannot update docker registry account with empty domain"
)
.into(),
.status_code(StatusCode::BAD_REQUEST),
);
}
@@ -299,9 +364,9 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
{
return Err(
anyhow!(
"cannot update docker registry account with empty username"
"Cannot update docker registry account with empty username"
)
.into(),
.status_code(StatusCode::BAD_REQUEST),
);
}
@@ -314,7 +379,7 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
);
let account = to_document(&self.account).context(
"failed to serialize partial docker registry account account to bson",
"Failed to serialize partial docker registry account account to bson",
)?;
let db = db_client();
@@ -326,19 +391,19 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
)
.await
.context(
"failed to update docker registry account account on db",
"Failed to update docker registry account account on db",
)?;
let Some(account) =
find_one_by_id(&db.registry_accounts, &self.id)
.await
.context("failed to query db for registry accounts")?
.context("Failed to query db for registry accounts")?
else {
return Err(anyhow!("no account found with given id").into());
return Err(anyhow!("No account found with given id").into());
};
update.push_simple_log(
"update docker registry account",
"Update docker registry account",
format!(
"Updated docker registry account account for {} with username {}",
account.domain, account.username
@@ -350,7 +415,7 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("failed to add update for update docker registry account | {e:#}")
error!("Failed to add update for update docker registry account | {e:#}")
})
.ok();
@@ -359,14 +424,22 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
}
impl Resolve<WriteArgs> for DeleteDockerRegistryAccount {
#[instrument(
"DeleteDockerRegistryAccount",
skip_all,
fields(
operator = user.id,
id = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteDockerRegistryAccountResponse> {
if !user.admin {
return Err(
anyhow!("only admins can delete docker registry accounts")
.into(),
anyhow!("Only admins can delete docker registry accounts")
.status_code(StatusCode::FORBIDDEN),
);
}
@@ -380,16 +453,19 @@ impl Resolve<WriteArgs> for DeleteDockerRegistryAccount {
let Some(account) =
find_one_by_id(&db.registry_accounts, &self.id)
.await
.context("failed to query db for git accounts")?
.context("Failed to query db for git accounts")?
else {
return Err(anyhow!("no account found with given id").into());
return Err(
anyhow!("No account found with given id")
.status_code(StatusCode::BAD_REQUEST),
);
};
delete_one_by_id(&db.registry_accounts, &self.id, None)
.await
.context("failed to delete registry account on db")?;
.context("Failed to delete registry account on db")?;
update.push_simple_log(
"delete registry account",
"Delete registry account",
format!(
"Deleted registry account for {} with username {}",
account.domain, account.username
@@ -401,7 +477,7 @@ impl Resolve<WriteArgs> for DeleteDockerRegistryAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("failed to add update for delete docker registry account | {e:#}")
error!("Failed to add update for delete docker registry account | {e:#}")
})
.ok();

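Many hunks above replace a bare `.into()` with `.status_code(StatusCode::FORBIDDEN)` or `.status_code(StatusCode::BAD_REQUEST)`, so that auth and validation failures surface as proper HTTP statuses. The real code uses the `serror` and `anyhow` crates; the following stdlib-only sketch shows the underlying extension-trait pattern with purely illustrative names and a plain `u16` status:

```rust
use std::fmt;

// Illustrative error wrapper carrying an HTTP-like status code.
// Stand-in for serror's status-coded error type, not the real API.
#[derive(Debug)]
struct StatusError {
    status: u16,
    message: String,
}

impl fmt::Display for StatusError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} | {}", self.status, self.message)
    }
}

// Extension trait: lets any displayable error value attach a status
// code, mirroring the `.status_code(...)` calls in the diff above.
trait AddStatusCode {
    fn status_code(self, status: u16) -> StatusError;
}

impl<E: fmt::Display> AddStatusCode for E {
    fn status_code(self, status: u16) -> StatusError {
        StatusError {
            status,
            message: self.to_string(),
        }
    }
}

// Hypothetical handler shaped like the admin checks above.
fn create_account(is_admin: bool) -> Result<(), StatusError> {
    if !is_admin {
        // 403 FORBIDDEN, as in the admin-only branches in the diff.
        return Err(
            "Only admins can create git provider accounts".status_code(403),
        );
    }
    Ok(())
}

fn main() {
    let err = create_account(false).unwrap_err();
    assert_eq!(err.status, 403);
    assert!(create_account(true).is_ok());
    println!("{err}");
}
```

The extension-trait shape keeps call sites terse: the error message and its transport-level status are attached in one chained expression instead of a separate mapping layer.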

@@ -32,7 +32,15 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateRepo {
#[instrument(name = "CreateRepo", skip(user))]
#[instrument(
"CreateRepo",
skip_all,
fields(
operator = user.id,
repo = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -43,7 +51,15 @@ impl Resolve<WriteArgs> for CreateRepo {
}
impl Resolve<WriteArgs> for CopyRepo {
#[instrument(name = "CopyRepo", skip(user))]
#[instrument(
"CopyRepo",
skip_all,
fields(
operator = user.id,
repo = self.name,
copy_repo = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -60,14 +76,32 @@ impl Resolve<WriteArgs> for CopyRepo {
}
impl Resolve<WriteArgs> for DeleteRepo {
#[instrument(name = "DeleteRepo", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Repo> {
Ok(resource::delete::<Repo>(&self.id, args).await?)
#[instrument(
"DeleteRepo",
skip_all,
fields(
operator = user.id,
repo = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Repo> {
Ok(resource::delete::<Repo>(&self.id, user).await?)
}
}
impl Resolve<WriteArgs> for UpdateRepo {
#[instrument(name = "UpdateRepo", skip(user))]
#[instrument(
"UpdateRepo",
skip_all,
fields(
operator = user.id,
repo = self.id,
update = serde_json::to_string(&self.config).unwrap()
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -77,7 +111,15 @@ impl Resolve<WriteArgs> for UpdateRepo {
}
impl Resolve<WriteArgs> for RenameRepo {
#[instrument(name = "RenameRepo", skip(user))]
#[instrument(
"RenameRepo",
skip_all,
fields(
operator = user.id,
repo = self.id,
new_name = self.name
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -154,11 +196,6 @@ impl Resolve<WriteArgs> for RenameRepo {
}
impl Resolve<WriteArgs> for RefreshRepoCache {
#[instrument(
name = "RefreshRepoCache",
level = "debug",
skip(user)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,


@@ -1,20 +1,35 @@
use anyhow::anyhow;
use derive_variants::ExtractVariant as _;
use komodo_client::{
api::write::{UpdateResourceMeta, UpdateResourceMetaResponse},
entities::{
ResourceTarget, action::Action, alerter::Alerter, build::Build,
builder::Builder, deployment::Deployment, procedure::Procedure,
repo::Repo, server::Server, stack::Stack, sync::ResourceSync,
repo::Repo, server::Server, stack::Stack, swarm::Swarm,
sync::ResourceSync,
},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::resource::{self, ResourceMetaUpdate};
use super::WriteArgs;
impl Resolve<WriteArgs> for UpdateResourceMeta {
#[instrument(name = "UpdateResourceMeta", skip(args))]
#[instrument(
"UpdateResourceMeta",
skip_all,
fields(
operator = args.user.id,
resource_type = self.target.extract_variant().to_string(),
resource_id = self.target.extract_variant_id().1,
description = self.description,
template = self.template,
tags = format!("{:?}", self.tags),
)
)]
async fn resolve(
self,
args: &WriteArgs,
@@ -28,12 +43,18 @@ impl Resolve<WriteArgs> for UpdateResourceMeta {
ResourceTarget::System(_) => {
return Err(
anyhow!("cannot update meta of System resource target")
.into(),
.status_code(StatusCode::BAD_REQUEST),
);
}
ResourceTarget::Swarm(id) => {
resource::update_meta::<Swarm>(&id, meta, args).await?;
}
ResourceTarget::Server(id) => {
resource::update_meta::<Server>(&id, meta, args).await?;
}
ResourceTarget::Stack(id) => {
resource::update_meta::<Stack>(&id, meta, args).await?;
}
ResourceTarget::Deployment(id) => {
resource::update_meta::<Deployment>(&id, meta, args).await?;
}
@@ -43,12 +64,6 @@ impl Resolve<WriteArgs> for UpdateResourceMeta {
ResourceTarget::Repo(id) => {
resource::update_meta::<Repo>(&id, meta, args).await?;
}
ResourceTarget::Builder(id) => {
resource::update_meta::<Builder>(&id, meta, args).await?;
}
ResourceTarget::Alerter(id) => {
resource::update_meta::<Alerter>(&id, meta, args).await?;
}
ResourceTarget::Procedure(id) => {
resource::update_meta::<Procedure>(&id, meta, args).await?;
}
@@ -59,8 +74,11 @@ impl Resolve<WriteArgs> for UpdateResourceMeta {
resource::update_meta::<ResourceSync>(&id, meta, args)
.await?;
}
ResourceTarget::Stack(id) => {
resource::update_meta::<Stack>(&id, meta, args).await?;
ResourceTarget::Builder(id) => {
resource::update_meta::<Builder>(&id, meta, args).await?;
}
ResourceTarget::Alerter(id) => {
resource::update_meta::<Alerter>(&id, meta, args).await?;
}
}
Ok(UpdateResourceMetaResponse {})

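The `UpdateResourceMeta` hunk above adds a `Swarm` arm and reorders the `Stack`, `Builder`, and `Alerter` arms in the `ResourceTarget` dispatch. As a minimal sketch (hypothetical enum and handler, not the actual Komodo types), Rust's exhaustive `match` is what makes this kind of refactor safe: adding a variant to the enum is a compile error until every dispatch site handles it, regardless of arm order:

```rust
// Hypothetical stand-in for komodo_client's ResourceTarget enum.
#[derive(Debug)]
enum ResourceTarget {
    System(String),
    Server(String),
    Swarm(String),
    Stack(String),
}

// Dispatch on the variant. No catch-all arm, so extending the enum
// (e.g. adding Swarm, as the diff does) forces this match to be updated.
fn update_meta(target: &ResourceTarget) -> Result<&'static str, String> {
    match target {
        // System targets have no editable meta, mirroring the
        // BAD_REQUEST branch in the diff above.
        ResourceTarget::System(_) => {
            Err("cannot update meta of System resource target".to_string())
        }
        ResourceTarget::Swarm(_) => Ok("Swarm"),
        ResourceTarget::Server(_) => Ok("Server"),
        ResourceTarget::Stack(_) => Ok("Stack"),
    }
}

fn main() {
    assert_eq!(update_meta(&ResourceTarget::Swarm("id".into())), Ok("Swarm"));
    assert!(update_meta(&ResourceTarget::System(String::new())).is_err());
    println!("ok");
}
```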

@@ -3,7 +3,7 @@ use formatting::{bold, format_serror};
use komodo_client::{
api::write::*,
entities::{
NoData, Operation,
Operation,
permission::PermissionLevel,
server::{Server, ServerInfo},
to_docker_compatible_name,
@@ -25,7 +25,15 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateServer {
#[instrument(name = "CreateServer", skip(user))]
#[instrument(
"CreateServer",
skip_all,
fields(
operator = user.id,
server = self.name,
config = serde_json::to_string(&self.config).unwrap()
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -44,7 +52,15 @@ impl Resolve<WriteArgs> for CreateServer {
}
impl Resolve<WriteArgs> for CopyServer {
#[instrument(name = "CopyServer", skip(user))]
#[instrument(
"CopyServer",
skip_all,
fields(
operator = user.id,
server = self.name,
copy_server = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -70,14 +86,32 @@ impl Resolve<WriteArgs> for CopyServer {
}
impl Resolve<WriteArgs> for DeleteServer {
#[instrument(name = "DeleteServer", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Server> {
Ok(resource::delete::<Server>(&self.id, args).await?)
#[instrument(
"DeleteServer",
skip_all,
fields(
operator = user.id,
server = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Server> {
Ok(resource::delete::<Server>(&self.id, user).await?)
}
}
impl Resolve<WriteArgs> for UpdateServer {
#[instrument(name = "UpdateServer", skip(user))]
#[instrument(
"UpdateServer",
skip_all,
fields(
operator = user.id,
server = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -87,7 +121,15 @@ impl Resolve<WriteArgs> for UpdateServer {
}
impl Resolve<WriteArgs> for RenameServer {
#[instrument(name = "RenameServer", skip(user))]
#[instrument(
"RenameServer",
skip_all,
fields(
operator = user.id,
server = self.id,
new_name = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -97,7 +139,15 @@ impl Resolve<WriteArgs> for RenameServer {
}
impl Resolve<WriteArgs> for CreateNetwork {
#[instrument(name = "CreateNetwork", skip(user))]
#[instrument(
"CreateNetwork",
skip_all,
fields(
operator = user.id,
server = self.server,
network = self.name
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -137,88 +187,18 @@ impl Resolve<WriteArgs> for CreateNetwork {
}
}
impl Resolve<WriteArgs> for CreateTerminal {
#[instrument(name = "CreateTerminal", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write.terminal(),
)
.await?;
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::CreateTerminal {
name: self.name,
command: self.command,
recreate: self.recreate,
})
.await
.context("Failed to create terminal on Periphery")?;
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for DeleteTerminal {
#[instrument(name = "DeleteTerminal", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write.terminal(),
)
.await?;
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::DeleteTerminal {
terminal: self.terminal,
})
.await
.context("Failed to delete terminal on Periphery")?;
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for DeleteAllTerminals {
#[instrument(name = "DeleteAllTerminals", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write.terminal(),
)
.await?;
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::DeleteAllTerminals {})
.await
.context("Failed to delete all terminals on Periphery")?;
Ok(NoData {})
}
}
//
impl Resolve<WriteArgs> for UpdateServerPublicKey {
#[instrument(name = "UpdateServerPublicKey", skip(args))]
#[instrument(
"UpdateServerPublicKey",
skip_all,
fields(
operator = args.user.id,
server = self.server,
public_key = self.public_key,
)
)]
async fn resolve(
self,
args: &WriteArgs,
@@ -249,7 +229,14 @@ impl Resolve<WriteArgs> for UpdateServerPublicKey {
//
impl Resolve<WriteArgs> for RotateServerKeys {
#[instrument(name = "RotateServerPrivateKey", skip(args))]
#[instrument(
"RotateServerKeys",
skip_all,
fields(
operator = args.user.id,
server = self.server,
)
)]
async fn resolve(
self,
args: &WriteArgs,


@@ -1,10 +1,5 @@
use std::str::FromStr;
use anyhow::{Context, anyhow};
use database::mungos::{
by_id::find_one_by_id,
mongodb::bson::{doc, oid::ObjectId},
};
use database::mungos::{by_id::find_one_by_id, mongodb::bson::doc};
use komodo_client::{
api::{user::CreateApiKey, write::*},
entities::{
@@ -12,33 +7,52 @@ use komodo_client::{
user::{User, UserConfig},
},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::{AddStatusCode as _, AddStatusCodeError as _};
use crate::{api::user::UserArgs, state::db_client};
use crate::{
api::user::UserArgs,
helpers::validations::{validate_api_key_name, validate_username},
state::db_client,
};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateServiceUser {
#[instrument(name = "CreateServiceUser", skip(user))]
#[instrument(
"CreateServiceUser",
skip_all,
fields(
operator = user.id,
username = self.username,
description = self.description,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<CreateServiceUserResponse> {
if !user.admin {
return Err(anyhow!("user not admin").into());
}
if ObjectId::from_str(&self.username).is_ok() {
return Err(
anyhow!("username cannot be valid ObjectId").into(),
anyhow!("Only Admins can manage Service Users")
.status_code(StatusCode::FORBIDDEN),
);
}
validate_username(&self.username)
.status_code(StatusCode::BAD_REQUEST)?;
let config = UserConfig::Service {
description: self.description,
};
let mut user = User {
id: Default::default(),
username: self.username,
config,
totp: Default::default(),
passkey: Default::default(),
enabled: true,
admin: false,
super_admin: false,
@@ -49,6 +63,7 @@ impl Resolve<WriteArgs> for CreateServiceUser {
all: Default::default(),
updated_at: komodo_timestamp(),
};
user.id = db_client()
.users
.insert_one(&user)
@@ -58,29 +73,48 @@ impl Resolve<WriteArgs> for CreateServiceUser {
.as_object_id()
.context("inserted id is not object id")?
.to_string();
Ok(user)
}
}
impl Resolve<WriteArgs> for UpdateServiceUserDescription {
#[instrument(name = "UpdateServiceUserDescription", skip(user))]
#[instrument(
"UpdateServiceUserDescription",
skip_all,
fields(
operator = user.id,
username = self.username,
description = self.description,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateServiceUserDescriptionResponse> {
if !user.admin {
return Err(anyhow!("user not admin").into());
return Err(
anyhow!("Only Admins can manage Service Users")
.status_code(StatusCode::FORBIDDEN),
);
}
let db = db_client();
let service_user = db
.users
.find_one(doc! { "username": &self.username })
.await
.context("failed to query db for user")?
.context("no user with given username")?;
.context("Failed to query db for user")?
.context("No user with given username")?;
let UserConfig::Service { .. } = &service_user.config else {
return Err(anyhow!("user is not service user").into());
return Err(
anyhow!("Target user is not Service User")
.status_code(StatusCode::FORBIDDEN),
);
};
db.users
.update_one(
doc! { "username": &self.username },
@@ -88,66 +122,110 @@ impl Resolve<WriteArgs> for UpdateServiceUserDescription {
)
.await
.context("failed to update user on db")?;
let res = db
let service_user = db
.users
.find_one(doc! { "username": &self.username })
.await
.context("failed to query db for user")?
.context("user with username not found")?;
Ok(res)
Ok(service_user)
}
}
impl Resolve<WriteArgs> for CreateApiKeyForServiceUser {
#[instrument(name = "CreateApiKeyForServiceUser", skip(user))]
#[instrument(
"CreateApiKeyForServiceUser",
skip_all,
fields(
operator = user.id,
service_user = self.user_id,
name = self.name,
expires = self.expires,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<CreateApiKeyForServiceUserResponse> {
if !user.admin {
return Err(anyhow!("user not admin").into());
return Err(
anyhow!("Only Admins can manage Service Users")
.status_code(StatusCode::FORBIDDEN),
);
}
validate_api_key_name(&self.name)
.status_code(StatusCode::BAD_REQUEST)?;
let service_user =
find_one_by_id(&db_client().users, &self.user_id)
.await
.context("failed to query db for user")?
.context("no user found with id")?;
.context("Failed to query db for user")?
.context("No user found with id")?;
let UserConfig::Service { .. } = &service_user.config else {
return Err(anyhow!("user is not service user").into());
return Err(
anyhow!("Target user is not Service User")
.status_code(StatusCode::FORBIDDEN),
);
};
CreateApiKey {
name: self.name,
expires: self.expires,
}
.resolve(&UserArgs { user: service_user })
.resolve(&UserArgs {
user: service_user,
session: None,
})
.await
}
}
impl Resolve<WriteArgs> for DeleteApiKeyForServiceUser {
#[instrument(name = "DeleteApiKeyForServiceUser", skip(user))]
#[instrument(
"DeleteApiKeyForServiceUser",
skip_all,
fields(
operator = user.id,
key = self.key,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteApiKeyForServiceUserResponse> {
if !user.admin {
return Err(anyhow!("user not admin").into());
return Err(
anyhow!("Only Admins can manage Service Users")
.status_code(StatusCode::FORBIDDEN),
);
}
let db = db_client();
let api_key = db
.api_keys
.find_one(doc! { "key": &self.key })
.await
.context("failed to query db for api key")?
.context("did not find matching api key")?;
let service_user =
find_one_by_id(&db_client().users, &api_key.user_id)
.await
.context("failed to query db for user")?
.context("no user found with id")?;
let UserConfig::Service { .. } = &service_user.config else {
return Err(anyhow!("user is not service user").into());
return Err(
anyhow!("Target user is not Service User")
.status_code(StatusCode::FORBIDDEN),
);
};
db.api_keys
.delete_one(doc! { "key": self.key })
.await

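The service-user hunk above swaps the inline `ObjectId::from_str(&self.username).is_ok()` guard for a shared `validate_username` helper. A hedged sketch of such a validator follows; the rules here are illustrative, not Komodo's actual policy, but they preserve the one check the old inline code performed (rejecting names that parse as a MongoDB ObjectId, i.e. 24 hex characters):

```rust
// Illustrative username validator; the real `validate_username` in
// Komodo's helpers::validations module may enforce different rules.
fn validate_username(username: &str) -> Result<(), String> {
    if username.is_empty() {
        return Err("Username cannot be empty".into());
    }
    // A 24-character hex string would parse as a valid ObjectId,
    // which is the condition the old inline check guarded against.
    if username.len() == 24
        && username.chars().all(|c| c.is_ascii_hexdigit())
    {
        return Err("Username cannot be a valid ObjectId".into());
    }
    // Hypothetical allowed character set.
    if !username
        .chars()
        .all(|c| c.is_ascii_alphanumeric() || matches!(c, '-' | '_' | '.'))
    {
        return Err("Username contains invalid characters".into());
    }
    Ok(())
}

fn main() {
    assert!(validate_username("mbecker20").is_ok());
    assert!(validate_username("").is_err());
    // 24 hex chars: would round-trip through ObjectId::from_str.
    assert!(validate_username("507f1f77bcf86cd799439011").is_err());
    println!("ok");
}
```

Centralizing the check also lets both `CreateServiceUser` and any future username-accepting endpoint return the same `BAD_REQUEST` response, as the diff does via `.status_code(StatusCode::BAD_REQUEST)`.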

@@ -10,7 +10,6 @@ use komodo_client::{
all_logs_success,
permission::PermissionLevel,
repo::Repo,
server::ServerState,
stack::{Stack, StackInfo},
update::Update,
user::stack_user,
@@ -25,9 +24,8 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::{
periphery_client,
query::get_server_with_state,
stack_git_token,
query::get_swarm_or_server,
stack_git_token, swarm_or_server_request,
update::{add_update, make_update},
},
permission::get_check_permissions,
@@ -42,7 +40,15 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateStack {
#[instrument(name = "CreateStack", skip(user))]
#[instrument(
"CreateStack",
skip_all,
fields(
operator = user.id,
stack = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -53,7 +59,15 @@ impl Resolve<WriteArgs> for CreateStack {
}
impl Resolve<WriteArgs> for CopyStack {
#[instrument(name = "CopyStack", skip(user))]
#[instrument(
"CopyStack",
skip_all,
fields(
operator = user.id,
stack = self.name,
copy_stack = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -71,14 +85,32 @@ impl Resolve<WriteArgs> for CopyStack {
}
impl Resolve<WriteArgs> for DeleteStack {
#[instrument(name = "DeleteStack", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Stack> {
Ok(resource::delete::<Stack>(&self.id, args).await?)
#[instrument(
"DeleteStack",
skip_all,
fields(
operator = user.id,
stack = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Stack> {
Ok(resource::delete::<Stack>(&self.id, user).await?)
}
}
impl Resolve<WriteArgs> for UpdateStack {
#[instrument(name = "UpdateStack", skip(user))]
#[instrument(
"UpdateStack",
skip_all,
fields(
operator = user.id,
stack = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -88,7 +120,15 @@ impl Resolve<WriteArgs> for UpdateStack {
}
impl Resolve<WriteArgs> for RenameStack {
#[instrument(name = "RenameStack", skip(user))]
#[instrument(
"RenameStack",
skip_all,
fields(
operator = user.id,
stack = self.id,
new_name = self.name
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -98,7 +138,15 @@ impl Resolve<WriteArgs> for RenameStack {
}
impl Resolve<WriteArgs> for WriteStackFileContents {
#[instrument(name = "WriteStackFileContents", skip(user))]
#[instrument(
"WriteStackFileContents",
skip_all,
fields(
operator = user.id,
stack = self.stack,
path = self.file_path,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -147,38 +195,31 @@ impl Resolve<WriteArgs> for WriteStackFileContents {
}
}
#[instrument("WriteStackFileContentsOnHost", skip_all)]
async fn write_stack_file_contents_on_host(
stack: Stack,
file_path: String,
contents: String,
mut update: Update,
) -> serror::Result<Update> {
if stack.config.server_id.is_empty() {
return Err(anyhow!(
"Cannot write file, Files on host Stack has not configured a Server"
).into());
}
let (server, state) =
get_server_with_state(&stack.config.server_id).await?;
if state != ServerState::Ok {
return Err(
anyhow!(
"Cannot write file when server is unreachable or disabled"
)
.into(),
);
}
match periphery_client(&server)
.await?
.request(WriteComposeContentsToHost {
let swarm_or_server = get_swarm_or_server(
&stack.config.swarm_id,
&stack.config.server_id,
)
.await?;
let res = swarm_or_server_request(
&swarm_or_server,
WriteComposeContentsToHost {
name: stack.name,
run_directory: stack.config.run_directory,
file_path,
contents,
})
.await
.context("Failed to write contents to host")
{
},
)
.await;
match res {
Ok(log) => {
update.logs.push(log);
}
@@ -188,7 +229,7 @@ async fn write_stack_file_contents_on_host(
format_serror(&e.into()),
);
}
};
}
if !all_logs_success(&update.logs) {
update.finalize();
@@ -219,6 +260,7 @@ async fn write_stack_file_contents_on_host(
Ok(update)
}
#[instrument("WriteStackFileContentsGit", skip_all)]
async fn write_stack_file_contents_git(
mut stack: Stack,
file_path: &str,
@@ -360,11 +402,6 @@ async fn write_stack_file_contents_git(
}
impl Resolve<WriteArgs> for RefreshStackCache {
#[instrument(
name = "RefreshStackCache",
level = "debug",
skip(user)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -412,26 +449,22 @@ impl Resolve<WriteArgs> for RefreshStackCache {
// =============
// FILES ON HOST
// =============
let (server, state) = if stack.config.server_id.is_empty() {
(None, ServerState::Disabled)
} else {
let (server, state) =
get_server_with_state(&stack.config.server_id).await?;
(Some(server), state)
};
if state != ServerState::Ok {
(vec![], None, None, None, None)
} else if let Some(server) = server {
if let Ok(swarm_or_server) = get_swarm_or_server(
&stack.config.swarm_id,
&stack.config.server_id,
)
.await
{
let GetComposeContentsOnHostResponse { contents, errors } =
match periphery_client(&server)
.await?
.request(GetComposeContentsOnHost {
match swarm_or_server_request(
&swarm_or_server,
GetComposeContentsOnHost {
file_paths: stack.all_file_dependencies(),
name: stack.name.clone(),
run_directory: stack.config.run_directory.clone(),
})
.await
.context("failed to get compose file contents from host")
},
)
.await
{
Ok(res) => res,
Err(e) => GetComposeContentsOnHostResponse {
@@ -442,7 +475,6 @@ impl Resolve<WriteArgs> for RefreshStackCache {
}],
},
};
let project_name = stack.project_name(true);
let mut services = Vec::new();

View File

@@ -0,0 +1,108 @@
use komodo_client::{
api::write::*,
entities::{
permission::PermissionLevel, swarm::Swarm, update::Update,
},
};
use resolver_api::Resolve;
use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateSwarm {
#[instrument(
"CreateSwarm",
skip_all,
fields(
operator = user.id,
swarm = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Swarm> {
resource::create::<Swarm>(&self.name, self.config, None, user)
.await
}
}
impl Resolve<WriteArgs> for CopySwarm {
#[instrument(
"CopySwarm",
skip_all,
fields(
operator = user.id,
swarm = self.name,
copy_swarm = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Swarm> {
let Swarm { config, .. } = get_check_permissions::<Swarm>(
&self.id,
user,
PermissionLevel::Read.into(),
)
.await?;
resource::create::<Swarm>(&self.name, config.into(), None, user)
.await
}
}
impl Resolve<WriteArgs> for DeleteSwarm {
#[instrument(
"DeleteSwarm",
skip_all,
fields(
operator = user.id,
swarm = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Swarm> {
Ok(resource::delete::<Swarm>(&self.id, user).await?)
}
}
impl Resolve<WriteArgs> for UpdateSwarm {
#[instrument(
"UpdateSwarm",
skip_all,
fields(
operator = user.id,
swarm = self.id,
update = serde_json::to_string(&self.config).unwrap()
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Swarm> {
Ok(resource::update::<Swarm>(&self.id, self.config, user).await?)
}
}
impl Resolve<WriteArgs> for RenameSwarm {
#[instrument(
"RenameSwarm",
skip_all,
fields(
operator = user.id,
swarm = self.id,
new_name = self.name
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Update> {
Ok(resource::rename::<Swarm>(&self.id, &self.name, user).await?)
}
}


@@ -33,6 +33,7 @@ use komodo_client::{
},
};
use resolver_api::Resolve;
use tracing::Instrument;
use crate::{
alert::send_alerts,
@@ -56,7 +57,15 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateResourceSync {
#[instrument(name = "CreateResourceSync", skip(user))]
#[instrument(
"CreateResourceSync",
skip_all,
fields(
operator = user.id,
sync = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -72,7 +81,15 @@ impl Resolve<WriteArgs> for CreateResourceSync {
}
impl Resolve<WriteArgs> for CopyResourceSync {
#[instrument(name = "CopyResourceSync", skip(user))]
#[instrument(
"CopyResourceSync",
skip_all,
fields(
operator = user.id,
sync = self.name,
copy_sync = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -95,17 +112,32 @@ impl Resolve<WriteArgs> for CopyResourceSync {
}
impl Resolve<WriteArgs> for DeleteResourceSync {
#[instrument(name = "DeleteResourceSync", skip(args))]
#[instrument(
"DeleteResourceSync",
skip_all,
fields(
operator = user.id,
sync = self.id,
)
)]
async fn resolve(
self,
args: &WriteArgs,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<ResourceSync> {
Ok(resource::delete::<ResourceSync>(&self.id, args).await?)
Ok(resource::delete::<ResourceSync>(&self.id, user).await?)
}
}
impl Resolve<WriteArgs> for UpdateResourceSync {
#[instrument(name = "UpdateResourceSync", skip(user))]
#[instrument(
"UpdateResourceSync",
skip_all,
fields(
operator = user.id,
sync = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -118,7 +150,15 @@ impl Resolve<WriteArgs> for UpdateResourceSync {
}
impl Resolve<WriteArgs> for RenameResourceSync {
#[instrument(name = "RenameResourceSync", skip(user))]
#[instrument(
"RenameResourceSync",
skip_all,
fields(
operator = user.id,
sync = self.id,
new_name = self.name
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -131,7 +171,16 @@ impl Resolve<WriteArgs> for RenameResourceSync {
}
impl Resolve<WriteArgs> for WriteSyncFileContents {
#[instrument(name = "WriteSyncFileContents", skip(args))]
#[instrument(
"WriteSyncFileContents",
skip_all,
fields(
operator = args.user.id,
sync = self.sync,
resource_path = self.resource_path,
file_path = self.file_path,
)
)]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
@@ -176,6 +225,7 @@ impl Resolve<WriteArgs> for WriteSyncFileContents {
}
}
#[instrument("WriteSyncFileContentsOnHost", skip_all)]
async fn write_sync_file_contents_on_host(
req: WriteSyncFileContents,
args: &WriteArgs,
@@ -238,6 +288,7 @@ async fn write_sync_file_contents_on_host(
Ok(update)
}
#[instrument("WriteSyncFileContentsGit", skip_all)]
async fn write_sync_file_contents_git(
req: WriteSyncFileContents,
args: &WriteArgs,
@@ -389,7 +440,14 @@ async fn write_sync_file_contents_git(
}
impl Resolve<WriteArgs> for CommitSync {
#[instrument(name = "CommitSync", skip(args))]
#[instrument(
"CommitSync",
skip_all,
fields(
operator = args.user.id,
sync = self.sync,
)
)]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let WriteArgs { user } = args;
@@ -476,7 +534,9 @@ impl Resolve<WriteArgs> for CommitSync {
.sync_directory
.join(to_path_compatible_name(&sync.name))
.join(&resource_path);
let span = info_span!("CommitSyncOnHost");
if let Err(e) = secret_file::write_async(&file_path, &res.toml)
.instrument(span)
.await
.with_context(|| {
format!("Failed to write resource file to {file_path:?}",)
@@ -569,6 +629,7 @@ impl Resolve<WriteArgs> for CommitSync {
}
}
#[instrument("CommitSyncGit", skip_all)]
async fn commit_git_sync(
mut args: RepoExecutionArgs,
resource_path: &Path,
@@ -613,11 +674,6 @@ async fn commit_git_sync(
}
impl Resolve<WriteArgs> for RefreshResourceSyncPending {
#[instrument(
name = "RefreshResourceSyncPending",
level = "debug",
skip(user)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,


@@ -27,7 +27,15 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateTag {
#[instrument(name = "CreateTag", skip(user))]
#[instrument(
"CreateTag",
skip_all,
fields(
operator = user.id,
tag = self.name,
color = format!("{:?}", self.color),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -68,7 +76,15 @@ impl Resolve<WriteArgs> for CreateTag {
}
impl Resolve<WriteArgs> for RenameTag {
#[instrument(name = "RenameTag", skip(user))]
#[instrument(
"RenameTag",
skip_all,
fields(
operator = user.id,
tag = self.id,
new_name = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -93,7 +109,15 @@ impl Resolve<WriteArgs> for RenameTag {
}
impl Resolve<WriteArgs> for UpdateTagColor {
#[instrument(name = "UpdateTagColor", skip(user))]
#[instrument(
"UpdateTagColor",
skip_all,
fields(
operator = user.id,
tag = self.tag,
color = format!("{:?}", self.color),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -114,7 +138,14 @@ impl Resolve<WriteArgs> for UpdateTagColor {
}
impl Resolve<WriteArgs> for DeleteTag {
#[instrument(name = "DeleteTag", skip(user))]
#[instrument(
"DeleteTag",
skip_all,
fields(
operator = user.id,
tag_id = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,


@@ -0,0 +1,309 @@
use anyhow::Context as _;
use futures_util::{StreamExt as _, stream::FuturesUnordered};
use komodo_client::{
api::write::*,
entities::{
NoData, deployment::Deployment, permission::PermissionLevel,
server::Server, stack::Stack, terminal::TerminalTarget,
user::User,
},
};
use periphery_client::api;
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCode;
use crate::{
helpers::{
periphery_client,
query::get_all_tags,
terminal::{
create_container_terminal_inner,
get_deployment_periphery_container,
get_stack_service_periphery_container,
},
},
permission::get_check_permissions,
resource,
};
use super::WriteArgs;
//
impl Resolve<WriteArgs> for CreateTerminal {
#[instrument(
"CreateTerminal",
skip_all,
fields(
operator = user.id,
terminal = self.name,
target = format!("{:?}", self.target),
command = self.command,
mode = format!("{:?}", self.mode),
recreate = format!("{:?}", self.recreate),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
match self.target.clone() {
TerminalTarget::Server { server } => {
let server = server
.context("Must provide 'target.params.server'")
.status_code(StatusCode::BAD_REQUEST)?;
create_server_terminal(self, server, user).await?;
}
TerminalTarget::Container { server, container } => {
create_container_terminal(self, server, container, user)
.await?;
}
TerminalTarget::Stack { stack, service } => {
let service = service
.context("Must provide 'target.params.service'")
.status_code(StatusCode::BAD_REQUEST)?;
create_stack_service_terminal(self, stack, service, user)
.await?;
}
TerminalTarget::Deployment { deployment } => {
create_deployment_terminal(self, deployment, user).await?;
}
};
Ok(NoData {})
}
}
async fn create_server_terminal(
CreateTerminal {
name,
command,
recreate,
target: _,
mode: _,
}: CreateTerminal,
server: String,
user: &User,
) -> anyhow::Result<()> {
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::CreateServerTerminal {
name,
command,
recreate,
})
.await
.context("Failed to create Server Terminal on Periphery")?;
Ok(())
}
async fn create_container_terminal(
req: CreateTerminal,
server: String,
container: String,
user: &User,
) -> anyhow::Result<()> {
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server).await?;
create_container_terminal_inner(req, &periphery, container).await
}
async fn create_stack_service_terminal(
req: CreateTerminal,
stack: String,
service: String,
user: &User,
) -> anyhow::Result<()> {
let (_, periphery, container) =
get_stack_service_periphery_container(&stack, &service, user)
.await?;
create_container_terminal_inner(req, &periphery, container).await
}
async fn create_deployment_terminal(
req: CreateTerminal,
deployment: String,
user: &User,
) -> anyhow::Result<()> {
let (_, periphery, container) =
get_deployment_periphery_container(&deployment, user).await?;
create_container_terminal_inner(req, &periphery, container).await
}
//
impl Resolve<WriteArgs> for DeleteTerminal {
#[instrument(
"DeleteTerminal",
skip_all,
fields(
operator = user.id,
target = format!("{:?}", self.target),
terminal = self.terminal,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = match &self.target {
TerminalTarget::Server { server } => {
let server = server
.as_ref()
.context("Must provide 'target.params.server'")
.status_code(StatusCode::BAD_REQUEST)?;
get_check_permissions::<Server>(
server,
user,
PermissionLevel::Read.terminal(),
)
.await?
}
TerminalTarget::Container { server, .. } => {
get_check_permissions::<Server>(
server,
user,
PermissionLevel::Read.terminal(),
)
.await?
}
TerminalTarget::Stack { stack, .. } => {
let server = get_check_permissions::<Stack>(
stack,
user,
PermissionLevel::Read.terminal(),
)
.await?
.config
.server_id;
resource::get::<Server>(&server).await?
}
TerminalTarget::Deployment { deployment } => {
let server = get_check_permissions::<Deployment>(
deployment,
user,
PermissionLevel::Read.terminal(),
)
.await?
.config
.server_id;
resource::get::<Server>(&server).await?
}
};
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::DeleteTerminal {
target: self.target,
terminal: self.terminal,
})
.await
.context("Failed to delete terminal on Periphery")?;
Ok(NoData {})
}
}
//
impl Resolve<WriteArgs> for DeleteAllTerminals {
#[instrument(
"DeleteAllTerminals",
skip_all,
fields(
operator = user.id,
server = self.server,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::DeleteAllTerminals {})
.await
.context("Failed to delete all terminals on Periphery")?;
Ok(NoData {})
}
}
//
impl Resolve<WriteArgs> for BatchDeleteAllTerminals {
#[instrument(
"BatchDeleteAllTerminals",
skip_all,
fields(
operator = user.id,
query = format!("{:?}", self.query),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> Result<Self::Response, Self::Error> {
let all_tags = if self.query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<Server>(
self.query,
user,
PermissionLevel::Read.terminal(),
&all_tags,
)
.await?
.into_iter()
.map(|server| async move {
let res = async {
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::DeleteAllTerminals {})
.await
.context("Failed to delete all terminals on Periphery")?;
anyhow::Ok(())
}
.await;
if let Err(e) = res {
warn!(
"Failed to delete all terminals on {} ({}) | {e:#}",
server.name, server.id
)
}
})
.collect::<FuturesUnordered<_>>()
.collect::<Vec<_>>()
.await;
Ok(NoData {})
}
}
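The `BatchDeleteAllTerminals` handler above fans the call out across all matching servers, collects every outcome concurrently via `FuturesUnordered`, and logs per-server failures with `warn!` instead of aborting the batch. A minimal stdlib sketch of that same fan-out-and-log shape, with threads standing in for the async futures and a hypothetical `delete_all_terminals` stub in place of the Periphery request:

```rust
use std::thread;

// Hypothetical stand-in for the per-server Periphery call, which
// may fail independently for each server.
fn delete_all_terminals(server: &str) -> Result<(), String> {
    if server == "unreachable" {
        Err("connection refused".into())
    } else {
        Ok(())
    }
}

// Fan the call out across servers and collect every outcome, so one
// failing server cannot abort the batch (mirrors the warn!-and-continue
// handling above, with threads in place of FuturesUnordered).
fn fan_out(
    servers: &[&'static str],
) -> Vec<(&'static str, Result<(), String>)> {
    servers
        .iter()
        .map(|&s| thread::spawn(move || (s, delete_all_terminals(s))))
        .collect::<Vec<_>>() // spawn all first, then join
        .into_iter()
        .map(|h| h.join().expect("worker panicked"))
        .collect()
}

fn main() {
    for (server, res) in fan_out(&["alpha", "unreachable", "beta"]) {
        if let Err(e) = res {
            eprintln!("Failed to delete all terminals on {server} | {e}");
        }
    }
}
```

The key property is that results are collected, not short-circuited: the batch always visits every server, and failures surface only as log lines.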


@@ -15,40 +15,42 @@ use komodo_client::{
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use serror::{AddStatusCode as _, AddStatusCodeError};
use crate::{config::core_config, state::db_client};
use crate::{
config::core_config,
helpers::validations::{validate_password, validate_username},
state::db_client,
};
use super::WriteArgs;
//
impl Resolve<WriteArgs> for CreateLocalUser {
#[instrument(name = "CreateLocalUser", skip(admin, self), fields(admin_id = admin.id, username = self.username))]
#[instrument(
"CreateLocalUser",
skip_all,
fields(
admin_id = admin.id,
username = self.username
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<CreateLocalUserResponse> {
if !admin.admin {
return Err(
anyhow!("This method is admin-only.")
anyhow!("This method is Admin Only.")
.status_code(StatusCode::FORBIDDEN),
);
}
if self.username.is_empty() {
return Err(anyhow!("Username cannot be empty.").into());
}
if ObjectId::from_str(&self.username).is_ok() {
return Err(
anyhow!("Username cannot be valid ObjectId").into(),
);
}
if self.password.is_empty() {
return Err(anyhow!("Password cannot be empty.").into());
}
validate_username(&self.username)
.status_code(StatusCode::BAD_REQUEST)?;
validate_password(&self.password)
.status_code(StatusCode::BAD_REQUEST)?;
let db = db_client();
@@ -80,6 +82,8 @@ impl Resolve<WriteArgs> for CreateLocalUser {
config: UserConfig::Local {
password: hashed_password,
},
totp: Default::default(),
passkey: Default::default(),
};
user.id = db_client()
@@ -101,7 +105,14 @@ impl Resolve<WriteArgs> for CreateLocalUser {
//
impl Resolve<WriteArgs> for UpdateUserUsername {
#[instrument(name = "UpdateUserUsername", skip(user), fields(user_id = user.id))]
#[instrument(
"UpdateUserUsername",
skip_all,
fields(
operator = user.id,
new_username = self.username,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -116,17 +127,11 @@ impl Resolve<WriteArgs> for UpdateUserUsername {
);
}
}
if self.username.is_empty() {
return Err(anyhow!("Username cannot be empty.").into());
}
if ObjectId::from_str(&self.username).is_ok() {
return Err(
anyhow!("Username cannot be valid ObjectId").into(),
);
}
validate_username(&self.username)?;
let db = db_client();
if db
.users
.find_one(doc! { "username": &self.username })
@@ -136,8 +141,10 @@ impl Resolve<WriteArgs> for UpdateUserUsername {
{
return Err(anyhow!("Username already taken.").into());
}
let id = ObjectId::from_str(&user.id)
.context("User id not valid ObjectId.")?;
db.users
.update_one(
doc! { "_id": id },
@@ -145,6 +152,7 @@ impl Resolve<WriteArgs> for UpdateUserUsername {
)
.await
.context("Failed to update user username on database.")?;
Ok(NoData {})
}
}
@@ -152,7 +160,11 @@ impl Resolve<WriteArgs> for UpdateUserUsername {
//
impl Resolve<WriteArgs> for UpdateUserPassword {
#[instrument(name = "UpdateUserPassword", skip(user, self), fields(user_id = user.id))]
#[instrument(
"UpdateUserPassword",
skip_all,
fields(operator = user.id)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -167,7 +179,12 @@ impl Resolve<WriteArgs> for UpdateUserPassword {
);
}
}
validate_password(&self.password)
.status_code(StatusCode::BAD_REQUEST)?;
db_client().set_user_password(user, &self.password).await?;
Ok(NoData {})
}
}
@@ -175,7 +192,14 @@ impl Resolve<WriteArgs> for UpdateUserPassword {
//
impl Resolve<WriteArgs> for DeleteUser {
#[instrument(name = "DeleteUser", skip(admin), fields(user = self.user))]
#[instrument(
"DeleteUser",
skip_all,
fields(
admin_id = admin.id,
user_to_delete = self.user
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -186,15 +210,19 @@ impl Resolve<WriteArgs> for DeleteUser {
.status_code(StatusCode::FORBIDDEN),
);
}
if admin.username == self.user || admin.id == self.user {
return Err(anyhow!("User cannot delete themselves.").into());
}
let query = if let Ok(id) = ObjectId::from_str(&self.user) {
doc! { "_id": id }
} else {
doc! { "username": self.user }
};
let db = db_client();
let Some(user) = db
.users
.find_one(query.clone())
@@ -205,21 +233,25 @@ impl Resolve<WriteArgs> for DeleteUser {
anyhow!("No user found with given id / username").into(),
);
};
if user.super_admin {
return Err(
anyhow!("Cannot delete a super admin user.").into(),
);
}
if user.admin && !admin.super_admin {
return Err(
anyhow!("Only a Super Admin can delete an admin user.")
.into(),
);
}
db.users
.delete_one(query)
.await
.context("Failed to delete user from database")?;
// Also remove user id from all user groups
if let Err(e) = db
.user_groups
@@ -228,6 +260,7 @@ impl Resolve<WriteArgs> for DeleteUser {
{
warn!("Failed to remove deleted user from user groups | {e:?}");
};
Ok(user)
}
}


@@ -19,7 +19,14 @@ use crate::state::db_client;
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateUserGroup {
#[instrument(name = "CreateUserGroup", skip(admin), fields(admin = admin.username))]
#[instrument(
"CreateUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -57,7 +64,15 @@ impl Resolve<WriteArgs> for CreateUserGroup {
}
impl Resolve<WriteArgs> for RenameUserGroup {
#[instrument(name = "RenameUserGroup", skip(admin), fields(admin = admin.username))]
#[instrument(
"RenameUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.id,
new_name = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -86,7 +101,14 @@ impl Resolve<WriteArgs> for RenameUserGroup {
}
impl Resolve<WriteArgs> for DeleteUserGroup {
#[instrument(name = "DeleteUserGroup", skip(admin), fields(admin = admin.username))]
#[instrument(
"DeleteUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -122,7 +144,15 @@ impl Resolve<WriteArgs> for DeleteUserGroup {
}
impl Resolve<WriteArgs> for AddUserToUserGroup {
#[instrument(name = "AddUserToUserGroup", skip(admin), fields(admin = admin.username))]
#[instrument(
"AddUserToUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.user_group,
user = self.user,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -169,7 +199,15 @@ impl Resolve<WriteArgs> for AddUserToUserGroup {
}
impl Resolve<WriteArgs> for RemoveUserFromUserGroup {
#[instrument(name = "RemoveUserFromUserGroup", skip(admin), fields(admin = admin.username))]
#[instrument(
"RemoveUserFromUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.user_group,
user = self.user,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -216,7 +254,15 @@ impl Resolve<WriteArgs> for RemoveUserFromUserGroup {
}
impl Resolve<WriteArgs> for SetUsersInUserGroup {
#[instrument(name = "SetUsersInUserGroup", skip(admin), fields(admin = admin.username))]
#[instrument(
"SetUsersInUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.user_group,
users = format!("{:?}", self.users)
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -266,7 +312,15 @@ impl Resolve<WriteArgs> for SetUsersInUserGroup {
}
impl Resolve<WriteArgs> for SetEveryoneUserGroup {
#[instrument(name = "SetEveryoneUserGroup", skip(admin), fields(admin = admin.username))]
#[instrument(
"SetEveryoneUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.user_group,
everyone = self.everyone,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,


@@ -6,12 +6,13 @@ use komodo_client::{
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use serror::{AddStatusCode as _, AddStatusCodeError};
use crate::{
helpers::{
query::get_variable,
update::{add_update, make_update},
validations::{validate_variable_name, validate_variable_value},
},
state::db_client,
};
@@ -19,14 +20,23 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateVariable {
#[instrument(name = "CreateVariable", skip(user, self), fields(name = &self.name))]
#[instrument(
"CreateVariable",
skip_all,
fields(
operator = user.id,
variable = self.name,
description = self.description,
is_secret = self.is_secret,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<CreateVariableResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can create variables")
anyhow!("Only Admins can create Variables")
.status_code(StatusCode::FORBIDDEN),
);
}
@@ -38,6 +48,11 @@ impl Resolve<WriteArgs> for CreateVariable {
is_secret,
} = self;
validate_variable_name(&name)
.status_code(StatusCode::BAD_REQUEST)?;
validate_variable_value(&value)
.status_code(StatusCode::BAD_REQUEST)?;
let variable = Variable {
name,
value,
@@ -49,7 +64,7 @@ impl Resolve<WriteArgs> for CreateVariable {
.variables
.insert_one(&variable)
.await
.context("Failed to create variable on db")?;
.context("Failed to create Variable on db")?;
let mut update = make_update(
ResourceTarget::system(),
@@ -58,7 +73,8 @@ impl Resolve<WriteArgs> for CreateVariable {
);
update
.push_simple_log("create variable", format!("{variable:#?}"));
.push_simple_log("Create Variable", format!("{variable:#?}"));
update.finalize();
add_update(update).await?;
@@ -68,20 +84,32 @@ impl Resolve<WriteArgs> for CreateVariable {
}
impl Resolve<WriteArgs> for UpdateVariableValue {
#[instrument(name = "UpdateVariableValue", skip(user, self), fields(name = &self.name))]
#[instrument(
"UpdateVariableValue",
skip_all,
fields(
operator = user.id,
variable = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateVariableValueResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can update variables")
anyhow!("Only Admins can update Variables")
.status_code(StatusCode::FORBIDDEN),
);
}
let UpdateVariableValue { name, value } = self;
validate_variable_name(&name)
.status_code(StatusCode::BAD_REQUEST)?;
validate_variable_value(&value)
.status_code(StatusCode::BAD_REQUEST)?;
let variable = get_variable(&name).await?;
if value == variable.value {
@@ -125,17 +153,26 @@ impl Resolve<WriteArgs> for UpdateVariableValue {
}
impl Resolve<WriteArgs> for UpdateVariableDescription {
#[instrument(name = "UpdateVariableDescription", skip(user))]
#[instrument(
"UpdateVariableDescription",
skip_all,
fields(
operator = user.id,
variable = self.name,
description = self.description,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateVariableDescriptionResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can update variables")
anyhow!("Only Admins can update Variables")
.status_code(StatusCode::FORBIDDEN),
);
}
db_client()
.variables
.update_one(
@@ -144,22 +181,32 @@ impl Resolve<WriteArgs> for UpdateVariableDescription {
)
.await
.context("Failed to update variable description on db")?;
Ok(get_variable(&self.name).await?)
}
}
impl Resolve<WriteArgs> for UpdateVariableIsSecret {
#[instrument(name = "UpdateVariableIsSecret", skip(user))]
#[instrument(
"UpdateVariableIsSecret",
skip_all,
fields(
operator = user.id,
variable = self.name,
is_secret = self.is_secret,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateVariableIsSecretResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can update variables")
anyhow!("Only Admins can update Variables")
.status_code(StatusCode::FORBIDDEN),
);
}
db_client()
.variables
.update_one(
@@ -167,28 +214,39 @@ impl Resolve<WriteArgs> for UpdateVariableIsSecret {
doc! { "$set": { "is_secret": self.is_secret } },
)
.await
.context("Failed to update variable is secret on db")?;
.context("Failed to update Variable 'is_secret' on db")?;
Ok(get_variable(&self.name).await?)
}
}
impl Resolve<WriteArgs> for DeleteVariable {
#[instrument(
"DeleteVariable",
skip_all,
fields(
operator = user.id,
variable = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteVariableResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can delete variables")
anyhow!("Only Admins can delete Variables")
.status_code(StatusCode::FORBIDDEN),
);
}
let variable = get_variable(&self.name).await?;
db_client()
.variables
.delete_one(doc! { "name": &self.name })
.await
.context("Failed to delete variable on db")?;
.context("Failed to delete Variable on db")?;
let mut update = make_update(
ResourceTarget::system(),

bin/core/src/api/ws/mod.rs Normal file

@@ -0,0 +1,101 @@
use std::net::IpAddr;
use crate::{
auth::{auth_api_key_check_enabled, auth_jwt_check_enabled},
helpers::query::get_user,
state::auth_rate_limiter,
};
use anyhow::{Context, anyhow};
use axum::{
Router,
extract::ws::{self, WebSocket},
http::HeaderMap,
routing::get,
};
use komodo_client::{entities::user::User, ws::WsLoginMessage};
use rate_limit::WithFailureRateLimit;
use reqwest::StatusCode;
use serror::{AddStatusCode, AddStatusCodeError};
mod terminal;
mod update;
pub fn router() -> Router {
Router::new()
// Periphery facing
.route("/periphery", get(crate::connection::server::handler))
// User facing
.route("/update", get(update::handler))
.route("/terminal", get(terminal::handler))
}
async fn user_ws_login(
mut socket: WebSocket,
headers: &HeaderMap,
fallback_ip: IpAddr,
) -> Option<(WebSocket, User)> {
let res = async {
let message = match socket
.recv()
.await
.context("Failed to receive message over socket: Closed")
.status_code(StatusCode::BAD_REQUEST)?
.context("Failed to receive message over socket: Error")
.status_code(StatusCode::BAD_REQUEST)?
{
ws::Message::Text(utf8_bytes) => utf8_bytes.to_string(),
ws::Message::Binary(bytes) => String::from_utf8(bytes.into())
.context("Received invalid message bytes: Not UTF-8")
.status_code(StatusCode::BAD_REQUEST)?,
message => {
return Err(
anyhow!("Received invalid message: {message:?}")
.status_code(StatusCode::BAD_REQUEST),
);
}
};
match WsLoginMessage::from_json_str(&message)
.context("Invalid login message")
.status_code(StatusCode::BAD_REQUEST)?
{
WsLoginMessage::Jwt { jwt } => auth_jwt_check_enabled(&jwt)
.await
.status_code(StatusCode::UNAUTHORIZED),
WsLoginMessage::ApiKeys { key, secret } => {
auth_api_key_check_enabled(&key, &secret)
.await
.status_code(StatusCode::UNAUTHORIZED)
}
}
}
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
headers,
Some(fallback_ip),
)
.await;
match res {
Ok(user) => {
let _ = socket.send(ws::Message::text("LOGGED_IN")).await;
Some((socket, user))
}
Err(e) => {
let _ = socket
.send(ws::Message::text(format!(
"[{}]: {:#}",
e.status, e.error
)))
.await;
None
}
}
}
async fn check_user_valid(user_id: &str) -> anyhow::Result<User> {
let user = get_user(user_id).await?;
if !user.enabled {
return Err(anyhow!("User not enabled"));
}
Ok(user)
}
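The `user_ws_login` flow above accepts exactly one first frame, parses it as a `WsLoginMessage`, and dispatches to either JWT or API-key authentication. A minimal sketch of that dispatch, with stub credential checks standing in for `auth_jwt_check_enabled` / `auth_api_key_check_enabled` (the variants mirror the real enum, but the credentials and user ids here are hypothetical):

```rust
// Mirrors the WsLoginMessage shape used above; the stub checks
// below are illustrative only, not the real auth logic.
enum WsLoginMessage {
    Jwt { jwt: String },
    ApiKeys { key: String, secret: String },
}

fn authenticate(msg: WsLoginMessage) -> Result<String, String> {
    match msg {
        // JWT path: verify the token, yield the user id it encodes.
        WsLoginMessage::Jwt { jwt } => {
            if jwt == "valid-token" {
                Ok("user-1".to_string())
            } else {
                Err("Unauthorized".to_string())
            }
        }
        // API key path: both halves must match.
        WsLoginMessage::ApiKeys { key, secret } => {
            if key == "K-1" && secret == "S-1" {
                Ok("user-2".to_string())
            } else {
                Err("Unauthorized".to_string())
            }
        }
    }
}

fn main() {
    match authenticate(WsLoginMessage::Jwt {
        jwt: "valid-token".into(),
    }) {
        Ok(user) => println!("LOGGED_IN as {user}"),
        Err(e) => println!("[401]: {e}"),
    }
}
```

Either branch resolves to the same `User` lookup in the real handler; on failure the socket gets a status-coded error text and is dropped, which is what the rate limiter counts.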


@@ -0,0 +1,188 @@
use std::net::SocketAddr;
use anyhow::anyhow;
use axum::{
extract::{ConnectInfo, FromRequestParts, WebSocketUpgrade, ws},
http::{HeaderMap, request},
response::IntoResponse,
};
use bytes::Bytes;
use futures_util::{SinkExt, StreamExt as _};
use komodo_client::{
api::terminal::ConnectTerminalQuery, entities::user::User,
};
use periphery_client::api::terminal::DisconnectTerminal;
use serde::de::DeserializeOwned;
use tokio_util::sync::CancellationToken;
use crate::{
helpers::terminal::setup_target_for_user,
periphery::{PeripheryClient, terminal::ConnectTerminalResponse},
state::periphery_connections,
};
#[instrument("ConnectTerminal", skip(ws))]
pub async fn handler(
Qs(query): Qs<ConnectTerminalQuery>,
ConnectInfo(info): ConnectInfo<SocketAddr>,
headers: HeaderMap,
ws: WebSocketUpgrade,
) -> impl IntoResponse {
let ip = info.ip();
ws.on_upgrade(move |socket| async move {
let Some((mut client_socket, user)) =
super::user_ws_login(socket, &headers, ip).await
else {
return;
};
let (periphery, response) =
match setup_forwarding(query, &user).await {
Ok(response) => response,
Err(e) => {
let _ = client_socket
.send(ws::Message::text(format!("ERROR: {e:#}")))
.await;
let _ = client_socket.close().await;
return;
}
};
forward_ws_channel(periphery, client_socket, response).await
})
}
async fn setup_forwarding(
ConnectTerminalQuery {
target,
terminal,
init,
}: ConnectTerminalQuery,
user: &User,
) -> anyhow::Result<(PeripheryClient, ConnectTerminalResponse)> {
let (target, terminal, periphery) =
setup_target_for_user(target, terminal, init, user).await?;
let response = periphery.connect_terminal(terminal, target).await?;
Ok((periphery, response))
}
async fn forward_ws_channel(
periphery: PeripheryClient,
client_socket: axum::extract::ws::WebSocket,
ConnectTerminalResponse {
channel,
sender: periphery_sender,
receiver: mut periphery_receiver,
}: ConnectTerminalResponse,
) {
let (mut client_send, mut client_receive) = client_socket.split();
let cancel = CancellationToken::new();
periphery_receiver.set_cancel(cancel.clone());
trace!("starting ws exchange");
let core_to_periphery = async {
loop {
let client_recv_res = tokio::select! {
res = client_receive.next() => res,
_ = cancel.cancelled() => break,
};
let bytes = match client_recv_res {
Some(Ok(ws::Message::Binary(bytes))) => bytes.into(),
Some(Ok(ws::Message::Text(text))) => {
let bytes: Bytes = text.into();
bytes.into()
}
Some(Ok(ws::Message::Close(_frame))) => {
break;
}
Some(Err(_e)) => {
break;
}
None => {
break;
}
// Ignore
Some(Ok(_)) => continue,
};
if let Err(_e) =
periphery_sender.send_terminal(channel, Ok(bytes)).await
{
break;
};
}
cancel.cancel();
let _ = periphery_sender
.send_terminal(channel, Err(anyhow!("Client disconnected")))
.await;
};
let periphery_to_core = async {
loop {
// Already adheres to cancellation token
match periphery_receiver.recv().await {
Ok(Ok(bytes)) => {
if let Err(e) =
client_send.send(ws::Message::Binary(bytes.into())).await
{
debug!("{e:?}");
break;
};
}
Ok(Err(e)) => {
let _ = client_send
.send(ws::Message::text(format!("{e:#}")))
.await;
break;
}
Err(_) => {
let _ =
client_send.send(ws::Message::text("STREAM EOF")).await;
break;
}
}
}
let _ = client_send.close().await;
cancel.cancel();
};
tokio::join!(core_to_periphery, periphery_to_core);
// Cleanup
if let Err(e) =
periphery.request(DisconnectTerminal { channel }).await
{
warn!(
"Failed to disconnect Periphery terminal forwarding | {e:#}",
)
}
if let Some(connection) =
periphery_connections().get(&periphery.id).await
{
connection.terminals.remove(&channel).await;
}
}
pub struct Qs<T>(pub T);
impl<S, T> FromRequestParts<S> for Qs<T>
where
S: Send + Sync,
T: DeserializeOwned,
{
type Rejection = axum::response::Response;
async fn from_request_parts(
parts: &mut request::Parts,
_state: &S,
) -> Result<Self, Self::Rejection> {
let raw = parts.uri.query().unwrap_or_default();
serde_qs::from_str::<T>(raw).map(Qs).map_err(|e| {
axum::response::IntoResponse::into_response((
axum::http::StatusCode::BAD_REQUEST,
format!("Failed to parse request query: {e}"),
))
})
}
}
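The `Qs` extractor above hands the raw query string straight to `serde_qs`. As a rough illustration of the flat `key=value&key=value` shape it accepts, here is a std-only sketch (assumption: no percent-decoding or nested `a[b]=c` keys, both of which `serde_qs` additionally handles):

```rust
use std::collections::HashMap;

/// Minimal sketch of flat query-string parsing, for illustration only.
/// The real extractor deserializes into a typed struct via serde_qs.
fn parse_flat_query(raw: &str) -> HashMap<String, String> {
    raw.split('&')
        .filter(|pair| !pair.is_empty())
        .filter_map(|pair| {
            let (k, v) = pair.split_once('=')?;
            Some((k.to_string(), v.to_string()))
        })
        .collect()
}
```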

View File

@@ -1,9 +1,12 @@
use std::net::SocketAddr;
use anyhow::anyhow;
use axum::{
extract::{WebSocketUpgrade, ws::Message},
extract::{ConnectInfo, WebSocketUpgrade, ws::Message},
http::HeaderMap,
response::IntoResponse,
};
use futures::{SinkExt, StreamExt};
use futures_util::{SinkExt, StreamExt};
use komodo_client::entities::{
ResourceTarget, permission::PermissionLevel, user::User,
};
@@ -16,18 +19,24 @@ use crate::helpers::{
channel::update_channel, query::get_user_permission_on_target,
};
#[instrument(level = "debug")]
pub async fn handler(ws: WebSocketUpgrade) -> impl IntoResponse {
pub async fn handler(
headers: HeaderMap,
ConnectInfo(info): ConnectInfo<SocketAddr>,
ws: WebSocketUpgrade,
) -> impl IntoResponse {
// get a receiver for internal update messages.
let mut receiver = update_channel().receiver.resubscribe();
let ip = info.ip();
// handle http -> ws upgrade
ws.on_upgrade(|socket| async move {
let Some((socket, user)) = super::user_ws_login(socket).await else {
return
ws.on_upgrade(move |socket| async move {
let Some((client_socket, user)) =
super::user_ws_login(socket, &headers, ip).await
else {
return;
};
let (mut ws_sender, mut ws_reciever) = socket.split();
let (mut ws_sender, mut ws_reciever) = client_socket.split();
let cancel = CancellationToken::new();
let cancel_clone = cancel.clone();
@@ -82,7 +91,6 @@ pub async fn handler(ws: WebSocketUpgrade) -> impl IntoResponse {
})
}
#[instrument(level = "debug")]
async fn user_can_see_update(
user: &User,
update_target: &ResourceTarget,

View File

@@ -1,17 +1,15 @@
use std::sync::OnceLock;
use anyhow::{Context, anyhow};
use komodo_client::entities::config::core::{
CoreConfig, OauthCredentials,
use komodo_client::entities::{
config::core::{CoreConfig, OauthCredentials},
random_string,
};
use reqwest::StatusCode;
use serde::{Deserialize, Serialize, de::DeserializeOwned};
use tokio::sync::Mutex;
use crate::{
auth::STATE_PREFIX_LENGTH, config::core_config,
helpers::random_string,
};
use crate::{auth::STATE_PREFIX_LENGTH, config::core_config};
pub fn github_oauth_client() -> &'static Option<GithubOauthClient> {
static GITHUB_OAUTH_CLIENT: OnceLock<Option<GithubOauthClient>> =
@@ -76,7 +74,6 @@ impl GithubOauthClient {
.into()
}
#[instrument(level = "debug", skip(self))]
pub async fn get_login_redirect_url(
&self,
redirect: Option<String>,
@@ -95,7 +92,6 @@ impl GithubOauthClient {
redirect_url
}
#[instrument(level = "debug", skip(self))]
pub async fn check_state(&self, state: &str) -> bool {
let mut contained = false;
self.states.lock().await.retain(|s| {
@@ -109,7 +105,6 @@ impl GithubOauthClient {
contained
}
#[instrument(level = "debug", skip(self))]
pub async fn get_access_token(
&self,
code: &str,
@@ -130,7 +125,6 @@ impl GithubOauthClient {
.context("failed to get github access token using code")
}
#[instrument(level = "debug", skip(self))]
pub async fn get_github_user(
&self,
token: &str,
@@ -141,7 +135,6 @@ impl GithubOauthClient {
.context("failed to get github user using access token")
}
#[instrument(level = "debug", skip(self))]
async fn get<R: DeserializeOwned>(
&self,
endpoint: &str,

View File

@@ -1,21 +1,37 @@
use std::net::SocketAddr;
use anyhow::{Context, anyhow};
use axum::{
Router, extract::Query, response::Redirect, routing::get,
Router,
extract::{ConnectInfo, Query},
http::HeaderMap,
response::Redirect,
routing::get,
};
use database::mongo_indexed::Document;
use database::mungos::mongodb::bson::doc;
use komodo_client::entities::{
komodo_timestamp,
user::{User, UserConfig},
use futures_util::TryFutureExt;
use komodo_client::{
api::auth::UserIdOrTwoFactor,
entities::{
komodo_timestamp, random_string,
user::{User, UserConfig},
},
};
use rate_limit::WithFailureRateLimit;
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;
use serror::{AddStatusCode, AddStatusCodeError as _};
use tower_sessions::Session;
use crate::{
api::{
SESSION_KEY_PASSKEY_LOGIN, SESSION_KEY_TOTP_LOGIN,
SESSION_KEY_USER_ID,
},
auth::format_redirect,
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
state::{auth_rate_limiter, db_client, webauthn},
};
use self::client::github_oauth_client;
@@ -29,21 +45,32 @@ pub fn router() -> Router {
.route(
"/login",
get(|Query(query): Query<RedirectQuery>| async {
Redirect::to(
&github_oauth_client()
.as_ref()
// OK: the router is only mounted when the client is populated
.unwrap()
.get_login_redirect_url(query.redirect)
.await,
)
let uri = github_oauth_client()
.as_ref()
.context("Github Oauth not configured")
.status_code(StatusCode::UNAUTHORIZED)?
.get_login_redirect_url(query.redirect)
.await;
serror::Result::Ok(Redirect::to(&uri))
}),
)
.route(
"/callback",
get(|query| async {
callback(query).await.status_code(StatusCode::UNAUTHORIZED)
}),
get(
|query,
session: Session,
headers: HeaderMap,
ConnectInfo(info): ConnectInfo<SocketAddr>| async move {
callback(query, session)
.map_err(|e| e.status_code(StatusCode::UNAUTHORIZED))
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
&headers,
Some(info.ip()),
)
.await
},
),
)
}
@@ -53,9 +80,9 @@ struct CallbackQuery {
code: String,
}
#[instrument(name = "GithubCallback", level = "debug")]
async fn callback(
Query(query): Query<CallbackQuery>,
session: Session,
) -> anyhow::Result<Redirect> {
let client = github_oauth_client().as_ref().unwrap();
if !client.check_state(&query.state).await {
@@ -71,10 +98,39 @@ async fn callback(
.find_one(doc! { "config.data.github_id": &github_id })
.await
.context("failed at find user query from database")?;
let jwt = match user {
Some(user) => jwt_client()
.encode(user.id)
.context("failed to generate jwt")?,
let user_id_or_two_factor = match user {
Some(user) => {
match (user.passkey.passkey, user.totp.enrolled()) {
// WebAuthn Passkey 2FA
(Some(passkey), _) => {
let webauthn = webauthn().context(
"No webauthn provider available, invalid KOMODO_HOST config",
)?;
let (response, server_state) = webauthn
.start_passkey_authentication(&[passkey])
.context("Failed to start passkey authentication flow")?;
session
.insert(
SESSION_KEY_PASSKEY_LOGIN,
(user.id, server_state),
)
.await?;
UserIdOrTwoFactor::Passkey(response)
}
// TOTP 2FA
(None, true) => {
session
.insert(SESSION_KEY_TOTP_LOGIN, user.id)
.await
.context(
"Failed to store totp login state in for user session",
)?;
UserIdOrTwoFactor::Totp {}
}
// No 2FA
(None, false) => UserIdOrTwoFactor::UserId(user.id),
}
}
None => {
let ts = komodo_timestamp();
let no_users_exist =
@@ -113,28 +169,38 @@ async fn callback(
github_id,
avatar: github_user.avatar_url,
},
totp: Default::default(),
passkey: Default::default(),
};
let user_id = db_client
.users
.insert_one(user)
.await
.context("failed to create user on mongo")?
.context("Failed to create user on mongo")?
.inserted_id
.as_object_id()
.context("inserted_id is not ObjectId")?
.to_string();
jwt_client()
.encode(user_id)
.context("failed to generate jwt")?
UserIdOrTwoFactor::UserId(user_id)
}
};
let exchange_token = jwt_client().create_exchange_token(jwt).await;
let redirect = &query.state[STATE_PREFIX_LENGTH..];
let redirect_url = if redirect.is_empty() {
format!("{}?token={exchange_token}", core_config().host)
} else {
let splitter = if redirect.contains('?') { '&' } else { '?' };
format!("{redirect}{splitter}token={exchange_token}")
};
Ok(Redirect::to(&redirect_url))
let redirect = Some(&query.state[STATE_PREFIX_LENGTH..]);
match user_id_or_two_factor {
UserIdOrTwoFactor::UserId(user_id) => {
session
.insert(SESSION_KEY_USER_ID, user_id)
.await
.context("Failed to store user id for client session")?;
Ok(format_redirect(redirect, "redeem_ready=true"))
}
UserIdOrTwoFactor::Totp {} => {
Ok(format_redirect(redirect, "totp=true"))
}
UserIdOrTwoFactor::Passkey(passkey) => {
let passkey = serde_json::to_string(&passkey)
.context("Failed to serialize passkey response")?;
let passkey = urlencoding::encode(&passkey);
Ok(format_redirect(redirect, &format!("passkey={passkey}")))
}
}
}

View File

@@ -1,18 +1,16 @@
use std::sync::OnceLock;
use anyhow::{Context, anyhow};
use jsonwebtoken::{DecodingKey, Validation, decode};
use komodo_client::entities::config::core::{
CoreConfig, OauthCredentials,
use jsonwebtoken::dangerous::insecure_decode;
use komodo_client::entities::{
config::core::{CoreConfig, OauthCredentials},
random_string,
};
use reqwest::StatusCode;
use serde::{Deserialize, de::DeserializeOwned};
use tokio::sync::Mutex;
use crate::{
auth::STATE_PREFIX_LENGTH, config::core_config,
helpers::random_string,
};
use crate::{auth::STATE_PREFIX_LENGTH, config::core_config};
pub fn google_oauth_client() -> &'static Option<GoogleOauthClient> {
static GOOGLE_OAUTH_CLIENT: OnceLock<Option<GoogleOauthClient>> =
@@ -85,7 +83,6 @@ impl GoogleOauthClient {
.into()
}
#[instrument(level = "debug", skip(self))]
pub async fn get_login_redirect_url(
&self,
redirect: Option<String>,
@@ -104,7 +101,6 @@ impl GoogleOauthClient {
redirect_url
}
#[instrument(level = "debug", skip(self))]
pub async fn check_state(&self, state: &str) -> bool {
let mut contained = false;
self.states.lock().await.retain(|s| {
@@ -118,7 +114,6 @@ impl GoogleOauthClient {
contained
}
#[instrument(level = "debug", skip(self))]
pub async fn get_access_token(
&self,
code: &str,
@@ -139,24 +134,15 @@ impl GoogleOauthClient {
.context("failed to get google access token using code")
}
#[instrument(level = "debug", skip(self))]
pub fn get_google_user(
&self,
id_token: &str,
) -> anyhow::Result<GoogleUser> {
let mut v = Validation::new(Default::default());
v.insecure_disable_signature_validation();
v.validate_aud = false;
let res = decode::<GoogleUser>(
id_token,
&DecodingKey::from_secret(b""),
&v,
)
.context("failed to decode google id token")?;
let res = insecure_decode::<GoogleUser>(id_token)
.context("failed to decode google id token")?;
Ok(res.claims)
}
#[instrument(level = "debug", skip(self))]
async fn post<R: DeserializeOwned>(
&self,
endpoint: &str,

View File

@@ -1,19 +1,35 @@
use std::net::SocketAddr;
use anyhow::{Context, anyhow};
use async_timing_util::unix_timestamp_ms;
use axum::{
Router, extract::Query, response::Redirect, routing::get,
Router,
extract::{ConnectInfo, Query},
http::HeaderMap,
response::Redirect,
routing::get,
};
use database::mongo_indexed::Document;
use database::mungos::mongodb::bson::doc;
use komodo_client::entities::user::{User, UserConfig};
use futures_util::TryFutureExt;
use komodo_client::{
api::auth::UserIdOrTwoFactor,
entities::{
random_string,
user::{User, UserConfig},
},
};
use rate_limit::WithFailureRateLimit;
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;
use serror::{AddStatusCode, AddStatusCodeError as _};
use tower_sessions::Session;
use crate::{
api::{SESSION_KEY_PASSKEY_LOGIN, SESSION_KEY_TOTP_LOGIN, SESSION_KEY_USER_ID},
auth::format_redirect,
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
state::{auth_rate_limiter, db_client, webauthn},
};
use self::client::google_oauth_client;
@@ -27,21 +43,32 @@ pub fn router() -> Router {
.route(
"/login",
get(|Query(query): Query<RedirectQuery>| async move {
Redirect::to(
&google_oauth_client()
.as_ref()
// OK: it's not mounted unless the client is populated
.unwrap()
.get_login_redirect_url(query.redirect)
.await,
)
let uri = google_oauth_client()
.as_ref()
.context("Google Oauth not configured")
.status_code(StatusCode::UNAUTHORIZED)?
.get_login_redirect_url(query.redirect)
.await;
serror::Result::Ok(Redirect::to(&uri))
}),
)
.route(
"/callback",
get(|query| async {
callback(query).await.status_code(StatusCode::UNAUTHORIZED)
}),
get(
|query,
session: Session,
headers: HeaderMap,
ConnectInfo(info): ConnectInfo<SocketAddr>| async move {
callback(query, session)
.map_err(|e| e.status_code(StatusCode::UNAUTHORIZED))
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
&headers,
Some(info.ip()),
)
.await
},
),
)
}
@@ -52,9 +79,9 @@ struct CallbackQuery {
error: Option<String>,
}
#[instrument(name = "GoogleCallback", level = "debug")]
async fn callback(
Query(query): Query<CallbackQuery>,
session: Session,
) -> anyhow::Result<Redirect> {
// Safe: the method is only called after the client is_some
let client = google_oauth_client().as_ref().unwrap();
@@ -80,10 +107,39 @@ async fn callback(
.find_one(doc! { "config.data.google_id": &google_id })
.await
.context("failed at find user query from mongo")?;
let jwt = match user {
Some(user) => jwt_client()
.encode(user.id)
.context("failed to generate jwt")?,
let user_id_or_two_factor = match user {
Some(user) => {
match (user.passkey.passkey, user.totp.enrolled()) {
// WebAuthn Passkey 2FA
(Some(passkey), _) => {
let webauthn = webauthn().context(
"No webauthn provider available, invalid KOMODO_HOST config",
)?;
let (response, server_state) = webauthn
.start_passkey_authentication(&[passkey])
.context("Failed to start passkey authentication flow")?;
session
.insert(
SESSION_KEY_PASSKEY_LOGIN,
(user.id, server_state),
)
.await?;
UserIdOrTwoFactor::Passkey(response)
}
// TOTP 2FA
(None, true) => {
session
.insert(SESSION_KEY_TOTP_LOGIN, user.id)
.await
.context(
"Failed to store totp login state in for user session",
)?;
UserIdOrTwoFactor::Totp {}
}
// No 2FA
(None, false) => UserIdOrTwoFactor::UserId(user.id),
}
}
None => {
let ts = unix_timestamp_ms() as i64;
let no_users_exist =
@@ -127,28 +183,38 @@ async fn callback(
google_id,
avatar: google_user.picture,
},
totp: Default::default(),
passkey: Default::default(),
};
let user_id = db_client
.users
.insert_one(user)
.await
.context("failed to create user on mongo")?
.context("Failed to create user on mongo")?
.inserted_id
.as_object_id()
.context("inserted_id is not ObjectId")?
.to_string();
jwt_client()
.encode(user_id)
.context("failed to generate jwt")?
UserIdOrTwoFactor::UserId(user_id)
}
};
let exchange_token = jwt_client().create_exchange_token(jwt).await;
let redirect = &state[STATE_PREFIX_LENGTH..];
let redirect_url = if redirect.is_empty() {
format!("{}?token={exchange_token}", core_config().host)
} else {
let splitter = if redirect.contains('?') { '&' } else { '?' };
format!("{redirect}{splitter}token={exchange_token}")
};
Ok(Redirect::to(&redirect_url))
let redirect = Some(&state[STATE_PREFIX_LENGTH..]);
match user_id_or_two_factor {
UserIdOrTwoFactor::UserId(user_id) => {
session
.insert(SESSION_KEY_USER_ID, user_id)
.await
.context("Failed to store user id for client session")?;
Ok(format_redirect(redirect, "redeem_ready=true"))
}
UserIdOrTwoFactor::Totp {} => {
Ok(format_redirect(redirect, "totp=true"))
}
UserIdOrTwoFactor::Passkey(passkey) => {
let passkey = serde_json::to_string(&passkey)
.context("Failed to serialize passkey response")?;
let passkey = urlencoding::encode(&passkey);
Ok(format_redirect(redirect, &format!("passkey={passkey}")))
}
}
}

View File

@@ -1,22 +1,14 @@
use std::collections::HashMap;
use anyhow::{Context, anyhow};
use async_timing_util::{
Timelength, get_timelength_in_ms, unix_timestamp_ms,
};
use anyhow::Context;
use async_timing_util::{get_timelength_in_ms, unix_timestamp_ms};
use database::mungos::mongodb::bson::doc;
use jsonwebtoken::{
DecodingKey, EncodingKey, Header, Validation, decode, encode,
};
use komodo_client::{
api::auth::JwtResponse, entities::config::core::CoreConfig,
api::auth::JwtResponse,
entities::{config::core::CoreConfig, random_string},
};
use serde::{Deserialize, Serialize};
use tokio::sync::Mutex;
use crate::helpers::random_string;
type ExchangeTokenMap = Mutex<HashMap<String, (JwtResponse, u128)>>;
#[derive(Serialize, Deserialize, Clone)]
pub struct JwtClaims {
@@ -31,7 +23,6 @@ pub struct JwtClient {
encoding_key: EncodingKey,
decoding_key: DecodingKey,
ttl_ms: u128,
exchange_tokens: ExchangeTokenMap,
}
impl JwtClient {
@@ -49,7 +40,6 @@ impl JwtClient {
ttl_ms: get_timelength_in_ms(
config.jwt_ttl.to_string().parse()?,
),
exchange_tokens: Default::default(),
})
}
@@ -65,47 +55,13 @@ impl JwtClient {
exp,
};
let jwt = encode(&self.header, &claims, &self.encoding_key)
.context("failed at signing claim")?;
.context("Failed at signing claim")?;
Ok(JwtResponse { user_id, jwt })
}
pub fn decode(&self, jwt: &str) -> anyhow::Result<JwtClaims> {
decode::<JwtClaims>(jwt, &self.decoding_key, &self.validation)
.map(|res| res.claims)
.context("failed to decode token claims")
}
#[instrument(level = "debug", skip_all)]
pub async fn create_exchange_token(
&self,
jwt: JwtResponse,
) -> String {
let exchange_token = random_string(40);
self.exchange_tokens.lock().await.insert(
exchange_token.clone(),
(
jwt,
unix_timestamp_ms()
+ get_timelength_in_ms(Timelength::OneMinute),
),
);
exchange_token
}
#[instrument(level = "debug", skip(self))]
pub async fn redeem_exchange_token(
&self,
exchange_token: &str,
) -> anyhow::Result<JwtResponse> {
let (jwt, valid_until) = self
.exchange_tokens
.lock()
.await
.remove(exchange_token)
.context("invalid exchange token: unrecognized")?;
if unix_timestamp_ms() < valid_until {
Ok(jwt)
} else {
Err(anyhow!("invalid exchange token: expired"))
}
.context("Failed to decode token claims")
}
}

View File

@@ -1,10 +1,10 @@
use std::str::FromStr;
use std::sync::{Arc, OnceLock};
use anyhow::{Context, anyhow};
use async_timing_util::unix_timestamp_ms;
use database::{
hash_password,
mungos::mongodb::bson::{Document, doc, oid::ObjectId},
mungos::mongodb::bson::{Document, doc},
};
use komodo_client::{
api::auth::{
@@ -13,137 +13,233 @@ use komodo_client::{
},
entities::user::{User, UserConfig},
};
use rate_limit::{RateLimiter, WithFailureRateLimit};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::{AddStatusCode as _, AddStatusCodeError};
use tower_sessions::Session;
use crate::{
api::auth::AuthArgs,
api::{
SESSION_KEY_PASSKEY_LOGIN, SESSION_KEY_TOTP_LOGIN, auth::AuthArgs,
},
config::core_config,
state::{db_client, jwt_client},
helpers::validations::{validate_password, validate_username},
state::{auth_rate_limiter, db_client, jwt_client, webauthn},
};
impl Resolve<AuthArgs> for SignUpLocalUser {
#[instrument(name = "SignUpLocalUser", skip(self))]
#[instrument("SignUpLocalUser", skip(self))]
async fn resolve(
self,
_: &AuthArgs,
AuthArgs {
headers,
ip,
session,
}: &AuthArgs,
) -> serror::Result<SignUpLocalUserResponse> {
let core_config = core_config();
if !core_config.local_auth {
return Err(anyhow!("Local auth is not enabled").into());
}
if self.username.is_empty() {
return Err(anyhow!("Username cannot be empty string").into());
}
if ObjectId::from_str(&self.username).is_ok() {
return Err(
anyhow!("Username cannot be valid ObjectId").into(),
);
}
if self.password.is_empty() {
return Err(anyhow!("Password cannot be empty string").into());
}
let db = db_client();
let no_users_exist =
db.users.find_one(Document::new()).await?.is_none();
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled").into());
}
if db
.users
.find_one(doc! { "username": &self.username })
sign_up_local_user(self)
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
headers,
Some(*ip),
)
.await
.context("Failed to query for existing users")?
.is_some()
{
return Err(anyhow!("Username already taken.").into());
}
let ts = unix_timestamp_ms() as i64;
let hashed_password = hash_password(self.password)?;
let user = User {
id: Default::default(),
username: self.username,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,
create_server_permissions: no_users_exist,
create_build_permissions: no_users_exist,
updated_at: ts,
last_update_view: 0,
recents: Default::default(),
all: Default::default(),
config: UserConfig::Local {
password: hashed_password,
},
};
let user_id = db_client()
.users
.insert_one(user)
.await
.context("failed to create user")?
.inserted_id
.as_object_id()
.context("inserted_id is not ObjectId")?
.to_string();
jwt_client()
.encode(user_id.clone())
.context("failed to generate jwt for user")
.map_err(Into::into)
}
}
async fn sign_up_local_user(
req: SignUpLocalUser,
) -> serror::Result<SignUpLocalUserResponse> {
let config = core_config();
if !config.local_auth {
return Err(anyhow!("Local auth is not enabled").into());
}
validate_username(&req.username)
.status_code(StatusCode::BAD_REQUEST)?;
validate_password(&req.password)
.status_code(StatusCode::BAD_REQUEST)?;
let db = db_client();
let no_users_exist =
db.users.find_one(Document::new()).await?.is_none();
if !no_users_exist && config.disable_user_registration {
return Err(
anyhow!("User registration is disabled")
.status_code(StatusCode::UNAUTHORIZED),
);
}
if db
.users
.find_one(doc! { "username": &req.username })
.await
.context("Failed to query for existing users")?
.is_some()
{
// When user registration is enabled, attackers who can register
// accounts can always infer which usernames exist. Since this can
// be easily inferred anyway, the error might as well be clear.
// The auth rate limiter is critical here.
return Err(
anyhow!("Username already taken.")
.status_code(StatusCode::BAD_REQUEST),
);
}
let ts = unix_timestamp_ms() as i64;
let hashed_password = hash_password(req.password)?;
let user = User {
id: Default::default(),
username: req.username,
enabled: no_users_exist || config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,
create_server_permissions: no_users_exist,
create_build_permissions: no_users_exist,
updated_at: ts,
last_update_view: 0,
recents: Default::default(),
all: Default::default(),
config: UserConfig::Local {
password: hashed_password,
},
totp: Default::default(),
passkey: Default::default(),
};
let user_id = db_client()
.users
.insert_one(user)
.await
.context("Failed to create user on database")?
.inserted_id
.as_object_id()
.context("The 'inserted_id' is not ObjectId")?
.to_string();
jwt_client()
.encode(user_id)
.context("Failed to generate JWT for user")
.map_err(Into::into)
}
/// The local login method has a dedicated rate limiter,
/// so background UI calls using an existing JWT do not
/// count against the number of attempts a user has
/// to log in.
fn login_local_user_rate_limiter() -> &'static RateLimiter {
static LOGIN_LOCAL_USER_RATE_LIMITER: OnceLock<Arc<RateLimiter>> =
OnceLock::new();
LOGIN_LOCAL_USER_RATE_LIMITER.get_or_init(|| {
let config = core_config();
RateLimiter::new(
config.auth_rate_limit_disabled,
config.auth_rate_limit_max_attempts as usize,
config.auth_rate_limit_window_seconds,
)
})
}
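The limiter above is constructed with `max_attempts` and `window_seconds` from config. As a hedged, std-only illustration of the fixed-window idea (the `WindowLimiter` type below is hypothetical; the real `rate_limit` crate's internals may differ), a sketch:

```rust
use std::collections::HashMap;

/// Hypothetical fixed-window failure limiter, for illustration only.
struct WindowLimiter {
    max_attempts: usize,
    window_seconds: u64,
    // key (e.g. client IP) -> (window start, failures in window)
    failures: HashMap<String, (u64, usize)>,
}

impl WindowLimiter {
    fn new(max_attempts: usize, window_seconds: u64) -> Self {
        Self { max_attempts, window_seconds, failures: HashMap::new() }
    }

    /// Returns false once the key has exhausted its attempts in the window.
    fn check(&mut self, key: &str, now: u64) -> bool {
        let entry =
            self.failures.entry(key.to_string()).or_insert((now, 0));
        if now - entry.0 >= self.window_seconds {
            // Window elapsed: reset the failure count.
            *entry = (now, 0);
        }
        entry.1 < self.max_attempts
    }

    /// Called when an auth attempt fails (cf. the failure-only
    /// semantics of `with_failure_rate_limit_using_headers`).
    fn record_failure(&mut self, key: &str, now: u64) {
        let entry =
            self.failures.entry(key.to_string()).or_insert((now, 0));
        entry.1 += 1;
    }
}
```

Only failures are recorded, so successful background requests never eat into the login budget, which is the point of giving local login its own limiter.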
impl Resolve<AuthArgs> for LoginLocalUser {
#[instrument(name = "LoginLocalUser", level = "debug", skip(self))]
async fn resolve(
self,
_: &AuthArgs,
AuthArgs {
headers,
ip,
session,
}: &AuthArgs,
) -> serror::Result<LoginLocalUserResponse> {
if !core_config().local_auth {
return Err(anyhow!("local auth is not enabled").into());
}
let user = db_client()
.users
.find_one(doc! { "username": &self.username })
.await
.context("failed at db query for users")?
.with_context(|| {
format!("did not find user with username {}", self.username)
})?;
let UserConfig::Local {
password: user_pw_hash,
} = user.config
else {
return Err(
anyhow!(
"non-local auth users can not log in with a password"
)
.into(),
);
};
let verified = bcrypt::verify(self.password, &user_pw_hash)
.context("failed at verify password")?;
if !verified {
return Err(anyhow!("invalid credentials").into());
}
jwt_client()
.encode(user.id.clone())
.context("failed at generating jwt for user")
.map_err(Into::into)
login_local_user(
self,
session
.as_ref()
.context("Method called in context without session")?,
)
.with_failure_rate_limit_using_headers(
login_local_user_rate_limiter(),
headers,
Some(*ip),
)
.await
}
}
async fn login_local_user(
req: LoginLocalUser,
session: &Session,
) -> serror::Result<LoginLocalUserResponse> {
if !core_config().local_auth {
return Err(
anyhow!("Local auth is not enabled")
.status_code(StatusCode::UNAUTHORIZED),
);
}
validate_username(&req.username)
.status_code(StatusCode::BAD_REQUEST)?;
let user = db_client()
.users
.find_one(doc! { "username": &req.username })
.await
.context("Failed at db query for users")?
.context("Invalid login credentials")
.status_code(StatusCode::UNAUTHORIZED)?;
let UserConfig::Local {
password: user_pw_hash,
} = user.config
else {
return Err(
anyhow!("Invalid login credentials")
.status_code(StatusCode::UNAUTHORIZED),
);
};
let verified = bcrypt::verify(req.password, &user_pw_hash)
.context("Invalid login credentials")
.status_code(StatusCode::UNAUTHORIZED)?;
if !verified {
return Err(
anyhow!("Invalid login credentials")
.status_code(StatusCode::UNAUTHORIZED),
);
}
match (user.passkey.passkey, user.totp.enrolled()) {
// WebAuthn 2FA
(Some(passkey), _) => {
let webauthn = webauthn().context(
"No webauthn provider available, invalid KOMODO_HOST config",
)?;
let (response, server_state) = webauthn
.start_passkey_authentication(&[passkey])
.context("Failed to start passkey authentication flow")?;
session
.insert(SESSION_KEY_PASSKEY_LOGIN, (user.id, server_state))
.await?;
Ok(LoginLocalUserResponse::Passkey(response))
}
// TOTP 2FA
(None, true) => {
session.insert(SESSION_KEY_TOTP_LOGIN, user.id).await?;
Ok(LoginLocalUserResponse::Totp {})
}
// No 2FA, can return JWT immediately
(None, false) => {
jwt_client()
.encode(user.id)
// This is an internal error (500), not an auth error
.context("Failed to generate JWT for user")
.map(LoginLocalUserResponse::Jwt)
.map_err(Into::into)
}
}
}
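The `(passkey, totp)` match at the end of `login_local_user` is a small decision table: a registered passkey always wins, TOTP is the fallback second factor, and only with neither does the JWT come back immediately. A std-only sketch of that table (the enum and function names here are stand-ins, not the real `komodo_client` types):

```rust
/// Stand-in for the real login response variants, for illustration only.
#[derive(Debug, PartialEq)]
enum LoginNextStep {
    Jwt,     // no 2FA: issue the JWT immediately
    Totp,    // prompt the client for a TOTP code
    Passkey, // start the WebAuthn ceremony
}

/// Passkey takes precedence over TOTP, mirroring the
/// `(user.passkey.passkey, user.totp.enrolled())` match.
fn next_step(has_passkey: bool, totp_enrolled: bool) -> LoginNextStep {
    match (has_passkey, totp_enrolled) {
        (true, _) => LoginNextStep::Passkey,
        (false, true) => LoginNextStep::Totp,
        (false, false) => LoginNextStep::Jwt,
    }
}
```

The same three-way split recurs in the GitHub and Google OAuth callbacks above, which map to `UserIdOrTwoFactor` instead of a login response.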

View File

@@ -1,18 +1,25 @@
use std::net::SocketAddr;
use anyhow::{Context, anyhow};
use async_timing_util::unix_timestamp_ms;
use axum::{
extract::Request, http::HeaderMap, middleware::Next,
response::Response,
extract::{ConnectInfo, Request},
http::HeaderMap,
middleware::Next,
response::{Redirect, Response},
};
use database::mungos::mongodb::bson::doc;
use futures_util::TryFutureExt;
use komodo_client::entities::{komodo_timestamp, user::User};
use rate_limit::WithFailureRateLimit;
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;
use serror::AddStatusCodeError as _;
use crate::{
config::core_config,
helpers::query::get_user,
state::{db_client, jwt_client},
state::{auth_rate_limiter, db_client, jwt_client},
};
use self::jwt::JwtClaims;
@@ -21,30 +28,46 @@ pub mod github;
pub mod google;
pub mod jwt;
pub mod oidc;
pub mod totp;
mod local;
/// Length of random token in Oauth / OIDC 'state'
const STATE_PREFIX_LENGTH: usize = 20;
/// JWT Clock skew tolerance in milliseconds (10 seconds for JWTs)
const JWT_CLOCK_SKEW_TOLERANCE_MS: u128 = 10 * 1000;
/// Api Key Clock skew tolerance in milliseconds (5 minutes for Api Keys)
const API_KEY_CLOCK_SKEW_TOLERANCE_MS: i64 = 5 * 60 * 1000;
#[derive(Debug, Deserialize)]
struct RedirectQuery {
redirect: Option<String>,
}
#[instrument(level = "debug")]
pub async fn auth_request(
headers: HeaderMap,
mut req: Request,
next: Next,
) -> serror::Result<Response> {
let user = authenticate_check_enabled(&headers)
.await
.status_code(StatusCode::UNAUTHORIZED)?;
let fallback = req
.extensions()
.get::<ConnectInfo<SocketAddr>>()
.map(|addr| addr.ip());
let mut user = authenticate_check_enabled(&headers)
.map_err(|e| e.status_code(StatusCode::UNAUTHORIZED))
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
&headers,
fallback,
)
.await?;
// Sanitize the user for safety before
// attaching to the request handlers.
user.sanitize();
req.extensions_mut().insert(user);
Ok(next.run(req).await)
}
#[instrument(level = "debug")]
pub async fn get_user_id_from_headers(
headers: &HeaderMap,
) -> anyhow::Result<String> {
@@ -55,54 +78,57 @@ pub async fn get_user_id_from_headers(
) {
(Some(jwt), _, _) => {
// USE JWT
let jwt = jwt.to_str().context("jwt is not str")?;
auth_jwt_get_user_id(jwt)
.await
.context("failed to authenticate jwt")
let jwt = jwt.to_str().context("JWT is not valid UTF-8")?;
auth_jwt_get_user_id(jwt).await
}
(None, Some(key), Some(secret)) => {
// USE API KEY / SECRET
let key = key.to_str().context("key is not str")?;
let secret = secret.to_str().context("secret is not str")?;
auth_api_key_get_user_id(key, secret)
.await
.context("failed to authenticate api key")
let key =
key.to_str().context("X-API-KEY is not valid UTF-8")?;
let secret =
secret.to_str().context("X-API-SECRET is not valid UTF-8")?;
auth_api_key_get_user_id(key, secret).await
}
_ => {
// AUTH FAIL
Err(anyhow!(
"must attach either AUTHORIZATION header with jwt OR pass X-API-KEY and X-API-SECRET"
"Must attach either AUTHORIZATION header with jwt OR pass X-API-KEY and X-API-SECRET"
))
}
}
}
#[instrument(level = "debug")]
pub async fn authenticate_check_enabled(
headers: &HeaderMap,
) -> anyhow::Result<User> {
let user_id = get_user_id_from_headers(headers).await?;
let user = get_user(&user_id).await?;
let user = get_user(&user_id)
.await
.map_err(|_| anyhow!("Invalid user credentials"))?;
if user.enabled {
Ok(user)
} else {
Err(anyhow!("user not enabled"))
Err(anyhow!("Invalid user credentials"))
}
}
#[instrument(level = "debug")]
pub async fn auth_jwt_get_user_id(
jwt: &str,
) -> anyhow::Result<String> {
let claims: JwtClaims = jwt_client().decode(jwt)?;
if claims.exp > unix_timestamp_ms() {
let claims: JwtClaims = jwt_client()
.decode(jwt)
.map_err(|_| anyhow!("Invalid user credentials"))?;
// Apply clock skew tolerance.
// Token is valid if expiration is greater than (now - tolerance)
if claims.exp
> unix_timestamp_ms().saturating_sub(JWT_CLOCK_SKEW_TOLERANCE_MS)
{
Ok(claims.id)
} else {
Err(anyhow!("token has expired"))
Err(anyhow!("Invalid user credentials"))
}
}
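The skew check above accepts a token whose expiration lies within the tolerance of "now", and `saturating_sub` keeps the subtraction from underflowing when `now` is smaller than the tolerance. Isolated as a pure function (a sketch with the same 10-second constant, not the full JWT decode path):

```rust
/// JWT clock skew tolerance in milliseconds (10 seconds).
const JWT_CLOCK_SKEW_TOLERANCE_MS: u128 = 10 * 1000;

/// A token is still accepted if it expired less than the tolerance ago.
/// `saturating_sub` clamps at zero instead of panicking on underflow.
fn jwt_still_valid(exp: u128, now: u128) -> bool {
    exp > now.saturating_sub(JWT_CLOCK_SKEW_TOLERANCE_MS)
}
```

The API-key path below applies the same idea with a wider 5-minute tolerance over signed `i64` timestamps.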
#[instrument(level = "debug")]
pub async fn auth_jwt_check_enabled(
jwt: &str,
) -> anyhow::Result<User> {
@@ -110,7 +136,6 @@ pub async fn auth_jwt_check_enabled(
check_enabled(user_id).await
}
#[instrument(level = "debug")]
pub async fn auth_api_key_get_user_id(
key: &str,
secret: &str,
@@ -119,23 +144,28 @@ pub async fn auth_api_key_get_user_id(
.api_keys
.find_one(doc! { "key": key })
.await
.context("failed to query db")?
.context("no api key matching key")?;
if key.expires != 0 && key.expires < komodo_timestamp() {
return Err(anyhow!("api key expired"));
.context("Failed to query db")?
.context("Invalid user credentials")?;
// Apply clock skew tolerance.
// Token is invalid if expiration is less than (now - tolerance)
if key.expires != 0
&& key.expires
< komodo_timestamp()
.saturating_sub(API_KEY_CLOCK_SKEW_TOLERANCE_MS)
{
return Err(anyhow!("Invalid user credentials"));
}
if bcrypt::verify(secret, &key.secret)
.context("failed to verify secret hash")?
.map_err(|_| anyhow!("Invalid user credentials"))?
{
// secret matches
Ok(key.user_id)
} else {
// secret mismatch
Err(anyhow!("invalid api secret"))
Err(anyhow!("Invalid user credentials"))
}
}
#[instrument(level = "debug")]
pub async fn auth_api_key_check_enabled(
key: &str,
secret: &str,
@@ -144,12 +174,23 @@ pub async fn auth_api_key_check_enabled(
check_enabled(user_id).await
}
#[instrument(level = "debug")]
async fn check_enabled(user_id: String) -> anyhow::Result<User> {
let user = get_user(&user_id).await?;
if user.enabled {
Ok(user)
} else {
Err(anyhow!("user not enabled"))
Err(anyhow!("Invalid user credentials"))
}
}
fn format_redirect(redirect: Option<&str>, extra: &str) -> Redirect {
let redirect_url = if let Some(redirect) = redirect
&& !redirect.is_empty()
{
let splitter = if redirect.contains('?') { '&' } else { '?' };
format!("{redirect}{splitter}{extra}")
} else {
format!("{}?{extra}", core_config().host)
};
Redirect::to(&redirect_url)
}
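The query-string handling in `format_redirect` can be shown without the axum `Redirect` wrapper. A std-only sketch, where `fallback_host` stands in for `core_config().host`:

```rust
/// Append `extra` as a query parameter, choosing '&' vs '?'
/// depending on whether the redirect target already has a query.
fn build_redirect_url(
    redirect: Option<&str>,
    extra: &str,
    fallback_host: &str,
) -> String {
    match redirect {
        Some(r) if !r.is_empty() => {
            let splitter = if r.contains('?') { '&' } else { '?' };
            format!("{r}{splitter}{extra}")
        }
        // Empty or absent redirect falls back to the configured host
        _ => format!("{fallback_host}?{extra}"),
    }
}
```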


@@ -1,15 +1,23 @@
use std::sync::OnceLock;
use std::{net::SocketAddr, sync::OnceLock};
use anyhow::{Context, anyhow};
use axum::{
Router, extract::Query, response::Redirect, routing::get,
Router,
extract::{ConnectInfo, Query},
http::HeaderMap,
response::Redirect,
routing::get,
};
use client::oidc_client;
use dashmap::DashMap;
use database::mungos::mongodb::bson::{Document, doc};
use komodo_client::entities::{
komodo_timestamp,
user::{User, UserConfig},
use futures_util::TryFutureExt;
use komodo_client::{
api::auth::UserIdOrTwoFactor,
entities::{
komodo_timestamp, random_string,
user::{User, UserConfig},
},
};
use openidconnect::{
AccessTokenHash, AuthorizationCode, CsrfToken,
@@ -17,14 +25,20 @@ use openidconnect::{
PkceCodeChallenge, PkceCodeVerifier, Scope, TokenResponse,
core::{CoreAuthenticationFlow, CoreGenderClaim},
};
use rate_limit::WithFailureRateLimit;
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;
use serror::{AddStatusCode as _, AddStatusCodeError};
use tower_sessions::Session;
use crate::{
api::{
SESSION_KEY_PASSKEY_LOGIN, SESSION_KEY_TOTP_LOGIN,
SESSION_KEY_USER_ID,
},
auth::format_redirect,
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
state::{auth_rate_limiter, db_client, webauthn},
};
use super::RedirectQuery;
@@ -69,13 +83,24 @@ pub fn router() -> Router {
)
.route(
"/callback",
get(|query| async {
callback(query).await.status_code(StatusCode::UNAUTHORIZED)
}),
get(
|query,
session: Session,
headers: HeaderMap,
ConnectInfo(info): ConnectInfo<SocketAddr>| async move {
callback(query, session)
.map_err(|e| e.status_code(StatusCode::UNAUTHORIZED))
.with_failure_rate_limit_using_headers(
auth_rate_limiter(),
&headers,
Some(info.ip()),
)
.await
},
),
)
}
#[instrument(name = "OidcRedirect", level = "debug")]
async fn login(
Query(RedirectQuery { redirect }): Query<RedirectQuery>,
) -> anyhow::Result<Redirect> {
@@ -138,9 +163,9 @@ struct CallbackQuery {
error: Option<String>,
}
#[instrument(name = "OidcCallback", level = "debug")]
async fn callback(
Query(query): Query<CallbackQuery>,
session: Session,
) -> anyhow::Result<Redirect> {
let client = oidc_client().load();
let client =
@@ -220,12 +245,41 @@ async fn callback(
"config.data.user_id": user_id
})
.await
.context("failed at find user query from database")?;
.context("Failed at find user query from database")?;
let jwt = match user {
Some(user) => jwt_client()
.encode(user.id)
.context("failed to generate jwt")?,
let user_id_or_two_factor = match user {
Some(user) => {
match (user.passkey.passkey, user.totp.enrolled()) {
// WebAuthn Passkey 2FA
(Some(passkey), _) => {
let webauthn = webauthn().context(
"No webauthn provider available, invalid KOMODO_HOST config",
)?;
let (response, server_state) = webauthn
.start_passkey_authentication(&[passkey])
.context("Failed to start passkey authentication flow")?;
session
.insert(
SESSION_KEY_PASSKEY_LOGIN,
(user.id, server_state),
)
.await?;
UserIdOrTwoFactor::Passkey(response)
}
// TOTP 2FA
(None, true) => {
session
.insert(SESSION_KEY_TOTP_LOGIN, user.id)
.await
.context(
"Failed to store totp login state for user session",
)?;
UserIdOrTwoFactor::Totp {}
}
// No 2FA
(None, false) => UserIdOrTwoFactor::UserId(user.id),
}
}
None => {
let ts = komodo_timestamp();
let no_users_exist =
@@ -296,6 +350,8 @@ async fn callback(
provider: core_config.oidc_provider.clone(),
user_id: user_id.to_string(),
},
totp: Default::default(),
passkey: Default::default(),
};
let user_id = db_client
@@ -308,17 +364,29 @@ async fn callback(
.context("inserted_id is not ObjectId")?
.to_string();
jwt_client()
.encode(user_id)
.context("failed to generate jwt")?
UserIdOrTwoFactor::UserId(user_id)
}
};
let exchange_token = jwt_client().create_exchange_token(jwt).await;
let redirect_url = if let Some(redirect) = redirect {
let splitter = if redirect.contains('?') { '&' } else { '?' };
format!("{redirect}{splitter}token={exchange_token}")
} else {
format!("{}?token={exchange_token}", core_config().host)
};
Ok(Redirect::to(&redirect_url))
match user_id_or_two_factor {
UserIdOrTwoFactor::UserId(user_id) => {
session
.insert(SESSION_KEY_USER_ID, user_id)
.await
.context("Failed to store user id for client session")?;
Ok(format_redirect(redirect.as_deref(), "redeem_ready=true"))
}
UserIdOrTwoFactor::Totp {} => {
Ok(format_redirect(redirect.as_deref(), "totp=true"))
}
UserIdOrTwoFactor::Passkey(passkey) => {
let passkey = serde_json::to_string(&passkey)
.context("Failed to serialize passkey response")?;
let passkey = urlencoding::encode(&passkey);
Ok(format_redirect(
redirect.as_deref(),
&format!("passkey={passkey}"),
))
}
}
}
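The 2FA dispatch in the callback boils down to matching on the pair (passkey enrolled?, TOTP enrolled?). A hedged, std-only sketch of that decision; the names here are illustrative, not the real Komodo types:

```rust
#[derive(Debug, PartialEq)]
enum LoginStep {
    Passkey,
    Totp,
    Done,
}

/// Mirrors the match in the OIDC callback: passkey 2FA takes
/// precedence when both methods are enrolled.
fn next_login_step(has_passkey: bool, totp_enrolled: bool) -> LoginStep {
    match (has_passkey, totp_enrolled) {
        (true, _) => LoginStep::Passkey,
        (false, true) => LoginStep::Totp,
        (false, false) => LoginStep::Done,
    }
}
```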

bin/core/src/auth/totp.rs Normal file

@@ -0,0 +1,17 @@
use anyhow::Context as _;
pub fn make_totp(
secret_bytes: Vec<u8>,
account_name: impl Into<Option<String>>,
) -> anyhow::Result<totp_rs::TOTP> {
totp_rs::TOTP::new(
totp_rs::Algorithm::SHA1,
6,
1,
30,
secret_bytes,
Some(String::from("Komodo")),
account_name.into().unwrap_or_default(),
)
.context("Failed to construct TOTP")
}
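The `totp_rs::TOTP::new` call above pins the RFC 6238 parameters: SHA-1, 6 digits, a skew of 1 step, and a 30-second time step, with "Komodo" as the issuer. A std-only sketch of the time-step counter those parameters imply (the crate computes this internally):

```rust
const TOTP_STEP_SECS: u64 = 30;

/// RFC 6238 moving factor: the number of whole 30-second steps
/// elapsed since the unix epoch. This counter is what gets
/// HMAC'd with the shared secret to produce the 6-digit code.
fn totp_counter(unix_secs: u64) -> u64 {
    unix_secs / TOTP_STEP_SECS
}
```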


@@ -11,7 +11,7 @@ use aws_sdk_ec2::{
Tag, TagSpecification,
},
};
use base64::Engine;
use data_encoding::BASE64_NOPAD;
use komodo_client::entities::{
ResourceTarget,
alert::{Alert, AlertData, SeverityLevel},
@@ -57,7 +57,6 @@ impl aws_credential_types::provider::ProvideCredentials
}
}
#[instrument]
async fn create_ec2_client(region: String) -> Client {
let region = Region::new(region);
let config = aws_config::defaults(BehaviorVersion::latest())
@@ -68,7 +67,7 @@ async fn create_ec2_client(region: String) -> Client {
Client::new(&config)
}
#[instrument]
#[instrument("LaunchEc2Instance")]
pub async fn launch_ec2_instance(
name: &str,
config: &AwsBuilderConfig,
@@ -128,10 +127,7 @@ pub async fn launch_ec2_instance(
)
.min_count(1)
.max_count(1)
.user_data(
base64::engine::general_purpose::STANDARD_NO_PAD
.encode(user_data),
);
.user_data(BASE64_NOPAD.encode(user_data.as_bytes()));
let res = req
.send()
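The swap from `base64::engine::general_purpose::STANDARD_NO_PAD` to `data_encoding::BASE64_NOPAD` keeps the same output: the standard base64 alphabet with trailing `=` padding omitted. A std-only sketch of that encoding, for illustration:

```rust
/// Unpadded standard base64, matching what both crates produce
/// for this call site (same alphabet, no trailing '=').
fn base64_nopad(input: &[u8]) -> String {
    const ALPHABET: &[u8; 64] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut out = String::new();
    for chunk in input.chunks(3) {
        // Pack up to 3 bytes into a 24-bit group
        let mut buf = [0u8; 3];
        buf[..chunk.len()].copy_from_slice(chunk);
        let n = (u32::from(buf[0]) << 16)
            | (u32::from(buf[1]) << 8)
            | u32::from(buf[2]);
        // One 6-bit symbol per 6 input bits; padding symbols skipped
        for i in 0..=chunk.len() {
            let idx = ((n >> (18 - 6 * i)) & 0x3F) as usize;
            out.push(ALPHABET[idx] as char);
        }
    }
    out
}
```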
@@ -170,7 +166,7 @@ pub async fn launch_ec2_instance(
const MAX_TERMINATION_TRIES: usize = 5;
const TERMINATION_WAIT_SECS: u64 = 15;
#[instrument]
#[instrument("TerminateEc2Instance")]
pub async fn terminate_ec2_instance_with_retry(
region: String,
instance_id: &str,
@@ -210,7 +206,7 @@ pub async fn terminate_ec2_instance_with_retry(
unreachable!()
}
#[instrument(skip(client))]
#[instrument("TerminateEc2InstanceInner", skip_all)]
async fn terminate_ec2_instance_inner(
client: &Client,
instance_id: &str,
@@ -229,7 +225,6 @@ async fn terminate_ec2_instance_inner(
}
/// Automatically retries 5 times, waiting 2 sec in between
#[instrument(level = "debug")]
async fn get_ec2_instance_status(
client: &Client,
instance_id: &str,
@@ -261,7 +256,6 @@ async fn get_ec2_instance_status(
}
}
#[instrument(level = "debug")]
async fn get_ec2_instance_state_name(
client: &Client,
instance_id: &str,
@@ -281,7 +275,6 @@ async fn get_ec2_instance_state_name(
}
/// Automatically retries 5 times, waiting 2 sec in between
#[instrument(level = "debug")]
async fn get_ec2_instance_public_ip(
client: &Client,
instance_id: &str,


@@ -1,6 +1,7 @@
use std::{path::PathBuf, sync::OnceLock};
use anyhow::Context;
use axum::http::HeaderValue;
use colored::Colorize;
use config::ConfigLoader;
use environment_file::{
@@ -14,6 +15,7 @@ use komodo_client::entities::{
logger::LogConfig,
};
use noise::key::{RotatableKeyPair, SpkiPublicKey};
use tower_http::cors::CorsLayer;
/// Should call in startup to ensure Core errors without valid private key.
pub fn core_keys() -> &'static RotatableKeyPair {
@@ -89,6 +91,70 @@ pub fn periphery_public_keys() -> Option<&'static [SpkiPublicKey]> {
.as_deref()
}
/// Creates a CORS layer based on the Core configuration.
///
/// - If `cors_allowed_origins` is empty: Allows all origins (backward compatibility)
/// - If `cors_allowed_origins` is set: Only allows the specified origins
/// - Methods and headers are always allowed (Any)
/// - Credentials are only allowed if `cors_allow_credentials` is true
pub fn cors_layer() -> CorsLayer {
let config = core_config();
let mut cors = CorsLayer::new()
.allow_methods(tower_http::cors::AllowMethods::mirror_request())
.allow_headers(tower_http::cors::AllowHeaders::mirror_request())
.allow_credentials(config.cors_allow_credentials);
if config.cors_allowed_origins.is_empty() {
warn!(
"CORS using allowed origin 'Any' (*). Use KOMODO_CORS_ALLOWED_ORIGINS to configure specific origins."
);
cors = cors.allow_origin(tower_http::cors::Any)
} else {
let allowed_origins = config
.cors_allowed_origins
.iter()
.filter_map(|origin| {
HeaderValue::from_str(origin)
.inspect_err(|e| {
warn!("Invalid CORS allowed origin: {origin} | {e:?}")
})
.ok()
})
.collect::<Vec<_>>();
info!("CORS using allowed origin/s: {allowed_origins:?}");
cors = cors.allow_origin(allowed_origins);
};
cors
}
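The origin-filtering pattern above (parse each configured value, warn on failure, keep the rest) is worth isolating, since a single bad entry must not take down the whole list. A std-only sketch using `IpAddr` as a stand-in for `HeaderValue`; the shape is the same:

```rust
use std::net::IpAddr;

/// Parse each candidate, logging and dropping invalid entries
/// instead of failing the whole configuration.
fn parse_valid(origins: &[&str]) -> Vec<IpAddr> {
    origins
        .iter()
        .filter_map(|o| {
            o.parse::<IpAddr>()
                // inspect_err logs without consuming the Result
                .inspect_err(|e| eprintln!("Invalid origin: {o} | {e:?}"))
                .ok()
        })
        .collect()
}
```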
pub fn monitoring_interval() -> async_timing_util::Timelength {
static MONITORING_INTERVAL: OnceLock<
async_timing_util::Timelength,
> = OnceLock::new();
*MONITORING_INTERVAL.get_or_init(|| {
core_config().monitoring_interval.try_into().unwrap_or_else(
|_| {
error!("Invalid 'monitoring_interval', using default 15-sec");
async_timing_util::Timelength::FifteenSeconds
},
)
})
}
pub fn core_host() -> Option<&'static url::Url> {
static CORE_URL: OnceLock<Option<url::Url>> = OnceLock::new();
CORE_URL
.get_or_init(|| {
url::Url::parse(&core_config().host)
.inspect_err(|e| {
warn!(
"Invalid KOMODO_HOST: not a valid URL. Passkeys won't work. | {e:?}"
)
})
.ok()
})
.as_ref()
}
pub fn core_config() -> &'static CoreConfig {
static CORE_CONFIG: OnceLock<CoreConfig> = OnceLock::new();
CORE_CONFIG.get_or_init(|| {
@@ -281,6 +347,21 @@ pub fn core_config() -> &'static CoreConfig {
.komodo_frontend_path
.unwrap_or(config.frontend_path),
jwt_ttl: env.komodo_jwt_ttl.unwrap_or(config.jwt_ttl),
auth_rate_limit_disabled: env
.komodo_auth_rate_limit_disabled
.unwrap_or(config.auth_rate_limit_disabled),
auth_rate_limit_max_attempts: env
.komodo_auth_rate_limit_max_attempts
.unwrap_or(config.auth_rate_limit_max_attempts),
auth_rate_limit_window_seconds: env
.komodo_auth_rate_limit_window_seconds
.unwrap_or(config.auth_rate_limit_window_seconds),
cors_allowed_origins: env
.komodo_cors_allowed_origins
.unwrap_or(config.cors_allowed_origins),
cors_allow_credentials: env
.komodo_cors_allow_credentials
.unwrap_or(config.cors_allow_credentials),
sync_directory: env
.komodo_sync_directory
.unwrap_or(config.sync_directory),
@@ -336,6 +417,9 @@ pub fn core_config() -> &'static CoreConfig {
.komodo_lock_login_credentials_for
.unwrap_or(config.lock_login_credentials_for),
local_auth: env.komodo_local_auth.unwrap_or(config.local_auth),
min_password_length: env
.komodo_min_password_length
.unwrap_or(config.min_password_length),
logging: LogConfig {
level: env
.komodo_logging_level
@@ -349,12 +433,16 @@ pub fn core_config() -> &'static CoreConfig {
location: env
.komodo_logging_location
.unwrap_or(config.logging.location),
ansi: env.komodo_logging_ansi.unwrap_or(config.logging.ansi),
otlp_endpoint: env
.komodo_logging_otlp_endpoint
.unwrap_or(config.logging.otlp_endpoint),
opentelemetry_service_name: env
.komodo_logging_opentelemetry_service_name
.unwrap_or(config.logging.opentelemetry_service_name),
opentelemetry_scope_name: env
.komodo_logging_opentelemetry_scope_name
.unwrap_or(config.logging.opentelemetry_scope_name),
},
pretty_startup_config: env
.komodo_pretty_startup_config


@@ -45,7 +45,6 @@ impl PeripheryConnectionArgs<'_> {
periphery_connections().insert(id.clone(), self).await;
let responses = connection.responses.clone();
let terminals = connection.terminals.clone();
tokio::spawn(async move {
loop {
@@ -91,17 +90,22 @@ impl PeripheryConnectionArgs<'_> {
}
});
Ok(PeripheryClient {
id,
responses,
terminals,
})
Ok(PeripheryClient { id, responses })
}
}
impl PeripheryConnection {
/// Custom Core -> Periphery side only login wrapper
/// to implement passkey support for backward compatibility
#[instrument(
"PeripheryLogin",
skip(self, socket, identifiers),
fields(
server_id = self.args.id,
address = self.args.address,
direction = "CoreToPeriphery"
)
)]
async fn client_login(
&self,
socket: &mut TungsteniteWebsocket,
@@ -124,6 +128,7 @@ impl PeripheryConnection {
}
}
#[instrument("V1PasskeyPeripheryLoginFlow", skip(socket, passkey))]
async fn handle_passkey_login(
socket: &mut TungsteniteWebsocket,
// for legacy auth


@@ -31,7 +31,7 @@ use transport::{
},
channel::{BufferedReceiver, Sender, buffered_channel},
websocket::{
Websocket, WebsocketMessage, WebsocketReceiver as _,
Websocket, WebsocketReceiver as _, WebsocketReceiverExt,
WebsocketSender as _,
},
};
@@ -109,6 +109,7 @@ pub struct PeripheryConnectionArgs<'a> {
impl PublicKeyValidator for PeripheryConnectionArgs<'_> {
type ValidationResult = String;
#[instrument("ValidatePeripheryPublicKey", skip(self))]
async fn validate(
&self,
public_key: String,
@@ -256,7 +257,8 @@ impl<'a> From<&'a OwnedPeripheryConnectionArgs>
pub type ResponseChannels =
CloneCache<Uuid, Sender<EncodedResponse<EncodedJsonMessage>>>;
pub type TerminalChannels = CloneCache<Uuid, Sender<Vec<u8>>>;
pub type TerminalChannels =
CloneCache<Uuid, Sender<anyhow::Result<Vec<u8>>>>;
#[derive(Debug)]
pub struct PeripheryConnection {
@@ -326,6 +328,11 @@ impl PeripheryConnection {
)
}
#[instrument(
"StandardPeripheryLoginFlow",
skip(self, socket, identifiers),
fields(expected_public_key = self.args.periphery_public_key)
)]
pub async fn handle_login<W: Websocket, L: LoginFlow>(
&self,
socket: &mut W,
@@ -360,8 +367,22 @@ impl PeripheryConnection {
let forward_writes = async {
loop {
let Ok(message) = receiver.recv().await else {
break;
let message = match tokio::time::timeout(
Duration::from_secs(5),
receiver.recv(),
)
.await
{
Ok(Ok(message)) => message,
Ok(Err(_)) => break,
// Handle sending Ping
Err(_) => {
if let Err(e) = ws_write.ping().await {
self.set_error(e).await;
break;
}
continue;
}
};
match ws_write.send(message.into_bytes()).await {
Ok(_) => receiver.clear_buffer(),
@@ -378,19 +399,13 @@ impl PeripheryConnection {
let handle_reads = async {
loop {
match ws_read.recv().await {
Ok(WebsocketMessage::Message(message)) => {
self.handle_incoming_message(message).await
}
Ok(WebsocketMessage::Close(_))
| Ok(WebsocketMessage::Closed) => {
self.set_error(anyhow!("Connection closed")).await;
break;
}
match ws_read.recv_message().await {
Ok(message) => self.handle_incoming_message(message).await,
Err(e) => {
self.set_error(e).await;
break;
}
};
}
}
// Cancel again if not already
cancel.cancel();
@@ -403,15 +418,8 @@ impl PeripheryConnection {
pub async fn handle_incoming_message(
&self,
message: EncodedTransportMessage,
message: TransportMessage,
) {
let message: TransportMessage = match message.decode() {
Ok(res) => res,
Err(e) => {
warn!("Failed to parse Message bytes | {e:#}");
return;
}
};
match message {
TransportMessage::Response(data) => {
match data.decode().map(ResponseMessage::into_inner) {


@@ -22,6 +22,7 @@ use periphery_client::{
};
use resolver_api::Resolve;
use serror::{AddStatusCode, AddStatusCodeError};
use tracing::Instrument;
use transport::{
auth::{
HeaderConnectionIdentifiers, LoginFlow, LoginFlowArgs,
@@ -133,13 +134,23 @@ async fn existing_server_handler(
return;
};
if let Err(e) = connection
.handle_login::<_, ServerLoginFlow>(
&mut socket,
identifiers.build(query.as_bytes()),
)
.await
{
let span = info_span!(
"PeripheryLogin",
server_id = server.id,
direction = "PeripheryToCore"
);
let login = async {
connection
.handle_login::<_, ServerLoginFlow>(
&mut socket,
identifiers.build(query.as_bytes()),
)
.await
}
.instrument(span)
.await;
if let Err(e) = login {
connection.set_error(e).await;
return;
}


@@ -1,3 +1,16 @@
//! # Action State Management
//!
//! This module provides thread-safe state management for resource executions.
//! It prevents concurrent executions on the same resource using
//! a Mutex-based locking mechanism with RAII guards.
//!
//! ## Safety
//!
//! - Uses RAII pattern to ensure locks are always released
//! - Handles lock poisoning gracefully
//! - Prevents race conditions through per-resource locks
//! - No deadlock risk: each resource has independent locks
use std::sync::{Arc, Mutex};
use anyhow::anyhow;
@@ -9,12 +22,13 @@ use komodo_client::{
deployment::DeploymentActionState,
procedure::ProcedureActionState, repo::RepoActionState,
server::ServerActionState, stack::StackActionState,
sync::ResourceSyncActionState,
swarm::SwarmActionState, sync::ResourceSyncActionState,
},
};
#[derive(Default)]
pub struct ActionStates {
pub swarm: CloneCache<String, Arc<ActionState<SwarmActionState>>>,
pub server: CloneCache<String, Arc<ActionState<ServerActionState>>>,
pub stack: CloneCache<String, Arc<ActionState<StackActionState>>>,
pub deployment:
@@ -28,7 +42,16 @@ pub struct ActionStates {
CloneCache<String, Arc<ActionState<ResourceSyncActionState>>>,
}
/// Need to be able to check "busy" with write lock acquired.
/// Thread-safe state container for resource executions.
///
/// Uses a Mutex to prevent concurrent executions and provides
/// RAII-based locking through [UpdateGuard].
///
/// # Safety
///
/// - Each resource has its own ActionState instance
/// - State is reset to default when [UpdateGuard] is dropped
/// - Lock poisoning is handled gracefully with anyhow::Error
#[derive(Default)]
pub struct ActionState<States: Default + Send + 'static>(
Mutex<States>,
@@ -42,7 +65,7 @@ impl<States: Default + Busy + Copy + Send + 'static>
*self
.0
.lock()
.map_err(|e| anyhow!("action state lock poisoned | {e:?}"))?,
.map_err(|e| anyhow!("Action state lock poisoned | {e:?}"))?,
)
}
@@ -51,14 +74,33 @@ impl<States: Default + Busy + Copy + Send + 'static>
self
.0
.lock()
.map_err(|e| anyhow!("action state lock poisoned | {e:?}"))?
.map_err(|e| anyhow!("Action state lock poisoned | {e:?}"))?
.busy(),
)
}
/// Will acquire lock, check busy, and if not will
/// run the provided update function on the states.
/// Returns a guard that returns the states to default (not busy) when dropped.
/// Acquires lock, checks if resource is busy, and if not,
/// runs the provided update function on the states.
///
/// Returns an `UpdateGuard` that automatically resets the state
/// to default (not busy) when dropped.
///
/// # Errors
///
/// Returns an error if:
/// - The lock is poisoned
/// - The resource is currently busy
///
/// # Example
///
/// ```rust
/// let guard = action_state.update(|state| {
/// *state = SomeNewState;
/// })?;
/// // State is locked and marked as busy
/// // ... perform work ...
/// drop(guard); // Guard is dropped, state returns to default
/// ```
pub fn update(
&self,
update_fn: impl Fn(&mut States),
@@ -91,10 +133,22 @@ impl<States: Default + Busy + Copy + Send + 'static>
}
}
/// When dropped will return the inner state to default.
/// The inner mutex guard must already be dropped BEFORE this is dropped,
/// which is guaranteed as the inner guard is dropped by all public methods before
/// user could drop UpdateGuard.
/// RAII guard that automatically resets the action state when dropped.
///
/// # Safety
///
/// The inner mutex guard is guaranteed to be dropped before this guard
/// is dropped, preventing deadlocks. This is ensured by all public methods
/// that create UpdateGuard instances.
///
/// # Behavior
///
/// When dropped, this guard will:
/// 1. Re-acquire the lock
/// 2. Call the provided return function (typically resetting to default)
/// 3. Release the lock
///
/// If the lock is poisoned, an error is logged but the drop continues.
pub struct UpdateGuard<'a, States: Default + Send + 'static>(
&'a Mutex<States>,
Box<dyn Fn(&mut States) + Send>,
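The UpdateGuard idea can be reduced to a minimal std-only sketch: a guard that re-acquires the Mutex on drop and resets the state. The real Komodo type carries a boxed reset closure; this sketch hard-codes `Default::default()` and recovers from poisoning rather than logging:

```rust
use std::sync::Mutex;

struct ResetGuard<'a, T: Default>(&'a Mutex<T>);

impl<T: Default> Drop for ResetGuard<'_, T> {
    fn drop(&mut self) {
        // If poisoned, recover the inner guard and reset anyway
        let mut inner = match self.0.lock() {
            Ok(g) => g,
            Err(poisoned) => poisoned.into_inner(),
        };
        *inner = T::default();
    }
}

/// Mark the state busy and hand back a guard that clears it.
fn set_busy(state: &Mutex<bool>) -> ResetGuard<'_, bool> {
    // The inner MutexGuard is dropped before ResetGuard is
    // returned, so the reset in Drop cannot deadlock.
    *state.lock().unwrap() = true;
    ResetGuard(state)
}
```

This mirrors the safety note above: because every public method releases the inner guard before the caller can drop the outer one, the Drop-time re-lock is safe.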

Some files were not shown because too many files have changed in this diff.