CLI send enhancements and shared plugin event routing (#398)

This commit is contained in:
Gregory Schier
2026-02-20 13:21:55 -08:00
committed by GitHub
parent 746bedf885
commit 4e56daa555
30 changed files with 1556 additions and 582 deletions

View File

@@ -12,7 +12,9 @@ path = "src/main.rs"
clap = { version = "4", features = ["derive"] }
dirs = "6"
env_logger = "0.11"
futures = "0.3"
log = { workspace = true }
schemars = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }

View File

@@ -1,340 +0,0 @@
# CLI Command Architecture Plan
## Goal
Redesign the yaak-cli command structure to use a resource-oriented `<resource> <action>`
pattern that scales well, is discoverable, and supports both human and LLM workflows.
## Status Snapshot
Current branch state:
- Modular CLI structure with command modules and shared `CliContext`
- Resource/action hierarchy in place for:
- `workspace list|show|create|update|delete`
- `request list|show|create|update|send|delete`
- `folder list|show|create|update|delete`
- `environment list|show|create|update|delete`
- Top-level `send` exists as a request-send shortcut (flexible request/folder/workspace resolution not yet implemented)
- Legacy `get` command removed
- JSON create/update flow implemented (`--json` and positional JSON shorthand)
- No `request schema` command yet
Progress checklist:
- [x] Phase 1 complete
- [x] Phase 2 complete
- [x] Phase 3 complete
- [ ] Phase 4 complete
- [ ] Phase 5 complete
- [ ] Phase 6 complete
## Command Architecture
### Design Principles
- **Resource-oriented**: top-level commands are nouns, subcommands are verbs
- **Polymorphic requests**: `request` covers HTTP, gRPC, and WebSocket — the CLI
resolves the type via `get_any_request` and adapts behavior accordingly
- **Simple creation, full-fidelity via JSON**: human-friendly flags for basic creation,
`--json` for full control (targeted at LLM and scripting workflows)
- **Runtime schema introspection**: `request schema` outputs JSON Schema for the request
models, with dynamic auth fields populated from loaded plugins at runtime
- **Destructive actions require confirmation**: `delete` commands prompt for user
confirmation before proceeding; this can be bypassed with `--yes` / `-y` for scripting
### Commands
```
# Top-level shortcut
yaakcli send <id> [-e <env_id>] # id can be a request, folder, or workspace
# Resource commands
yaakcli workspace list
yaakcli workspace show <id>
yaakcli workspace create --name <name>
yaakcli workspace create --json '{"name": "My Workspace"}'
yaakcli workspace create '{"name": "My Workspace"}' # positional JSON shorthand
yaakcli workspace update --json '{"id": "wk_abc", "name": "New Name"}'
yaakcli workspace delete <id>
yaakcli request list <workspace_id>
yaakcli request show <id>
yaakcli request create <workspace_id> --name <name> --url <url> [--method GET]
yaakcli request create --json '{"workspaceId": "wk_abc", "url": "..."}'
yaakcli request update --json '{"id": "rq_abc", "url": "https://new.com"}'
yaakcli request send <id> [-e <env_id>]
yaakcli request delete <id>
yaakcli request schema <http|grpc|websocket>
yaakcli folder list <workspace_id>
yaakcli folder show <id>
yaakcli folder create <workspace_id> --name <name>
yaakcli folder create --json '{"workspaceId": "wk_abc", "name": "Auth"}'
yaakcli folder update --json '{"id": "fl_abc", "name": "New Name"}'
yaakcli folder delete <id>
yaakcli environment list <workspace_id>
yaakcli environment show <id>
yaakcli environment create <workspace_id> --name <name>
yaakcli environment create --json '{"workspaceId": "wk_abc", "name": "Production"}'
yaakcli environment update --json '{"id": "ev_abc", ...}'
yaakcli environment delete <id>
```
### `send` — Top-Level Shortcut
`yaakcli send <id>` is a convenience alias that accepts any sendable ID. It tries
each type in order via DB lookups (short-circuiting on first match):
1. Request (HTTP, gRPC, or WebSocket via `get_any_request`)
2. Folder (sends all requests in the folder)
3. Workspace (sends all requests in the workspace)
ID prefixes exist (e.g. `rq_`, `fl_`, `wk_`) but are not relied upon — resolution
is purely by DB lookup.
`request send <id>` is the same but restricted to request IDs only.
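The lookup-order resolution can be sketched in plain Rust. Closures stand in for the real DB lookups (`get_any_request`, `get_folder`, `get_workspace`); `resolve_send_target` and `SendTarget` are illustrative names, not the actual yaak-cli code:

```rust
#[derive(Debug, PartialEq)]
enum SendTarget {
    Request(String),
    Folder(String),
    Workspace(String),
}

// Try each type in order, short-circuiting on the first match.
// Resolution is purely by lookup, never by ID prefix.
fn resolve_send_target(
    id: &str,
    is_request: impl Fn(&str) -> bool,
    is_folder: impl Fn(&str) -> bool,
    is_workspace: impl Fn(&str) -> bool,
) -> Result<SendTarget, String> {
    if is_request(id) {
        return Ok(SendTarget::Request(id.to_string()));
    }
    if is_folder(id) {
        return Ok(SendTarget::Folder(id.to_string()));
    }
    if is_workspace(id) {
        return Ok(SendTarget::Workspace(id.to_string()));
    }
    Err(format!("Could not resolve ID '{id}' as request, folder, or workspace"))
}

fn main() {
    // Even an unprefixed ID resolves correctly, since only lookups matter.
    let target = resolve_send_target("abc", |_| false, |id| id == "abc", |_| false);
    assert_eq!(target, Ok(SendTarget::Folder("abc".to_string())));
}
```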
### Request Send — Polymorphic Behavior
`send` means "execute this request" regardless of protocol:
- **HTTP**: send request, print response, exit
- **gRPC**: invoke the method; for streaming, stream output to stdout until done/Ctrl+C
- **WebSocket**: connect, stream messages to stdout until closed/Ctrl+C
### `request schema` — Runtime JSON Schema
Outputs a JSON Schema describing the full request shape, including dynamic fields:
1. Generate base schema from `schemars::JsonSchema` derive on the Rust model structs
2. Load plugins, collect auth strategy definitions and their form inputs
3. Merge plugin-defined auth fields into the `authentication` property as a `oneOf`
4. Output the combined schema as JSON
This lets an LLM call `schema`, read the shape, and construct valid JSON for
`create --json` or `update --json`.
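The merged output might look like the following fragment. This is illustrative only: the `Bearer Token` variant and its `token` field stand in for whatever auth strategies the loaded plugins actually define at runtime:

```json
{
  "title": "HttpRequest",
  "type": "object",
  "properties": {
    "authentication": {
      "oneOf": [
        { "description": "Base authentication schema from the Rust model" },
        {
          "title": "Bearer Token",
          "type": "object",
          "properties": {
            "token": { "type": "string", "writeOnly": true }
          },
          "required": ["token"],
          "additionalProperties": true
        }
      ]
    }
  }
}
```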
## Implementation Steps
### Phase 1: Restructure commands (no new functionality)
Refactor `main.rs` into the new resource/action pattern using clap subcommand nesting.
Existing behavior stays the same, just reorganized. Remove the `get` command.
1. Create module structure: `commands/workspace.rs`, `commands/request.rs`, etc.
2. Define nested clap enums:
```rust
enum Commands {
Send(SendArgs),
Workspace(WorkspaceArgs),
Request(RequestArgs),
Folder(FolderArgs),
Environment(EnvironmentArgs),
}
```
3. Move existing `Workspaces` logic into `workspace list`
4. Move existing `Requests` logic into `request list`
5. Move existing `Send` logic into `request send`
6. Move existing `Create` logic into `request create`
7. Delete the `Get` command entirely
8. Extract shared setup (DB init, plugin init, encryption) into a reusable context struct
### Phase 2: Add missing CRUD commands
Status: complete
1. `workspace show <id>`
2. `workspace create --name <name>` (and `--json`)
3. `workspace update --json`
4. `workspace delete <id>`
5. `request show <id>` (JSON output of the full request model)
6. `request delete <id>`
7. `folder list <workspace_id>`
8. `folder show <id>`
9. `folder create <workspace_id> --name <name>` (and `--json`)
10. `folder update --json`
11. `folder delete <id>`
12. `environment list <workspace_id>`
13. `environment show <id>`
14. `environment create <workspace_id> --name <name>` (and `--json`)
15. `environment update --json`
16. `environment delete <id>`
### Phase 3: JSON input for create/update
Both commands accept JSON via `--json <string>` or as a positional argument (detected
by leading `{`). They follow the same upsert pattern as the plugin API.
- **`create --json`**: JSON must include `workspaceId`. Must NOT include `id` (or
use empty string `""`). Deserializes into the model with defaults for missing fields,
then upserts (insert).
- **`update --json`**: JSON must include `id`. Performs a fetch-merge-upsert:
1. Fetch the existing model from DB
2. Serialize it to `serde_json::Value`
3. Deep-merge the user's partial JSON on top (JSON Merge Patch / RFC 7386 semantics)
4. Deserialize back into the typed model
5. Upsert (update)
This matches how the MCP server plugin already does it (fetch existing, spread, override),
but the CLI handles the merge server-side so callers don't have to.
Setting a field to `null` removes it (for `Option<T>` fields), per RFC 7386.
Implementation:
1. Add `--json` flag and positional JSON detection to `create` commands
2. Add `update` commands with required `--json` flag
3. Implement JSON merge utility (or use `json-patch` crate)
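The RFC 7386 merge semantics above can be sketched in plain Rust. A toy `Json` enum stands in for `serde_json::Value`; the real implementation would operate on that type (or use the `json-patch` crate):

```rust
use std::collections::BTreeMap;

// Minimal JSON value, enough to demonstrate merge-patch semantics.
#[derive(Debug, Clone, PartialEq)]
enum Json {
    Null,
    Str(String),
    Object(BTreeMap<String, Json>),
}

// RFC 7386: objects merge recursively, `null` removes a key,
// and any non-object patch value replaces the target outright.
fn merge_patch(target: &mut Json, patch: &Json) {
    match patch {
        Json::Object(patch_map) => {
            if !matches!(target, Json::Object(_)) {
                *target = Json::Object(BTreeMap::new());
            }
            if let Json::Object(target_map) = target {
                for (key, value) in patch_map {
                    if matches!(value, Json::Null) {
                        target_map.remove(key); // null clears the field
                    } else {
                        let entry = target_map.entry(key.clone()).or_insert(Json::Null);
                        merge_patch(entry, value);
                    }
                }
            }
        }
        other => *target = other.clone(),
    }
}

fn main() {
    // Fetch-merge-upsert: existing model, then the user's partial JSON on top.
    let mut existing = Json::Object(BTreeMap::from([
        ("id".to_string(), Json::Str("fl_abc".to_string())),
        ("name".to_string(), Json::Str("Old".to_string())),
        ("description".to_string(), Json::Str("temp".to_string())),
    ]));
    let patch = Json::Object(BTreeMap::from([
        ("name".to_string(), Json::Str("New".to_string())),
        ("description".to_string(), Json::Null),
    ]));
    merge_patch(&mut existing, &patch);
    if let Json::Object(map) = &existing {
        assert_eq!(map.get("name"), Some(&Json::Str("New".to_string())));
        assert!(!map.contains_key("description")); // removed per RFC 7386
        assert_eq!(map.get("id"), Some(&Json::Str("fl_abc".to_string()))); // untouched
    }
}
```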
### Phase 4: Runtime schema generation
1. Add `schemars` dependency to `yaak-models`
2. Derive `JsonSchema` on `HttpRequest`, `GrpcRequest`, `WebsocketRequest`, and their
nested types (`HttpRequestHeader`, `HttpUrlParameter`, etc.)
3. Implement `request schema` command:
- Generate base schema from schemars
- Query plugins for auth strategy form inputs
- Convert plugin form inputs into JSON Schema properties
- Merge into the `authentication` field
- Print to stdout
### Phase 5: Polymorphic send
1. Update `request send` to use `get_any_request` to resolve the request type
2. Match on `AnyRequest` variant and dispatch to the appropriate sender:
- `AnyRequest::HttpRequest` — existing HTTP send logic
- `AnyRequest::GrpcRequest` — gRPC invoke (future implementation)
- `AnyRequest::WebsocketRequest` — WebSocket connect (future implementation)
3. gRPC and WebSocket send can initially return "not yet implemented" errors
### Phase 6: Top-level `send` and folder/workspace send
1. Add top-level `yaakcli send <id>` command
2. Resolve ID by trying DB lookups in order: any_request → folder → workspace
3. For folder: list all requests in folder, send each
4. For workspace: list all requests in workspace, send each
5. Add execution options: `--sequential` (default), `--parallel`, `--fail-fast`
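The sequential fan-out with `--fail-fast` and a summary can be sketched as follows. A closure stands in for the real async send, and `send_many_sequential` is an illustrative name:

```rust
// Send each request in order, stopping at the first failure when
// fail_fast is set. Returns (success_count, failures) for the summary.
fn send_many_sequential(
    request_ids: &[&str],
    fail_fast: bool,
    mut send: impl FnMut(&str) -> Result<(), String>,
) -> (usize, Vec<(String, String)>) {
    let mut success_count = 0;
    let mut failures = Vec::new();
    for id in request_ids {
        match send(id) {
            Ok(()) => success_count += 1,
            Err(error) => {
                failures.push((id.to_string(), error));
                if fail_fast {
                    break; // remaining requests are never attempted
                }
            }
        }
    }
    (success_count, failures)
}

fn main() {
    let fail_on = |id: &str| if id == "rq_2" { Err("boom".to_string()) } else { Ok(()) };

    // Default: continue past failures and summarize.
    let (ok, failed) = send_many_sequential(&["rq_1", "rq_2", "rq_3"], false, fail_on);
    println!("Send summary: {ok} succeeded, {} failed", failed.len());
    assert_eq!((ok, failed.len()), (2, 1));

    // With --fail-fast, rq_3 is never attempted.
    let (ok, failed) = send_many_sequential(&["rq_1", "rq_2", "rq_3"], true, fail_on);
    assert_eq!((ok, failed.len()), (1, 1));
}
```

A non-zero exit code when `failures` is non-empty keeps bulk sends script-friendly.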
## Execution Plan (PR Slices)
### PR 1: Command tree refactor + compatibility aliases
Scope:
1. Introduce `commands/` modules and a `CliContext` for shared setup
2. Add new clap hierarchy (`workspace`, `request`, `folder`, `environment`)
3. Route existing behavior into:
- `workspace list`
- `request list <workspace_id>`
- `request send <id>`
- `request create <workspace_id> ...`
4. Keep compatibility aliases temporarily:
- `workspaces` -> `workspace list`
- `requests <workspace_id>` -> `request list <workspace_id>`
- `create ...` -> `request create ...`
5. Remove `get` and update help text
Acceptance criteria:
- `yaakcli --help` shows noun/verb structure
- Existing list/send/create workflows still work
- No behavior change in HTTP send output format
### PR 2: CRUD surface area
Scope:
1. Implement `show/create/update/delete` for `workspace`, `request`, `folder`, `environment`
2. Ensure delete commands require confirmation by default (`--yes` bypass)
3. Normalize output format for list/show/create/update/delete responses
Acceptance criteria:
- Every command listed in the "Commands" section parses and executes
- Delete commands are safe by default in interactive terminals
- `--yes` supports non-interactive scripts
### PR 3: JSON input + merge patch semantics
Scope:
1. Add shared parser for `--json` and positional JSON shorthand
2. Add `create --json` and `update --json` for all mutable resources
3. Implement server-side RFC 7386 merge patch behavior
4. Add guardrails:
- `create --json`: reject non-empty `id`
- `update --json`: require `id`
Acceptance criteria:
- Partial `update --json` only modifies provided keys
- `null` clears optional values
- Invalid JSON and missing required fields return actionable errors
### PR 4: `request schema` and plugin auth integration
Scope:
1. Add `schemars` to `yaak-models` and derive `JsonSchema` for request models
2. Implement `request schema <http|grpc|websocket>`
3. Merge plugin auth form inputs into `authentication` schema at runtime
Acceptance criteria:
- Command prints valid JSON schema
- Schema reflects installed auth providers at runtime
- No panic when plugins fail to initialize (degrade gracefully)
### PR 5: Polymorphic request send
Scope:
1. Replace request resolution in `request send` with `get_any_request`
2. Dispatch by request type
3. Keep HTTP fully functional
4. Return explicit NYI errors for gRPC/WebSocket until implemented
Acceptance criteria:
- HTTP behavior remains unchanged
- gRPC/WebSocket IDs are recognized and return explicit status
### PR 6: Top-level `send` + bulk execution
Scope:
1. Add top-level `send <id>` for request/folder/workspace IDs
2. Implement folder/workspace fan-out execution
3. Add execution controls: `--sequential`, `--parallel`, `--fail-fast`
Acceptance criteria:
- Correct ID dispatch order: request -> folder -> workspace
- Deterministic summary output (success/failure counts)
- Non-zero exit code when any request fails (unless explicitly configured otherwise)
## Validation Matrix
1. CLI parsing tests for every command path (including aliases while retained)
2. Integration tests against temp SQLite DB for CRUD flows
3. Snapshot tests for output text where scripting compatibility matters
4. Manual smoke tests:
- Send HTTP request with template/rendered vars
- JSON create/update for each resource
- Delete confirmation and `--yes`
- Top-level `send` on request/folder/workspace
## Open Questions
1. Should compatibility aliases (`workspaces`, `requests`, `create`) be removed immediately or after one release cycle?
2. For bulk `send`, should default behavior stop on first failure or continue and summarize?
3. Should command output default to human-readable text with an optional `--format json`, or return JSON by default for `show`/`list`?
4. For `request schema`, should plugin-derived auth fields be namespaced by plugin ID to avoid collisions?
## Crate Changes
- **yaak-cli**: restructure into modules, new clap hierarchy
- **yaak-models**: add `schemars` dependency, derive `JsonSchema` on model structs
(current derives: `Debug, Clone, PartialEq, Serialize, Deserialize, Default, TS`)

View File

@@ -1,4 +1,4 @@
use clap::{Args, Parser, Subcommand};
use clap::{Args, Parser, Subcommand, ValueEnum};
use std::path::PathBuf;
#[derive(Parser)]
@@ -23,7 +23,7 @@ pub struct Cli {
#[derive(Subcommand)]
pub enum Commands {
/// Send an HTTP request by ID
/// Send a request, folder, or workspace by ID
Send(SendArgs),
/// Workspace commands
@@ -41,8 +41,20 @@ pub enum Commands {
#[derive(Args)]
pub struct SendArgs {
/// Request ID
pub request_id: String,
/// Request, folder, or workspace ID
pub id: String,
/// Execute requests sequentially (default)
#[arg(long, conflicts_with = "parallel")]
pub sequential: bool,
/// Execute requests in parallel
#[arg(long, conflicts_with = "sequential")]
pub parallel: bool,
/// Stop on first request failure when sending folders/workspaces
#[arg(long, conflicts_with = "parallel")]
pub fail_fast: bool,
}
#[derive(Args)]
@@ -119,12 +131,18 @@ pub enum RequestCommands {
request_id: String,
},
/// Send an HTTP request by ID
/// Send a request by ID
Send {
/// Request ID
request_id: String,
},
/// Output JSON schema for request create/update payloads
Schema {
#[arg(value_enum)]
request_type: RequestSchemaType,
},
/// Create a new HTTP request
Create {
/// Workspace ID (or positional JSON payload shorthand)
@@ -169,6 +187,13 @@ pub enum RequestCommands {
},
}
#[derive(Clone, Copy, Debug, ValueEnum)]
pub enum RequestSchemaType {
Http,
Grpc,
Websocket,
}
#[derive(Args)]
pub struct FolderArgs {
#[command(subcommand)]

View File

@@ -51,8 +51,8 @@ fn show(ctx: &CliContext, environment_id: &str) -> CommandResult {
.db()
.get_environment(environment_id)
.map_err(|e| format!("Failed to get environment: {e}"))?;
let output =
serde_json::to_string_pretty(&environment).map_err(|e| format!("Failed to serialize environment: {e}"))?;
let output = serde_json::to_string_pretty(&environment)
.map_err(|e| format!("Failed to serialize environment: {e}"))?;
println!("{output}");
Ok(())
}
@@ -81,9 +81,8 @@ fn create(
}
validate_create_id(&payload, "environment")?;
let mut environment: Environment =
serde_json::from_value(payload)
.map_err(|e| format!("Failed to parse environment create JSON: {e}"))?;
let mut environment: Environment = serde_json::from_value(payload)
.map_err(|e| format!("Failed to parse environment create JSON: {e}"))?;
if environment.workspace_id.is_empty() {
return Err("environment create JSON requires non-empty \"workspaceId\"".to_string());
@@ -105,8 +104,9 @@ fn create(
let workspace_id = workspace_id.ok_or_else(|| {
"environment create requires workspace_id unless JSON payload is provided".to_string()
})?;
let name = name
.ok_or_else(|| "environment create requires --name unless JSON payload is provided".to_string())?;
let name = name.ok_or_else(|| {
"environment create requires --name unless JSON payload is provided".to_string()
})?;
let environment = Environment {
workspace_id,

View File

@@ -31,7 +31,8 @@ pub fn run(ctx: &CliContext, args: FolderArgs) -> i32 {
}
fn list(ctx: &CliContext, workspace_id: &str) -> CommandResult {
let folders = ctx.db().list_folders(workspace_id).map_err(|e| format!("Failed to list folders: {e}"))?;
let folders =
ctx.db().list_folders(workspace_id).map_err(|e| format!("Failed to list folders: {e}"))?;
if folders.is_empty() {
println!("No folders found in workspace {}", workspace_id);
} else {
@@ -43,9 +44,10 @@ fn list(ctx: &CliContext, workspace_id: &str) -> CommandResult {
}
fn show(ctx: &CliContext, folder_id: &str) -> CommandResult {
let folder = ctx.db().get_folder(folder_id).map_err(|e| format!("Failed to get folder: {e}"))?;
let output =
serde_json::to_string_pretty(&folder).map_err(|e| format!("Failed to serialize folder: {e}"))?;
let folder =
ctx.db().get_folder(folder_id).map_err(|e| format!("Failed to get folder: {e}"))?;
let output = serde_json::to_string_pretty(&folder)
.map_err(|e| format!("Failed to serialize folder: {e}"))?;
println!("{output}");
Ok(())
}
@@ -72,8 +74,8 @@ fn create(
}
validate_create_id(&payload, "folder")?;
let folder: Folder =
serde_json::from_value(payload).map_err(|e| format!("Failed to parse folder create JSON: {e}"))?;
let folder: Folder = serde_json::from_value(payload)
.map_err(|e| format!("Failed to parse folder create JSON: {e}"))?;
if folder.workspace_id.is_empty() {
return Err("folder create JSON requires non-empty \"workspaceId\"".to_string());
@@ -88,10 +90,12 @@ fn create(
return Ok(());
}
let workspace_id = workspace_id
.ok_or_else(|| "folder create requires workspace_id unless JSON payload is provided".to_string())?;
let name =
name.ok_or_else(|| "folder create requires --name unless JSON payload is provided".to_string())?;
let workspace_id = workspace_id.ok_or_else(|| {
"folder create requires workspace_id unless JSON payload is provided".to_string()
})?;
let name = name.ok_or_else(|| {
"folder create requires --name unless JSON payload is provided".to_string()
})?;
let folder = Folder { workspace_id, name, ..Default::default() };
@@ -108,10 +112,8 @@ fn update(ctx: &CliContext, json: Option<String>, json_input: Option<String>) ->
let patch = parse_required_json(json, json_input, "folder update")?;
let id = require_id(&patch, "folder update")?;
let existing = ctx
.db()
.get_folder(&id)
.map_err(|e| format!("Failed to get folder for update: {e}"))?;
let existing =
ctx.db().get_folder(&id).map_err(|e| format!("Failed to get folder for update: {e}"))?;
let updated = apply_merge_patch(&existing, &patch, &id, "folder update")?;
let saved = ctx

View File

@@ -1,15 +1,19 @@
use crate::cli::{RequestArgs, RequestCommands};
use crate::cli::{RequestArgs, RequestCommands, RequestSchemaType};
use crate::context::CliContext;
use crate::utils::confirm::confirm_delete;
use crate::utils::json::{
apply_merge_patch, is_json_shorthand, parse_optional_json, parse_required_json, require_id,
validate_create_id,
};
use schemars::schema_for;
use serde_json::{Map, Value, json};
use std::collections::HashMap;
use tokio::sync::mpsc;
use yaak::send::{SendHttpRequestByIdWithPluginsParams, send_http_request_by_id_with_plugins};
use yaak_models::models::HttpRequest;
use yaak_models::models::{GrpcRequest, HttpRequest, WebsocketRequest};
use yaak_models::queries::any_request::AnyRequest;
use yaak_models::util::UpdateSource;
use yaak_plugins::events::PluginContext;
use yaak_plugins::events::{FormInput, FormInputBase, JsonPrimitive, PluginContext};
type CommandResult<T = ()> = std::result::Result<T, String>;
@@ -31,6 +35,15 @@ pub async fn run(
}
};
}
RequestCommands::Schema { request_type } => {
return match schema(ctx, request_type).await {
Ok(()) => 0,
Err(error) => {
eprintln!("Error: {error}");
1
}
};
}
RequestCommands::Create { workspace_id, name, method, url, json } => {
create(ctx, workspace_id, name, method, url, json)
}
@@ -62,6 +75,221 @@ fn list(ctx: &CliContext, workspace_id: &str) -> CommandResult {
Ok(())
}
async fn schema(ctx: &CliContext, request_type: RequestSchemaType) -> CommandResult {
let mut schema = match request_type {
RequestSchemaType::Http => serde_json::to_value(schema_for!(HttpRequest))
.map_err(|e| format!("Failed to serialize HTTP request schema: {e}"))?,
RequestSchemaType::Grpc => serde_json::to_value(schema_for!(GrpcRequest))
.map_err(|e| format!("Failed to serialize gRPC request schema: {e}"))?,
RequestSchemaType::Websocket => serde_json::to_value(schema_for!(WebsocketRequest))
.map_err(|e| format!("Failed to serialize WebSocket request schema: {e}"))?,
};
if let Err(error) = merge_auth_schema_from_plugins(ctx, &mut schema).await {
eprintln!("Warning: Failed to enrich authentication schema from plugins: {error}");
}
let output = serde_json::to_string_pretty(&schema)
.map_err(|e| format!("Failed to format schema JSON: {e}"))?;
println!("{output}");
Ok(())
}
async fn merge_auth_schema_from_plugins(
ctx: &CliContext,
schema: &mut Value,
) -> Result<(), String> {
let plugin_context = PluginContext::new_empty();
let plugin_manager = ctx.plugin_manager();
let summaries = plugin_manager
.get_http_authentication_summaries(&plugin_context)
.await
.map_err(|e| e.to_string())?;
let mut auth_variants = Vec::new();
for (_, summary) in summaries {
let config = match plugin_manager
.get_http_authentication_config(
&plugin_context,
&summary.name,
HashMap::<String, JsonPrimitive>::new(),
"yaakcli_request_schema",
)
.await
{
Ok(config) => config,
Err(error) => {
eprintln!(
"Warning: Failed to load auth config for strategy '{}': {}",
summary.name, error
);
continue;
}
};
auth_variants.push(auth_variant_schema(&summary.name, &summary.label, &config.args));
}
let Some(properties) = schema.get_mut("properties").and_then(Value::as_object_mut) else {
return Ok(());
};
let Some(auth_schema) = properties.get_mut("authentication") else {
return Ok(());
};
if !auth_variants.is_empty() {
let mut one_of = vec![auth_schema.clone()];
one_of.extend(auth_variants);
*auth_schema = json!({ "oneOf": one_of });
}
Ok(())
}
fn auth_variant_schema(auth_name: &str, auth_label: &str, args: &[FormInput]) -> Value {
let mut properties = Map::new();
let mut required = Vec::new();
for input in args {
add_input_schema(input, &mut properties, &mut required);
}
let mut schema = json!({
"title": auth_label,
"description": format!("Authentication values for strategy '{}'", auth_name),
"type": "object",
"properties": properties,
"additionalProperties": true
});
if !required.is_empty() {
schema["required"] = json!(required);
}
schema
}
fn add_input_schema(
input: &FormInput,
properties: &mut Map<String, Value>,
required: &mut Vec<String>,
) {
match input {
FormInput::Text(v) => add_base_schema(
&v.base,
json!({
"type": "string",
"writeOnly": v.password.unwrap_or(false),
}),
properties,
required,
),
FormInput::Editor(v) => add_base_schema(
&v.base,
json!({
"type": "string",
"x-editorLanguage": v.language.clone(),
}),
properties,
required,
),
FormInput::Select(v) => {
let options: Vec<Value> =
v.options.iter().map(|o| Value::String(o.value.clone())).collect();
add_base_schema(
&v.base,
json!({
"type": "string",
"enum": options,
}),
properties,
required,
);
}
FormInput::Checkbox(v) => {
add_base_schema(&v.base, json!({ "type": "boolean" }), properties, required);
}
FormInput::File(v) => {
if v.multiple.unwrap_or(false) {
add_base_schema(
&v.base,
json!({
"type": "array",
"items": { "type": "string" },
}),
properties,
required,
);
} else {
add_base_schema(&v.base, json!({ "type": "string" }), properties, required);
}
}
FormInput::HttpRequest(v) => {
add_base_schema(&v.base, json!({ "type": "string" }), properties, required);
}
FormInput::KeyValue(v) => {
add_base_schema(
&v.base,
json!({
"type": "object",
"additionalProperties": true,
}),
properties,
required,
);
}
FormInput::Accordion(v) => {
if let Some(children) = &v.inputs {
for child in children {
add_input_schema(child, properties, required);
}
}
}
FormInput::HStack(v) => {
if let Some(children) = &v.inputs {
for child in children {
add_input_schema(child, properties, required);
}
}
}
FormInput::Banner(v) => {
if let Some(children) = &v.inputs {
for child in children {
add_input_schema(child, properties, required);
}
}
}
FormInput::Markdown(_) => {}
}
}
fn add_base_schema(
base: &FormInputBase,
mut schema: Value,
properties: &mut Map<String, Value>,
required: &mut Vec<String>,
) {
if base.hidden.unwrap_or(false) || base.name.trim().is_empty() {
return;
}
if let Some(description) = &base.description {
schema["description"] = Value::String(description.clone());
}
if let Some(label) = &base.label {
schema["title"] = Value::String(label.clone());
}
if let Some(default_value) = &base.default_value {
schema["default"] = Value::String(default_value.clone());
}
let name = base.name.clone();
properties.insert(name.clone(), schema);
if !base.optional.unwrap_or(false) {
required.push(name);
}
}
fn create(
ctx: &CliContext,
workspace_id: Option<String>,
@@ -146,12 +374,10 @@ fn update(ctx: &CliContext, json: Option<String>, json_input: Option<String>) ->
}
fn show(ctx: &CliContext, request_id: &str) -> CommandResult {
let request = ctx
.db()
.get_http_request(request_id)
.map_err(|e| format!("Failed to get request: {e}"))?;
let output =
serde_json::to_string_pretty(&request).map_err(|e| format!("Failed to serialize request: {e}"))?;
let request =
ctx.db().get_http_request(request_id).map_err(|e| format!("Failed to get request: {e}"))?;
let output = serde_json::to_string_pretty(&request)
.map_err(|e| format!("Failed to serialize request: {e}"))?;
println!("{output}");
Ok(())
}
@@ -178,9 +404,35 @@ pub async fn send_request_by_id(
verbose: bool,
) -> Result<(), String> {
let request =
ctx.db().get_http_request(request_id).map_err(|e| format!("Failed to get request: {e}"))?;
ctx.db().get_any_request(request_id).map_err(|e| format!("Failed to get request: {e}"))?;
match request {
AnyRequest::HttpRequest(http_request) => {
send_http_request_by_id(
ctx,
&http_request.id,
&http_request.workspace_id,
environment,
verbose,
)
.await
}
AnyRequest::GrpcRequest(_) => {
Err("gRPC request send is not implemented yet in yaak-cli".to_string())
}
AnyRequest::WebsocketRequest(_) => {
Err("WebSocket request send is not implemented yet in yaak-cli".to_string())
}
}
}
let plugin_context = PluginContext::new(None, Some(request.workspace_id.clone()));
async fn send_http_request_by_id(
ctx: &CliContext,
request_id: &str,
workspace_id: &str,
environment: Option<&str>,
verbose: bool,
) -> Result<(), String> {
let plugin_context = PluginContext::new(None, Some(workspace_id.to_string()));
let (event_tx, mut event_rx) = mpsc::channel(100);
let event_handle = tokio::spawn(async move {

View File

@@ -1,6 +1,12 @@
use crate::cli::SendArgs;
use crate::commands::request;
use crate::context::CliContext;
use futures::future::join_all;
enum ExecutionMode {
Sequential,
Parallel,
}
pub async fn run(
ctx: &CliContext,
@@ -8,7 +14,7 @@ pub async fn run(
environment: Option<&str>,
verbose: bool,
) -> i32 {
match request::send_request_by_id(ctx, &args.request_id, environment, verbose).await {
match send_target(ctx, args, environment, verbose).await {
Ok(()) => 0,
Err(error) => {
eprintln!("Error: {error}");
@@ -16,3 +22,163 @@ pub async fn run(
}
}
}
async fn send_target(
ctx: &CliContext,
args: SendArgs,
environment: Option<&str>,
verbose: bool,
) -> Result<(), String> {
let mode = if args.parallel { ExecutionMode::Parallel } else { ExecutionMode::Sequential };
if ctx.db().get_any_request(&args.id).is_ok() {
return request::send_request_by_id(ctx, &args.id, environment, verbose).await;
}
if ctx.db().get_folder(&args.id).is_ok() {
let request_ids = collect_folder_request_ids(ctx, &args.id)?;
if request_ids.is_empty() {
println!("No requests found in folder {}", args.id);
return Ok(());
}
return send_many(ctx, request_ids, mode, args.fail_fast, environment, verbose).await;
}
if ctx.db().get_workspace(&args.id).is_ok() {
let request_ids = collect_workspace_request_ids(ctx, &args.id)?;
if request_ids.is_empty() {
println!("No requests found in workspace {}", args.id);
return Ok(());
}
return send_many(ctx, request_ids, mode, args.fail_fast, environment, verbose).await;
}
Err(format!("Could not resolve ID '{}' as request, folder, or workspace", args.id))
}
fn collect_folder_request_ids(ctx: &CliContext, folder_id: &str) -> Result<Vec<String>, String> {
let mut ids = Vec::new();
let mut http_ids = ctx
.db()
.list_http_requests_for_folder_recursive(folder_id)
.map_err(|e| format!("Failed to list HTTP requests in folder: {e}"))?
.into_iter()
.map(|r| r.id)
.collect::<Vec<_>>();
ids.append(&mut http_ids);
let mut grpc_ids = ctx
.db()
.list_grpc_requests_for_folder_recursive(folder_id)
.map_err(|e| format!("Failed to list gRPC requests in folder: {e}"))?
.into_iter()
.map(|r| r.id)
.collect::<Vec<_>>();
ids.append(&mut grpc_ids);
let mut websocket_ids = ctx
.db()
.list_websocket_requests_for_folder_recursive(folder_id)
.map_err(|e| format!("Failed to list WebSocket requests in folder: {e}"))?
.into_iter()
.map(|r| r.id)
.collect::<Vec<_>>();
ids.append(&mut websocket_ids);
Ok(ids)
}
fn collect_workspace_request_ids(
ctx: &CliContext,
workspace_id: &str,
) -> Result<Vec<String>, String> {
let mut ids = Vec::new();
let mut http_ids = ctx
.db()
.list_http_requests(workspace_id)
.map_err(|e| format!("Failed to list HTTP requests in workspace: {e}"))?
.into_iter()
.map(|r| r.id)
.collect::<Vec<_>>();
ids.append(&mut http_ids);
let mut grpc_ids = ctx
.db()
.list_grpc_requests(workspace_id)
.map_err(|e| format!("Failed to list gRPC requests in workspace: {e}"))?
.into_iter()
.map(|r| r.id)
.collect::<Vec<_>>();
ids.append(&mut grpc_ids);
let mut websocket_ids = ctx
.db()
.list_websocket_requests(workspace_id)
.map_err(|e| format!("Failed to list WebSocket requests in workspace: {e}"))?
.into_iter()
.map(|r| r.id)
.collect::<Vec<_>>();
ids.append(&mut websocket_ids);
Ok(ids)
}
async fn send_many(
ctx: &CliContext,
request_ids: Vec<String>,
mode: ExecutionMode,
fail_fast: bool,
environment: Option<&str>,
verbose: bool,
) -> Result<(), String> {
let mut success_count = 0usize;
let mut failures: Vec<(String, String)> = Vec::new();
match mode {
ExecutionMode::Sequential => {
for request_id in request_ids {
match request::send_request_by_id(ctx, &request_id, environment, verbose).await {
Ok(()) => success_count += 1,
Err(error) => {
failures.push((request_id, error));
if fail_fast {
break;
}
}
}
}
}
ExecutionMode::Parallel => {
let tasks = request_ids
.iter()
.map(|request_id| async move {
(
request_id.clone(),
request::send_request_by_id(ctx, request_id, environment, verbose).await,
)
})
.collect::<Vec<_>>();
for (request_id, result) in join_all(tasks).await {
match result {
Ok(()) => success_count += 1,
Err(error) => failures.push((request_id, error)),
}
}
}
}
let failure_count = failures.len();
println!("Send summary: {success_count} succeeded, {failure_count} failed");
if failure_count == 0 {
return Ok(());
}
for (request_id, error) in failures {
eprintln!(" {}: {}", request_id, error);
}
Err("One or more requests failed".to_string())
}

View File

@@ -28,7 +28,8 @@ pub fn run(ctx: &CliContext, args: WorkspaceArgs) -> i32 {
}
fn list(ctx: &CliContext) -> CommandResult {
let workspaces = ctx.db().list_workspaces().map_err(|e| format!("Failed to list workspaces: {e}"))?;
let workspaces =
ctx.db().list_workspaces().map_err(|e| format!("Failed to list workspaces: {e}"))?;
if workspaces.is_empty() {
println!("No workspaces found");
} else {
@@ -75,8 +76,9 @@ fn create(
return Ok(());
}
let name =
name.ok_or_else(|| "workspace create requires --name unless JSON payload is provided".to_string())?;
let name = name.ok_or_else(|| {
"workspace create requires --name unless JSON payload is provided".to_string()
})?;
let workspace = Workspace { name, ..Default::default() };
let created = ctx

View File

@@ -1,5 +1,7 @@
use crate::plugin_events::CliPluginEventBridge;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use tokio::sync::Mutex;
use yaak_crypto::manager::EncryptionManager;
use yaak_models::blob_manager::BlobManager;
use yaak_models::db_context::DbContext;
@@ -13,6 +15,7 @@ pub struct CliContext {
blob_manager: BlobManager,
pub encryption_manager: Arc<EncryptionManager>,
plugin_manager: Option<Arc<PluginManager>>,
plugin_event_bridge: Mutex<Option<CliPluginEventBridge>>,
}
impl CliContext {
@@ -65,7 +68,20 @@ impl CliContext {
None
};
let plugin_event_bridge = if let Some(plugin_manager) = &plugin_manager {
Some(CliPluginEventBridge::start(plugin_manager.clone(), query_manager.clone()).await)
} else {
None
};
Self {
data_dir,
query_manager,
blob_manager,
encryption_manager,
plugin_manager,
plugin_event_bridge: Mutex::new(plugin_event_bridge),
}
}
pub fn data_dir(&self) -> &Path {
@@ -90,6 +106,9 @@ impl CliContext {
pub async fn shutdown(&self) {
if let Some(plugin_manager) = &self.plugin_manager {
if let Some(plugin_event_bridge) = self.plugin_event_bridge.lock().await.take() {
plugin_event_bridge.shutdown(plugin_manager).await;
}
plugin_manager.terminate().await;
}
}

View File

@@ -1,6 +1,7 @@
mod cli;
mod commands;
mod context;
mod plugin_events;
mod utils;
use clap::Parser;
@@ -24,7 +25,9 @@ async fn main() {
let needs_plugins = matches!(
&command,
Commands::Send(_)
| Commands::Request(cli::RequestArgs {
command: RequestCommands::Send { .. } | RequestCommands::Schema { .. },
})
);
let context = CliContext::initialize(data_dir, app_id, needs_plugins).await;

View File

@@ -0,0 +1,212 @@
use std::sync::Arc;
use tokio::task::JoinHandle;
use yaak::plugin_events::{
GroupedPluginEvent, HostRequest, SharedPluginEventContext, handle_shared_plugin_event,
};
use yaak_models::query_manager::QueryManager;
use yaak_plugins::events::{
EmptyPayload, ErrorResponse, InternalEvent, InternalEventPayload, ListOpenWorkspacesResponse,
WorkspaceInfo,
};
use yaak_plugins::manager::PluginManager;
pub struct CliPluginEventBridge {
rx_id: String,
task: JoinHandle<()>,
}
impl CliPluginEventBridge {
pub async fn start(plugin_manager: Arc<PluginManager>, query_manager: QueryManager) -> Self {
let (rx_id, mut rx) = plugin_manager.subscribe("cli").await;
let rx_id_for_task = rx_id.clone();
let pm = plugin_manager.clone();
let task = tokio::spawn(async move {
while let Some(event) = rx.recv().await {
// Events with reply IDs are replies to app-originated requests.
if event.reply_id.is_some() {
continue;
}
let Some(plugin_handle) = pm.get_plugin_by_ref_id(&event.plugin_ref_id).await
else {
eprintln!(
"Warning: Ignoring plugin event with unknown plugin ref '{}'",
event.plugin_ref_id
);
continue;
};
let plugin_name = plugin_handle.info().name;
let Some(reply_payload) = build_plugin_reply(&query_manager, &event, &plugin_name)
else {
continue;
};
if let Err(err) = pm.reply(&event, &reply_payload).await {
eprintln!("Warning: Failed replying to plugin event: {err}");
}
}
pm.unsubscribe(&rx_id_for_task).await;
});
Self { rx_id, task }
}
pub async fn shutdown(self, plugin_manager: &PluginManager) {
plugin_manager.unsubscribe(&self.rx_id).await;
self.task.abort();
let _ = self.task.await;
}
}
fn build_plugin_reply(
query_manager: &QueryManager,
event: &InternalEvent,
plugin_name: &str,
) -> Option<InternalEventPayload> {
match handle_shared_plugin_event(
query_manager,
&event.payload,
SharedPluginEventContext {
plugin_name,
workspace_id: event.context.workspace_id.as_deref(),
},
) {
GroupedPluginEvent::Handled(payload) => payload,
GroupedPluginEvent::ToHandle(host_request) => match host_request {
HostRequest::ErrorResponse(resp) => {
eprintln!("[plugin:{}] error: {}", plugin_name, resp.error);
None
}
HostRequest::ReloadResponse(_) => None,
HostRequest::ShowToast(req) => {
eprintln!("[plugin:{}] {}", plugin_name, req.message);
Some(InternalEventPayload::ShowToastResponse(EmptyPayload {}))
}
HostRequest::ListOpenWorkspaces(_) => {
let workspaces = match query_manager.connect().list_workspaces() {
Ok(workspaces) => workspaces
.into_iter()
.map(|w| WorkspaceInfo { id: w.id.clone(), name: w.name, label: w.id })
.collect(),
Err(err) => {
return Some(InternalEventPayload::ErrorResponse(ErrorResponse {
error: format!("Failed to list workspaces in CLI: {err}"),
}));
}
};
Some(InternalEventPayload::ListOpenWorkspacesResponse(ListOpenWorkspacesResponse {
workspaces,
}))
}
req => Some(InternalEventPayload::ErrorResponse(ErrorResponse {
error: format!("Unsupported plugin request in CLI: {}", req.type_name()),
})),
},
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
use yaak_plugins::events::{GetKeyValueRequest, PluginContext, WindowInfoRequest};
fn query_manager_for_test() -> (QueryManager, TempDir) {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let db_path = temp_dir.path().join("db.sqlite");
let blob_path = temp_dir.path().join("blobs.sqlite");
let (query_manager, _blob_manager, _rx) =
yaak_models::init_standalone(&db_path, &blob_path).expect("Failed to initialize DB");
(query_manager, temp_dir)
}
fn event(payload: InternalEventPayload) -> InternalEvent {
InternalEvent {
id: "evt_1".to_string(),
plugin_ref_id: "plugin_ref_1".to_string(),
plugin_name: "@yaak/test-plugin".to_string(),
reply_id: None,
context: PluginContext::new_empty(),
payload,
}
}
#[test]
fn key_value_requests_round_trip() {
let (query_manager, _temp_dir) = query_manager_for_test();
let plugin_name = "@yaak/test-plugin";
let get_missing = build_plugin_reply(
&query_manager,
&event(InternalEventPayload::GetKeyValueRequest(GetKeyValueRequest {
key: "missing".to_string(),
})),
plugin_name,
);
match get_missing {
Some(InternalEventPayload::GetKeyValueResponse(r)) => assert_eq!(r.value, None),
other => panic!("unexpected payload for missing get: {other:?}"),
}
let set = build_plugin_reply(
&query_manager,
&event(InternalEventPayload::SetKeyValueRequest(
yaak_plugins::events::SetKeyValueRequest {
key: "token".to_string(),
value: "{\"access_token\":\"abc\"}".to_string(),
},
)),
plugin_name,
);
assert!(matches!(set, Some(InternalEventPayload::SetKeyValueResponse(_))));
let get_present = build_plugin_reply(
&query_manager,
&event(InternalEventPayload::GetKeyValueRequest(GetKeyValueRequest {
key: "token".to_string(),
})),
plugin_name,
);
match get_present {
Some(InternalEventPayload::GetKeyValueResponse(r)) => {
assert_eq!(r.value, Some("{\"access_token\":\"abc\"}".to_string()))
}
other => panic!("unexpected payload for present get: {other:?}"),
}
let delete = build_plugin_reply(
&query_manager,
&event(InternalEventPayload::DeleteKeyValueRequest(
yaak_plugins::events::DeleteKeyValueRequest { key: "token".to_string() },
)),
plugin_name,
);
match delete {
Some(InternalEventPayload::DeleteKeyValueResponse(r)) => assert!(r.deleted),
other => panic!("unexpected payload for delete: {other:?}"),
}
}
#[test]
fn unsupported_request_gets_error_reply() {
let (query_manager, _temp_dir) = query_manager_for_test();
let payload = build_plugin_reply(
&query_manager,
&event(InternalEventPayload::WindowInfoRequest(WindowInfoRequest {
label: "main".to_string(),
})),
"@yaak/test-plugin",
);
match payload {
Some(InternalEventPayload::ErrorResponse(err)) => {
assert!(err.error.contains("Unsupported plugin request in CLI"));
assert!(err.error.contains("window_info_request"));
}
other => panic!("unexpected payload for unsupported request: {other:?}"),
}
}
}
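The bridge above hinges on the handled-vs-to-handle split returned by `handle_shared_plugin_event`: shared events come back as a ready payload, while host-specific requests must be mapped (or dropped) by the CLI. This is a simplified std-only sketch of that dispatch shape; the enums and string payloads here are stand-ins for `GroupedPluginEvent`, `HostRequest`, and `InternalEventPayload`.

```rust
// Simplified stand-ins for GroupedPluginEvent / HostRequest.
enum HostReq {
    ShowToast(String),
    Unsupported(&'static str),
}

enum Grouped {
    Handled(Option<String>),
    ToHandle(HostReq),
}

// Mirrors the shape of build_plugin_reply: shared-handler output passes
// through unchanged; host requests get a CLI-specific reply or an error.
fn reply_for(event: Grouped) -> Option<String> {
    match event {
        Grouped::Handled(payload) => payload,
        Grouped::ToHandle(HostReq::ShowToast(msg)) => {
            // The CLI surfaces toasts on stderr, then acknowledges.
            eprintln!("[plugin] {msg}");
            Some("show_toast_response".to_string())
        }
        Grouped::ToHandle(HostReq::Unsupported(name)) => {
            Some(format!("error: Unsupported plugin request in CLI: {name}"))
        }
    }
}
```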

View File

@@ -25,9 +25,9 @@ pub fn parse_optional_json(
context: &str,
) -> JsonResult<Option<Value>> {
match (json_flag, json_shorthand) {
(Some(_), Some(_)) => {
Err(format!("Cannot provide both --json and positional JSON for {context}"))
}
(Some(raw), None) => parse_json_object(&raw, context).map(Some),
(None, Some(raw)) => parse_json_object(&raw, context).map(Some),
(None, None) => Ok(None),
@@ -39,9 +39,8 @@ pub fn parse_required_json(
json_shorthand: Option<String>,
context: &str,
) -> JsonResult<Value> {
parse_optional_json(json_flag, json_shorthand, context)?
.ok_or_else(|| format!("Missing JSON payload for {context}. Use --json or positional JSON"))
}
pub fn require_id(payload: &Value, context: &str) -> JsonResult<String> {
@@ -60,9 +59,7 @@ pub fn validate_create_id(payload: &Value, context: &str) -> JsonResult<()> {
match id_value {
Value::String(id) if id.is_empty() => Ok(()),
_ => Err(format!("{context} create JSON must omit \"id\" or set it to an empty string")),
}
}
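The precedence rules enforced by `parse_optional_json` reduce to a four-way match on the two input sources. A std-only sketch of just that resolution logic follows; `resolve_json` is a hypothetical name, and JSON parsing is replaced with a raw-string passthrough to keep the example dependency-free.

```rust
// Sketch of the --json / positional-JSON precedence rules.
// The real code parses into serde_json::Value; this keeps the raw string.
fn resolve_json(
    json_flag: Option<String>,
    json_shorthand: Option<String>,
    context: &str,
) -> Result<Option<String>, String> {
    match (json_flag, json_shorthand) {
        // Providing both forms is ambiguous, so it is rejected outright.
        (Some(_), Some(_)) => {
            Err(format!("Cannot provide both --json and positional JSON for {context}"))
        }
        // Exactly one source wins, whichever it is.
        (Some(raw), None) | (None, Some(raw)) => Ok(Some(raw)),
        (None, None) => Ok(None),
    }
}
```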

View File

@@ -5,7 +5,7 @@ pub mod http_server;
use assert_cmd::Command;
use assert_cmd::cargo::cargo_bin_cmd;
use std::path::Path;
use yaak_models::models::{HttpRequest, Workspace};
use yaak_models::models::{Folder, GrpcRequest, HttpRequest, WebsocketRequest, Workspace};
use yaak_models::query_manager::QueryManager;
use yaak_models::util::UpdateSource;
@@ -60,3 +60,47 @@ pub fn seed_request(data_dir: &Path, workspace_id: &str, request_id: &str) {
.upsert_http_request(&request, &UpdateSource::Sync)
.expect("Failed to seed request");
}
pub fn seed_folder(data_dir: &Path, workspace_id: &str, folder_id: &str) {
let folder = Folder {
id: folder_id.to_string(),
workspace_id: workspace_id.to_string(),
name: "Seed Folder".to_string(),
..Default::default()
};
query_manager(data_dir)
.connect()
.upsert_folder(&folder, &UpdateSource::Sync)
.expect("Failed to seed folder");
}
pub fn seed_grpc_request(data_dir: &Path, workspace_id: &str, request_id: &str) {
let request = GrpcRequest {
id: request_id.to_string(),
workspace_id: workspace_id.to_string(),
name: "Seeded gRPC Request".to_string(),
url: "https://example.com".to_string(),
..Default::default()
};
query_manager(data_dir)
.connect()
.upsert_grpc_request(&request, &UpdateSource::Sync)
.expect("Failed to seed gRPC request");
}
pub fn seed_websocket_request(data_dir: &Path, workspace_id: &str, request_id: &str) {
let request = WebsocketRequest {
id: request_id.to_string(),
workspace_id: workspace_id.to_string(),
name: "Seeded WebSocket Request".to_string(),
url: "wss://example.com/socket".to_string(),
..Default::default()
};
query_manager(data_dir)
.connect()
.upsert_websocket_request(&request, &UpdateSource::Sync)
.expect("Failed to seed WebSocket request");
}

View File

@@ -1,7 +1,10 @@
mod common;
use common::http_server::TestHttpServer;
use common::{cli_cmd, parse_created_id, query_manager, seed_request, seed_workspace};
use common::{
cli_cmd, parse_created_id, query_manager, seed_grpc_request, seed_request,
seed_websocket_request, seed_workspace,
};
use predicates::str::contains;
use tempfile::TempDir;
use yaak_models::models::HttpResponseState;
@@ -114,8 +117,7 @@ fn create_allows_workspace_only_with_empty_defaults() {
let data_dir = temp_dir.path();
seed_workspace(data_dir, "wk_test");
let create_assert = cli_cmd(data_dir).args(["request", "create", "wk_test"]).assert().success();
let request_id = parse_created_id(&create_assert.get_output().stdout, "request create");
let request = query_manager(data_dir)
@@ -177,3 +179,46 @@ fn request_send_persists_response_body_and_events() {
db.list_http_response_events(&response.id).expect("Failed to load response events");
assert!(!events.is_empty(), "expected at least one persisted response event");
}
#[test]
fn request_schema_http_outputs_json_schema() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let data_dir = temp_dir.path();
cli_cmd(data_dir)
.args(["request", "schema", "http"])
.assert()
.success()
.stdout(contains("\"type\": \"object\""))
.stdout(contains("\"authentication\""));
}
#[test]
fn request_send_grpc_returns_explicit_nyi_error() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let data_dir = temp_dir.path();
seed_workspace(data_dir, "wk_test");
seed_grpc_request(data_dir, "wk_test", "gr_seed_nyi");
cli_cmd(data_dir)
.args(["request", "send", "gr_seed_nyi"])
.assert()
.failure()
.code(1)
.stderr(contains("gRPC request send is not implemented yet in yaak-cli"));
}
#[test]
fn request_send_websocket_returns_explicit_nyi_error() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let data_dir = temp_dir.path();
seed_workspace(data_dir, "wk_test");
seed_websocket_request(data_dir, "wk_test", "wr_seed_nyi");
cli_cmd(data_dir)
.args(["request", "send", "wr_seed_nyi"])
.assert()
.failure()
.code(1)
.stderr(contains("WebSocket request send is not implemented yet in yaak-cli"));
}

View File

@@ -0,0 +1,81 @@
mod common;
use common::http_server::TestHttpServer;
use common::{cli_cmd, query_manager, seed_folder, seed_workspace};
use predicates::str::contains;
use tempfile::TempDir;
use yaak_models::models::HttpRequest;
use yaak_models::util::UpdateSource;
#[test]
fn top_level_send_workspace_sends_http_requests_and_prints_summary() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let data_dir = temp_dir.path();
seed_workspace(data_dir, "wk_test");
let server = TestHttpServer::spawn_ok("workspace bulk send");
let request = HttpRequest {
id: "rq_workspace_send".to_string(),
workspace_id: "wk_test".to_string(),
name: "Workspace Send".to_string(),
method: "GET".to_string(),
url: server.url.clone(),
..Default::default()
};
query_manager(data_dir)
.connect()
.upsert_http_request(&request, &UpdateSource::Sync)
.expect("Failed to seed workspace request");
cli_cmd(data_dir)
.args(["send", "wk_test"])
.assert()
.success()
.stdout(contains("HTTP 200 OK"))
.stdout(contains("workspace bulk send"))
.stdout(contains("Send summary: 1 succeeded, 0 failed"));
}
#[test]
fn top_level_send_folder_sends_http_requests_and_prints_summary() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let data_dir = temp_dir.path();
seed_workspace(data_dir, "wk_test");
seed_folder(data_dir, "wk_test", "fl_test");
let server = TestHttpServer::spawn_ok("folder bulk send");
let request = HttpRequest {
id: "rq_folder_send".to_string(),
workspace_id: "wk_test".to_string(),
folder_id: Some("fl_test".to_string()),
name: "Folder Send".to_string(),
method: "GET".to_string(),
url: server.url.clone(),
..Default::default()
};
query_manager(data_dir)
.connect()
.upsert_http_request(&request, &UpdateSource::Sync)
.expect("Failed to seed folder request");
cli_cmd(data_dir)
.args(["send", "fl_test"])
.assert()
.success()
.stdout(contains("HTTP 200 OK"))
.stdout(contains("folder bulk send"))
.stdout(contains("Send summary: 1 succeeded, 0 failed"));
}
#[test]
fn top_level_send_unknown_id_fails_with_clear_error() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let data_dir = temp_dir.path();
cli_cmd(data_dir)
.args(["send", "does_not_exist"])
.assert()
.failure()
.code(1)
.stderr(contains("Could not resolve ID 'does_not_exist' as request, folder, or workspace"));
}
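The error message asserted above implies top-level `send` tries to resolve its ID argument as a request, then a folder, then a workspace. A hypothetical std-only sketch of that cascade follows; the slice lookups stand in for the real database queries, and all names here are illustrative.

```rust
// Hypothetical sketch of the request -> folder -> workspace cascade that the
// top-level `send` error message implies. Slice lookups stand in for DB queries.
enum SendTarget {
    Request(String),
    Folder(String),
    Workspace(String),
}

fn resolve_send_target(
    id: &str,
    requests: &[&str],
    folders: &[&str],
    workspaces: &[&str],
) -> Result<SendTarget, String> {
    if requests.contains(&id) {
        Ok(SendTarget::Request(id.to_string()))
    } else if folders.contains(&id) {
        Ok(SendTarget::Folder(id.to_string()))
    } else if workspaces.contains(&id) {
        Ok(SendTarget::Workspace(id.to_string()))
    } else {
        Err(format!("Could not resolve ID '{id}' as request, folder, or workspace"))
    }
}
```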