# core.solver.DistributedSolver { #mlsysim.core.solver.DistributedSolver }

```python
core.solver.DistributedSolver()
```

Resolves fleet-wide communication, synchronization, and pipelining constraints. Supports 3D parallelism (data, tensor, and pipeline parallelism) as well as network bisection and oversubscription modeling.

## Methods

| Name | Description |
| --- | --- |
| [solve](#mlsysim.core.solver.DistributedSolver.solve) | Calculates distributed training performance using the 3D parallelism model. |

### solve { #mlsysim.core.solver.DistributedSolver.solve }

```python
core.solver.DistributedSolver.solve(
    model,
    fleet,
    batch_size=1,
    precision='fp16',
    efficiency=0.5,
    tp_size=1,
    pp_size=1,
    microbatch_count=1,
    topology_override=None,
)
```

Calculates distributed training performance using the 3D parallelism model.

#### Parameters {.doc-section .doc-section-parameters}

| Name | Type | Description | Default |
|-------------------|----------|---------------------------------------------------------|------------|
| model | Workload | The model architecture to simulate. | _required_ |
| fleet | Fleet | The hardware cluster and network topology. | _required_ |
| batch_size | int | Global batch size. | `1` |
| precision | str | Numerical precision (`'fp16'`, `'fp32'`, `'int8'`). | `'fp16'` |
| efficiency | float | Achieved compute efficiency, from 0.0 to 1.0. | `0.5` |
| tp_size | int | Tensor parallelism degree (usually intra-node). | `1` |
| pp_size | int | Pipeline parallelism degree (cross-node stages). | `1` |
| microbatch_count | int | Number of microbatches for pipeline parallelism (M). | `1` |
| topology_override | str | Force a specific topology (`'ring'`, `'tree'`). | `None` |

#### Returns {.doc-section .doc-section-returns}

| Name | Type | Description |
|--------|------------------|----------------------------------------------------------------------------------|
|        | Dict\[str, Any\] | Performance metrics, including scaling efficiency and pipeline bubble fraction. |
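
#### Examples {.doc-section .doc-section-examples}

A minimal usage sketch. The import path is inferred from the anchors above, and the `model` (`Workload`) and `fleet` (`Fleet`) objects are assumed to have been constructed elsewhere; their constructors are not documented in this section, and the argument values below are illustrative only.

```python
from mlsysim.core.solver import DistributedSolver  # import path assumed from the anchor above

# `model` is a Workload and `fleet` is a Fleet, built elsewhere
# (their constructors are not covered in this section).
solver = DistributedSolver()

metrics = solver.solve(
    model,
    fleet,
    batch_size=512,        # global batch size
    precision="fp16",
    efficiency=0.5,        # assume 50% of peak compute is achieved
    tp_size=8,             # tensor parallelism inside each node
    pp_size=4,             # pipeline stages spanning nodes
    microbatch_count=16,   # M microbatches to keep the pipeline filled
)

# Dict[str, Any] with metrics such as scaling efficiency and pipeline bubble
# fraction (exact key names depend on the implementation).
print(metrics)
```

For reference, a GPipe-style schedule has a bubble fraction of roughly `(pp_size - 1) / (microbatch_count + pp_size - 1)`, so raising `microbatch_count` reduces pipeline idle time; whether this solver assumes exactly that schedule is not stated here.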