Compare commits


124 Commits

Author SHA1 Message Date
Owen Schwartz
542c70b326 Merge pull request #342 from fosrl/dev
1.12.4
2026-05-07 17:41:03 -07:00
Owen
663e98af60 Retry interval while we are disconnected 2026-05-07 17:27:01 -07:00
Owen
901ec71baf Increase max attempts 2026-05-07 17:25:13 -07:00
Owen
9bc0204f57 Merge branch 'main' into dev 2026-05-07 17:24:34 -07:00
Daniel Snider
1e77b09e3b fix(ping): decouple data-plane recovery trigger from backoff ramp
The trigger condition that decides whether to fire the data-plane
recovery flow in startPingCheck was AND-ed with `currentInterval <
maxInterval`. That clause was meant to throttle the *backoff ramp*
(don't widen the interval past 6s), but it also gated the recovery
trigger itself — a conflation that became invisibly load-bearing
once commit 8161fa6 (March 2026) bumped the default pingInterval
from 3s to 15s while leaving maxInterval at 6s. Under the new
defaults `currentInterval` starts at 15s and `15 < 6` is permanently
false, so the recovery branch never executed. Pings just kept
failing and the failure counter climbed forever, with no
"Connection to server lost" log line and no newt/ping/request
emitted on the websocket. Real-world recovery only happened when
the underlying network came back fast enough that a periodic ping
naturally succeeded again — which doesn't happen if the WireGuard
state on either end has rotated, so users were left stuck until
they restarted newt.

This is the proximate cause of the user reports in
fosrl/newt#284 (and duplicates #310, fosrl/pangolin#1004). Logs in
those issues all show ping-failure counters growing without ever
emitting "Connection to server lost", which is exactly the
fingerprint of this gate being false.

The fix is to extract the trigger decision into shouldFireRecovery
and remove currentInterval from it. Backoff is now computed in a
separate `if` in the caller, still gated by `currentInterval <
maxInterval` so the ramp is a no-op under default settings (which
is the existing behaviour, just no longer entangled with the
recovery trigger). Fixing the backoff ramp itself — making it
useful when pingInterval >= maxInterval — is a follow-up: the
priority is restoring recovery, not improving the dampening
schedule.

The new shouldFireRecovery helper is unit-tested. Its signature
intentionally omits currentInterval, so a future refactor that
re-introduces the interval-dependent gate would need to change
the function signature, which makes the historical bug harder
to reintroduce silently.
2026-05-07 16:57:31 -07:00
Owen
74fd3f3aa3 Bump version 2026-05-07 16:24:30 -07:00
Owen
e8dc19a62b Attempt to fix nix issue 2026-05-07 16:23:59 -07:00
Owen
9ff32b8a8b Fix not logging when rewriting nat 2026-05-07 16:16:47 -07:00
dependabot[bot]
9edaac9c11 chore(deps): bump aquasecurity/trivy-action from 0.35.0 to 0.36.0
Bumps [aquasecurity/trivy-action](https://github.com/aquasecurity/trivy-action) from 0.35.0 to 0.36.0.
- [Release notes](https://github.com/aquasecurity/trivy-action/releases)
- [Commits](57a97c7e78...ed142fd067)

---
updated-dependencies:
- dependency-name: aquasecurity/trivy-action
  dependency-version: 0.36.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-05-05 18:11:26 -07:00
dependabot[bot]
ced87b1d5e chore(nix): fix hash for updated go dependencies 2026-05-05 18:11:21 -07:00
dependabot[bot]
3aaebe64fb chore(deps): bump the prod-minor-updates group across 1 directory with 4 updates
Bumps the prod-minor-updates group with 3 updates in the / directory: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net) and [google.golang.org/grpc](https://github.com/grpc/grpc-go).


Updates `golang.org/x/crypto` from 0.49.0 to 0.50.0
- [Commits](https://github.com/golang/crypto/compare/v0.49.0...v0.50.0)

Updates `golang.org/x/net` from 0.52.0 to 0.53.0
- [Commits](https://github.com/golang/net/compare/v0.52.0...v0.53.0)

Updates `golang.org/x/sys` from 0.42.0 to 0.43.0
- [Commits](https://github.com/golang/sys/compare/v0.42.0...v0.43.0)

Updates `google.golang.org/grpc` from 1.80.0 to 1.81.0
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.80.0...v1.81.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.50.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: golang.org/x/net
  dependency-version: 0.53.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: golang.org/x/sys
  dependency-version: 0.43.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: google.golang.org/grpc
  dependency-version: 1.81.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-05-05 18:11:21 -07:00
Owen
27f7ca6bb9 Try to fix failover not working 2026-05-05 11:40:39 -07:00
Owen
5090907307 Update status code 2026-04-30 15:55:52 -07:00
Owen
a6533b3fa0 Fix incorrect redirect logic 2026-04-29 21:11:07 -07:00
Owen Schwartz
57aa2e2e2c Merge pull request #336 from fosrl/dev
1.12.3
2026-04-29 16:02:49 -07:00
Owen Schwartz
5724c516dc Merge pull request #334 from LaurenceJJones/private-http-websocket
enhance(http): Support websocket upgrades
2026-04-29 15:58:30 -07:00
Owen
b33c3b8849 Add some test scripts for ws and move to testing/ 2026-04-29 15:57:31 -07:00
Laurence
8e19e475bf Support websocket upgrades in private HTTP proxy
Preserve optional ResponseWriter interfaces through statusCapture so httputil.ReverseProxy can hijack upgraded websocket connections. Add a regression test covering websocket traffic through the HTTP handler path.
2026-04-29 07:12:35 +01:00
Owen Schwartz
9e92c42876 Merge pull request #333 from fosrl/dev
Dont block tcp for http unless there are targets
2026-04-28 14:51:01 -07:00
Owen
66c72bbe2e Dont block tcp for http unless there are targets 2026-04-28 14:29:55 -07:00
Owen Schwartz
ffd26f9a6d Merge pull request #331 from fosrl/dev
Follow redirects by default for backward compat
2026-04-28 10:13:49 -07:00
Owen
7610aa40bf Follow redirects by default for backward compat
Fixes #330
2026-04-28 10:10:28 -07:00
Owen Schwartz
bf33a66043 Merge pull request #328 from fosrl/dev
Quiet message
2026-04-27 20:11:01 -07:00
Owen
23caf57bf4 Quiet message 2026-04-27 20:10:35 -07:00
Owen Schwartz
df3aa60cf5 Merge pull request #327 from fosrl/dev
1.12.0
2026-04-27 20:08:45 -07:00
Owen
5c43db466a Fix crashing when removing hc 2026-04-27 15:03:36 -07:00
Owen Schwartz
cc663f1636 Merge pull request #323 from fosrl/dev
1.12.0-rc.1
2026-04-24 13:42:38 -07:00
Owen
1a67ff30c2 Hard code the ifconfig path 2026-04-24 10:39:44 -07:00
Owen
bfd61ca511 Fix transport issue 2026-04-22 21:36:16 -07:00
Owen
294f99e024 Try to add redirect 2026-04-22 20:12:51 -07:00
Owen Schwartz
af2ecf486a Merge pull request #322 from fosrl/dev
Revert nix in cicd
2026-04-22 11:40:45 -07:00
Owen
efd6743ce4 Revert nix version in cicd 2026-04-22 11:40:12 -07:00
Owen Schwartz
a0d2bb999a Merge pull request #321 from fosrl/dev
1.12.0-rc.0
2026-04-22 11:35:31 -07:00
Owen
5d889fbc09 Merge branch 'main' into dev 2026-04-22 11:34:40 -07:00
Owen
1a7cf06ff8 Merge branch 'fix-nix' into dev 2026-04-22 11:31:58 -07:00
Owen
35a334c842 Merge branch 'http-ha' into dev 2026-04-21 15:07:05 -07:00
Owen
c8e5112a2a Merge branch 'alerting-rules' into dev 2026-04-21 15:06:50 -07:00
Owen
8bfb4659c0 Remove hc id 2026-04-20 21:52:21 -07:00
Owen
309f9caad2 Fix nil pointer 2026-04-20 15:05:07 -07:00
Owen
26de268466 Add x-forwarded-for 2026-04-20 15:04:59 -07:00
Owen
0f927a37ab Find old bins and support freebsd 2026-04-16 21:47:48 -07:00
Owen
e8961c5de5 Use follow redirects bool 2026-04-15 21:36:40 -07:00
Owen
9bb8eaeadb Updating with new methods 2026-04-15 21:01:04 -07:00
Owen Schwartz
d3d10d02e8 Merge pull request #317 from fosrl/fix-nix
fix nix
2026-04-14 14:24:26 -07:00
Owen
d133d69cb9 Update nix version in cicd 2026-04-14 14:22:52 -07:00
Owen
50be4f617e Update version 2026-04-14 14:22:48 -07:00
Owen
be1cd190e7 Merge branch 'main' into dev 2026-04-14 14:17:42 -07:00
Owen
5c9d13bcca Add ldflags version to local 2026-04-13 17:00:06 -07:00
Owen Schwartz
dc2e23380a Merge pull request #306 from LaurenceJJones/investigate/heap-leak-udp-proxy
fix(proxy): reclaim idle UDP flows and make timeout configurable
2026-04-13 10:27:37 -07:00
Marc Schäfer
3d2b73d417 Merge pull request #303 from fosrl/dependabot/go_modules/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp-1.43.0
chore(deps): bump go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp from 1.38.0 to 1.43.0
2026-04-12 22:49:53 +02:00
Owen
12776d65c1 Add logging 2026-04-11 21:56:28 -07:00
Laurence
0569525743 Merge remote-tracking branch 'upstream/dev' into investigate/heap-leak-udp-proxy
Made-with: Cursor

# Conflicts:
#	proxy/manager.go
2026-04-10 13:36:13 +01:00
Owen
342af9e42d Switch to scheme 2026-04-09 17:21:36 -04:00
Owen
092535441e Pass the new data down from the websocket 2026-04-09 16:13:19 -04:00
Owen
5848c8d4b4 Adjust to use data saved inside of the subnet rule 2026-04-09 16:04:11 -04:00
Owen Schwartz
6becf0f719 Merge pull request #277 from LaurenceJJones/refactor/proxy-udp-buffer-pool
perf(proxy): add sync.Pool for UDP buffers
2026-04-09 13:09:06 -04:00
Owen
47c646bc33 Basic http is working 2026-04-09 11:43:26 -04:00
Laurence
4d8d00241d perf(proxy): add sync.Pool for UDP buffers
- Add udpBufferPool for reusable 65507-byte UDP packet buffers
- Add getUDPBuffer() and putUDPBuffer() helper functions
- Clear buffer contents before returning to pool to prevent data leakage
- Apply pooling to both main handler buffer and per-client goroutine buffers
- Reduces GC pressure from frequent large allocations during UDP proxying

Made-with: Cursor
2026-04-09 15:59:03 +01:00
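The pooling pattern listed above can be sketched as follows; the constant value and helper names match the commit message, but the bodies are an illustrative reconstruction, not the actual proxy code:

```go
package main

import (
	"fmt"
	"sync"
)

// maxUDPPacketSize is the largest possible UDP payload: 65535 bytes
// minus the 8-byte UDP header and 20-byte IPv4 header.
const maxUDPPacketSize = 65507

// udpBufferPool reuses packet buffers so every read doesn't allocate a
// fresh ~64KB slice, which is what drove GC pressure during proxying.
var udpBufferPool = sync.Pool{
	New: func() any { return make([]byte, maxUDPPacketSize) },
}

func getUDPBuffer() []byte { return udpBufferPool.Get().([]byte) }

func putUDPBuffer(buf []byte) {
	// Zero the buffer before returning it to the pool so stale packet
	// data cannot leak into a later read that only partially fills it.
	for i := range buf {
		buf[i] = 0
	}
	udpBufferPool.Put(buf)
}

func main() {
	buf := getUDPBuffer()
	fmt.Println(len(buf)) // 65507
	copy(buf, []byte("datagram payload"))
	putUDPBuffer(buf)
}
```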
Laurence
31f899588f fix(proxy): reclaim idle UDP flows and make timeout configurable 2026-04-09 15:45:55 +01:00
dependabot[bot]
0104fb9b2d chore(nix): fix hash for updated go dependencies 2026-04-09 02:01:28 +00:00
dependabot[bot]
6dd9c4b0d1 chore(deps): bump go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp
Bumps [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp](https://github.com/open-telemetry/opentelemetry-go) from 1.38.0 to 1.43.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.38.0...v1.43.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp
  dependency-version: 1.43.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-09 02:00:10 +00:00
Owen Schwartz
e4daa620c7 Merge pull request #299 from fosrl/dependabot/go_modules/prod-minor-updates-7fd0df0afe
chore(deps): bump the prod-minor-updates group with 2 updates
2026-04-08 21:58:53 -04:00
Owen Schwartz
7e1e3408d5 Merge pull request #302 from LaurenceJJones/fix/config-file-provision-save
fix: allow empty config file bootstrap before provisioning
2026-04-08 21:58:07 -04:00
Laurence
d7c3c38d24 fix: allow empty config file bootstrap before provisioning
Treat an empty CONFIG_FILE as initial state instead of failing JSON parse, so provisioning can proceed and credentials can be saved. Ref: fosrl/pangolin#2812
2026-04-08 14:13:13 +01:00
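The bootstrap fix can be sketched like this; the `config` struct and `loadConfig` name are illustrative assumptions, not newt's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type config struct {
	ID     string `json:"id"`
	Secret string `json:"secret"`
}

// loadConfig sketches the fix: a zero-length file is treated as initial
// state instead of being handed to the JSON decoder, which fails on
// empty input. Provisioning can then proceed and save credentials.
func loadConfig(data []byte) (*config, error) {
	if len(data) == 0 {
		return &config{}, nil // empty CONFIG_FILE: fresh install
	}
	var c config
	if err := json.Unmarshal(data, &c); err != nil {
		return nil, err // genuinely malformed JSON still fails
	}
	return &c, nil
}

func main() {
	c, err := loadConfig(nil)
	fmt.Println(err == nil, c.ID == "") // true true
}
```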
Owen
27e471942e Add CODEOWNERS 2026-04-07 11:34:18 -04:00
dependabot[bot]
f5f2ba38d7 chore(nix): fix hash for updated go dependencies 2026-04-07 09:47:01 +00:00
dependabot[bot]
8cf3942366 chore(deps): bump the prod-minor-updates group with 2 updates
Bumps the prod-minor-updates group with 2 updates: [go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp](https://github.com/open-telemetry/opentelemetry-go-contrib) and [go.opentelemetry.io/contrib/instrumentation/runtime](https://github.com/open-telemetry/opentelemetry-go-contrib).


Updates `go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp` from 0.67.0 to 0.68.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go-contrib/compare/zpages/v0.67.0...zpages/v0.68.0)

Updates `go.opentelemetry.io/contrib/instrumentation/runtime` from 0.67.0 to 0.68.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go-contrib/compare/zpages/v0.67.0...zpages/v0.68.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
  dependency-version: 0.68.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: go.opentelemetry.io/contrib/instrumentation/runtime
  dependency-version: 0.68.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-07 09:45:12 +00:00
Marc Schäfer
f7bb240c74 Merge pull request #293 from fosrl/dependabot/github_actions/actions/attest-build-provenance-4.1.0
chore(deps): bump actions/attest-build-provenance from 3.2.0 to 4.1.0
2026-04-06 17:28:26 +02:00
Marc Schäfer
cbd17ff249 Merge pull request #294 from fosrl/dependabot/github_actions/sigstore/cosign-installer-4.1.1
chore(deps): bump sigstore/cosign-installer from 4.0.0 to 4.1.1
2026-04-06 17:28:05 +02:00
Marc Schäfer
b7f2445cfd Merge pull request #295 from fosrl/dependabot/github_actions/docker/login-action-4.1.0
chore(deps): bump docker/login-action from 4.0.0 to 4.1.0
2026-04-06 17:27:44 +02:00
Marc Schäfer
88d954fc64 Merge pull request #296 from fosrl/dependabot/github_actions/softprops/action-gh-release-2.6.1
chore(deps): bump softprops/action-gh-release from 2.4.2 to 2.6.1
2026-04-06 17:27:25 +02:00
Marc Schäfer
9b2d1f2a10 Merge pull request #297 from fosrl/dependabot/github_actions/docker/setup-qemu-action-4.0.0
chore(deps): bump docker/setup-qemu-action from 3.7.0 to 4.0.0
2026-04-06 17:27:07 +02:00
Marc Schäfer
caa5a6a476 Merge pull request #298 from fosrl/dependabot/go_modules/prod-minor-updates-497a73c3c2
chore(deps): bump the prod-minor-updates group with 13 updates
2026-04-06 17:23:16 +02:00
dependabot[bot]
74183952fb chore(nix): fix hash for updated go dependencies 2026-04-06 10:12:08 +00:00
dependabot[bot]
05fc12f66e chore(deps): bump the prod-minor-updates group with 13 updates
Bumps the prod-minor-updates group with 13 updates:

| Package | From | To |
| --- | --- | --- |
| [go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp](https://github.com/open-telemetry/opentelemetry-go-contrib) | `0.66.0` | `0.67.0` |
| [go.opentelemetry.io/contrib/instrumentation/runtime](https://github.com/open-telemetry/opentelemetry-go-contrib) | `0.66.0` | `0.67.0` |
| [go.opentelemetry.io/otel](https://github.com/open-telemetry/opentelemetry-go) | `1.41.0` | `1.42.0` |
| [go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc](https://github.com/open-telemetry/opentelemetry-go) | `1.41.0` | `1.43.0` |
| [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc](https://github.com/open-telemetry/opentelemetry-go) | `1.41.0` | `1.43.0` |
| [go.opentelemetry.io/otel/exporters/prometheus](https://github.com/open-telemetry/opentelemetry-go) | `0.63.0` | `0.65.0` |
| [go.opentelemetry.io/otel/metric](https://github.com/open-telemetry/opentelemetry-go) | `1.41.0` | `1.43.0` |
| [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go) | `1.41.0` | `1.43.0` |
| [go.opentelemetry.io/otel/sdk/metric](https://github.com/open-telemetry/opentelemetry-go) | `1.41.0` | `1.43.0` |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.48.0` | `0.49.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.51.0` | `0.52.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.41.0` | `0.42.0` |
| [google.golang.org/grpc](https://github.com/grpc/grpc-go) | `1.79.3` | `1.80.0` |


Updates `go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp` from 0.66.0 to 0.67.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go-contrib/compare/zpages/v0.66.0...zpages/v0.67.0)

Updates `go.opentelemetry.io/contrib/instrumentation/runtime` from 0.66.0 to 0.67.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go-contrib/compare/zpages/v0.66.0...zpages/v0.67.0)

Updates `go.opentelemetry.io/otel` from 1.41.0 to 1.42.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.41.0...v1.42.0)

Updates `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` from 1.41.0 to 1.43.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.41.0...v1.43.0)

Updates `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc` from 1.41.0 to 1.43.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.41.0...v1.43.0)

Updates `go.opentelemetry.io/otel/exporters/prometheus` from 0.63.0 to 0.65.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/exporters/prometheus/v0.63.0...exporters/prometheus/v0.65.0)

Updates `go.opentelemetry.io/otel/metric` from 1.41.0 to 1.43.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.41.0...v1.43.0)

Updates `go.opentelemetry.io/otel/sdk` from 1.41.0 to 1.43.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.41.0...v1.43.0)

Updates `go.opentelemetry.io/otel/sdk/metric` from 1.41.0 to 1.43.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.41.0...v1.43.0)

Updates `golang.org/x/crypto` from 0.48.0 to 0.49.0
- [Commits](https://github.com/golang/crypto/compare/v0.48.0...v0.49.0)

Updates `golang.org/x/net` from 0.51.0 to 0.52.0
- [Commits](https://github.com/golang/net/compare/v0.51.0...v0.52.0)

Updates `golang.org/x/sys` from 0.41.0 to 0.42.0
- [Commits](https://github.com/golang/sys/compare/v0.41.0...v0.42.0)

Updates `google.golang.org/grpc` from 1.79.3 to 1.80.0
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.79.3...v1.80.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
  dependency-version: 0.67.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: go.opentelemetry.io/contrib/instrumentation/runtime
  dependency-version: 0.67.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: go.opentelemetry.io/otel
  dependency-version: 1.42.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc
  dependency-version: 1.43.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc
  dependency-version: 1.43.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: go.opentelemetry.io/otel/exporters/prometheus
  dependency-version: 0.65.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: go.opentelemetry.io/otel/metric
  dependency-version: 1.43.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: go.opentelemetry.io/otel/sdk
  dependency-version: 1.43.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: go.opentelemetry.io/otel/sdk/metric
  dependency-version: 1.43.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: golang.org/x/crypto
  dependency-version: 0.49.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: golang.org/x/net
  dependency-version: 0.52.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: golang.org/x/sys
  dependency-version: 0.42.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: google.golang.org/grpc
  dependency-version: 1.80.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-06 10:10:38 +00:00
dependabot[bot]
56cc225bd3 chore(deps): bump docker/setup-qemu-action from 3.7.0 to 4.0.0
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 3.7.0 to 4.0.0.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](c7c5346462...ce360397dd)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-06 10:10:31 +00:00
dependabot[bot]
fee7fbe20a chore(deps): bump softprops/action-gh-release from 2.4.2 to 2.6.1
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.4.2 to 2.6.1.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](5be0e66d93...153bb8e044)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-version: 2.6.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-06 10:10:27 +00:00
dependabot[bot]
bc6661faa5 chore(deps): bump docker/login-action from 4.0.0 to 4.1.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 4.0.0 to 4.1.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](b45d80f862...4907a6ddec)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 4.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-06 10:10:21 +00:00
dependabot[bot]
db6cabc6d7 chore(deps): bump sigstore/cosign-installer from 4.0.0 to 4.1.1
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 4.0.0 to 4.1.1.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](faadad0cce...cad07c2e89)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 4.1.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-06 10:10:17 +00:00
dependabot[bot]
3f32c9e8ef chore(deps): bump actions/attest-build-provenance from 3.2.0 to 4.1.0
Bumps [actions/attest-build-provenance](https://github.com/actions/attest-build-provenance) from 3.2.0 to 4.1.0.
- [Release notes](https://github.com/actions/attest-build-provenance/releases)
- [Changelog](https://github.com/actions/attest-build-provenance/blob/main/RELEASE.md)
- [Commits](96278af6ca...a2bbfa2537)

---
updated-dependencies:
- dependency-name: actions/attest-build-provenance
  dependency-version: 4.1.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-06 10:10:11 +00:00
Owen Schwartz
cd4782265a Merge pull request #290 from fosrl/dev
1.11.0
2026-04-03 17:37:05 -04:00
Owen
184bfb12d6 Delete bad bp 2026-04-03 17:36:48 -04:00
Owen
2e02c9b7a9 Remove files 2026-04-03 16:49:09 -04:00
Owen Schwartz
5c329be1f3 Merge pull request #286 from fosrl/dependabot/go_modules/prod-patch-updates-a06038febc
chore(deps): bump github.com/gaissmai/bart from 0.26.0 to 0.26.1 in the prod-patch-updates group across 1 directory
2026-04-03 16:47:50 -04:00
Marc Schäfer
732e788c66 Merge pull request #261 from fosrl/dependabot/github_actions/actions/stale-10.2.0
chore(deps): bump actions/stale from 10.1.1 to 10.2.0
2026-04-03 14:36:14 +02:00
Marc Schäfer
aa42b3623d Merge pull request #262 from fosrl/dependabot/github_actions/docker/setup-buildx-action-4.0.0
chore(deps): bump docker/setup-buildx-action from 3.12.0 to 4.0.0
2026-04-03 14:35:45 +02:00
Marc Schäfer
4f42560e26 Merge pull request #263 from fosrl/dependabot/github_actions/aquasecurity/trivy-action-0.35.0
chore(deps): bump aquasecurity/trivy-action from 0.34.2 to 0.35.0
2026-04-03 14:33:52 +02:00
dependabot[bot]
c2187de482 chore(deps): bump docker/setup-buildx-action from 3.12.0 to 4.0.0
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.12.0 to 4.0.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](8d2750c68a...4d04d5d948)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-03 12:33:48 +00:00
dependabot[bot]
5ced7d6909 chore(deps): bump actions/stale from 10.1.1 to 10.2.0
Bumps [actions/stale](https://github.com/actions/stale) from 10.1.1 to 10.2.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](997185467f...b5d41d4e1d)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-03 12:33:07 +00:00
Marc Schäfer
4e6e79ad21 Merge pull request #264 from fosrl/dependabot/github_actions/docker/login-action-4.0.0
chore(deps): bump docker/login-action from 3.7.0 to 4.0.0
2026-04-03 14:32:59 +02:00
Marc Schäfer
abe6e2e400 Merge branch 'main' into dependabot/github_actions/aquasecurity/trivy-action-0.35.0 2026-04-03 14:32:11 +02:00
Marc Schäfer
f432a17c16 Merge pull request #265 from fosrl/dependabot/github_actions/docker/build-push-action-7.0.0
chore(deps): bump docker/build-push-action from 6.19.2 to 7.0.0
2026-04-03 14:31:45 +02:00
Marc Schäfer
6f96169ff1 Merge branch 'main' into dependabot/github_actions/docker/login-action-4.0.0 2026-04-03 14:31:00 +02:00
Marc Schäfer
575942c4be Merge branch 'main' into dependabot/github_actions/docker/build-push-action-7.0.0 2026-04-03 14:29:10 +02:00
dependabot[bot]
16864fc1d7 chore(nix): fix hash for updated go dependencies 2026-04-03 12:28:34 +00:00
dependabot[bot]
f925c681d2 chore(deps): bump github.com/gaissmai/bart
Bumps the prod-patch-updates group with 1 update in the / directory: [github.com/gaissmai/bart](https://github.com/gaissmai/bart).


Updates `github.com/gaissmai/bart` from 0.26.0 to 0.26.1
- [Release notes](https://github.com/gaissmai/bart/releases)
- [Commits](https://github.com/gaissmai/bart/compare/v0.26.0...v0.26.1)

---
updated-dependencies:
- dependency-name: github.com/gaissmai/bart
  dependency-version: 0.26.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: prod-patch-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-03 12:27:19 +00:00
Marc Schäfer
e01b0ae9c7 Merge pull request #281 from fosrl/dependabot/go_modules/google.golang.org/grpc-1.79.3
chore(deps): bump google.golang.org/grpc from 1.79.1 to 1.79.3
2026-04-03 14:26:08 +02:00
Owen
f4d071fe27 Add provisioning blueprint file 2026-04-02 21:39:59 -04:00
Owen
8d82460a76 Send health checks to the server on reconnect 2026-03-31 17:06:07 -07:00
Owen
5208117c56 Add name to provisioning 2026-03-30 17:18:22 -07:00
Owen
381f5a619c Merge branch 'main' into logging-provision 2026-03-29 21:19:53 -07:00
Owen Schwartz
b6f13a1b55 Merge pull request #285 from fosrl/dev
1.10.4
2026-03-29 12:25:10 -07:00
Owen
cdaf4f7898 Add chain id to ping 2026-03-29 12:00:17 -07:00
Owen
d4a5ac8682 Merge branch 'main' into dev 2026-03-29 11:40:34 -07:00
Owen
1057013b50 Add chainId based dedup 2026-03-27 11:55:34 -07:00
Owen
fc4b375bf1 Allow blueprint interpolation for env vars 2026-03-26 20:05:04 -07:00
Owen
baca04ee58 Add --config-file 2026-03-26 17:31:04 -07:00
Owen
b43572dd8d Provisioning key working 2026-03-26 17:23:19 -07:00
Owen
69019d5655 Process log to form sessions 2026-03-24 17:26:44 -07:00
Owen
0f57985b6f Saving and sending access logs pass 1 2026-03-23 16:39:01 -07:00
dependabot[bot]
212bdf765a chore(nix): fix hash for updated go dependencies 2026-03-19 02:16:03 +00:00
dependabot[bot]
b045a0f5d4 chore(deps): bump google.golang.org/grpc from 1.79.1 to 1.79.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.79.1 to 1.79.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.79.1...v1.79.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-version: 1.79.3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-19 02:14:44 +00:00
Owen Schwartz
a2683eb385 Merge pull request #274 from LaurenceJJones/refactor/proxy-cleanup-basics
refactor(proxy): cleanup basics - constants, remove dead code, fix de…
2026-03-18 15:39:43 -07:00
Owen Schwartz
d3722c2519 Merge pull request #280 from LaurenceJJones/fix/healthcheck-ipv6
fix(healthcheck): Support ipv6 healthchecks
2026-03-18 15:38:15 -07:00
Laurence
8fda35db4f fix(healthcheck): Support ipv6 healthchecks
Currently we do fmt.Sprintf on hostname and port, which does not properly handle IPv6 addresses. Instead of changing pangolin to send a bracketed address, net.JoinHostPort can do this for us, since we don't need to parse a formatted string.
2026-03-18 13:37:31 +00:00
Owen Schwartz
de4353f2e6 Merge pull request #269 from LaurenceJJones/feature/pprof-endpoint
feat(admin): Add pprof endpoints
2026-03-17 11:42:08 -07:00
Laurence
13448f76aa refactor(proxy): cleanup basics - constants, remove dead code, fix deprecated calls
- Add maxUDPPacketSize constant to replace magic number 65507
- Remove commented-out code in Stop()
- Replace deprecated ne.Temporary() with errors.Is(err, net.ErrClosed)
- Use errors.As instead of type assertion for net.Error
- Use errors.Is for closed connection checks instead of string matching
- Handle closed connection gracefully when reading from UDP target
2026-03-16 14:11:14 +00:00
Laurence
836144aebf feat(admin): Add pprof endpoints
To aid in debugging user issues with memory use or leaks, users need to be able to configure pprof, wait, and then provide us the output files so we can see where memory is used or leaked in actual runtimes.
2026-03-12 09:22:50 +00:00
dependabot[bot]
d7741df514 chore(nix): fix hash for updated go dependencies 2026-03-09 10:29:50 +00:00
dependabot[bot]
8e188933a2 chore(nix): fix hash for updated go dependencies 2026-03-09 10:29:45 +00:00
dependabot[bot]
a13c7c6e65 chore(nix): fix hash for updated go dependencies 2026-03-09 10:29:43 +00:00
dependabot[bot]
bc44ca1aba chore(deps): bump docker/build-push-action from 6.19.2 to 7.0.0
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.19.2 to 7.0.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](10e90e3645...d08e5c354a)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 7.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-09 10:28:02 +00:00
dependabot[bot]
a76089db98 chore(deps): bump docker/login-action from 3.7.0 to 4.0.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.7.0 to 4.0.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](c94ce9fb46...b45d80f862)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-09 10:27:59 +00:00
dependabot[bot]
627ec2fdbc chore(deps): bump aquasecurity/trivy-action from 0.34.2 to 0.35.0
Bumps [aquasecurity/trivy-action](https://github.com/aquasecurity/trivy-action) from 0.34.2 to 0.35.0.
- [Release notes](https://github.com/aquasecurity/trivy-action/releases)
- [Commits](97e0b3872f...57a97c7e78)

---
updated-dependencies:
- dependency-name: aquasecurity/trivy-action
  dependency-version: 0.35.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-09 10:27:55 +00:00
34 changed files with 3469 additions and 405 deletions

.github/CODEOWNERS vendored Normal file

@@ -0,0 +1 @@
* @oschwartz10612 @miloschwartz


@@ -127,6 +127,11 @@ jobs:
echo "Tag $VERSION already exists" >&2
exit 1
fi
if ! git diff --quiet flake.nix; then
git add flake.nix
git commit -m "chore(nix): update version to $VERSION"
git push origin "$TARGET_BRANCH"
fi
git tag -a "$VERSION" -m "Release $VERSION"
git push origin "refs/tags/$VERSION"
@@ -232,20 +237,20 @@ jobs:
echo "Checked out $(git rev-parse --short HEAD) for tag ${TAG}"
#- name: Set up QEMU
# uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3.7.0
# uses: docker/setup-qemu-action@ce360397dd3f832beb865e1373c09c0e9f86d70a # v4.0.0
#- name: Set up Docker Buildx
# uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
# uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Log in to Docker Hub
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: docker.io
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -259,12 +264,12 @@ jobs:
echo "DOCKERHUB_IMAGE=${DOCKERHUB_IMAGE,,}" >> "$GITHUB_ENV"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
# Build ONLY amd64 and push arch-specific tag suffixes used later for manifest creation.
- name: Build and push (amd64 -> *:amd64-TAG)
id: build_amd
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6.19.2
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
with:
context: .
push: true
@@ -363,14 +368,14 @@ jobs:
echo "Checked out $(git rev-parse --short HEAD) for tag ${TAG}"
- name: Log in to Docker Hub
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: docker.io
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -384,12 +389,12 @@ jobs:
echo "DOCKERHUB_IMAGE=${DOCKERHUB_IMAGE,,}" >> "$GITHUB_ENV"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
# Build ONLY arm64 and push arch-specific tag suffixes used later for manifest creation.
- name: Build and push (arm64 -> *:arm64-TAG)
id: build_arm
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6.19.2
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
with:
context: .
push: true
@@ -478,14 +483,14 @@ jobs:
echo "Checked out $(git rev-parse --short HEAD) for tag ${TAG}"
- name: Log in to Docker Hub
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: docker.io
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -499,14 +504,14 @@ jobs:
echo "DOCKERHUB_IMAGE=${DOCKERHUB_IMAGE,,}" >> "$GITHUB_ENV"
- name: Set up QEMU
uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3.7.0
uses: docker/setup-qemu-action@ce360397dd3f832beb865e1373c09c0e9f86d70a # v4.0.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Build and push (arm/v7 -> *:armv7-TAG)
id: build_armv7
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6.19.2
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
with:
context: .
push: true
@@ -551,14 +556,14 @@ jobs:
#PUBLISH_MINOR: ${{ github.event_name == 'workflow_dispatch' && inputs.publish_minor || vars.PUBLISH_MINOR }}
steps:
- name: Log in to Docker Hub
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: docker.io
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -572,7 +577,7 @@ jobs:
echo "DOCKERHUB_IMAGE=${DOCKERHUB_IMAGE,,}" >> "$GITHUB_ENV"
- name: Set up Docker Buildx (needed for imagetools)
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Create & push multi-arch index (GHCR :TAG) via imagetools
shell: bash
@@ -656,14 +661,14 @@ jobs:
go-version-file: go.mod
- name: Log in to Docker Hub
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: docker.io
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -687,7 +692,7 @@ jobs:
sudo apt-get install -y jq
- name: Set up Docker Buildx (needed for imagetools)
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Resolve multi-arch digest refs (by TAG)
shell: bash
@@ -727,7 +732,7 @@ jobs:
fi
- name: Attest build provenance (GHCR) (digest)
uses: actions/attest-build-provenance@96278af6caaf10aea03fd8d33a09a777ca52d62f # v3.2.0
uses: actions/attest-build-provenance@a2bbfa25375fe432b6a289bc6b6cd05ecd0c4c32 # v4.1.0
with:
subject-name: ${{ env.GHCR_IMAGE }}
subject-digest: ${{ env.GHCR_DIGEST }}
@@ -737,7 +742,7 @@ jobs:
- name: Attest build provenance (Docker Hub)
continue-on-error: true
if: ${{ env.DH_DIGEST != '' }}
uses: actions/attest-build-provenance@96278af6caaf10aea03fd8d33a09a777ca52d62f # v3.2.0
uses: actions/attest-build-provenance@a2bbfa25375fe432b6a289bc6b6cd05ecd0c4c32 # v4.1.0
with:
subject-name: index.docker.io/${{ github.repository_owner }}/${{ github.event.repository.name }}
subject-digest: ${{ env.DH_DIGEST }}
@@ -745,7 +750,7 @@ jobs:
show-summary: true
- name: Install cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
with:
cosign-release: "v3.0.2"
@@ -759,7 +764,7 @@ jobs:
cosign public-key --key env://COSIGN_PRIVATE_KEY >/dev/null
- name: Generate SBOM (SPDX JSON) from GHCR digest
uses: aquasecurity/trivy-action@97e0b3872f55f89b95b2f65b3dbab56962816478 # v0.34.2
uses: aquasecurity/trivy-action@ed142fd0673e97e23eac54620cfb913e5ce36c25 # v0.36.0
with:
image-ref: ${{ env.GHCR_REF }}
format: spdx-json
@@ -893,7 +898,7 @@ jobs:
make -j 10 go-build-release VERSION="${TAG}"
- name: Create GitHub Release (draft)
uses: softprops/action-gh-release@5be0e66d93ac7ed76da52eca8bb058f665c3a5fe # v2.4.2
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2.6.1
with:
tag_name: ${{ env.TAG }}
generate_release_notes: true


@@ -23,7 +23,7 @@ jobs:
skopeo --version
- name: Install cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
- name: Input check
run: |


@@ -14,7 +14,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
- uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
with:
days-before-stale: 14
days-before-close: 14


@@ -6,7 +6,7 @@ VERSION ?= dev
LDFLAGS = -X main.newtVersion=$(VERSION)
local:
CGO_ENABLED=0 go build -o ./bin/newt
CGO_ENABLED=0 go build -ldflags "$(LDFLAGS)" -o ./bin/newt
docker-build:
docker build -t fosrl/newt:latest .


@@ -1,37 +0,0 @@
resources:
resource-nice-id:
name: this is my resource
protocol: http
full-domain: level1.test3.example.com
host-header: example.com
tls-server-name: example.com
auth:
pincode: 123456
password: sadfasdfadsf
sso-enabled: true
sso-roles:
- Member
sso-users:
- owen@pangolin.net
whitelist-users:
- owen@pangolin.net
targets:
# - site: glossy-plains-viscacha-rat
- hostname: localhost
method: http
port: 8000
healthcheck:
port: 8000
hostname: localhost
# - site: glossy-plains-viscacha-rat
- hostname: localhost
method: http
port: 8001
resource-nice-id2:
name: this is other resource
protocol: tcp
proxy-port: 3000
targets:
# - site: glossy-plains-viscacha-rat
- hostname: localhost
port: 3000


@@ -2,6 +2,8 @@ package clients
import (
"context"
"crypto/rand"
"encoding/hex"
"encoding/json"
"fmt"
"net"
@@ -34,15 +36,21 @@ type WgConfig struct {
IpAddress string `json:"ipAddress"`
Peers []Peer `json:"peers"`
Targets []Target `json:"targets"`
ChainId string `json:"chainId"`
}
type Target struct {
SourcePrefix string `json:"sourcePrefix"`
SourcePrefixes []string `json:"sourcePrefixes"`
DestPrefix string `json:"destPrefix"`
RewriteTo string `json:"rewriteTo,omitempty"`
DisableIcmp bool `json:"disableIcmp,omitempty"`
PortRange []PortRange `json:"portRange,omitempty"`
SourcePrefix string `json:"sourcePrefix"`
SourcePrefixes []string `json:"sourcePrefixes"`
DestPrefix string `json:"destPrefix"`
RewriteTo string `json:"rewriteTo,omitempty"`
DisableIcmp bool `json:"disableIcmp,omitempty"`
PortRange []PortRange `json:"portRange,omitempty"`
ResourceId int `json:"resourceId,omitempty"`
Protocol string `json:"protocol,omitempty"` // for now practically either http or https
HTTPTargets []netstack2.HTTPTarget `json:"httpTargets,omitempty"` // for http protocol, list of downstream services to load balance across
TLSCert string `json:"tlsCert,omitempty"` // PEM-encoded certificate for incoming HTTPS termination
TLSKey string `json:"tlsKey,omitempty"` // PEM-encoded private key for incoming HTTPS termination
}
type PortRange struct {
@@ -70,19 +78,20 @@ type PeerReading struct {
}
type WireGuardService struct {
interfaceName string
mtu int
client *websocket.Client
config WgConfig
key wgtypes.Key
newtId string
lastReadings map[string]PeerReading
mu sync.Mutex
Port uint16
host string
serverPubKey string
token string
stopGetConfig func()
interfaceName string
mtu int
client *websocket.Client
config WgConfig
key wgtypes.Key
newtId string
lastReadings map[string]PeerReading
mu sync.Mutex
Port uint16
host string
serverPubKey string
token string
stopGetConfig func()
pendingConfigChainId string
// Netstack fields
tun tun.Device
tnet *netstack2.Net
@@ -107,6 +116,13 @@ type WireGuardService struct {
wgTesterServer *wgtester.Server
}
// generateChainId generates a random chain ID for deduplicating round-trip messages.
func generateChainId() string {
b := make([]byte, 8)
_, _ = rand.Read(b)
return hex.EncodeToString(b)
}
func NewWireGuardService(interfaceName string, port uint16, mtu int, host string, newtId string, wsClient *websocket.Client, dns string, useNativeInterface bool) (*WireGuardService, error) {
key, err := wgtypes.GeneratePrivateKey()
if err != nil {
@@ -195,6 +211,15 @@ func (s *WireGuardService) Close() {
s.stopGetConfig = nil
}
// Flush access logs before tearing down the tunnel
if s.tnet != nil {
if ph := s.tnet.GetProxyHandler(); ph != nil {
if al := ph.GetAccessLogger(); al != nil {
al.Close()
}
}
}
// Stop the direct UDP relay first
s.StopDirectUDPRelay()
@@ -441,9 +466,12 @@ func (s *WireGuardService) LoadRemoteConfig() error {
s.stopGetConfig()
s.stopGetConfig = nil
}
chainId := generateChainId()
s.pendingConfigChainId = chainId
s.stopGetConfig = s.client.SendMessageInterval("newt/wg/get-config", map[string]interface{}{
"publicKey": s.key.PublicKey().String(),
"port": s.Port,
"chainId": chainId,
}, 2*time.Second)
logger.Debug("Requesting WireGuard configuration from remote server")
@@ -468,6 +496,17 @@ func (s *WireGuardService) handleConfig(msg websocket.WSMessage) {
logger.Info("Error unmarshaling target data: %v", err)
return
}
// Deduplicate using chainId: discard responses that don't match the
// pending request, or that we have already processed.
if config.ChainId != "" {
if config.ChainId != s.pendingConfigChainId {
logger.Debug("Discarding duplicate/stale newt/wg/get-config response (chainId=%s, expected=%s)", config.ChainId, s.pendingConfigChainId)
return
}
s.pendingConfigChainId = "" // consume the chainId so further duplicates are rejected
}
s.config = config
if s.stopGetConfig != nil {
@@ -662,7 +701,18 @@ func (s *WireGuardService) syncTargets(desiredTargets []Target) error {
})
}
s.tnet.AddProxySubnetRule(sourcePrefix, destPrefix, target.RewriteTo, portRanges, target.DisableIcmp)
s.tnet.AddProxySubnetRule(netstack2.SubnetRule{
SourcePrefix: sourcePrefix,
DestPrefix: destPrefix,
RewriteTo: target.RewriteTo,
PortRanges: portRanges,
DisableIcmp: target.DisableIcmp,
ResourceId: target.ResourceId,
Protocol: target.Protocol,
HTTPTargets: target.HTTPTargets,
TLSCert: target.TLSCert,
TLSKey: target.TLSKey,
})
logger.Info("Added target %s -> %s during sync", target.SourcePrefix, target.DestPrefix)
}
}
@@ -793,6 +843,20 @@ func (s *WireGuardService) ensureWireguardInterface(wgconfig WgConfig) error {
s.TunnelIP = tunnelIP.String()
// Configure the access log sender to ship compressed session logs via websocket
s.tnet.SetAccessLogSender(func(data string) error {
return s.client.SendMessageNoLog("newt/access-log", map[string]interface{}{
"compressed": data,
})
})
// Configure the HTTP request log sender to ship compressed request logs via websocket
s.tnet.SetHTTPRequestLogSender(func(data string) error {
return s.client.SendMessageNoLog("newt/request-log", map[string]interface{}{
"compressed": data,
})
})
// Create WireGuard device using the shared bind
s.device = device.NewDevice(s.tun, s.sharedBind, device.NewLogger(
device.LogLevelSilent, // Use silent logging by default - could be made configurable
@@ -913,7 +977,18 @@ func (s *WireGuardService) ensureTargets(targets []Target) error {
if err != nil {
return fmt.Errorf("invalid CIDR %s: %v", sp, err)
}
s.tnet.AddProxySubnetRule(sourcePrefix, destPrefix, target.RewriteTo, portRanges, target.DisableIcmp)
s.tnet.AddProxySubnetRule(netstack2.SubnetRule{
SourcePrefix: sourcePrefix,
DestPrefix: destPrefix,
RewriteTo: target.RewriteTo,
PortRanges: portRanges,
DisableIcmp: target.DisableIcmp,
ResourceId: target.ResourceId,
Protocol: target.Protocol,
HTTPTargets: target.HTTPTargets,
TLSCert: target.TLSCert,
TLSKey: target.TLSKey,
})
logger.Info("Added target subnet from %s to %s rewrite to %s with port ranges: %v", sp, target.DestPrefix, target.RewriteTo, target.PortRange)
}
}
@@ -1306,7 +1381,18 @@ func (s *WireGuardService) handleAddTarget(msg websocket.WSMessage) {
logger.Info("Invalid CIDR %s: %v", sp, err)
continue
}
s.tnet.AddProxySubnetRule(sourcePrefix, destPrefix, target.RewriteTo, portRanges, target.DisableIcmp)
s.tnet.AddProxySubnetRule(netstack2.SubnetRule{
SourcePrefix: sourcePrefix,
DestPrefix: destPrefix,
RewriteTo: target.RewriteTo,
PortRanges: portRanges,
DisableIcmp: target.DisableIcmp,
ResourceId: target.ResourceId,
Protocol: target.Protocol,
HTTPTargets: target.HTTPTargets,
TLSCert: target.TLSCert,
TLSKey: target.TLSKey,
})
logger.Info("Added target subnet from %s to %s rewrite to %s with port ranges: %v", sp, target.DestPrefix, target.RewriteTo, target.PortRange)
}
}
@@ -1424,7 +1510,18 @@ func (s *WireGuardService) handleUpdateTarget(msg websocket.WSMessage) {
logger.Info("Invalid CIDR %s: %v", sp, err)
continue
}
s.tnet.AddProxySubnetRule(sourcePrefix, destPrefix, target.RewriteTo, portRanges, target.DisableIcmp)
s.tnet.AddProxySubnetRule(netstack2.SubnetRule{
SourcePrefix: sourcePrefix,
DestPrefix: destPrefix,
RewriteTo: target.RewriteTo,
PortRanges: portRanges,
DisableIcmp: target.DisableIcmp,
ResourceId: target.ResourceId,
Protocol: target.Protocol,
HTTPTargets: target.HTTPTargets,
TLSCert: target.TLSCert,
TLSKey: target.TLSKey,
})
logger.Info("Added target subnet from %s to %s rewrite to %s with port ranges: %v", sp, target.DestPrefix, target.RewriteTo, target.PortRange)
}
}

common.go

@@ -8,6 +8,7 @@ import (
"net"
"os"
"os/exec"
"regexp"
"strings"
"time"
@@ -207,6 +208,7 @@ func pingWithRetry(tnet *netstack.Net, dst string, timeout time.Duration) (stopC
logger.Warn(msgHealthFileWriteFailed, err)
}
}
return
}
case <-pingStopChan:
// Stop the goroutine when signaled
@@ -219,6 +221,25 @@ func pingWithRetry(tnet *netstack.Net, dst string, timeout time.Duration) (stopC
return stopChan, fmt.Errorf("initial ping attempts failed, continuing in background")
}
// shouldFireRecovery decides whether the data-plane recovery flow in
// startPingCheck should run on this tick. Recovery fires once when the
// consecutive-failure counter first crosses the threshold; the connectionLost
// flag prevents re-firing until a successful ping resets the state.
//
// This condition was previously inlined into startPingCheck and AND-ed with
// `currentInterval < maxInterval`, which silently broke recovery once
// pingInterval's default was bumped to 15s while maxInterval stayed at 6s
// (commit 8161fa6, March 2026): the gate became permanently false on default
// settings, so the recovery code never executed and ping failures climbed
// forever — the proximate cause of fosrl/newt#284, #310 and pangolin#1004.
//
// Recovery and backoff are independent concerns; the backoff ramp is now
// computed separately in the caller. Do not re-introduce currentInterval
// here.
func shouldFireRecovery(consecutiveFailures, failureThreshold int, connectionLost bool) bool {
return consecutiveFailures >= failureThreshold && !connectionLost
}
func startPingCheck(tnet *netstack.Net, serverIP string, client *websocket.Client, tunnelID string) chan struct{} {
maxInterval := 6 * time.Second
currentInterval := pingInterval
@@ -278,35 +299,44 @@ func startPingCheck(tnet *netstack.Net, serverIP string, client *websocket.Clien
// More lenient threshold for declaring connection lost under load
failureThreshold := 4
if consecutiveFailures >= failureThreshold && currentInterval < maxInterval {
if !connectionLost {
connectionLost = true
logger.Warn("Connection to server lost after %d failures. Continuous reconnection attempts will be made.", consecutiveFailures)
if tunnelID != "" {
telemetry.IncReconnect(context.Background(), tunnelID, "client", telemetry.ReasonTimeout)
}
stopFunc = client.SendMessageInterval("newt/ping/request", map[string]interface{}{}, 3*time.Second)
// Send registration message to the server for backward compatibility
err := client.SendMessage("newt/wg/register", map[string]interface{}{
"publicKey": publicKey.String(),
"backwardsCompatible": true,
})
if shouldFireRecovery(consecutiveFailures, failureThreshold, connectionLost) {
connectionLost = true
logger.Warn("Connection to server lost after %d failures. Continuous reconnection attempts will be made.", consecutiveFailures)
if tunnelID != "" {
telemetry.IncReconnect(context.Background(), tunnelID, "client", telemetry.ReasonTimeout)
}
pingChainId := generateChainId()
pendingPingChainId = pingChainId
stopFunc = client.SendMessageInterval("newt/ping/request", map[string]interface{}{
"chainId": pingChainId,
}, 3*time.Second)
// Send registration message to the server for backward compatibility
bcChainId := generateChainId()
pendingRegisterChainId = bcChainId
err := client.SendMessage("newt/wg/register", map[string]interface{}{
"publicKey": publicKey.String(),
"backwardsCompatible": true,
"chainId": bcChainId,
})
if err != nil {
logger.Error("Failed to send registration message: %v", err)
}
if healthFile != "" {
err = os.Remove(healthFile)
if err != nil {
logger.Error("Failed to send registration message: %v", err)
}
if healthFile != "" {
err = os.Remove(healthFile)
if err != nil {
logger.Error("Failed to remove health file: %v", err)
}
logger.Error("Failed to remove health file: %v", err)
}
}
currentInterval = time.Duration(float64(currentInterval) * 1.3) // Slower increase
}
// Backoff: ramp the periodic-ping interval up while we are
// past the failure threshold, capped at maxInterval. Kept
// independent of the recovery trigger above so the trigger
// fires on every outage regardless of pingInterval.
if consecutiveFailures >= failureThreshold && currentInterval < maxInterval {
currentInterval = time.Duration(float64(currentInterval) * 1.3)
if currentInterval > maxInterval {
currentInterval = maxInterval
}
ticker.Reset(currentInterval)
logger.Debug("Increased ping check interval to %v due to consecutive failures", currentInterval)
}
} else {
// Track recent latencies
@@ -509,15 +539,41 @@ func executeUpdownScript(action, proto, target string) (string, error) {
return target, nil
}
func sendBlueprint(client *websocket.Client) error {
if blueprintFile == "" {
// interpolateBlueprint finds all {{...}} tokens in the raw blueprint bytes and
// replaces recognised schemes with their resolved values. Currently supported:
//
// - env.<VAR> replaced with the value of the named environment variable
//
// Any token that does not match a supported scheme is left as-is so that
// future schemes (e.g. tag., api.) are preserved rather than silently dropped.
func interpolateBlueprint(data []byte) []byte {
re := regexp.MustCompile(`\{\{([^}]+)\}\}`)
return re.ReplaceAllFunc(data, func(match []byte) []byte {
// strip the surrounding {{ }}
inner := strings.TrimSpace(string(match[2 : len(match)-2]))
if strings.HasPrefix(inner, "env.") {
varName := strings.TrimPrefix(inner, "env.")
return []byte(os.Getenv(varName))
}
// unrecognised scheme: leave the token untouched
return match
})
}
func sendBlueprint(client *websocket.Client, file string) error {
if file == "" {
return nil
}
// try to read the blueprint file
blueprintData, err := os.ReadFile(blueprintFile)
blueprintData, err := os.ReadFile(file)
if err != nil {
logger.Error("Failed to read blueprint file: %v", err)
} else {
// interpolate {{env.VAR}} (and any future schemes) before parsing
blueprintData = interpolateBlueprint(blueprintData)
// first we should convert the yaml to json and error if the yaml is bad
var yamlObj interface{}
var blueprintJsonData string


@@ -210,3 +210,42 @@ func TestParseTargetStringNetDialCompatibility(t *testing.T) {
})
}
}
// TestShouldFireRecovery is the regression guard for the broken trigger gate
// that prevented data-plane recovery from ever firing under default settings
// (fosrl/newt#284, #310, pangolin#1004). The pre-fix condition was
//
// consecutiveFailures >= failureThreshold && currentInterval < maxInterval
//
// which became permanently false once pingInterval's default was bumped from
// 3s to 15s in commit 8161fa6 — currentInterval starts at pingInterval=15s,
// maxInterval stayed at 6s, so 15<6 is false and the recovery branch never
// executed.
//
// The fix is to drop currentInterval from the trigger condition entirely;
// backoff is a separate concern computed in the caller. The cases below
// exercise the documented contract.
func TestShouldFireRecovery(t *testing.T) {
const threshold = 4
cases := []struct {
name string
failures int
connectionLost bool
want bool
}{
{"below threshold, fresh", 3, false, false},
{"below threshold, already lost", 3, true, false},
{"at threshold, fresh — recovery must fire", threshold, false, true},
{"at threshold, already lost — gate prevents re-fire", threshold, true, false},
{"far above threshold, fresh", 100, false, true},
{"far above threshold, already lost", 100, true, false},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
if got := shouldFireRecovery(c.failures, threshold, c.connectionLost); got != c.want {
t.Errorf("shouldFireRecovery(failures=%d, threshold=%d, lost=%v) = %v, want %v",
c.failures, threshold, c.connectionLost, got, c.want)
}
})
}
}


@@ -25,7 +25,7 @@
inherit (pkgs) lib;
# Update version when releasing
version = "1.8.0";
version = "1.12.4";
in
{
default = self.packages.${system}.pangolin-newt;
@@ -35,7 +35,7 @@
inherit version;
src = pkgs.nix-gitignore.gitignoreSource [ ] ./.;
vendorHash = "sha256-kmQM8Yy5TuOiNpMpUme/2gfE+vrhUK+0AphN+p71wGs=";
vendorHash = "sha256-WfIK+Q8WQ372NzLw6DRapv1nYPduShi4KnVJBPk0Oz0=";
nativeInstallCheckInputs = [ pkgs.versionCheckHook ];


@@ -30,41 +30,38 @@ print_error() {
# Function to get latest version from GitHub API
get_latest_version() {
local latest_info
latest_info=""
if command -v curl >/dev/null 2>&1; then
latest_info=$(curl -fsSL "$GITHUB_API_URL" 2>/dev/null)
elif command -v wget >/dev/null 2>&1; then
latest_info=$(wget -qO- "$GITHUB_API_URL" 2>/dev/null)
else
print_error "Neither curl nor wget is available. Please install one of them." >&2
print_error "Neither curl nor wget is available."
exit 1
fi
if [ -z "$latest_info" ]; then
print_error "Failed to fetch latest version information" >&2
print_error "Failed to fetch latest version info"
exit 1
fi
# Extract version from JSON response (works without jq)
local version=$(echo "$latest_info" | grep '"tag_name"' | head -1 | sed 's/.*"tag_name": *"\([^"]*\)".*/\1/')
version=$(printf '%s' "$latest_info" | grep '"tag_name"' | head -1 | sed 's/.*"tag_name": *"\([^"]*\)".*/\1/')
if [ -z "$version" ]; then
print_error "Could not parse version from GitHub API response" >&2
print_error "Could not parse version from GitHub API response"
exit 1
fi
# Remove 'v' prefix if present
version=$(echo "$version" | sed 's/^v//')
echo "$version"
version=$(printf '%s' "$version" | sed 's/^v//')
printf '%s' "$version"
}
# Detect OS and architecture
detect_platform() {
local os arch
# Detect OS
os=""
arch=""
case "$(uname -s)" in
Linux*) os="linux" ;;
Darwin*) os="darwin" ;;
@@ -75,12 +72,11 @@ detect_platform() {
exit 1
;;
esac
# Detect architecture
case "$(uname -m)" in
x86_64|amd64) arch="amd64" ;;
arm64|aarch64) arch="arm64" ;;
armv7l|armv6l)
armv7l|armv6l)
if [ "$os" = "linux" ]; then
if [ "$(uname -m)" = "armv6l" ]; then
arch="arm32v6"
@@ -88,10 +84,10 @@ detect_platform() {
arch="arm32"
fi
else
arch="arm64" # Default for non-Linux ARM
arch="arm64"
fi
;;
riscv64)
riscv64)
if [ "$os" = "linux" ]; then
arch="riscv64"
else
@@ -104,23 +100,68 @@ detect_platform() {
exit 1
;;
esac
echo "${os}_${arch}"
printf '%s_%s' "$os" "$arch"
}
# Get installation directory
# Determine installation directory (default fallback)
get_install_dir() {
if [ "$OS" = "windows" ]; then
echo "$HOME/bin"
else
# Prefer /usr/local/bin for system-wide installation
echo "/usr/local/bin"
case "$PLATFORM" in
*windows*)
echo "$HOME/bin"
;;
*)
echo "/usr/local/bin"
;;
esac
}
# Parse --path argument from args
# Returns the value after --path, or empty string if not provided
parse_path_arg() {
while [ $# -gt 0 ]; do
case "$1" in
--path)
if [ -n "$2" ]; then
printf '%s' "$2"
return
fi
;;
--path=*)
printf '%s' "${1#--path=}"
return
;;
esac
shift
done
}
# Detect an existing newt binary location.
# Tries unprivileged which first, then sudo which (for binaries only visible to root).
# Returns the full path of the binary, or empty string if not found.
detect_existing_binary() {
existing=""
# Try unprivileged which first
existing=$(command -v newt 2>/dev/null || true)
if [ -n "$existing" ]; then
printf '%s' "$existing"
return
fi
# Try sudo which — some installations land in paths only root can see in $PATH
if command -v sudo >/dev/null 2>&1; then
existing=$(sudo which newt 2>/dev/null || true)
if [ -n "$existing" ]; then
printf '%s' "$existing"
return
fi
fi
}
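The lookup order in `detect_existing_binary` — unprivileged `command -v` first, then `sudo which` as a fallback — can be exercised standalone. This sketch uses a generic function name and `ls` as a stand-in binary, since `newt` may not be on the PATH of the machine running it:

```shell
#!/bin/sh
# Sketch of detect_existing_binary's lookup order, using a parameterised
# binary name. "ls" is used below because it always exists on PATH.
find_binary() {
    name="$1"
    found=$(command -v "$name" 2>/dev/null || true)
    if [ -n "$found" ]; then
        printf '%s' "$found"
        return
    fi
    # Fall back to sudo's view of PATH only if sudo itself exists.
    if command -v sudo >/dev/null 2>&1; then
        sudo which "$name" 2>/dev/null || true
    fi
}

find_binary ls
```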
# Check if we need sudo for installation
needs_sudo() {
local install_dir="$1"
install_dir="$1"
if [ -w "$install_dir" ] 2>/dev/null; then
return 1 # No sudo needed
else
@@ -130,7 +171,7 @@ needs_sudo() {
# Get the appropriate command prefix (sudo or empty)
get_sudo_cmd() {
local install_dir="$1"
install_dir="$1"
if needs_sudo "$install_dir"; then
if command -v sudo >/dev/null 2>&1; then
echo "sudo"
@@ -146,40 +187,46 @@ get_sudo_cmd() {
# Download and install newt
install_newt() {
local platform="$1"
local install_dir="$2"
local sudo_cmd="$3"
local binary_name="newt_${platform}"
local exe_suffix=""
platform="$1"
install_dir="$2"
sudo_cmd="$3"
custom_path="$4"
binary_name="newt_${platform}"
final_name="newt"
# Add .exe suffix for Windows
case "$platform" in
*windows*)
binary_name="${binary_name}.exe"
exe_suffix=".exe"
final_name="newt.exe"
;;
esac
local download_url="${BASE_URL}/${binary_name}"
local temp_file="/tmp/newt${exe_suffix}"
local final_path="${install_dir}/newt${exe_suffix}"
download_url="${BASE_URL}/${binary_name}"
temp_file="/tmp/${final_name}"
# If a custom path is provided, use it directly; otherwise use install_dir/final_name
if [ -n "$custom_path" ]; then
final_path="$custom_path"
install_dir=$(dirname "$final_path")
else
final_path="${install_dir}/${final_name}"
fi
print_status "Downloading newt from ${download_url}"
# Download the binary
if command -v curl >/dev/null 2>&1; then
curl -fsSL "$download_url" -o "$temp_file"
elif command -v wget >/dev/null 2>&1; then
wget -q "$download_url" -O "$temp_file"
else
print_error "Neither curl nor wget is available. Please install one of them."
print_error "Neither curl nor wget is available."
exit 1
fi
# Make executable before moving
chmod +x "$temp_file"
# Create install directory if it doesn't exist
# Create install directory if it doesn't exist and move binary
if [ -n "$sudo_cmd" ]; then
$sudo_cmd mkdir -p "$install_dir"
print_status "Using sudo to install to ${install_dir}"
@@ -194,25 +241,25 @@ install_newt() {
# Check if install directory is in PATH
if ! echo "$PATH" | grep -q "$install_dir"; then
print_warning "Install directory ${install_dir} is not in your PATH."
print_warning "Add it to your PATH by adding this line to your shell profile:"
print_warning "Add it with:"
print_warning " export PATH=\"${install_dir}:\$PATH\""
fi
}
# Verify installation
verify_installation() {
local install_dir="$1"
local exe_suffix=""
install_dir="$1"
exe_suffix=""
case "$PLATFORM" in
*windows*) exe_suffix=".exe" ;;
esac
local newt_path="${install_dir}/newt${exe_suffix}"
if [ -f "$newt_path" ] && [ -x "$newt_path" ]; then
newt_path="${install_dir}/newt${exe_suffix}"
if [ -x "$newt_path" ]; then
print_status "Installation successful!"
print_status "newt version: $("$newt_path" --version 2>/dev/null || echo "unknown")"
print_status "newt version: $("$newt_path" --version 2>/dev/null || printf 'unknown')"
return 0
else
print_error "Installation failed. Binary not found or not executable."
@@ -222,22 +269,40 @@ verify_installation() {
# Main installation process
main() {
print_status "Installing latest version of newt..."
# --path explicitly overrides everything
CUSTOM_PATH=$(parse_path_arg "$@")
# Get latest version
print_status "Fetching latest version from GitHub..."
if [ -n "$CUSTOM_PATH" ]; then
print_status "Installing latest version of newt to ${CUSTOM_PATH} (--path override)..."
else
print_status "Installing latest version of newt..."
fi
print_status "Fetching latest version..."
VERSION=$(get_latest_version)
print_status "Latest version: v${VERSION}"
# Set base URL with the fetched version
BASE_URL="https://github.com/${REPO}/releases/download/${VERSION}"
# Detect platform
PLATFORM=$(detect_platform)
print_status "Detected platform: ${PLATFORM}"
# Get install directory
INSTALL_DIR=$(get_install_dir)
if [ -n "$CUSTOM_PATH" ]; then
# --path wins; derive INSTALL_DIR from it
INSTALL_DIR=$(dirname "$CUSTOM_PATH")
else
# Try to find an existing installation so we update the right place
EXISTING_BINARY=$(detect_existing_binary)
if [ -n "$EXISTING_BINARY" ]; then
print_status "Found existing newt binary at ${EXISTING_BINARY}"
CUSTOM_PATH="$EXISTING_BINARY"
INSTALL_DIR=$(dirname "$EXISTING_BINARY")
print_status "Will update existing installation at ${INSTALL_DIR}"
else
INSTALL_DIR=$(get_install_dir)
fi
fi
print_status "Install directory: ${INSTALL_DIR}"
# Check if we need sudo
@@ -246,13 +311,20 @@ main() {
print_status "Root privileges required for installation to ${INSTALL_DIR}"
fi
# Install newt
install_newt "$PLATFORM" "$INSTALL_DIR" "$SUDO_CMD"
install_newt "$PLATFORM" "$INSTALL_DIR" "$SUDO_CMD" "$CUSTOM_PATH"
# Verify installation
if verify_installation "$INSTALL_DIR"; then
if [ -n "$CUSTOM_PATH" ]; then
if [ -x "$CUSTOM_PATH" ]; then
print_status "Installation successful!"
print_status "newt version: $("$CUSTOM_PATH" --version 2>/dev/null || printf 'unknown')"
print_status "newt is ready to use!"
else
print_error "Installation failed. Binary not found or not executable at ${CUSTOM_PATH}."
exit 1
fi
elif verify_installation "$INSTALL_DIR"; then
print_status "newt is ready to use!"
print_status "Run 'newt --help' to get started"
print_status "Run 'newt --help' to get started."
else
exit 1
fi
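The `--path` handling added in this diff accepts both the space-separated form (`--path VALUE`) and the equals form (`--path=VALUE`), and falls through silently when neither is present. A minimal standalone sketch of that parsing logic, matching the script's `parse_path_arg`:

```shell
#!/bin/sh
# Returns the value after --path, supporting "--path VALUE" and
# "--path=VALUE"; prints nothing if the flag is absent.
parse_path_arg() {
    while [ $# -gt 0 ]; do
        case "$1" in
            --path)
                if [ -n "$2" ]; then
                    printf '%s' "$2"
                    return
                fi
                ;;
            --path=*)
                printf '%s' "${1#--path=}"
                return
                ;;
        esac
        shift
    done
}

parse_path_arg --verbose --path /opt/newt/newt   # prints /opt/newt/newt
```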

go.mod

@@ -4,27 +4,27 @@ go 1.25.0
require (
github.com/docker/docker v28.5.2+incompatible
github.com/gaissmai/bart v0.26.0
github.com/gaissmai/bart v0.26.1
github.com/gorilla/websocket v1.5.3
github.com/prometheus/client_golang v1.23.2
github.com/vishvananda/netlink v1.3.1
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.66.0
go.opentelemetry.io/contrib/instrumentation/runtime v0.66.0
go.opentelemetry.io/otel v1.41.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.41.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.41.0
go.opentelemetry.io/otel/exporters/prometheus v0.63.0
go.opentelemetry.io/otel/metric v1.41.0
go.opentelemetry.io/otel/sdk v1.41.0
go.opentelemetry.io/otel/sdk/metric v1.41.0
golang.org/x/crypto v0.48.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0
go.opentelemetry.io/contrib/instrumentation/runtime v0.68.0
go.opentelemetry.io/otel v1.43.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0
go.opentelemetry.io/otel/exporters/prometheus v0.65.0
go.opentelemetry.io/otel/metric v1.43.0
go.opentelemetry.io/otel/sdk v1.43.0
go.opentelemetry.io/otel/sdk/metric v1.43.0
golang.org/x/crypto v0.50.0
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6
golang.org/x/net v0.51.0
golang.org/x/sys v0.41.0
golang.org/x/net v0.53.0
golang.org/x/sys v0.43.0
golang.zx2c4.com/wireguard v0.0.0-20250521234502-f333402bd9cb
golang.zx2c4.com/wireguard/wgctrl v0.0.0-20241231184526-a9ab2273dd10
golang.zx2c4.com/wireguard/windows v0.5.3
google.golang.org/grpc v1.79.1
google.golang.org/grpc v1.81.0
gopkg.in/yaml.v3 v3.0.1
gvisor.dev/gvisor v0.0.0-20250503011706-39ed1f5ac29c
software.sslmate.com/src/go-pkcs12 v0.7.0
@@ -57,21 +57,21 @@ require (
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.67.5 // indirect
github.com/prometheus/otlptranslator v1.0.0 // indirect
github.com/prometheus/procfs v0.19.2 // indirect
github.com/prometheus/procfs v0.20.1 // indirect
github.com/vishvananda/netns v0.0.5 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.41.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.41.0 // indirect
go.opentelemetry.io/proto/otlp v1.9.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
golang.org/x/mod v0.32.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/text v0.34.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 // indirect
go.opentelemetry.io/otel/trace v1.43.0 // indirect
go.opentelemetry.io/proto/otlp v1.10.0 // indirect
go.yaml.in/yaml/v2 v2.4.4 // indirect
golang.org/x/mod v0.34.0 // indirect
golang.org/x/sync v0.20.0 // indirect
golang.org/x/text v0.36.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/tools v0.41.0 // indirect
golang.org/x/tools v0.43.0 // indirect
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260209200024-4cfbd4190f57 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect
google.golang.org/protobuf v1.36.11 // indirect
)

go.sum

@@ -26,8 +26,8 @@ github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/gaissmai/bart v0.26.0 h1:xOZ57E9hJLBiQaSyeZa9wgWhGuzfGACgqp4BE77OkO0=
github.com/gaissmai/bart v0.26.0/go.mod h1:GREWQfTLRWz/c5FTOsIw+KkscuFkIV5t8Rp7Nd1Td5c=
github.com/gaissmai/bart v0.26.1 h1:+w4rnLGNlA2GDVn382Tfe3jOsK5vOr5n4KmigJ9lbTo=
github.com/gaissmai/bart v0.26.1/go.mod h1:GREWQfTLRWz/c5FTOsIw+KkscuFkIV5t8Rp7Nd1Td5c=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
@@ -81,8 +81,8 @@ github.com/prometheus/common v0.67.5 h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTU
github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw=
github.com/prometheus/otlptranslator v1.0.0 h1:s0LJW/iN9dkIH+EnhiD3BlkkP5QVIUVEoIwkU+A6qos=
github.com/prometheus/otlptranslator v1.0.0/go.mod h1:vRYWnXvI6aWGpsdY/mOT/cbeVRBlPWtBNDb7kGR3uKM=
github.com/prometheus/procfs v0.19.2 h1:zUMhqEW66Ex7OXIiDkll3tl9a1ZdilUOd/F6ZXw4Vws=
github.com/prometheus/procfs v0.19.2/go.mod h1:M0aotyiemPhBCM0z5w87kL22CxfcH05ZpYlu+b4J7mw=
github.com/prometheus/procfs v0.20.1 h1:XwbrGOIplXW/AU3YhIhLODXMJYyC1isLFfYCsTEycfc=
github.com/prometheus/procfs v0.20.1/go.mod h1:o9EMBZGRyvDrSPH1RqdxhojkuXstoe4UlK79eF5TGGo=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
@@ -95,56 +95,56 @@ github.com/vishvananda/netns v0.0.5 h1:DfiHV+j8bA32MFM7bfEunvT8IAqQ/NzSJHtcmW5zd
github.com/vishvananda/netns v0.0.5/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.66.0 h1:PnV4kVnw0zOmwwFkAzCN5O07fw1YOIQor120zrh0AVo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.66.0/go.mod h1:ofAwF4uinaf8SXdVzzbL4OsxJ3VfeEg3f/F6CeF49/Y=
go.opentelemetry.io/contrib/instrumentation/runtime v0.66.0 h1:JruBNmrPELWjR+PU3fsQBFQRYtsMLQ/zPfbvwDz9I/w=
go.opentelemetry.io/contrib/instrumentation/runtime v0.66.0/go.mod h1:vwNrfL6w1uAE3qX48KFii2Qoqf+NEDP5wNjus+RHz8Y=
go.opentelemetry.io/otel v1.41.0 h1:YlEwVsGAlCvczDILpUXpIpPSL/VPugt7zHThEMLce1c=
go.opentelemetry.io/otel v1.41.0/go.mod h1:Yt4UwgEKeT05QbLwbyHXEwhnjxNO6D8L5PQP51/46dE=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.41.0 h1:VO3BL6OZXRQ1yQc8W6EVfJzINeJ35BkiHx4MYfoQf44=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.41.0/go.mod h1:qRDnJ2nv3CQXMK2HUd9K9VtvedsPAce3S+/4LZHjX/s=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.41.0 h1:ao6Oe+wSebTlQ1OEht7jlYTzQKE+pnx/iNywFvTbuuI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.41.0/go.mod h1:u3T6vz0gh/NVzgDgiwkgLxpsSF6PaPmo2il0apGJbls=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.41.0 h1:mq/Qcf28TWz719lE3/hMB4KkyDuLJIvgJnFGcd0kEUI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.41.0/go.mod h1:yk5LXEYhsL2htyDNJbEq7fWzNEigeEdV5xBF/Y+kAv0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 h1:aTL7F04bJHUlztTsNGJ2l+6he8c+y/b//eR0jjjemT4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0/go.mod h1:kldtb7jDTeol0l3ewcmd8SDvx3EmIE7lyvqbasU3QC4=
go.opentelemetry.io/otel/exporters/prometheus v0.63.0 h1:OLo1FNb0pBZykLqbKRZolKtGZd0Waqlr240YdMEnhhg=
go.opentelemetry.io/otel/exporters/prometheus v0.63.0/go.mod h1:8yeQAdhrK5xsWuFehO13Dk/Xb9FuhZoVpJfpoNCfJnw=
go.opentelemetry.io/otel/metric v1.41.0 h1:rFnDcs4gRzBcsO9tS8LCpgR0dxg4aaxWlJxCno7JlTQ=
go.opentelemetry.io/otel/metric v1.41.0/go.mod h1:xPvCwd9pU0VN8tPZYzDZV/BMj9CM9vs00GuBjeKhJps=
go.opentelemetry.io/otel/sdk v1.41.0 h1:YPIEXKmiAwkGl3Gu1huk1aYWwtpRLeskpV+wPisxBp8=
go.opentelemetry.io/otel/sdk v1.41.0/go.mod h1:ahFdU0G5y8IxglBf0QBJXgSe7agzjE4GiTJ6HT9ud90=
go.opentelemetry.io/otel/sdk/metric v1.41.0 h1:siZQIYBAUd1rlIWQT2uCxWJxcCO7q3TriaMlf08rXw8=
go.opentelemetry.io/otel/sdk/metric v1.41.0/go.mod h1:HNBuSvT7ROaGtGI50ArdRLUnvRTRGniSUZbxiWxSO8Y=
go.opentelemetry.io/otel/trace v1.41.0 h1:Vbk2co6bhj8L59ZJ6/xFTskY+tGAbOnCtQGVVa9TIN0=
go.opentelemetry.io/otel/trace v1.41.0/go.mod h1:U1NU4ULCoxeDKc09yCWdWe+3QoyweJcISEVa1RBzOis=
go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 h1:CqXxU8VOmDefoh0+ztfGaymYbhdB/tT3zs79QaZTNGY=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0/go.mod h1:BuhAPThV8PBHBvg8ZzZ/Ok3idOdhWIodywz2xEcRbJo=
go.opentelemetry.io/contrib/instrumentation/runtime v0.68.0 h1:jhVIQEprwUTV+KfzzliLidclhoTOoHTgdz96kAyR8mU=
go.opentelemetry.io/contrib/instrumentation/runtime v0.68.0/go.mod h1:4HsdbLUbernaTnA8CNaNE+1g026SciXb3juRYe3l8EY=
go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I=
go.opentelemetry.io/otel v1.43.0/go.mod h1:JuG+u74mvjvcm8vj8pI5XiHy1zDeoCS2LB1spIq7Ay0=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0 h1:8UQVDcZxOJLtX6gxtDt3vY2WTgvZqMQRzjsqiIHQdkc=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0/go.mod h1:2lmweYCiHYpEjQ/lSJBYhj9jP1zvCvQW4BqL9dnT7FQ=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 h1:88Y4s2C8oTui1LGM6bTWkw0ICGcOLCAI5l6zsD1j20k=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0/go.mod h1:Vl1/iaggsuRlrHf/hfPJPvVag77kKyvrLeD10kpMl+A=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 h1:RAE+JPfvEmvy+0LzyUA25/SGawPwIUbZ6u0Wug54sLc=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0/go.mod h1:AGmbycVGEsRx9mXMZ75CsOyhSP6MFIcj/6dnG+vhVjk=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 h1:3iZJKlCZufyRzPzlQhUIWVmfltrXuGyfjREgGP3UUjc=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0/go.mod h1:/G+nUPfhq2e+qiXMGxMwumDrP5jtzU+mWN7/sjT2rak=
go.opentelemetry.io/otel/exporters/prometheus v0.65.0 h1:jOveH/b4lU9HT7y+Gfamf18BqlOuz2PWEvs8yM7Q6XE=
go.opentelemetry.io/otel/exporters/prometheus v0.65.0/go.mod h1:i1P8pcumauPtUI4YNopea1dhzEMuEqWP1xoUZDylLHo=
go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM=
go.opentelemetry.io/otel/metric v1.43.0/go.mod h1:RDnPtIxvqlgO8GRW18W6Z/4P462ldprJtfxHxyKd2PY=
go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg=
go.opentelemetry.io/otel/sdk v1.43.0/go.mod h1:P+IkVU3iWukmiit/Yf9AWvpyRDlUeBaRg6Y+C58QHzg=
go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw=
go.opentelemetry.io/otel/sdk/metric v1.43.0/go.mod h1:C/RJtwSEJ5hzTiUz5pXF1kILHStzb9zFlIEe85bhj6A=
go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A=
go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0=
go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g=
go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
go.yaml.in/yaml/v2 v2.4.4 h1:tuyd0P+2Ont/d6e2rl3be67goVK4R6deVxCUX5vyPaQ=
go.yaml.in/yaml/v2 v2.4.4/go.mod h1:gMZqIpDtDqOfM0uNfy0SkpRhvUryYH0Z6wdMYcacYXQ=
golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
golang.org/x/crypto v0.50.0/go.mod h1:3muZ7vA7PBCE6xgPX7nkzzjiUq87kRItoJQM1Yo8S+Q=
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6 h1:zfMcR1Cs4KNuomFFgGefv5N0czO2XZpUbxGUy8i8ug0=
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6/go.mod h1:46edojNIoXTNOhySWIWdix628clX9ODXwPsQuG6hsK0=
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
golang.org/x/net v0.51.0 h1:94R/GTO7mt3/4wIKpcR5gkGmRLOuE/2hNGeWq/GBIFo=
golang.org/x/net v0.51.0/go.mod h1:aamm+2QF5ogm02fjy5Bb7CQ0WMt1/WVM7FtyaTLlA9Y=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/mod v0.34.0 h1:xIHgNUUnW6sYkcM5Jleh05DvLOtwc6RitGHbDk4akRI=
golang.org/x/mod v0.34.0/go.mod h1:ykgH52iCZe79kzLLMhyCUzhMci+nQj+0XkbXpNYtVjY=
golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
golang.org/x/net v0.53.0/go.mod h1:JvMuJH7rrdiCfbeHoo3fCQU24Lf5JJwT9W3sJFulfgs=
golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
golang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=
golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.41.0 h1:a9b8iMweWG+S0OBnlU36rzLp20z1Rp10w+IY2czHTQc=
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
golang.org/x/tools v0.43.0 h1:12BdW9CeB3Z+J/I/wj34VMl8X+fEXBxVR90JeMX5E7s=
golang.org/x/tools v0.43.0/go.mod h1:uHkMso649BX2cZK6+RpuIPXS3ho2hZo4FVwfoy1vIk0=
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 h1:B82qJJgjvYKsXS9jeunTOisW56dUokqW/FOteYJJ/yg=
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2/go.mod h1:deeaetjYA+DHMHg+sMSMI58GrEteJUUzzw7en6TJQcI=
golang.zx2c4.com/wireguard v0.0.0-20250521234502-f333402bd9cb h1:whnFRlWMcXI9d+ZbWg+4sHnLp52d5yiIPUxMBSt4X9A=
@@ -153,14 +153,14 @@ golang.zx2c4.com/wireguard/wgctrl v0.0.0-20241231184526-a9ab2273dd10 h1:3GDAcqdI
golang.zx2c4.com/wireguard/wgctrl v0.0.0-20241231184526-a9ab2273dd10/go.mod h1:T97yPqesLiNrOYxkwmhMI0ZIlJDm+p0PMR8eRVeR5tQ=
golang.zx2c4.com/wireguard/windows v0.5.3 h1:On6j2Rpn3OEMXqBq00QEDC7bWSZrPIHKIus8eIuExIE=
golang.zx2c4.com/wireguard/windows v0.5.3/go.mod h1:9TEe8TJmtwyQebdFwAkEWOPr3prrtqm+REGFifP60hI=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57 h1:JLQynH/LBHfCTSbDWl+py8C+Rg/k1OVH3xfcaiANuF0=
google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57/go.mod h1:kSJwQxqmFXeo79zOmbrALdflXQeAYcUbgS7PbpMknCY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260209200024-4cfbd4190f57 h1:mWPCjDEyshlQYzBpMNHaEof6UX1PmHcaUODUywQ0uac=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260209200024-4cfbd4190f57/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
google.golang.org/grpc v1.79.1 h1:zGhSi45ODB9/p3VAawt9a+O/MULLl9dpizzNNpq7flY=
google.golang.org/grpc v1.79.1/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA=
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
google.golang.org/grpc v1.81.0 h1:W3G9N3KQf3BU+YuCtGKJk0CmxQNbAISICD/9AORxLIw=
google.golang.org/grpc v1.81.0/go.mod h1:xGH9GfzOyMTGIOXBJmXt+BX/V0kcdQbdcuwQ/zNw42I=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=


@@ -5,7 +5,9 @@ import (
"crypto/tls"
"encoding/json"
"fmt"
"net"
"net/http"
"strconv"
"strings"
"sync"
"time"
@@ -35,33 +37,38 @@ func (s Health) String() string {
// Config holds the health check configuration for a target
type Config struct {
ID int `json:"id"`
Enabled bool `json:"hcEnabled"`
Path string `json:"hcPath"`
Scheme string `json:"hcScheme"`
Mode string `json:"hcMode"`
Hostname string `json:"hcHostname"`
Port int `json:"hcPort"`
Interval int `json:"hcInterval"` // in seconds
UnhealthyInterval int `json:"hcUnhealthyInterval"` // in seconds
Timeout int `json:"hcTimeout"` // in seconds
Headers map[string]string `json:"hcHeaders"`
Method string `json:"hcMethod"`
Status int `json:"hcStatus"` // HTTP status code
TLSServerName string `json:"hcTlsServerName"`
ID int `json:"id"`
Enabled bool `json:"hcEnabled"`
Path string `json:"hcPath"`
Scheme string `json:"hcScheme"`
Mode string `json:"hcMode"`
Hostname string `json:"hcHostname"`
Port int `json:"hcPort"`
Interval int `json:"hcInterval"` // in seconds
UnhealthyInterval int `json:"hcUnhealthyInterval"` // in seconds
Timeout int `json:"hcTimeout"` // in seconds
FollowRedirects *bool `json:"hcFollowRedirects"`
Headers map[string]string `json:"hcHeaders"`
Method string `json:"hcMethod"`
Status int `json:"hcStatus"` // HTTP status code
TLSServerName string `json:"hcTlsServerName"`
HealthyThreshold int `json:"hcHealthyThreshold"` // consecutive successes required to become healthy
UnhealthyThreshold int `json:"hcUnhealthyThreshold"` // consecutive failures required to become unhealthy
}
// Target represents a health check target with its current status
type Target struct {
Config Config `json:"config"`
Status Health `json:"status"`
LastCheck time.Time `json:"lastCheck"`
LastError string `json:"lastError,omitempty"`
CheckCount int `json:"checkCount"`
timer *time.Timer
ctx context.Context
cancel context.CancelFunc
client *http.Client
Config Config `json:"config"`
Status Health `json:"status"`
LastCheck time.Time `json:"lastCheck"`
LastError string `json:"lastError,omitempty"`
CheckCount int `json:"checkCount"`
timer *time.Timer
ctx context.Context
cancel context.CancelFunc
client *http.Client
consecutiveSuccesses int
consecutiveFailures int
}
// StatusChangeCallback is called when any target's status changes
@@ -163,9 +170,16 @@ func (m *Monitor) addTargetUnsafe(config Config) error {
if config.Timeout == 0 {
config.Timeout = 5
}
if config.HealthyThreshold == 0 {
config.HealthyThreshold = 1
}
if config.UnhealthyThreshold == 0 {
config.UnhealthyThreshold = 1
}
logger.Debug("Target %d configuration: scheme=%s, method=%s, interval=%ds, timeout=%ds",
config.ID, config.Scheme, config.Method, config.Interval, config.Timeout)
logger.Debug("Target %d configuration: mode=%s, scheme=%s, method=%s, interval=%ds, timeout=%ds, healthyThreshold=%d, unhealthyThreshold=%d",
config.ID, config.Mode, config.Scheme, config.Method, config.Interval, config.Timeout,
config.HealthyThreshold, config.UnhealthyThreshold)
// Parse headers if provided as string
if len(config.Headers) == 0 && config.Path != "" {
@@ -187,6 +201,16 @@ func (m *Monitor) addTargetUnsafe(config Config) error {
ctx: ctx,
cancel: cancel,
client: &http.Client{
CheckRedirect: func() func(*http.Request, []*http.Request) error {
// Default to following redirects if not explicitly configured
followRedirects := config.FollowRedirects == nil || *config.FollowRedirects
if !followRedirects {
return func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
}
return nil
}(),
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
// Configure TLS settings based on certificate enforcement
@@ -228,7 +252,7 @@ func (m *Monitor) RemoveTarget(id int) error {
// Notify callback of status change
if m.callback != nil {
go m.callback(m.GetTargets())
go m.callback(m.getAllTargetsUnsafe())
}
logger.Info("Successfully removed target %d", id)
@@ -261,7 +285,7 @@ func (m *Monitor) RemoveTargets(ids []int) error {
// Notify callback of status change if any targets were removed
if len(notFound) != len(ids) && m.callback != nil {
go m.callback(m.GetTargets())
go m.callback(m.getAllTargetsUnsafe())
}
if len(notFound) > 0 {
@@ -359,17 +383,75 @@ func (m *Monitor) monitorTarget(target *Target) {
}
}
// performHealthCheck performs a health check on a target
// performHealthCheck performs a health check on a target and applies threshold logic
func (m *Monitor) performHealthCheck(target *Target) {
target.CheckCount++
target.LastCheck = time.Now()
target.LastError = ""
// Build URL
url := fmt.Sprintf("%s://%s", target.Config.Scheme, target.Config.Hostname)
if target.Config.Port > 0 {
url = fmt.Sprintf("%s:%d", url, target.Config.Port)
var passed bool
var checkErr string
switch strings.ToLower(target.Config.Mode) {
case "tcp":
passed, checkErr = m.performTCPCheck(target)
default:
// "http", "https", or anything else falls through to HTTP
passed, checkErr = m.performHTTPCheck(target)
}
if passed {
target.consecutiveFailures = 0
target.consecutiveSuccesses++
logger.Debug("Target %d: check passed (consecutive successes: %d / threshold: %d)",
target.Config.ID, target.consecutiveSuccesses, target.Config.HealthyThreshold)
if target.consecutiveSuccesses >= target.Config.HealthyThreshold {
target.Status = StatusHealthy
target.LastError = ""
}
} else {
target.consecutiveSuccesses = 0
target.consecutiveFailures++
target.LastError = checkErr
logger.Debug("Target %d: check failed (consecutive failures: %d / threshold: %d): %s",
target.Config.ID, target.consecutiveFailures, target.Config.UnhealthyThreshold, checkErr)
if target.consecutiveFailures >= target.Config.UnhealthyThreshold {
target.Status = StatusUnhealthy
}
}
}
// performTCPCheck dials the target's host:port over TCP and returns whether it succeeded
func (m *Monitor) performTCPCheck(target *Target) (bool, string) {
address := net.JoinHostPort(target.Config.Hostname, strconv.Itoa(target.Config.Port))
timeout := time.Duration(target.Config.Timeout) * time.Second
logger.Debug("Target %d: performing TCP health check to %s (timeout: %v)",
target.Config.ID, address, timeout)
conn, err := net.DialTimeout("tcp", address, timeout)
if err != nil {
msg := fmt.Sprintf("TCP dial failed: %v", err)
logger.Warn("Target %d: %s", target.Config.ID, msg)
return false, msg
}
conn.Close()
logger.Debug("Target %d: TCP health check passed", target.Config.ID)
return true, ""
}
// performHTTPCheck performs an HTTP/HTTPS health check and returns whether it succeeded
func (m *Monitor) performHTTPCheck(target *Target) (bool, string) {
// Build URL (use net.JoinHostPort to properly handle IPv6 addresses with ports)
host := target.Config.Hostname
if target.Config.Port > 0 {
host = net.JoinHostPort(target.Config.Hostname, strconv.Itoa(target.Config.Port))
}
url := fmt.Sprintf("%s://%s", target.Config.Scheme, host)
if target.Config.Path != "" {
if !strings.HasPrefix(target.Config.Path, "/") {
url += "/"
@@ -377,7 +459,7 @@ func (m *Monitor) performHealthCheck(target *Target) {
url += target.Config.Path
}
logger.Debug("Target %d: performing health check %d to %s",
logger.Debug("Target %d: performing HTTP health check %d to %s",
target.Config.ID, target.CheckCount, url)
if target.Config.Scheme == "https" {
@@ -385,16 +467,15 @@ func (m *Monitor) performHealthCheck(target *Target) {
target.Config.ID, m.enforceCert)
}
// Create request
// Create request with timeout context
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(target.Config.Timeout)*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctx, target.Config.Method, url, nil)
if err != nil {
target.Status = StatusUnhealthy
target.LastError = fmt.Sprintf("failed to create request: %v", err)
logger.Warn("Target %d: failed to create request: %v", target.Config.ID, err)
return
msg := fmt.Sprintf("failed to create request: %v", err)
logger.Warn("Target %d: %s", target.Config.ID, msg)
return false, msg
}
// Add headers
@@ -410,43 +491,34 @@ func (m *Monitor) performHealthCheck(target *Target) {
// Perform request
resp, err := target.client.Do(req)
if err != nil {
target.Status = StatusUnhealthy
target.LastError = fmt.Sprintf("request failed: %v", err)
msg := fmt.Sprintf("request failed: %v", err)
logger.Warn("Target %d: health check failed: %v", target.Config.ID, err)
return
return false, msg
}
defer resp.Body.Close()
// Check response status
if target.Config.Status > 0 {
// Check for specific status code
logger.Debug("Target %d: checking status against expected code %d", target.Config.ID, target.Config.Status)
if resp.StatusCode == target.Config.Status {
logger.Debug("Target %d: health check passed (status: %d)", target.Config.ID, resp.StatusCode)
return true, ""
}
msg := fmt.Sprintf("unexpected status code: %d (expected: %d)", resp.StatusCode, target.Config.Status)
logger.Warn("Target %d: %s", target.Config.ID, msg)
return false, msg
}
// Default: check for 2xx range
if resp.StatusCode >= 200 && resp.StatusCode < 300 {
logger.Debug("Target %d: health check passed (status: %d)", target.Config.ID, resp.StatusCode)
return true, ""
}
msg := fmt.Sprintf("unhealthy status code: %d", resp.StatusCode)
logger.Warn("Target %d: health check failed with status code %d", target.Config.ID, resp.StatusCode)
return false, msg
}
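The refactor above changes the health check to report a `(healthy, message)` pair to its caller instead of mutating target state inline. A minimal, self-contained sketch of that decision logic — `checkOnce` is a hypothetical stand-in; the real function takes a `*Target` and inspects an `*http.Response`:

```go
package main

import "fmt"

// checkOnce mirrors the refactored status logic: a configured expected
// status wins; otherwise any 2xx response counts as healthy. It returns
// (healthy, message) so the caller decides how to update target state.
func checkOnce(statusCode, expected int) (bool, string) {
	if expected > 0 {
		if statusCode == expected {
			return true, ""
		}
		return false, fmt.Sprintf("unexpected status code: %d (expected: %d)", statusCode, expected)
	}
	if statusCode >= 200 && statusCode < 300 {
		return true, ""
	}
	return false, fmt.Sprintf("unhealthy status code: %d", statusCode)
}

func main() {
	ok, msg := checkOnce(503, 0) // default 2xx check fails
	fmt.Println(ok, msg)
	ok, _ = checkOnce(204, 204) // explicit expected status matches
	fmt.Println(ok)
}
```

This separation is what lets the caller own the `target.Status` / `target.LastError` updates in one place.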
// Stop stops monitoring all targets
@@ -513,7 +585,7 @@ func (m *Monitor) DisableTarget(id int) error {
// Notify callback of status change
if m.callback != nil {
go m.callback(m.getAllTargetsUnsafe())
}
} else {
logger.Debug("Target %d is already disabled", id)

main.go

@@ -3,13 +3,16 @@ package main
import (
"bytes"
"context"
"crypto/rand"
"crypto/tls"
"encoding/hex"
"encoding/json"
"errors"
"flag"
"fmt"
"net"
"net/http"
"net/http/pprof"
"net/netip"
"os"
"os/signal"
@@ -45,6 +48,7 @@ type WgData struct {
TunnelIP string `json:"tunnelIP"`
Targets TargetsByType `json:"targets"`
HealthCheckTargets []healthcheck.Config `json:"healthCheckTargets"`
ChainId string `json:"chainId"`
}
type TargetsByType struct {
@@ -58,6 +62,7 @@ type TargetData struct {
type ExitNodeData struct {
ExitNodes []ExitNode `json:"exitNodes"`
ChainId string `json:"chainId"`
}
// ExitNode represents an exit node with an ID, endpoint, and weight.
@@ -124,9 +129,12 @@ var (
dockerEnforceNetworkValidationBool bool
pingInterval time.Duration
pingTimeout time.Duration
udpProxyIdleTimeout time.Duration
publicKey wgtypes.Key
pingStopChan chan struct{}
stopFunc func()
pendingRegisterChainId string
pendingPingChainId string
healthFile string
useNativeInterface bool
authorizedKeysFile string
@@ -147,8 +155,10 @@ var (
adminAddr string
region string
metricsAsyncBytes bool
pprofEnabled bool
blueprintFile string
provisioningBlueprintFile string
noCloud bool
// New mTLS configuration variables
tlsClientCert string
@@ -157,8 +167,24 @@ var (
// Legacy PKCS12 support (deprecated)
tlsPrivateKey string
// Provisioning key exchanged once for a permanent newt ID + secret
provisioningKey string
// Optional name for the site created during provisioning
newtName string
// Path to config file (overrides CONFIG_FILE env var and default location)
configFile string
)
// generateChainId generates a random chain ID for deduplicating round-trip messages.
func generateChainId() string {
b := make([]byte, 8)
_, _ = rand.Read(b)
return hex.EncodeToString(b)
}
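The `generateChainId` helper above pairs with the dedup checks in the message handlers: each outbound request records a pending id, and a reply is only processed if it echoes that id back, after which the id is consumed. A minimal sketch of the pattern, assuming a single pending register request — `accept` is a hypothetical helper; the real handlers inline this check:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// generateChainId mirrors the helper in main.go.
func generateChainId() string {
	b := make([]byte, 8)
	_, _ = rand.Read(b)
	return hex.EncodeToString(b)
}

var pendingRegisterChainId string

// accept reports whether a reply carrying chainId should be processed.
// An empty chainId (older server) is always accepted; otherwise the id
// must match the pending request and is consumed on first use, so any
// later duplicate with the same id is discarded.
func accept(chainId string) bool {
	if chainId == "" {
		return true
	}
	if chainId != pendingRegisterChainId {
		return false
	}
	pendingRegisterChainId = ""
	return true
}

func main() {
	id := generateChainId()
	pendingRegisterChainId = id
	fmt.Println(accept(id)) // first reply: processed
	fmt.Println(accept(id)) // retransmitted duplicate: discarded
}
```

This is why the handlers below can safely tolerate `SendMessageInterval` retransmissions: at most one reply per request id ever triggers tunnel setup.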
func main() {
// Check for subcommands first (only principals exits early)
if len(os.Args) > 1 {
@@ -225,6 +251,7 @@ func runNewtMain(ctx context.Context) {
adminAddrEnv := os.Getenv("NEWT_ADMIN_ADDR")
regionEnv := os.Getenv("NEWT_REGION")
asyncBytesEnv := os.Getenv("NEWT_METRICS_ASYNC_BYTES")
pprofEnabledEnv := os.Getenv("NEWT_PPROF_ENABLED")
disableClientsEnv := os.Getenv("DISABLE_CLIENTS")
disableClients = disableClientsEnv == "true"
@@ -235,6 +262,7 @@ func runNewtMain(ctx context.Context) {
dockerSocket = os.Getenv("DOCKER_SOCKET")
pingIntervalStr := os.Getenv("PING_INTERVAL")
pingTimeoutStr := os.Getenv("PING_TIMEOUT")
udpProxyIdleTimeoutStr := os.Getenv("NEWT_UDP_PROXY_IDLE_TIMEOUT")
dockerEnforceNetworkValidation = os.Getenv("DOCKER_ENFORCE_NETWORK_VALIDATION")
healthFile = os.Getenv("HEALTH_FILE")
// authorizedKeysFile = os.Getenv("AUTHORIZED_KEYS_FILE")
@@ -259,8 +287,12 @@ func runNewtMain(ctx context.Context) {
tlsPrivateKey = os.Getenv("TLS_CLIENT_CERT")
}
blueprintFile = os.Getenv("BLUEPRINT_FILE")
provisioningBlueprintFile = os.Getenv("PROVISIONING_BLUEPRINT_FILE")
noCloudEnv := os.Getenv("NO_CLOUD")
noCloud = noCloudEnv == "true"
provisioningKey = os.Getenv("NEWT_PROVISIONING_KEY")
newtName = os.Getenv("NEWT_NAME")
configFile = os.Getenv("CONFIG_FILE")
if endpoint == "" {
flag.StringVar(&endpoint, "endpoint", "", "Endpoint of your pangolin server")
@@ -307,8 +339,20 @@ func runNewtMain(ctx context.Context) {
if pingTimeoutStr == "" {
flag.StringVar(&pingTimeoutStr, "ping-timeout", "7s", "Timeout for each ping (default 7s)")
}
if udpProxyIdleTimeoutStr == "" {
flag.StringVar(&udpProxyIdleTimeoutStr, "udp-proxy-idle-timeout", "90s", "Idle timeout for UDP proxied client flows before cleanup")
}
// load the prefer endpoint just as a flag
flag.StringVar(&preferEndpoint, "prefer-endpoint", "", "Prefer this endpoint for the connection (if set, will override the endpoint from the server)")
if provisioningKey == "" {
flag.StringVar(&provisioningKey, "provisioning-key", "", "One-time provisioning key used to obtain a newt ID and secret from the server")
}
if newtName == "" {
flag.StringVar(&newtName, "name", "", "Name for the site created during provisioning (supports {{env.VAR}} interpolation)")
}
if configFile == "" {
flag.StringVar(&configFile, "config-file", "", "Path to config file (overrides CONFIG_FILE env var and default location)")
}
// Add new mTLS flags
if tlsClientCert == "" {
@@ -347,6 +391,16 @@ func runNewtMain(ctx context.Context) {
pingTimeout = 7 * time.Second
}
if udpProxyIdleTimeoutStr != "" {
udpProxyIdleTimeout, err = time.ParseDuration(udpProxyIdleTimeoutStr)
if err != nil || udpProxyIdleTimeout <= 0 {
fmt.Printf("Invalid NEWT_UDP_PROXY_IDLE_TIMEOUT/--udp-proxy-idle-timeout value: %s, using default 90 seconds\n", udpProxyIdleTimeoutStr)
udpProxyIdleTimeout = 90 * time.Second
}
} else {
udpProxyIdleTimeout = 90 * time.Second
}
if dockerEnforceNetworkValidation == "" {
flag.StringVar(&dockerEnforceNetworkValidation, "docker-enforce-network-validation", "false", "Enforce validation of container on newt network (true or false)")
}
@@ -356,6 +410,9 @@ func runNewtMain(ctx context.Context) {
if blueprintFile == "" {
flag.StringVar(&blueprintFile, "blueprint-file", "", "Path to blueprint file (if unset, no blueprint will be applied)")
}
if provisioningBlueprintFile == "" {
flag.StringVar(&provisioningBlueprintFile, "provisioning-blueprint-file", "", "Path to blueprint file applied once after a provisioning credential exchange (if unset, no provisioning blueprint will be applied)")
}
if noCloudEnv == "" {
flag.BoolVar(&noCloud, "no-cloud", false, "Disable cloud failover")
}
@@ -390,6 +447,14 @@ func runNewtMain(ctx context.Context) {
metricsAsyncBytes = v
}
}
// pprof debug endpoint toggle
if pprofEnabledEnv == "" {
flag.BoolVar(&pprofEnabled, "pprof", false, "Enable pprof debug endpoints on admin server")
} else {
if v, err := strconv.ParseBool(pprofEnabledEnv); err == nil {
pprofEnabled = v
}
}
// Optional region flag (resource attribute)
if regionEnv == "" {
flag.StringVar(&region, "region", "", "Optional region resource attribute (also NEWT_REGION)")
@@ -477,7 +542,7 @@ func runNewtMain(ctx context.Context) {
if telErr != nil {
logger.Warn("Telemetry init failed: %v", telErr)
}
if tel != nil {
if tel != nil && (metricsEnabled || pprofEnabled) {
// Admin HTTP server (exposes /metrics when Prometheus exporter is enabled)
logger.Debug("Starting metrics server on %s", tcfg.AdminAddr)
mux := http.NewServeMux()
@@ -485,6 +550,14 @@ func runNewtMain(ctx context.Context) {
if tel.PrometheusHandler != nil {
mux.Handle("/metrics", tel.PrometheusHandler)
}
if pprofEnabled {
mux.HandleFunc("/debug/pprof/", pprof.Index)
mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
logger.Info("pprof debugging enabled on %s/debug/pprof/", tcfg.AdminAddr)
}
admin := &http.Server{
Addr: tcfg.AdminAddr,
Handler: otelhttp.NewHandler(mux, "newt-admin"),
@@ -567,10 +640,20 @@ func runNewtMain(ctx context.Context) {
endpoint,
30*time.Second,
opt,
websocket.WithConfigFile(configFile),
)
if err != nil {
logger.Fatal("Failed to create client: %v", err)
}
// If a provisioning key was supplied via CLI / env and the config file did
// not already carry one, inject it now so provisionIfNeeded() can use it.
if provisioningKey != "" && client.GetConfig().ProvisioningKey == "" {
client.GetConfig().ProvisioningKey = provisioningKey
}
if newtName != "" && client.GetConfig().Name == "" {
client.GetConfig().Name = newtName
}
endpoint = client.GetConfig().Endpoint // Update endpoint from config
id = client.GetConfig().ID // Update ID from config
// Update site labels for metrics with the resolved ID
@@ -687,6 +770,24 @@ func runNewtMain(ctx context.Context) {
defer func() {
telemetry.IncSiteRegistration(ctx, regResult)
}()
// Deduplicate using chainId: if the server echoes back a chainId we have
// already consumed (or one that doesn't match our current pending request),
// throw the message away to avoid setting up the tunnel twice.
var chainData struct {
ChainId string `json:"chainId"`
}
if jsonBytes, err := json.Marshal(msg.Data); err == nil {
_ = json.Unmarshal(jsonBytes, &chainData)
}
if chainData.ChainId != "" {
if chainData.ChainId != pendingRegisterChainId {
logger.Debug("Discarding duplicate/stale newt/wg/connect (chainId=%s, expected=%s)", chainData.ChainId, pendingRegisterChainId)
return
}
pendingRegisterChainId = "" // consumed; further duplicates with this id are rejected
}
if stopFunc != nil {
stopFunc() // stop the ws from sending more requests
stopFunc = nil // reset stopFunc to nil to avoid double stopping
@@ -810,6 +911,7 @@ persistent_keepalive_interval=5`, util.FixKey(privateKey.String()), util.FixKey(
// Create proxy manager
pm = proxy.NewProxyManager(tnet)
pm.SetAsyncBytes(metricsAsyncBytes)
pm.SetUDPIdleTimeout(udpProxyIdleTimeout)
// Set tunnel_id for metrics (WireGuard peer public key)
pm.SetTunnelID(wgData.PublicKey)
@@ -871,8 +973,11 @@ persistent_keepalive_interval=5`, util.FixKey(privateKey.String()), util.FixKey(
}
// Request exit nodes from the server
pingChainId := generateChainId()
pendingPingChainId = pingChainId
stopFunc = client.SendMessageInterval("newt/ping/request", map[string]interface{}{
"noCloud": noCloud,
"chainId": pingChainId,
}, 3*time.Second)
logger.Info("Tunnel destroyed, ready for reconnection")
@@ -901,6 +1006,7 @@ persistent_keepalive_interval=5`, util.FixKey(privateKey.String()), util.FixKey(
client.RegisterHandler("newt/ping/exitNodes", func(msg websocket.WSMessage) {
logger.Debug("Received ping message")
if stopFunc != nil {
stopFunc() // stop the ws from sending more requests
stopFunc = nil // reset stopFunc to nil to avoid double stopping
@@ -920,6 +1026,14 @@ persistent_keepalive_interval=5`, util.FixKey(privateKey.String()), util.FixKey(
}
exitNodes := exitNodeData.ExitNodes
if exitNodeData.ChainId != "" {
if exitNodeData.ChainId != pendingPingChainId {
logger.Debug("Discarding duplicate/stale newt/ping/exitNodes (chainId=%s, expected=%s)", exitNodeData.ChainId, pendingPingChainId)
return
}
pendingPingChainId = "" // consumed; further duplicates with this id are rejected
}
if len(exitNodes) == 0 {
logger.Info("No exit nodes provided")
return
@@ -952,10 +1066,13 @@ persistent_keepalive_interval=5`, util.FixKey(privateKey.String()), util.FixKey(
},
}
chainId := generateChainId()
pendingRegisterChainId = chainId
stopFunc = client.SendMessageInterval(topicWGRegister, map[string]interface{}{
"publicKey": publicKey.String(),
"pingResults": pingResults,
"newtVersion": newtVersion,
"chainId": chainId,
}, 2*time.Second)
return
@@ -1055,10 +1172,13 @@ persistent_keepalive_interval=5`, util.FixKey(privateKey.String()), util.FixKey(
}
// Send the ping results to the cloud for selection
chainId := generateChainId()
pendingRegisterChainId = chainId
stopFunc = client.SendMessageInterval(topicWGRegister, map[string]interface{}{
"publicKey": publicKey.String(),
"pingResults": pingResults,
"newtVersion": newtVersion,
"chainId": chainId,
}, 2*time.Second)
logger.Debug("Sent exit node ping results to cloud for selection: pingResults=%+v", pingResults)
@@ -1708,8 +1828,11 @@ persistent_keepalive_interval=5`, util.FixKey(privateKey.String()), util.FixKey(
stopFunc()
}
// request from the server the list of nodes to ping
pingChainId := generateChainId()
pendingPingChainId = pingChainId
stopFunc = client.SendMessageInterval("newt/ping/request", map[string]interface{}{
"noCloud": noCloud,
"chainId": pingChainId,
}, 3*time.Second)
logger.Debug("Requesting exit nodes from server")
@@ -1718,17 +1841,46 @@ persistent_keepalive_interval=5`, util.FixKey(privateKey.String()), util.FixKey(
} else {
logger.Warn("CLIENTS WILL NOT WORK ON THIS VERSION OF NEWT WITH THIS VERSION OF PANGOLIN, PLEASE UPDATE THE SERVER TO 1.13 OR HIGHER OR DOWNGRADE NEWT")
}
sendBlueprint(client, blueprintFile)
if client.WasJustProvisioned() {
logger.Info("Provisioning detected; sending provisioning blueprint")
sendBlueprint(client, provisioningBlueprintFile)
}
} else {
// Resend current health check status for all targets in case the server
// missed updates while newt was disconnected.
targets := healthMonitor.GetTargets()
if len(targets) > 0 {
healthStatuses := make(map[int]interface{})
for id, target := range targets {
healthStatuses[id] = map[string]interface{}{
"status": target.Status.String(),
"lastCheck": target.LastCheck.Format(time.RFC3339),
"checkCount": target.CheckCount,
"lastError": target.LastError,
"config": target.Config,
}
}
logger.Debug("Reconnected: resending health check status for %d targets", len(healthStatuses))
if err := client.SendMessage("newt/healthcheck/status", map[string]interface{}{
"targets": healthStatuses,
}); err != nil {
logger.Error("Failed to resend health check status on reconnect: %v", err)
}
}
}
// Send registration message to the server for backward compatibility
bcChainId := generateChainId()
pendingRegisterChainId = bcChainId
err := client.SendMessage(topicWGRegister, map[string]interface{}{
"publicKey": publicKey.String(),
"newtVersion": newtVersion,
"backwardsCompatible": true,
"chainId": bcChainId,
})
if err != nil {
logger.Error("Failed to send registration message: %v", err)
return err

netstack2/access_log.go

@@ -0,0 +1,514 @@
package netstack2
import (
"bytes"
"compress/zlib"
"crypto/rand"
"encoding/base64"
"encoding/hex"
"encoding/json"
"net"
"sort"
"sync"
"time"
"github.com/fosrl/newt/logger"
)
const (
// flushInterval is how often the access logger flushes completed sessions to the server
flushInterval = 60 * time.Second
// maxBufferedSessions is the max number of completed sessions to buffer before forcing a flush
maxBufferedSessions = 100
// sessionGapThreshold is the maximum gap between the end of one connection
// and the start of the next for them to be considered part of the same session.
// If the gap exceeds this, a new consolidated session is created.
sessionGapThreshold = 5 * time.Second
// minConnectionsToConsolidate is the minimum number of connections in a group
// before we bother consolidating. Groups smaller than this are sent as-is.
minConnectionsToConsolidate = 2
)
// SendFunc is a callback that sends compressed access log data to the server.
// The data is a base64-encoded zlib-compressed JSON array of AccessSession objects.
type SendFunc func(data string) error
// AccessSession represents a tracked access session through the proxy
type AccessSession struct {
SessionID string `json:"sessionId"`
ResourceID int `json:"resourceId"`
SourceAddr string `json:"sourceAddr"`
DestAddr string `json:"destAddr"`
Protocol string `json:"protocol"`
StartedAt time.Time `json:"startedAt"`
EndedAt time.Time `json:"endedAt,omitempty"`
BytesTx int64 `json:"bytesTx"`
BytesRx int64 `json:"bytesRx"`
ConnectionCount int `json:"connectionCount,omitempty"` // number of raw connections merged into this session (0 or 1 = single)
}
// udpSessionKey identifies a unique UDP "session" by src -> dst
type udpSessionKey struct {
srcAddr string
dstAddr string
protocol string
}
// consolidationKey groups connections that may be part of the same logical session.
// Source port is intentionally excluded so that many ephemeral-port connections
// from the same source IP to the same destination are grouped together.
type consolidationKey struct {
sourceIP string // IP only, no port
destAddr string // full host:port of the destination
protocol string
resourceID int
}
// AccessLogger tracks access sessions for resources and periodically
// flushes completed sessions to the server via a configurable SendFunc.
type AccessLogger struct {
mu sync.Mutex
sessions map[string]*AccessSession // active sessions: sessionID -> session
udpSessions map[udpSessionKey]*AccessSession // active UDP sessions for dedup
completedSessions []*AccessSession // completed sessions waiting to be flushed
udpTimeout time.Duration
sendFn SendFunc
stopCh chan struct{}
flushDone chan struct{} // closed after the flush goroutine exits
}
// NewAccessLogger creates a new access logger.
// udpTimeout controls how long a UDP session is kept alive without traffic before being ended.
func NewAccessLogger(udpTimeout time.Duration) *AccessLogger {
al := &AccessLogger{
sessions: make(map[string]*AccessSession),
udpSessions: make(map[udpSessionKey]*AccessSession),
completedSessions: make([]*AccessSession, 0),
udpTimeout: udpTimeout,
stopCh: make(chan struct{}),
flushDone: make(chan struct{}),
}
go al.backgroundLoop()
return al
}
// SetSendFunc sets the callback used to send compressed access log batches
// to the server. This can be called after construction once the websocket
// client is available.
func (al *AccessLogger) SetSendFunc(fn SendFunc) {
al.mu.Lock()
defer al.mu.Unlock()
al.sendFn = fn
}
// generateSessionID creates a random session identifier
func generateSessionID() string {
b := make([]byte, 8)
_, _ = rand.Read(b)
return hex.EncodeToString(b)
}
// StartTCPSession logs the start of a TCP session and returns a session ID.
func (al *AccessLogger) StartTCPSession(resourceID int, srcAddr, dstAddr string) string {
sessionID := generateSessionID()
now := time.Now()
session := &AccessSession{
SessionID: sessionID,
ResourceID: resourceID,
SourceAddr: srcAddr,
DestAddr: dstAddr,
Protocol: "tcp",
StartedAt: now,
}
al.mu.Lock()
al.sessions[sessionID] = session
al.mu.Unlock()
logger.Info("ACCESS START session=%s resource=%d proto=tcp src=%s dst=%s time=%s",
sessionID, resourceID, srcAddr, dstAddr, now.Format(time.RFC3339))
return sessionID
}
// EndTCPSession logs the end of a TCP session and queues it for sending.
func (al *AccessLogger) EndTCPSession(sessionID string) {
now := time.Now()
al.mu.Lock()
session, ok := al.sessions[sessionID]
if ok {
session.EndedAt = now
delete(al.sessions, sessionID)
al.completedSessions = append(al.completedSessions, session)
}
shouldFlush := len(al.completedSessions) >= maxBufferedSessions
al.mu.Unlock()
if ok {
duration := now.Sub(session.StartedAt)
logger.Info("ACCESS END session=%s resource=%d proto=tcp src=%s dst=%s started=%s ended=%s duration=%s",
sessionID, session.ResourceID, session.SourceAddr, session.DestAddr,
session.StartedAt.Format(time.RFC3339), now.Format(time.RFC3339), duration)
}
if shouldFlush {
al.flush()
}
}
// TrackUDPSession starts or returns an existing UDP session. Returns the session ID.
func (al *AccessLogger) TrackUDPSession(resourceID int, srcAddr, dstAddr string) string {
key := udpSessionKey{
srcAddr: srcAddr,
dstAddr: dstAddr,
protocol: "udp",
}
al.mu.Lock()
defer al.mu.Unlock()
if existing, ok := al.udpSessions[key]; ok {
return existing.SessionID
}
sessionID := generateSessionID()
now := time.Now()
session := &AccessSession{
SessionID: sessionID,
ResourceID: resourceID,
SourceAddr: srcAddr,
DestAddr: dstAddr,
Protocol: "udp",
StartedAt: now,
}
al.sessions[sessionID] = session
al.udpSessions[key] = session
logger.Info("ACCESS START session=%s resource=%d proto=udp src=%s dst=%s time=%s",
sessionID, resourceID, srcAddr, dstAddr, now.Format(time.RFC3339))
return sessionID
}
// EndUDPSession ends a UDP session and queues it for sending.
func (al *AccessLogger) EndUDPSession(sessionID string) {
now := time.Now()
al.mu.Lock()
session, ok := al.sessions[sessionID]
if ok {
session.EndedAt = now
delete(al.sessions, sessionID)
key := udpSessionKey{
srcAddr: session.SourceAddr,
dstAddr: session.DestAddr,
protocol: "udp",
}
delete(al.udpSessions, key)
al.completedSessions = append(al.completedSessions, session)
}
shouldFlush := len(al.completedSessions) >= maxBufferedSessions
al.mu.Unlock()
if ok {
duration := now.Sub(session.StartedAt)
logger.Info("ACCESS END session=%s resource=%d proto=udp src=%s dst=%s started=%s ended=%s duration=%s",
sessionID, session.ResourceID, session.SourceAddr, session.DestAddr,
session.StartedAt.Format(time.RFC3339), now.Format(time.RFC3339), duration)
}
if shouldFlush {
al.flush()
}
}
// backgroundLoop handles periodic flushing and stale session reaping.
func (al *AccessLogger) backgroundLoop() {
defer close(al.flushDone)
flushTicker := time.NewTicker(flushInterval)
defer flushTicker.Stop()
reapTicker := time.NewTicker(30 * time.Second)
defer reapTicker.Stop()
for {
select {
case <-al.stopCh:
return
case <-flushTicker.C:
al.flush()
case <-reapTicker.C:
al.reapStaleSessions()
}
}
}
// reapStaleSessions cleans up UDP sessions that were not properly ended.
func (al *AccessLogger) reapStaleSessions() {
al.mu.Lock()
defer al.mu.Unlock()
staleThreshold := time.Now().Add(-5 * time.Minute)
for key, session := range al.udpSessions {
if session.StartedAt.Before(staleThreshold) && session.EndedAt.IsZero() {
now := time.Now()
session.EndedAt = now
duration := now.Sub(session.StartedAt)
logger.Info("ACCESS END (reaped) session=%s resource=%d proto=udp src=%s dst=%s started=%s ended=%s duration=%s",
session.SessionID, session.ResourceID, session.SourceAddr, session.DestAddr,
session.StartedAt.Format(time.RFC3339), now.Format(time.RFC3339), duration)
al.completedSessions = append(al.completedSessions, session)
delete(al.sessions, session.SessionID)
delete(al.udpSessions, key)
}
}
}
// extractIP strips the port from an address string and returns just the IP.
// If the address has no port component it is returned as-is.
func extractIP(addr string) string {
host, _, err := net.SplitHostPort(addr)
if err != nil {
// Might already be a bare IP
return addr
}
return host
}
// consolidateSessions takes a slice of completed sessions and merges bursts of
// short-lived connections from the same source IP to the same destination into
// single higher-level session entries.
//
// The algorithm:
// 1. Group sessions by (sourceIP, destAddr, protocol, resourceID).
// 2. Within each group, sort by StartedAt.
// 3. Walk through the sorted list and merge consecutive sessions whose gap
// (previous EndedAt → next StartedAt) is ≤ sessionGapThreshold.
// 4. For merged sessions the earliest StartedAt and latest EndedAt are kept,
// bytes are summed, and ConnectionCount records how many raw connections
// were folded in. If the merged connections used more than one source port,
// SourceAddr is set to just the IP (port omitted).
// 5. Groups with fewer than minConnectionsToConsolidate members are passed
// through unmodified.
func consolidateSessions(sessions []*AccessSession) []*AccessSession {
if len(sessions) <= 1 {
return sessions
}
// Group sessions by consolidation key
groups := make(map[consolidationKey][]*AccessSession)
for _, s := range sessions {
key := consolidationKey{
sourceIP: extractIP(s.SourceAddr),
destAddr: s.DestAddr,
protocol: s.Protocol,
resourceID: s.ResourceID,
}
groups[key] = append(groups[key], s)
}
result := make([]*AccessSession, 0, len(sessions))
for key, group := range groups {
// Small groups don't need consolidation
if len(group) < minConnectionsToConsolidate {
result = append(result, group...)
continue
}
// Sort the group by start time so we can detect gaps
sort.Slice(group, func(i, j int) bool {
return group[i].StartedAt.Before(group[j].StartedAt)
})
// Walk through and merge runs that are within the gap threshold
var merged []*AccessSession
cur := cloneSession(group[0])
cur.ConnectionCount = 1
sourcePorts := make(map[string]struct{})
sourcePorts[cur.SourceAddr] = struct{}{}
for i := 1; i < len(group); i++ {
s := group[i]
// Determine the gap: from the latest end time we've seen so far to the
// start of the next connection.
gapRef := cur.EndedAt
if gapRef.IsZero() {
gapRef = cur.StartedAt
}
gap := s.StartedAt.Sub(gapRef)
if gap <= sessionGapThreshold {
// Merge into the current consolidated session
cur.ConnectionCount++
cur.BytesTx += s.BytesTx
cur.BytesRx += s.BytesRx
sourcePorts[s.SourceAddr] = struct{}{}
// Extend EndedAt to the latest time
endTime := s.EndedAt
if endTime.IsZero() {
endTime = s.StartedAt
}
if endTime.After(cur.EndedAt) {
cur.EndedAt = endTime
}
} else {
// Gap exceeded — finalize the current session and start a new one
finalizeMergedSourceAddr(cur, key.sourceIP, sourcePorts)
merged = append(merged, cur)
cur = cloneSession(s)
cur.ConnectionCount = 1
sourcePorts = make(map[string]struct{})
sourcePorts[s.SourceAddr] = struct{}{}
}
}
// Finalize the last accumulated session
finalizeMergedSourceAddr(cur, key.sourceIP, sourcePorts)
merged = append(merged, cur)
result = append(result, merged...)
}
return result
}
// cloneSession creates a shallow copy of an AccessSession.
func cloneSession(s *AccessSession) *AccessSession {
cp := *s
return &cp
}
// finalizeMergedSourceAddr sets the SourceAddr on a consolidated session.
// If multiple distinct source addresses (ports) were seen, the port is
// stripped and only the IP is kept so the log isn't misleading.
func finalizeMergedSourceAddr(s *AccessSession, sourceIP string, ports map[string]struct{}) {
if len(ports) > 1 {
// Multiple source ports — just report the IP
s.SourceAddr = sourceIP
}
// Otherwise keep the original SourceAddr which already has ip:port
}
// flush drains the completed sessions buffer, consolidates bursts of
// short-lived connections, compresses with zlib, and sends via the SendFunc.
func (al *AccessLogger) flush() {
al.mu.Lock()
if len(al.completedSessions) == 0 {
al.mu.Unlock()
return
}
batch := al.completedSessions
al.completedSessions = make([]*AccessSession, 0)
sendFn := al.sendFn
al.mu.Unlock()
if sendFn == nil {
logger.Debug("Access logger: no send function configured, discarding %d sessions", len(batch))
return
}
// Consolidate bursts of short-lived connections into higher-level sessions
originalCount := len(batch)
batch = consolidateSessions(batch)
if len(batch) != originalCount {
logger.Info("Access logger: consolidated %d raw connections into %d sessions", originalCount, len(batch))
}
compressed, err := compressSessions(batch)
if err != nil {
logger.Error("Access logger: failed to compress %d sessions: %v", len(batch), err)
return
}
if err := sendFn(compressed); err != nil {
logger.Error("Access logger: failed to send %d sessions: %v", len(batch), err)
// Re-queue the batch so we don't lose data
al.mu.Lock()
al.completedSessions = append(batch, al.completedSessions...)
// Cap re-queued data to prevent unbounded growth if server is unreachable
if len(al.completedSessions) > maxBufferedSessions*5 {
dropped := len(al.completedSessions) - maxBufferedSessions*5
al.completedSessions = al.completedSessions[:maxBufferedSessions*5]
logger.Warn("Access logger: buffer overflow, dropped %d oldest sessions", dropped)
}
al.mu.Unlock()
return
}
logger.Info("Access logger: sent %d sessions to server", len(batch))
}
// compressSessions JSON-encodes the sessions, compresses with zlib, and returns
// a base64-encoded string suitable for embedding in a JSON message.
func compressSessions(sessions []*AccessSession) (string, error) {
jsonData, err := json.Marshal(sessions)
if err != nil {
return "", err
}
var buf bytes.Buffer
w, err := zlib.NewWriterLevel(&buf, zlib.BestCompression)
if err != nil {
return "", err
}
if _, err := w.Write(jsonData); err != nil {
w.Close()
return "", err
}
if err := w.Close(); err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
}
// Close shuts down the background loop, ends all active sessions,
// and performs one final flush to send everything to the server.
func (al *AccessLogger) Close() {
// Signal the background loop to stop
select {
case <-al.stopCh:
// Already closed
return
default:
close(al.stopCh)
}
// Wait for the background loop to exit so we don't race on flush
<-al.flushDone
al.mu.Lock()
now := time.Now()
// End all active sessions and move them to the completed buffer
for _, session := range al.sessions {
if session.EndedAt.IsZero() {
session.EndedAt = now
duration := now.Sub(session.StartedAt)
logger.Info("ACCESS END (shutdown) session=%s resource=%d proto=%s src=%s dst=%s started=%s ended=%s duration=%s",
session.SessionID, session.ResourceID, session.Protocol, session.SourceAddr, session.DestAddr,
session.StartedAt.Format(time.RFC3339), now.Format(time.RFC3339), duration)
al.completedSessions = append(al.completedSessions, session)
}
}
al.sessions = make(map[string]*AccessSession)
al.udpSessions = make(map[udpSessionKey]*AccessSession)
al.mu.Unlock()
// Final flush to send all remaining sessions to the server
al.flush()
}


@@ -0,0 +1,811 @@
package netstack2
import (
"testing"
"time"
)
func TestExtractIP(t *testing.T) {
tests := []struct {
name string
addr string
expected string
}{
{"ipv4 with port", "192.168.1.1:12345", "192.168.1.1"},
{"ipv4 without port", "192.168.1.1", "192.168.1.1"},
{"ipv6 with port", "[::1]:12345", "::1"},
{"ipv6 without port", "::1", "::1"},
{"empty string", "", ""},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := extractIP(tt.addr)
if result != tt.expected {
t.Errorf("extractIP(%q) = %q, want %q", tt.addr, result, tt.expected)
}
})
}
}
func TestConsolidateSessions_Empty(t *testing.T) {
result := consolidateSessions(nil)
if result != nil {
t.Errorf("expected nil, got %v", result)
}
result = consolidateSessions([]*AccessSession{})
if len(result) != 0 {
t.Errorf("expected empty slice, got %d items", len(result))
}
}
func TestConsolidateSessions_SingleSession(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
{
SessionID: "abc123",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(1 * time.Second),
},
}
result := consolidateSessions(sessions)
if len(result) != 1 {
t.Fatalf("expected 1 session, got %d", len(result))
}
if result[0].SourceAddr != "10.0.0.1:5000" {
t.Errorf("expected source addr preserved, got %q", result[0].SourceAddr)
}
}
func TestConsolidateSessions_MergesBurstFromSameSourceIP(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
BytesTx: 100,
BytesRx: 200,
},
{
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(300 * time.Millisecond),
BytesTx: 150,
BytesRx: 250,
},
{
SessionID: "s3",
ResourceID: 1,
SourceAddr: "10.0.0.1:5002",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(400 * time.Millisecond),
EndedAt: now.Add(500 * time.Millisecond),
BytesTx: 50,
BytesRx: 75,
},
}
result := consolidateSessions(sessions)
if len(result) != 1 {
t.Fatalf("expected 1 consolidated session, got %d", len(result))
}
s := result[0]
if s.ConnectionCount != 3 {
t.Errorf("expected ConnectionCount=3, got %d", s.ConnectionCount)
}
if s.SourceAddr != "10.0.0.1" {
t.Errorf("expected source addr to be IP only (multiple ports), got %q", s.SourceAddr)
}
if s.DestAddr != "192.168.1.100:443" {
t.Errorf("expected dest addr preserved, got %q", s.DestAddr)
}
if s.StartedAt != now {
t.Errorf("expected StartedAt to be earliest time")
}
if s.EndedAt != now.Add(500*time.Millisecond) {
t.Errorf("expected EndedAt to be latest time")
}
expectedTx := int64(300)
expectedRx := int64(525)
if s.BytesTx != expectedTx {
t.Errorf("expected BytesTx=%d, got %d", expectedTx, s.BytesTx)
}
if s.BytesRx != expectedRx {
t.Errorf("expected BytesRx=%d, got %d", expectedRx, s.BytesRx)
}
}
func TestConsolidateSessions_SameSourcePortPreserved(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
},
{
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(300 * time.Millisecond),
},
}
result := consolidateSessions(sessions)
if len(result) != 1 {
t.Fatalf("expected 1 session, got %d", len(result))
}
if result[0].SourceAddr != "10.0.0.1:5000" {
t.Errorf("expected source addr with port preserved when all ports are the same, got %q", result[0].SourceAddr)
}
if result[0].ConnectionCount != 2 {
t.Errorf("expected ConnectionCount=2, got %d", result[0].ConnectionCount)
}
}
func TestConsolidateSessions_GapSplitsSessions(t *testing.T) {
now := time.Now()
// First burst
sessions := []*AccessSession{
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
},
{
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(300 * time.Millisecond),
},
// Big gap here (10 seconds)
{
SessionID: "s3",
ResourceID: 1,
SourceAddr: "10.0.0.1:5002",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(10 * time.Second),
EndedAt: now.Add(10*time.Second + 100*time.Millisecond),
},
{
SessionID: "s4",
ResourceID: 1,
SourceAddr: "10.0.0.1:5003",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(10*time.Second + 200*time.Millisecond),
EndedAt: now.Add(10*time.Second + 300*time.Millisecond),
},
}
result := consolidateSessions(sessions)
if len(result) != 2 {
t.Fatalf("expected 2 consolidated sessions (gap split), got %d", len(result))
}
// Find the sessions by their start time
var first, second *AccessSession
for _, s := range result {
if s.StartedAt.Equal(now) {
first = s
} else {
second = s
}
}
if first == nil || second == nil {
t.Fatal("could not find both consolidated sessions")
}
if first.ConnectionCount != 2 {
t.Errorf("first burst: expected ConnectionCount=2, got %d", first.ConnectionCount)
}
if second.ConnectionCount != 2 {
t.Errorf("second burst: expected ConnectionCount=2, got %d", second.ConnectionCount)
}
}
func TestConsolidateSessions_DifferentDestinationsNotMerged(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
},
{
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:8080",
Protocol: "tcp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(300 * time.Millisecond),
},
}
result := consolidateSessions(sessions)
// Each goes to a different dest port so they should not be merged
if len(result) != 2 {
t.Fatalf("expected 2 sessions (different destinations), got %d", len(result))
}
}
func TestConsolidateSessions_DifferentProtocolsNotMerged(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
},
{
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:443",
Protocol: "udp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(300 * time.Millisecond),
},
}
result := consolidateSessions(sessions)
if len(result) != 2 {
t.Fatalf("expected 2 sessions (different protocols), got %d", len(result))
}
}
func TestConsolidateSessions_DifferentResourceIDsNotMerged(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
},
{
SessionID: "s2",
ResourceID: 2,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(300 * time.Millisecond),
},
}
result := consolidateSessions(sessions)
if len(result) != 2 {
t.Fatalf("expected 2 sessions (different resource IDs), got %d", len(result))
}
}
func TestConsolidateSessions_DifferentSourceIPsNotMerged(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
},
{
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.2:5001",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(300 * time.Millisecond),
},
}
result := consolidateSessions(sessions)
if len(result) != 2 {
t.Fatalf("expected 2 sessions (different source IPs), got %d", len(result))
}
}
func TestConsolidateSessions_OutOfOrderInput(t *testing.T) {
now := time.Now()
// Provide sessions out of chronological order to verify sorting
sessions := []*AccessSession{
{
SessionID: "s3",
ResourceID: 1,
SourceAddr: "10.0.0.1:5002",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(400 * time.Millisecond),
EndedAt: now.Add(500 * time.Millisecond),
BytesTx: 30,
},
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
BytesTx: 10,
},
{
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(300 * time.Millisecond),
BytesTx: 20,
},
}
result := consolidateSessions(sessions)
if len(result) != 1 {
t.Fatalf("expected 1 consolidated session, got %d", len(result))
}
s := result[0]
if s.ConnectionCount != 3 {
t.Errorf("expected ConnectionCount=3, got %d", s.ConnectionCount)
}
if s.StartedAt != now {
t.Errorf("expected StartedAt to be earliest time")
}
if s.EndedAt != now.Add(500*time.Millisecond) {
t.Errorf("expected EndedAt to be latest time")
}
if s.BytesTx != 60 {
t.Errorf("expected BytesTx=60, got %d", s.BytesTx)
}
}
func TestConsolidateSessions_ExactlyAtGapThreshold(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
},
{
// Starts exactly sessionGapThreshold after s1 ends — should still merge
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(100*time.Millisecond + sessionGapThreshold),
EndedAt: now.Add(100*time.Millisecond + sessionGapThreshold + 50*time.Millisecond),
},
}
result := consolidateSessions(sessions)
if len(result) != 1 {
t.Fatalf("expected 1 session (gap exactly at threshold merges), got %d", len(result))
}
if result[0].ConnectionCount != 2 {
t.Errorf("expected ConnectionCount=2, got %d", result[0].ConnectionCount)
}
}
func TestConsolidateSessions_JustOverGapThreshold(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
},
{
// Starts 1ms over the gap threshold after s1 ends — should split
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(100*time.Millisecond + sessionGapThreshold + 1*time.Millisecond),
EndedAt: now.Add(100*time.Millisecond + sessionGapThreshold + 50*time.Millisecond),
},
}
result := consolidateSessions(sessions)
if len(result) != 2 {
t.Fatalf("expected 2 sessions (gap just over threshold splits), got %d", len(result))
}
}
func TestConsolidateSessions_UDPSessions(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
{
SessionID: "u1",
ResourceID: 5,
SourceAddr: "10.0.0.1:6000",
DestAddr: "192.168.1.100:53",
Protocol: "udp",
StartedAt: now,
EndedAt: now.Add(50 * time.Millisecond),
BytesTx: 64,
BytesRx: 512,
},
{
SessionID: "u2",
ResourceID: 5,
SourceAddr: "10.0.0.1:6001",
DestAddr: "192.168.1.100:53",
Protocol: "udp",
StartedAt: now.Add(100 * time.Millisecond),
EndedAt: now.Add(150 * time.Millisecond),
BytesTx: 64,
BytesRx: 256,
},
{
SessionID: "u3",
ResourceID: 5,
SourceAddr: "10.0.0.1:6002",
DestAddr: "192.168.1.100:53",
Protocol: "udp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(250 * time.Millisecond),
BytesTx: 64,
BytesRx: 128,
},
}
result := consolidateSessions(sessions)
if len(result) != 1 {
t.Fatalf("expected 1 consolidated UDP session, got %d", len(result))
}
s := result[0]
if s.Protocol != "udp" {
t.Errorf("expected protocol=udp, got %q", s.Protocol)
}
if s.ConnectionCount != 3 {
t.Errorf("expected ConnectionCount=3, got %d", s.ConnectionCount)
}
if s.SourceAddr != "10.0.0.1" {
t.Errorf("expected source addr to be IP only, got %q", s.SourceAddr)
}
if s.BytesTx != 192 {
t.Errorf("expected BytesTx=192, got %d", s.BytesTx)
}
if s.BytesRx != 896 {
t.Errorf("expected BytesRx=896, got %d", s.BytesRx)
}
}
func TestConsolidateSessions_MixedGroupsSomeConsolidatedSomeNot(t *testing.T) {
now := time.Now()
sessions := []*AccessSession{
// Group 1: 3 connections to :443 from same IP — should consolidate
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
},
{
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(300 * time.Millisecond),
},
{
SessionID: "s3",
ResourceID: 1,
SourceAddr: "10.0.0.1:5002",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(400 * time.Millisecond),
EndedAt: now.Add(500 * time.Millisecond),
},
// Group 2: 1 connection to :8080 from different IP — should pass through
{
SessionID: "s4",
ResourceID: 2,
SourceAddr: "10.0.0.2:6000",
DestAddr: "192.168.1.200:8080",
Protocol: "tcp",
StartedAt: now.Add(1 * time.Second),
EndedAt: now.Add(2 * time.Second),
},
}
result := consolidateSessions(sessions)
if len(result) != 2 {
t.Fatalf("expected 2 sessions total, got %d", len(result))
}
var consolidated, passthrough *AccessSession
for _, s := range result {
if s.ConnectionCount > 1 {
consolidated = s
} else {
passthrough = s
}
}
if consolidated == nil {
t.Fatal("expected a consolidated session")
}
if consolidated.ConnectionCount != 3 {
t.Errorf("consolidated: expected ConnectionCount=3, got %d", consolidated.ConnectionCount)
}
if passthrough == nil {
t.Fatal("expected a passthrough session")
}
if passthrough.SessionID != "s4" {
t.Errorf("passthrough: expected session s4, got %s", passthrough.SessionID)
}
}
func TestConsolidateSessions_OverlappingConnections(t *testing.T) {
now := time.Now()
// Connections that overlap in time (not sequential)
sessions := []*AccessSession{
{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(5 * time.Second),
BytesTx: 100,
},
{
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(1 * time.Second),
EndedAt: now.Add(3 * time.Second),
BytesTx: 200,
},
{
SessionID: "s3",
ResourceID: 1,
SourceAddr: "10.0.0.1:5002",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(2 * time.Second),
EndedAt: now.Add(6 * time.Second),
BytesTx: 300,
},
}
result := consolidateSessions(sessions)
if len(result) != 1 {
t.Fatalf("expected 1 consolidated session, got %d", len(result))
}
s := result[0]
if s.ConnectionCount != 3 {
t.Errorf("expected ConnectionCount=3, got %d", s.ConnectionCount)
}
if s.StartedAt != now {
t.Error("expected StartedAt to be earliest")
}
if s.EndedAt != now.Add(6*time.Second) {
t.Error("expected EndedAt to be the latest end time")
}
if s.BytesTx != 600 {
t.Errorf("expected BytesTx=600, got %d", s.BytesTx)
}
}
func TestConsolidateSessions_DoesNotMutateOriginals(t *testing.T) {
now := time.Now()
s1 := &AccessSession{
SessionID: "s1",
ResourceID: 1,
SourceAddr: "10.0.0.1:5000",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now,
EndedAt: now.Add(100 * time.Millisecond),
BytesTx: 100,
}
s2 := &AccessSession{
SessionID: "s2",
ResourceID: 1,
SourceAddr: "10.0.0.1:5001",
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(200 * time.Millisecond),
EndedAt: now.Add(300 * time.Millisecond),
BytesTx: 200,
}
// Save original values
origS1Addr := s1.SourceAddr
origS1Bytes := s1.BytesTx
origS2Addr := s2.SourceAddr
origS2Bytes := s2.BytesTx
_ = consolidateSessions([]*AccessSession{s1, s2})
if s1.SourceAddr != origS1Addr {
t.Errorf("s1.SourceAddr was mutated: %q -> %q", origS1Addr, s1.SourceAddr)
}
if s1.BytesTx != origS1Bytes {
t.Errorf("s1.BytesTx was mutated: %d -> %d", origS1Bytes, s1.BytesTx)
}
if s2.SourceAddr != origS2Addr {
t.Errorf("s2.SourceAddr was mutated: %q -> %q", origS2Addr, s2.SourceAddr)
}
if s2.BytesTx != origS2Bytes {
t.Errorf("s2.BytesTx was mutated: %d -> %d", origS2Bytes, s2.BytesTx)
}
}
func TestConsolidateSessions_ThreeBurstsWithGaps(t *testing.T) {
now := time.Now()
sessions := make([]*AccessSession, 0, 9)
// Burst 1: 3 connections at t=0
for i := 0; i < 3; i++ {
sessions = append(sessions, &AccessSession{
SessionID: generateSessionID(),
ResourceID: 1,
SourceAddr: "10.0.0.1:" + string(rune('A'+i)),
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(time.Duration(i*100) * time.Millisecond),
EndedAt: now.Add(time.Duration(i*100+50) * time.Millisecond),
})
}
// Burst 2: 3 connections at t=20s (well past the 5s gap)
for i := 0; i < 3; i++ {
sessions = append(sessions, &AccessSession{
SessionID: generateSessionID(),
ResourceID: 1,
SourceAddr: "10.0.0.1:" + string(rune('D'+i)),
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(20*time.Second + time.Duration(i*100)*time.Millisecond),
EndedAt: now.Add(20*time.Second + time.Duration(i*100+50)*time.Millisecond),
})
}
// Burst 3: 3 connections at t=40s
for i := 0; i < 3; i++ {
sessions = append(sessions, &AccessSession{
SessionID: generateSessionID(),
ResourceID: 1,
SourceAddr: "10.0.0.1:" + string(rune('G'+i)),
DestAddr: "192.168.1.100:443",
Protocol: "tcp",
StartedAt: now.Add(40*time.Second + time.Duration(i*100)*time.Millisecond),
EndedAt: now.Add(40*time.Second + time.Duration(i*100+50)*time.Millisecond),
})
}
result := consolidateSessions(sessions)
if len(result) != 3 {
t.Fatalf("expected 3 consolidated sessions (3 bursts), got %d", len(result))
}
for _, s := range result {
if s.ConnectionCount != 3 {
t.Errorf("expected each burst to have ConnectionCount=3, got %d (started=%v)", s.ConnectionCount, s.StartedAt)
}
}
}
func TestFinalizeMergedSourceAddr(t *testing.T) {
s := &AccessSession{SourceAddr: "10.0.0.1:5000"}
ports := map[string]struct{}{"10.0.0.1:5000": {}}
finalizeMergedSourceAddr(s, "10.0.0.1", ports)
if s.SourceAddr != "10.0.0.1:5000" {
t.Errorf("single port: expected addr preserved, got %q", s.SourceAddr)
}
s2 := &AccessSession{SourceAddr: "10.0.0.1:5000"}
ports2 := map[string]struct{}{"10.0.0.1:5000": {}, "10.0.0.1:5001": {}}
finalizeMergedSourceAddr(s2, "10.0.0.1", ports2)
if s2.SourceAddr != "10.0.0.1" {
t.Errorf("multiple ports: expected IP only, got %q", s2.SourceAddr)
}
}
func TestCloneSession(t *testing.T) {
original := &AccessSession{
SessionID: "test",
ResourceID: 42,
SourceAddr: "1.2.3.4:100",
DestAddr: "5.6.7.8:443",
Protocol: "tcp",
BytesTx: 999,
}
clone := cloneSession(original)
if clone == original {
t.Error("clone should be a different pointer")
}
if clone.SessionID != original.SessionID {
t.Error("clone should have same SessionID")
}
// Mutating clone should not affect original
clone.BytesTx = 0
clone.SourceAddr = "changed"
if original.BytesTx != 999 {
t.Error("mutating clone affected original BytesTx")
}
if original.SourceAddr != "1.2.3.4:100" {
t.Error("mutating clone affected original SourceAddr")
}
}
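Collectively, the "NotMerged" tests imply the consolidation bucket: resource ID, source IP with the ephemeral port stripped, destination address, and protocol must all match before two sessions are even candidates for merging. A minimal sketch of such a key (groupKey is a hypothetical name; the real grouping logic lives in consolidateSessions):

```go
package main

import (
	"fmt"
	"net"
)

// groupKey builds the bucket key the consolidation tests imply: sessions are
// only candidates for merging when resource ID, source IP (port stripped),
// destination address, and protocol all match.
func groupKey(resourceID int, sourceAddr, destAddr, protocol string) string {
	ip := sourceAddr
	if host, _, err := net.SplitHostPort(sourceAddr); err == nil {
		ip = host // drop the ephemeral source port
	}
	return fmt.Sprintf("%d|%s|%s|%s", resourceID, ip, destAddr, protocol)
}

func main() {
	// Same IP, different source ports -> same bucket (eligible to merge).
	fmt.Println(groupKey(1, "10.0.0.1:5000", "192.168.1.100:443", "tcp") ==
		groupKey(1, "10.0.0.1:5001", "192.168.1.100:443", "tcp")) // true
	// Different protocol -> different bucket (never merged).
	fmt.Println(groupKey(1, "10.0.0.1:5000", "192.168.1.100:443", "tcp") ==
		groupKey(1, "10.0.0.1:5001", "192.168.1.100:443", "udp")) // false
}
```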


@@ -137,14 +137,33 @@ func (h *TCPHandler) InstallTCPHandler() error {
// handleTCPConn handles a TCP connection by proxying it to the actual target
func (h *TCPHandler) handleTCPConn(netstackConn *gonet.TCPConn, id stack.TransportEndpointID) {
defer netstackConn.Close()
// Extract source and target address from the connection ID
// Extract source and target address from the connection ID first so they
// are available for HTTP routing before any defer is set up.
srcIP := id.RemoteAddress.String()
srcPort := id.RemotePort
dstIP := id.LocalAddress.String()
dstPort := id.LocalPort
// For HTTP/HTTPS ports, look up the matching subnet rule. If the rule has
// Protocol configured, hand the connection off to the HTTP handler which
// takes full ownership of the lifecycle (the defer close must not be
// installed before this point).
if (dstPort == 80 || dstPort == 443) && h.proxyHandler != nil && h.proxyHandler.httpHandler != nil {
srcAddr, _ := netip.ParseAddr(srcIP)
dstAddr, _ := netip.ParseAddr(dstIP)
rule := h.proxyHandler.subnetLookup.Match(srcAddr, dstAddr, dstPort, tcp.ProtocolNumber)
if rule != nil && rule.Protocol != "" && len(rule.HTTPTargets) > 0 {
logger.Info("TCP Forwarder: Routing %s:%d -> %s:%d to HTTP handler (%s)",
srcIP, srcPort, dstIP, dstPort, rule.Protocol)
h.proxyHandler.httpHandler.HandleConn(netstackConn, rule)
return
}
// Otherwise fall through to raw TCP forwarding (e.g. CIDR resources
// that happen to use port 80/443 without HTTP configuration).
}
defer netstackConn.Close()
logger.Info("TCP Forwarder: Handling connection %s:%d -> %s:%d", srcIP, srcPort, dstIP, dstPort)
// Check if there's a destination rewrite for this connection (e.g., localhost targets)
@@ -158,6 +177,18 @@ func (h *TCPHandler) handleTCPConn(netstackConn *gonet.TCPConn, id stack.Transpo
targetAddr := fmt.Sprintf("%s:%d", actualDstIP, dstPort)
// Look up resource ID and start access session if applicable
var accessSessionID string
if h.proxyHandler != nil {
resourceId := h.proxyHandler.LookupResourceId(srcIP, dstIP, dstPort, uint8(tcp.ProtocolNumber))
if resourceId != 0 {
if al := h.proxyHandler.GetAccessLogger(); al != nil {
srcAddr := fmt.Sprintf("%s:%d", srcIP, srcPort)
accessSessionID = al.StartTCPSession(resourceId, srcAddr, targetAddr)
}
}
}
// Create context with timeout for connection establishment
ctx, cancel := context.WithTimeout(context.Background(), tcpConnectTimeout)
defer cancel()
@@ -167,11 +198,26 @@ func (h *TCPHandler) handleTCPConn(netstackConn *gonet.TCPConn, id stack.Transpo
targetConn, err := d.DialContext(ctx, "tcp", targetAddr)
if err != nil {
logger.Info("TCP Forwarder: Failed to connect to %s: %v", targetAddr, err)
// End access session on connection failure
if accessSessionID != "" {
if al := h.proxyHandler.GetAccessLogger(); al != nil {
al.EndTCPSession(accessSessionID)
}
}
// Connection failed, netstack will handle RST
return
}
defer targetConn.Close()
// End access session when connection closes
if accessSessionID != "" {
defer func() {
if al := h.proxyHandler.GetAccessLogger(); al != nil {
al.EndTCPSession(accessSessionID)
}
}()
}
logger.Info("TCP Forwarder: Successfully connected to %s, starting bidirectional copy", targetAddr)
// Bidirectional copy between netstack and target
@@ -280,6 +326,27 @@ func (h *UDPHandler) handleUDPConn(netstackConn *gonet.UDPConn, id stack.Transpo
targetAddr := fmt.Sprintf("%s:%d", actualDstIP, dstPort)
// Look up resource ID and start access session if applicable
var accessSessionID string
if h.proxyHandler != nil {
resourceId := h.proxyHandler.LookupResourceId(srcIP, dstIP, dstPort, uint8(udp.ProtocolNumber))
if resourceId != 0 {
if al := h.proxyHandler.GetAccessLogger(); al != nil {
srcAddr := fmt.Sprintf("%s:%d", srcIP, srcPort)
accessSessionID = al.TrackUDPSession(resourceId, srcAddr, targetAddr)
}
}
}
// End access session when UDP handler returns (timeout or error)
if accessSessionID != "" {
defer func() {
if al := h.proxyHandler.GetAccessLogger(); al != nil {
al.EndUDPSession(accessSessionID)
}
}()
}
// Resolve target address
remoteUDPAddr, err := net.ResolveUDPAddr("udp", targetAddr)
if err != nil {

netstack2/http_handler.go (new file, 396 lines)

@@ -0,0 +1,396 @@
/* SPDX-License-Identifier: MIT
*
* Copyright (C) 2017-2025 WireGuard LLC. All Rights Reserved.
*/
package netstack2
import (
"bufio"
"context"
"crypto/tls"
"errors"
"fmt"
"net"
"net/http"
"net/http/httputil"
"net/url"
"sync"
"time"
"github.com/fosrl/newt/logger"
"gvisor.dev/gvisor/pkg/tcpip/stack"
)
// ---------------------------------------------------------------------------
// HTTPTarget
// ---------------------------------------------------------------------------
// HTTPTarget describes a single downstream HTTP or HTTPS service that the
// proxy should forward requests to.
type HTTPTarget struct {
DestAddr string `json:"destAddr"` // IP address or hostname of the downstream service
DestPort uint16 `json:"destPort"` // TCP port of the downstream service
Scheme string `json:"scheme"` // "http" or "https": scheme used for the outbound leg
}
// ---------------------------------------------------------------------------
// HTTPHandler
// ---------------------------------------------------------------------------
// HTTPHandler intercepts TCP connections from the netstack forwarder on ports
// 80 and 443 and services them as HTTP or HTTPS, reverse-proxying each request
// to downstream targets specified by the matching SubnetRule.
//
// HTTP and raw TCP are fully separate: a connection is only routed here when
// its SubnetRule has Protocol set ("http" or "https"). All other connections
// on those ports fall through to the normal raw-TCP path.
//
// Incoming TLS termination (Protocol == "https") is performed per-connection
// using the certificate and key stored in the rule, so different subnet rules
// can present different certificates without sharing any state.
//
// Outbound connections to downstream targets honour HTTPTarget.Scheme
// independently of the incoming protocol.
type HTTPHandler struct {
stack *stack.Stack
proxyHandler *ProxyHandler
requestLogger *HTTPRequestLogger
listener *chanListener
server *http.Server
// proxyCache holds pre-built *httputil.ReverseProxy values keyed by the
// canonical target URL string ("scheme://host:port"). Building a proxy is
// cheap, but reusing one preserves the underlying http.Transport connection
// pool, which matters for throughput.
proxyCache sync.Map // map[string]*httputil.ReverseProxy
// tlsCache holds pre-parsed *tls.Config values keyed by the concatenation
// of the PEM certificate and key. Parsing a keypair is relatively expensive
// and the same cert is likely reused across many connections.
tlsCache sync.Map // map[string]*tls.Config
}
// ---------------------------------------------------------------------------
// chanListener net.Listener backed by a channel
// ---------------------------------------------------------------------------
// chanListener implements net.Listener by receiving net.Conn values over a
// buffered channel. This lets the netstack TCP forwarder hand off connections
// directly to a running http.Server without any real OS socket.
type chanListener struct {
connCh chan net.Conn
closed chan struct{}
once sync.Once
}
func newChanListener() *chanListener {
return &chanListener{
connCh: make(chan net.Conn, 128),
closed: make(chan struct{}),
}
}
// Accept blocks until a connection is available or the listener is closed.
func (l *chanListener) Accept() (net.Conn, error) {
select {
case conn, ok := <-l.connCh:
if !ok {
return nil, net.ErrClosed
}
return conn, nil
case <-l.closed:
return nil, net.ErrClosed
}
}
// Close shuts down the listener; subsequent Accept calls return net.ErrClosed.
func (l *chanListener) Close() error {
l.once.Do(func() { close(l.closed) })
return nil
}
// Addr returns a placeholder address (the listener has no real OS socket).
func (l *chanListener) Addr() net.Addr {
return &net.TCPAddr{}
}
// send delivers conn to the listener. Returns false if the listener is already
// closed, in which case the caller is responsible for closing conn.
func (l *chanListener) send(conn net.Conn) bool {
select {
case l.connCh <- conn:
return true
case <-l.closed:
return false
}
}
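The handoff path this listener enables can be exercised end-to-end without any OS socket: pair a channel-backed listener with net.Pipe and a plain http.Server. A self-contained sketch (re-declaring a cut-down listener inline rather than using this package's chanListener):

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"net/http"
)

// pipeListener is a cut-down sketch of the channel-backed listener idea:
// Accept simply receives conns that a producer pushes into the channel.
type pipeListener struct{ ch chan net.Conn }

func (l *pipeListener) Accept() (net.Conn, error) {
	c, ok := <-l.ch
	if !ok {
		return nil, net.ErrClosed
	}
	return c, nil
}
func (l *pipeListener) Close() error   { close(l.ch); return nil }
func (l *pipeListener) Addr() net.Addr { return &net.TCPAddr{} }

// roundTrip serves one in-memory connection through a plain http.Server and
// returns the status code the client observes. No OS socket is involved.
func roundTrip() int {
	l := &pipeListener{ch: make(chan net.Conn, 1)}
	srv := &http.Server{Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ok")
	})}
	go srv.Serve(l)
	defer srv.Close()

	client, server := net.Pipe() // synchronous in-memory conn pair
	l.ch <- server               // hand the server side to the listener

	req, _ := http.NewRequest("GET", "http://example/", nil)
	go req.Write(client) // writes block until the server reads
	resp, err := http.ReadResponse(bufio.NewReader(client), req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	return resp.StatusCode
}

func main() {
	fmt.Println(roundTrip()) // 200
}
```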
// ---------------------------------------------------------------------------
// httpConnCtx conn wrapper that carries a SubnetRule through the listener
// ---------------------------------------------------------------------------
// httpConnCtx wraps a net.Conn so the matching SubnetRule can be passed
// through the chanListener into the http.Server's ConnContext callback,
// making it available to request handlers via the request context.
type httpConnCtx struct {
net.Conn
rule *SubnetRule
}
// connCtxKey is the unexported context key used to store a *SubnetRule on the
// per-connection context created by http.Server.ConnContext.
type connCtxKey struct{}
// ---------------------------------------------------------------------------
// Constructor and lifecycle
// ---------------------------------------------------------------------------
// NewHTTPHandler creates an HTTPHandler attached to the given stack and
// ProxyHandler. Call Start to begin serving connections.
func NewHTTPHandler(s *stack.Stack, ph *ProxyHandler) *HTTPHandler {
return &HTTPHandler{
stack: s,
proxyHandler: ph,
}
}
// SetRequestLogger attaches an HTTPRequestLogger so that every proxied request
// is recorded and periodically shipped to the server.
func (h *HTTPHandler) SetRequestLogger(rl *HTTPRequestLogger) {
h.requestLogger = rl
}
// Start launches the internal http.Server that services connections delivered
// via HandleConn. The server runs for the lifetime of the HTTPHandler; call
// Close to stop it.
func (h *HTTPHandler) Start() error {
h.listener = newChanListener()
h.server = &http.Server{
Handler: http.HandlerFunc(h.handleRequest),
// ConnContext runs once per accepted connection and attaches the
// SubnetRule carried by httpConnCtx to the connection's context so
// that handleRequest can retrieve it without any global state. For
// HTTPS rules the accepted conn is a *tls.Conn whose underlying conn
// is the httpConnCtx, so unwrap it first via NetConn.
ConnContext: func(ctx context.Context, c net.Conn) context.Context {
if tc, ok := c.(*tls.Conn); ok {
c = tc.NetConn()
}
if cc, ok := c.(*httpConnCtx); ok {
return context.WithValue(ctx, connCtxKey{}, cc.rule)
}
return ctx
},
}
go func() {
if err := h.server.Serve(h.listener); err != nil && err != http.ErrServerClosed {
logger.Error("HTTP handler: server exited unexpectedly: %v", err)
}
}()
logger.Debug("HTTP handler: ready — routing determined per SubnetRule on ports 80/443")
return nil
}
// HandleConn accepts a TCP connection from the netstack forwarder together
// with the SubnetRule that matched it. The HTTP handler takes full ownership
// of the connection's lifecycle; the caller must NOT close conn after this call.
//
// When rule.Protocol is "https", TLS termination is performed on conn using
// the certificate and key stored in rule.TLSCert and rule.TLSKey before the
// connection is passed to the HTTP server. The rule-carrying wrapper sits
// beneath the TLS layer so the server accepts a concrete *tls.Conn: net/http
// only populates r.TLS when the accepted conn's concrete type is *tls.Conn,
// and the redirect check in handleRequest relies on that field.
func (h *HTTPHandler) HandleConn(conn net.Conn, rule *SubnetRule) {
// Attach the rule first so the wrapper sits under any TLS layer.
var effectiveConn net.Conn = &httpConnCtx{Conn: conn, rule: rule}
if rule.Protocol == "https" {
tlsCfg, err := h.getTLSConfig(rule)
if err != nil {
logger.Error("HTTP handler: cannot build TLS config for connection from %s: %v",
conn.RemoteAddr(), err)
conn.Close()
return
}
// tls.Server wraps the rule-carrying conn; the TLS handshake is deferred
// until the first Read, which the http.Server will trigger naturally.
effectiveConn = tls.Server(effectiveConn, tlsCfg)
}
if !h.listener.send(effectiveConn) {
// Listener is already closed — clean up the orphaned connection.
effectiveConn.Close()
}
}
// Close gracefully shuts down the HTTP server and the underlying channel
// listener, causing the goroutine started in Start to exit.
func (h *HTTPHandler) Close() error {
if h.server != nil {
if err := h.server.Close(); err != nil {
return err
}
}
if h.listener != nil {
h.listener.Close()
}
return nil
}
// ---------------------------------------------------------------------------
// Internal helpers
// ---------------------------------------------------------------------------
// getTLSConfig returns a *tls.Config for the cert/key pair in rule, using a
// cache to avoid re-parsing the same keypair on every connection.
// The cache key is the concatenation of the PEM cert and key strings, so
// different rules that happen to share the same material hit the same entry.
func (h *HTTPHandler) getTLSConfig(rule *SubnetRule) (*tls.Config, error) {
cacheKey := rule.TLSCert + "|" + rule.TLSKey
if v, ok := h.tlsCache.Load(cacheKey); ok {
return v.(*tls.Config), nil
}
cert, err := tls.X509KeyPair([]byte(rule.TLSCert), []byte(rule.TLSKey))
if err != nil {
return nil, fmt.Errorf("failed to parse TLS keypair: %w", err)
}
cfg := &tls.Config{
Certificates: []tls.Certificate{cert},
}
// LoadOrStore is safe under concurrent calls: if two goroutines race here
// both will produce a valid config; the loser's work is discarded.
actual, _ := h.tlsCache.LoadOrStore(cacheKey, cfg)
return actual.(*tls.Config), nil
}
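The LoadOrStore idiom above tolerates racing builders: several goroutines may each parse the same keypair, but only one result wins the cache slot and every caller converges on it. A stripped-down illustration of the same pattern (hypothetical names, int values standing in for *tls.Config):

```go
package main

import (
	"fmt"
	"sync"
)

// cache demonstrates the race-tolerant idiom getTLSConfig uses: build the
// value optimistically, then let LoadOrStore pick a single winner.
type cache struct{ m sync.Map }

func (c *cache) get(key string, build func() int) int {
	if v, ok := c.m.Load(key); ok {
		return v.(int) // fast path: no build at all
	}
	v := build() // may run concurrently in several goroutines
	actual, _ := c.m.LoadOrStore(key, v)
	return actual.(int) // everyone converges on the stored value
}

func main() {
	var c cache
	calls := 0
	build := func() int { calls++; return 42 }
	fmt.Println(c.get("k", build), c.get("k", build), calls) // 42 42 1
}
```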
// getProxy returns a cached *httputil.ReverseProxy for the given target,
// creating one on first use. Reusing the proxy preserves its http.Transport
// connection pool, avoiding repeated TCP/TLS handshakes to the downstream.
func (h *HTTPHandler) getProxy(target HTTPTarget) *httputil.ReverseProxy {
scheme := target.Scheme
cacheKey := fmt.Sprintf("%s://%s:%d", scheme, target.DestAddr, target.DestPort)
if v, ok := h.proxyCache.Load(cacheKey); ok {
return v.(*httputil.ReverseProxy)
}
targetURL := &url.URL{
Scheme: scheme,
Host: fmt.Sprintf("%s:%d", target.DestAddr, target.DestPort),
}
var transport http.RoundTripper = http.DefaultTransport
if target.Scheme == "https" {
// Allow self-signed certificates on downstream HTTPS targets.
transport = &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true, //nolint:gosec // downstream self-signed certs are a supported configuration
},
}
}
proxy := &httputil.ReverseProxy{
Rewrite: func(pr *httputil.ProxyRequest) {
pr.SetURL(targetURL)
// SetXForwarded sets X-Forwarded-For from the inbound request's
// RemoteAddr (the WireGuard/netstack client address), along with
// X-Forwarded-Host and X-Forwarded-Proto. Using Rewrite instead of
// Director means the proxy does not append its own automatic
// X-Forwarded-For entry, so the header is set exactly once.
pr.SetXForwarded()
},
Transport: transport,
}
proxy.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
logger.Error("HTTP handler: upstream error (%s %s -> %s): %v",
r.Method, r.URL.RequestURI(), cacheKey, err)
http.Error(w, "Bad Gateway", http.StatusBadGateway)
}
actual, _ := h.proxyCache.LoadOrStore(cacheKey, proxy)
return actual.(*httputil.ReverseProxy)
}
// statusCapture wraps an http.ResponseWriter and records the HTTP status code
// written by the upstream handler. If WriteHeader is never called the status
// defaults to 200 (http.StatusOK), matching net/http semantics.
type statusCapture struct {
http.ResponseWriter
status int
}
func (sc *statusCapture) WriteHeader(code int) {
sc.status = code
sc.ResponseWriter.WriteHeader(code)
}
func (sc *statusCapture) Unwrap() http.ResponseWriter {
return sc.ResponseWriter
}
func (sc *statusCapture) Flush() {
if flusher, ok := sc.ResponseWriter.(http.Flusher); ok {
flusher.Flush()
}
}
func (sc *statusCapture) Hijack() (net.Conn, *bufio.ReadWriter, error) {
hijacker, ok := sc.ResponseWriter.(http.Hijacker)
if !ok {
return nil, nil, errors.New("underlying response writer does not support hijacking")
}
return hijacker.Hijack()
}
// handleRequest is the http.Handler entry point. It retrieves the SubnetRule
// attached to the connection by ConnContext, selects the first configured
// downstream target, and forwards the request via the cached ReverseProxy.
//
// TODO: add host/path-based routing across multiple HTTPTargets once the
// configuration model evolves beyond a single target per rule.
func (h *HTTPHandler) handleRequest(w http.ResponseWriter, r *http.Request) {
rule, _ := r.Context().Value(connCtxKey{}).(*SubnetRule)
if rule == nil || len(rule.HTTPTargets) == 0 {
logger.Error("HTTP handler: no downstream targets for request %s %s", r.Method, r.URL.RequestURI())
http.Error(w, "no targets configured", http.StatusBadGateway)
return
}
// If the rule is HTTPS and a TLS certificate is configured, but the
// incoming request arrived over plain HTTP, redirect to HTTPS.
if rule.Protocol == "https" && rule.TLSCert != "" && rule.TLSKey != "" && r.TLS == nil {
host := r.Host
if host == "" {
host = r.URL.Host
}
httpsURL := "https://" + host + r.RequestURI
logger.Info("HTTP handler: redirecting %s %s -> %s (TLS cert present)", r.Method, r.URL.RequestURI(), httpsURL)
http.Redirect(w, r, httpsURL, http.StatusPermanentRedirect)
return
}
target := rule.HTTPTargets[0]
scheme := target.Scheme
logger.Info("HTTP handler: %s %s -> %s://%s:%d",
r.Method, r.URL.RequestURI(), scheme, target.DestAddr, target.DestPort)
timestamp := time.Now()
sc := &statusCapture{ResponseWriter: w, status: http.StatusOK}
h.getProxy(target).ServeHTTP(sc, r)
if h.requestLogger != nil && rule.ResourceId != 0 {
h.requestLogger.LogRequest(HTTPRequestLog{
ResourceID: rule.ResourceId,
Timestamp: timestamp,
Method: r.Method,
Scheme: rule.Protocol,
Host: r.Host,
Path: r.URL.Path,
RawQuery: r.URL.RawQuery,
UserAgent: r.UserAgent(),
SourceAddr: r.RemoteAddr,
TLS: rule.Protocol == "https",
})
}
}
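The handler relies on `http.Server.ConnContext` to attach the matched rule before any request is parsed. A minimal sketch of that wiring, with a placeholder `demoRule` type standing in for the package's real `*SubnetRule`:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
)

// connCtxKey and demoRule are placeholders for the package's real
// connCtxKey{} and *SubnetRule.
type connCtxKey struct{}

type demoRule struct{ name string }

// runDemo starts a server whose ConnContext stamps every accepted
// connection with a rule, then shows the handler reading it back.
func runDemo() (string, error) {
	ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rule, _ := r.Context().Value(connCtxKey{}).(*demoRule)
		fmt.Fprint(w, rule.name)
	}))
	// ConnContext runs once per accepted net.Conn; every request served on
	// that connection inherits the returned context.
	ts.Config.ConnContext = func(ctx context.Context, c net.Conn) context.Context {
		return context.WithValue(ctx, connCtxKey{}, &demoRule{name: "demo-rule"})
	}
	ts.Start()
	defer ts.Close()

	resp, err := http.Get(ts.URL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	body, err := runDemo()
	if err != nil {
		panic(err)
	}
	fmt.Println(body)
}
```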


@@ -0,0 +1,97 @@
package netstack2
import (
"context"
"net"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"github.com/gorilla/websocket"
)
func TestHTTPHandlerProxiesWebSocketUpgrade(t *testing.T) {
upgrader := websocket.Upgrader{}
backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
conn, err := upgrader.Upgrade(w, r, nil)
if err != nil {
t.Errorf("upgrade failed: %v", err)
return
}
defer conn.Close()
messageType, payload, err := conn.ReadMessage()
if err != nil {
t.Errorf("read failed: %v", err)
return
}
if err := conn.WriteMessage(messageType, append([]byte("echo:"), payload...)); err != nil {
t.Errorf("write failed: %v", err)
}
}))
defer backend.Close()
backendURL, err := url.Parse(backend.URL)
if err != nil {
t.Fatalf("parse backend URL: %v", err)
}
backendHost, backendPort, err := net.SplitHostPort(backendURL.Host)
if err != nil {
t.Fatalf("split backend host: %v", err)
}
port, err := net.LookupPort("tcp", backendPort)
if err != nil {
t.Fatalf("parse backend port: %v", err)
}
handler := NewHTTPHandler(nil, nil)
rule := &SubnetRule{
Protocol: "http",
HTTPTargets: []HTTPTarget{
{
DestAddr: backendHost,
DestPort: uint16(port),
Scheme: backendURL.Scheme,
},
},
}
frontend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := context.WithValue(r.Context(), connCtxKey{}, rule)
handler.handleRequest(w, r.WithContext(ctx))
}))
defer frontend.Close()
frontendURL, err := url.Parse(frontend.URL)
if err != nil {
t.Fatalf("parse frontend URL: %v", err)
}
wsURL := url.URL{
Scheme: "ws",
Host: frontendURL.Host,
Path: "/socket",
RawQuery: "token=test",
}
conn, _, err := websocket.DefaultDialer.Dial(wsURL.String(), nil)
if err != nil {
t.Fatalf("dial websocket through proxy: %v", err)
}
defer conn.Close()
if err := conn.WriteMessage(websocket.TextMessage, []byte("hello")); err != nil {
t.Fatalf("write websocket message: %v", err)
}
messageType, payload, err := conn.ReadMessage()
if err != nil {
t.Fatalf("read websocket message: %v", err)
}
if messageType != websocket.TextMessage {
t.Fatalf("message type = %d, want %d", messageType, websocket.TextMessage)
}
if got, want := string(payload), "echo:hello"; got != want {
t.Fatalf("payload = %q, want %q", got, want)
}
}


@@ -0,0 +1,175 @@
package netstack2
import (
"bytes"
"compress/zlib"
"encoding/base64"
"encoding/json"
"sync"
"time"
"github.com/fosrl/newt/logger"
)
// HTTPRequestLog represents a single HTTP/HTTPS request proxied through the handler.
type HTTPRequestLog struct {
RequestID string `json:"requestId"`
ResourceID int `json:"resourceId"`
Timestamp time.Time `json:"timestamp"`
Method string `json:"method"`
Scheme string `json:"scheme"`
Host string `json:"host"`
Path string `json:"path"`
RawQuery string `json:"rawQuery,omitempty"`
UserAgent string `json:"userAgent,omitempty"`
SourceAddr string `json:"sourceAddr"`
TLS bool `json:"tls"`
}
// HTTPRequestLogger buffers HTTP request logs and periodically flushes them
// to the server via a configurable SendFunc.
type HTTPRequestLogger struct {
mu sync.Mutex
pending []HTTPRequestLog
sendFn SendFunc
stopCh chan struct{}
flushDone chan struct{}
}
// NewHTTPRequestLogger creates a new HTTPRequestLogger and starts its background flush loop.
func NewHTTPRequestLogger() *HTTPRequestLogger {
rl := &HTTPRequestLogger{
pending: make([]HTTPRequestLog, 0),
stopCh: make(chan struct{}),
flushDone: make(chan struct{}),
}
go rl.backgroundLoop()
return rl
}
// SetSendFunc sets the callback used to send compressed HTTP request log batches
// to the server. This can be called after construction once the websocket
// client is available.
func (rl *HTTPRequestLogger) SetSendFunc(fn SendFunc) {
rl.mu.Lock()
defer rl.mu.Unlock()
rl.sendFn = fn
}
// LogRequest adds an HTTP request log entry to the buffer. If the buffer
// reaches maxBufferedSessions entries a flush is triggered immediately.
func (rl *HTTPRequestLogger) LogRequest(log HTTPRequestLog) {
if log.RequestID == "" {
log.RequestID = generateSessionID()
}
rl.mu.Lock()
rl.pending = append(rl.pending, log)
shouldFlush := len(rl.pending) >= maxBufferedSessions
rl.mu.Unlock()
if shouldFlush {
rl.flush()
}
}
// backgroundLoop handles periodic flushing of buffered request logs.
func (rl *HTTPRequestLogger) backgroundLoop() {
defer close(rl.flushDone)
ticker := time.NewTicker(flushInterval)
defer ticker.Stop()
for {
select {
case <-rl.stopCh:
return
case <-ticker.C:
rl.flush()
}
}
}
// flush drains the pending buffer, compresses with zlib, and sends via the SendFunc.
// On send failure the batch is re-queued, capped at maxBufferedSessions*5 entries
// to prevent unbounded memory growth when the server is unreachable.
func (rl *HTTPRequestLogger) flush() {
rl.mu.Lock()
if len(rl.pending) == 0 {
rl.mu.Unlock()
return
}
batch := rl.pending
rl.pending = make([]HTTPRequestLog, 0)
sendFn := rl.sendFn
rl.mu.Unlock()
if sendFn == nil {
logger.Debug("HTTP request logger: no send function configured, discarding %d requests", len(batch))
return
}
compressed, err := compressRequestLogs(batch)
if err != nil {
logger.Error("HTTP request logger: failed to compress %d requests: %v", len(batch), err)
return
}
if err := sendFn(compressed); err != nil {
logger.Error("HTTP request logger: failed to send %d requests: %v", len(batch), err)
// Re-queue the batch so we don't lose data
rl.mu.Lock()
rl.pending = append(batch, rl.pending...)
// Cap re-queued data to prevent unbounded growth if server is unreachable
if len(rl.pending) > maxBufferedSessions*5 {
dropped := len(rl.pending) - maxBufferedSessions*5
rl.pending = rl.pending[:maxBufferedSessions*5]
logger.Warn("HTTP request logger: buffer overflow, dropped %d oldest requests", dropped)
}
rl.mu.Unlock()
return
}
logger.Info("HTTP request logger: sent %d requests to server", len(batch))
}
// compressRequestLogs JSON-encodes the request logs, compresses with zlib, and
// returns a base64-encoded string suitable for embedding in a JSON message.
func compressRequestLogs(logs []HTTPRequestLog) (string, error) {
jsonData, err := json.Marshal(logs)
if err != nil {
return "", err
}
var buf bytes.Buffer
w, err := zlib.NewWriterLevel(&buf, zlib.BestCompression)
if err != nil {
return "", err
}
if _, err := w.Write(jsonData); err != nil {
w.Close()
return "", err
}
if err := w.Close(); err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
}
// Close shuts down the background loop and performs one final flush to send
// any remaining buffered requests to the server.
func (rl *HTTPRequestLogger) Close() {
select {
case <-rl.stopCh:
// Already closed
return
default:
close(rl.stopCh)
}
// Wait for the background loop to exit so we don't race on flush
<-rl.flushDone
rl.flush()
}


@@ -22,6 +22,12 @@ import (
"gvisor.dev/gvisor/pkg/tcpip/transport/udp"
)
const (
// udpAccessSessionTimeout is how long a UDP access session stays alive without traffic
// before being considered ended by the access logger
udpAccessSessionTimeout = 120 * time.Second
)
// PortRange represents an allowed range of ports (inclusive) with optional protocol filtering
// Protocol can be "tcp", "udp", or "" (empty string means both protocols)
type PortRange struct {
@@ -46,6 +52,15 @@ type SubnetRule struct {
DisableIcmp bool // If true, ICMP traffic is blocked for this subnet
RewriteTo string // Optional rewrite address for DNAT - can be IP/CIDR or domain name
PortRanges []PortRange // empty slice means all ports allowed
ResourceId int // Optional resource ID from the server for access logging
// HTTP proxy configuration (optional).
// When Protocol is non-empty the TCP connection is handled by HTTPHandler
// instead of the raw TCP forwarder.
Protocol string // "", "http", or "https" — controls the incoming (client-facing) protocol
HTTPTargets []HTTPTarget // downstream services to proxy requests to
TLSCert string // PEM-encoded certificate for incoming HTTPS termination
TLSKey string // PEM-encoded private key for incoming HTTPS termination
}
// GetAllRules returns a copy of all subnet rules
@@ -107,14 +122,18 @@ type ProxyHandler struct {
tcpHandler *TCPHandler
udpHandler *UDPHandler
icmpHandler *ICMPHandler
httpHandler *HTTPHandler
subnetLookup *SubnetLookup
natTable map[connKey]*natState
reverseNatTable map[reverseConnKey]*natState // Reverse lookup map for O(1) reply packet NAT
destRewriteTable map[destKey]netip.Addr // Maps original dest to rewritten dest for handler lookups
resourceTable map[destKey]int // Maps connection key to resource ID for access logging
natMu sync.RWMutex
enabled bool
icmpReplies chan []byte // Channel for ICMP reply packets to be sent back through the tunnel
notifiable channel.Notification // Notification handler for triggering reads
accessLogger *AccessLogger // Access logger for tracking sessions
httpRequestLogger *HTTPRequestLogger // HTTP request logger for proxied HTTP/HTTPS requests
}
// ProxyHandlerOptions configures the proxy handler
@@ -137,7 +156,9 @@ func NewProxyHandler(options ProxyHandlerOptions) (*ProxyHandler, error) {
natTable: make(map[connKey]*natState),
reverseNatTable: make(map[reverseConnKey]*natState),
destRewriteTable: make(map[destKey]netip.Addr),
resourceTable: make(map[destKey]int),
icmpReplies: make(chan []byte, 256), // Buffer for ICMP reply packets
accessLogger: NewAccessLogger(udpAccessSessionTimeout),
proxyEp: channel.New(1024, uint32(options.MTU), ""),
proxyStack: stack.New(stack.Options{
NetworkProtocols: []stack.NetworkProtocolFactory{
@@ -153,12 +174,24 @@ func NewProxyHandler(options ProxyHandlerOptions) (*ProxyHandler, error) {
}),
}
// Initialize TCP handler if enabled
// Initialize TCP handler if enabled. The HTTP handler piggybacks on the
// TCP forwarder — TCPHandler.handleTCPConn checks the subnet rule for
// ports 80/443 and routes matching connections to the HTTP handler, so
// the HTTP handler is always initialised alongside TCP.
if options.EnableTCP {
handler.tcpHandler = NewTCPHandler(handler.proxyStack, handler)
if err := handler.tcpHandler.InstallTCPHandler(); err != nil {
return nil, fmt.Errorf("failed to install TCP handler: %v", err)
}
handler.httpHandler = NewHTTPHandler(handler.proxyStack, handler)
if err := handler.httpHandler.Start(); err != nil {
return nil, fmt.Errorf("failed to start HTTP handler: %v", err)
}
handler.httpRequestLogger = NewHTTPRequestLogger()
handler.httpHandler.SetRequestLogger(handler.httpRequestLogger)
logger.Debug("ProxyHandler: HTTP handler enabled")
}
// Initialize UDP handler if enabled
@@ -197,16 +230,14 @@ func NewProxyHandler(options ProxyHandlerOptions) (*ProxyHandler, error) {
return handler, nil
}
// AddSubnetRule adds a subnet with optional port restrictions to the proxy handler
// sourcePrefix: The IP prefix of the peer sending the data
// destPrefix: The IP prefix of the destination
// rewriteTo: Optional address to rewrite destination to - can be IP/CIDR or domain name
// If portRanges is nil or empty, all ports are allowed for this subnet
func (p *ProxyHandler) AddSubnetRule(sourcePrefix, destPrefix netip.Prefix, rewriteTo string, portRanges []PortRange, disableIcmp bool) {
// AddSubnetRule adds a subnet rule to the proxy handler.
// HTTP proxy behaviour is configured via rule.Protocol, rule.HTTPTargets,
// rule.TLSCert, and rule.TLSKey; leave Protocol empty for raw TCP/UDP.
func (p *ProxyHandler) AddSubnetRule(rule SubnetRule) {
if p == nil || !p.enabled {
return
}
p.subnetLookup.AddSubnet(sourcePrefix, destPrefix, rewriteTo, portRanges, disableIcmp)
p.subnetLookup.AddSubnet(rule)
}
// RemoveSubnetRule removes a subnet from the proxy handler
@@ -225,6 +256,61 @@ func (p *ProxyHandler) GetAllRules() []SubnetRule {
return p.subnetLookup.GetAllRules()
}
// LookupResourceId looks up the resource ID for a connection
// Returns 0 if no resource ID is associated with this connection
func (p *ProxyHandler) LookupResourceId(srcIP, dstIP string, dstPort uint16, proto uint8) int {
if p == nil || !p.enabled {
return 0
}
key := destKey{
srcIP: srcIP,
dstIP: dstIP,
dstPort: dstPort,
proto: proto,
}
p.natMu.RLock()
defer p.natMu.RUnlock()
return p.resourceTable[key]
}
// GetAccessLogger returns the access logger for session tracking
func (p *ProxyHandler) GetAccessLogger() *AccessLogger {
if p == nil {
return nil
}
return p.accessLogger
}
// SetAccessLogSender configures the function used to send compressed access log
// batches to the server. This should be called once the websocket client is available.
func (p *ProxyHandler) SetAccessLogSender(fn SendFunc) {
if p == nil || !p.enabled || p.accessLogger == nil {
return
}
p.accessLogger.SetSendFunc(fn)
}
// GetHTTPRequestLogger returns the HTTP request logger.
func (p *ProxyHandler) GetHTTPRequestLogger() *HTTPRequestLogger {
if p == nil {
return nil
}
return p.httpRequestLogger
}
// SetHTTPRequestLogSender configures the function used to send compressed HTTP
// request log batches to the server. This should be called once the websocket
// client is available.
func (p *ProxyHandler) SetHTTPRequestLogSender(fn SendFunc) {
if p == nil || !p.enabled || p.httpRequestLogger == nil {
return
}
p.httpRequestLogger.SetSendFunc(fn)
}
// LookupDestinationRewrite looks up the rewritten destination for a connection
// This is used by TCP/UDP handlers to find the actual target address
func (p *ProxyHandler) LookupDestinationRewrite(srcIP, dstIP string, dstPort uint16, proto uint8) (netip.Addr, bool) {
@@ -387,8 +473,22 @@ func (p *ProxyHandler) HandleIncomingPacket(packet []byte) bool {
// Check if the source IP, destination IP, port, and protocol match any subnet rule
matchedRule := p.subnetLookup.Match(srcAddr, dstAddr, dstPort, protocol)
if matchedRule != nil {
logger.Debug("HandleIncomingPacket: Matched rule for %s -> %s (proto=%d, port=%d)",
srcAddr, dstAddr, protocol, dstPort)
logger.Debug("HandleIncomingPacket: Matched rule for %s -> %s (proto=%d, port=%d, resourceId=%d)",
srcAddr, dstAddr, protocol, dstPort, matchedRule.ResourceId)
// Store resource ID for connections without DNAT as well
if matchedRule.ResourceId != 0 && matchedRule.RewriteTo == "" {
dKey := destKey{
srcIP: srcAddr.String(),
dstIP: dstAddr.String(),
dstPort: dstPort,
proto: uint8(protocol),
}
p.natMu.Lock()
p.resourceTable[dKey] = matchedRule.ResourceId
p.natMu.Unlock()
}
// Check if we need to perform DNAT
if matchedRule.RewriteTo != "" {
// Create connection tracking key using original destination
@@ -420,6 +520,13 @@ func (p *ProxyHandler) HandleIncomingPacket(packet []byte) bool {
proto: uint8(protocol),
}
// Store resource ID for access logging if present
if matchedRule.ResourceId != 0 {
p.natMu.Lock()
p.resourceTable[dKey] = matchedRule.ResourceId
p.natMu.Unlock()
}
// Check if we already have a NAT entry for this connection
p.natMu.RLock()
existingEntry, exists := p.natTable[key]
@@ -465,6 +572,18 @@ func (p *ProxyHandler) HandleIncomingPacket(packet []byte) bool {
// Store destination rewrite for handler lookups
p.destRewriteTable[dKey] = newDst
// Also store the resource ID under the rewritten destination key so that
// TCP/UDP handlers can find it after DNAT (they see the post-NAT dst IP).
if matchedRule.ResourceId != 0 {
rewrittenKey := destKey{
srcIP: srcAddr.String(),
dstIP: newDst.String(),
dstPort: dstPort,
proto: uint8(protocol),
}
p.resourceTable[rewrittenKey] = matchedRule.ResourceId
}
p.natMu.Unlock()
logger.Debug("New NAT entry for connection: %s -> %s", dstAddr, newDst)
}
@@ -720,6 +839,21 @@ func (p *ProxyHandler) Close() error {
return nil
}
// Shut down access logger
if p.accessLogger != nil {
p.accessLogger.Close()
}
// Shut down HTTP request logger
if p.httpRequestLogger != nil {
p.httpRequestLogger.Close()
}
// Shut down HTTP handler
if p.httpHandler != nil {
p.httpHandler.Close()
}
// Close ICMP replies channel
if p.icmpReplies != nil {
close(p.icmpReplies)


@@ -44,23 +44,18 @@ func prefixEqual(a, b netip.Prefix) bool {
return a.Masked() == b.Masked()
}
// AddSubnet adds a subnet rule with source and destination prefixes and optional port restrictions
// If portRanges is nil or empty, all ports are allowed for this subnet
// rewriteTo can be either an IP/CIDR (e.g., "192.168.1.1/32") or a domain name (e.g., "example.com")
func (sl *SubnetLookup) AddSubnet(sourcePrefix, destPrefix netip.Prefix, rewriteTo string, portRanges []PortRange, disableIcmp bool) {
// AddSubnet adds a subnet rule to the lookup table.
// If rule.PortRanges is nil or empty, all ports are allowed.
// rule.RewriteTo can be either an IP/CIDR (e.g., "192.168.1.1/32") or a domain name (e.g., "example.com").
// HTTP proxy behaviour is driven by rule.Protocol, rule.HTTPTargets, rule.TLSCert, and rule.TLSKey.
func (sl *SubnetLookup) AddSubnet(rule SubnetRule) {
sl.mu.Lock()
defer sl.mu.Unlock()
rule := &SubnetRule{
SourcePrefix: sourcePrefix,
DestPrefix: destPrefix,
DisableIcmp: disableIcmp,
RewriteTo: rewriteTo,
PortRanges: portRanges,
}
rulePtr := &rule
// Canonicalize source prefix to handle host bits correctly
canonicalSourcePrefix := sourcePrefix.Masked()
canonicalSourcePrefix := rule.SourcePrefix.Masked()
// Get or create destination trie for this source prefix
destTriePtr, exists := sl.sourceTrie.Get(canonicalSourcePrefix)
@@ -75,12 +70,12 @@ func (sl *SubnetLookup) AddSubnet(sourcePrefix, destPrefix netip.Prefix, rewrite
// Canonicalize destination prefix to handle host bits correctly
// BART masks prefixes internally, so we need to match that behavior in our bookkeeping
canonicalDestPrefix := destPrefix.Masked()
canonicalDestPrefix := rule.DestPrefix.Masked()
// Add rule to destination trie
// Original behavior: overwrite if same (sourcePrefix, destPrefix) exists
// Store as single-element slice to match original overwrite behavior
destTriePtr.trie.Insert(canonicalDestPrefix, []*SubnetRule{rule})
destTriePtr.trie.Insert(canonicalDestPrefix, []*SubnetRule{rulePtr})
// Update destTriePtr.rules - remove old rule with same canonical prefix if exists, then add new one
// Use canonical comparison to handle cases like 10.0.0.5/24 vs 10.0.0.0/24
@@ -90,7 +85,7 @@ func (sl *SubnetLookup) AddSubnet(sourcePrefix, destPrefix netip.Prefix, rewrite
newRules = append(newRules, r)
}
}
newRules = append(newRules, rule)
newRules = append(newRules, rulePtr)
destTriePtr.rules = newRules
}


@@ -351,13 +351,13 @@ func (net *Net) ListenUDP(laddr *net.UDPAddr) (*gonet.UDPConn, error) {
return net.DialUDP(laddr, nil)
}
// AddProxySubnetRule adds a subnet rule to the proxy handler
// If portRanges is nil or empty, all ports are allowed for this subnet
// rewriteTo can be either an IP/CIDR (e.g., "192.168.1.1/32") or a domain name (e.g., "example.com")
func (net *Net) AddProxySubnetRule(sourcePrefix, destPrefix netip.Prefix, rewriteTo string, portRanges []PortRange, disableIcmp bool) {
// AddProxySubnetRule adds a subnet rule to the proxy handler.
// HTTP proxy behaviour is configured via rule.Protocol, rule.HTTPTargets,
// rule.TLSCert, and rule.TLSKey; leave Protocol empty for raw TCP/UDP.
func (net *Net) AddProxySubnetRule(rule SubnetRule) {
tun := (*netTun)(net)
if tun.proxyHandler != nil {
tun.proxyHandler.AddSubnetRule(sourcePrefix, destPrefix, rewriteTo, portRanges, disableIcmp)
tun.proxyHandler.AddSubnetRule(rule)
}
}
@@ -385,6 +385,25 @@ func (net *Net) GetProxyHandler() *ProxyHandler {
return tun.proxyHandler
}
// SetAccessLogSender configures the function used to send compressed access log
// batches to the server. This should be called once the websocket client is available.
func (net *Net) SetAccessLogSender(fn SendFunc) {
tun := (*netTun)(net)
if tun.proxyHandler != nil {
tun.proxyHandler.SetAccessLogSender(fn)
}
}
// SetHTTPRequestLogSender configures the function used to send compressed HTTP
// request log batches to the server. This should be called once the websocket
// client is available.
func (net *Net) SetHTTPRequestLogSender(fn SendFunc) {
tun := (*netTun)(net)
if tun.proxyHandler != nil {
tun.proxyHandler.SetHTTPRequestLogSender(fn)
}
}
type PingConn struct {
laddr PingAddr
raddr PingAddr


@@ -120,7 +120,7 @@ func configureDarwin(interfaceName string, ip net.IP, ipNet *net.IPNet) error {
prefix, _ := ipNet.Mask.Size()
ipStr := fmt.Sprintf("%s/%d", ip.String(), prefix)
cmd := exec.Command("ifconfig", interfaceName, "inet", ipStr, ip.String(), "alias")
cmd := exec.Command("/sbin/ifconfig", interfaceName, "inet", ipStr, ip.String(), "alias")
logger.Info("Running command: %v", cmd)
out, err := cmd.CombinedOutput()
@@ -129,7 +129,7 @@ func configureDarwin(interfaceName string, ip net.IP, ipNet *net.IPNet) error {
}
// Bring up the interface
cmd = exec.Command("ifconfig", interfaceName, "up")
cmd = exec.Command("/sbin/ifconfig", interfaceName, "up")
logger.Info("Running command: %v", cmd)
out, err = cmd.CombinedOutput()


@@ -21,7 +21,32 @@ import (
"gvisor.dev/gvisor/pkg/tcpip/adapters/gonet"
)
const errUnsupportedProtoFmt = "unsupported protocol: %s"
const (
errUnsupportedProtoFmt = "unsupported protocol: %s"
maxUDPPacketSize = 65507 // Maximum UDP packet size
defaultUDPIdleTimeout = 90 * time.Second
)
// udpBufferPool provides reusable buffers for UDP packet handling.
// This reduces GC pressure from frequent large allocations.
var udpBufferPool = sync.Pool{
New: func() any {
buf := make([]byte, maxUDPPacketSize)
return &buf
},
}
// getUDPBuffer retrieves a buffer from the pool.
func getUDPBuffer() *[]byte {
return udpBufferPool.Get().(*[]byte)
}
// putUDPBuffer clears and returns a buffer to the pool.
func putUDPBuffer(buf *[]byte) {
// Clear the buffer to prevent data leakage
clear(*buf)
udpBufferPool.Put(buf)
}
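Pooling `*[]byte` rather than `[]byte` is deliberate: storing a plain slice in a `sync.Pool` boxes it into an interface, which allocates on every `Put` and defeats the pool, whereas a pointer fits in the interface's data word without allocating. A self-contained sketch of the round-trip and the clear-on-return behaviour:

```go
package main

import (
	"fmt"
	"sync"
)

const demoBufSize = 65507 // matches maxUDPPacketSize above

var demoPool = sync.Pool{
	New: func() any {
		b := make([]byte, demoBufSize)
		return &b
	},
}

func borrow() *[]byte { return demoPool.Get().(*[]byte) }

func release(buf *[]byte) {
	clear(*buf) // zero before reuse so no stale packet bytes leak
	demoPool.Put(buf)
}

func main() {
	bp := borrow()
	n := copy(*bp, "datagram payload")
	fmt.Println(len(*bp), string((*bp)[:n]))

	release(bp)

	// Whether Get returns the same buffer or a fresh one from New,
	// its contents start zeroed.
	again := borrow()
	fmt.Println((*again)[0])
}
```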
// Target represents a proxy target with its address and port
type Target struct {
@@ -44,6 +69,7 @@ type ProxyManager struct {
tunnels map[string]*tunnelEntry
asyncBytes bool
flushStop chan struct{}
udpIdleTimeout time.Duration
}
// tunnelEntry holds per-tunnel attributes and (optional) async counters.
@@ -105,13 +131,9 @@ func classifyProxyError(err error) string {
if errors.Is(err, net.ErrClosed) {
return "closed"
}
if ne, ok := err.(net.Error); ok {
if ne.Timeout() {
return "timeout"
}
if ne.Temporary() {
return "temporary"
}
var ne net.Error
if errors.As(err, &ne) && ne.Timeout() {
return "timeout"
}
msg := strings.ToLower(err.Error())
switch {
@@ -133,6 +155,7 @@ func NewProxyManager(tnet *netstack.Net) *ProxyManager {
listeners: make([]*gonet.TCPListener, 0),
udpConns: make([]*gonet.UDPConn, 0),
tunnels: make(map[string]*tunnelEntry),
udpIdleTimeout: defaultUDPIdleTimeout,
}
}
@@ -210,6 +233,7 @@ func NewProxyManagerWithoutTNet() *ProxyManager {
udpTargets: make(map[string]map[int]string),
listeners: make([]*gonet.TCPListener, 0),
udpConns: make([]*gonet.UDPConn, 0),
udpIdleTimeout: defaultUDPIdleTimeout,
}
}
@@ -346,6 +370,17 @@ func (pm *ProxyManager) SetAsyncBytes(b bool) {
go pm.flushLoop()
}
}
// SetUDPIdleTimeout configures when idle UDP client flows are reclaimed.
func (pm *ProxyManager) SetUDPIdleTimeout(d time.Duration) {
pm.mutex.Lock()
defer pm.mutex.Unlock()
if d <= 0 {
pm.udpIdleTimeout = defaultUDPIdleTimeout
return
}
pm.udpIdleTimeout = d
}
func (pm *ProxyManager) flushLoop() {
flushInterval := 2 * time.Second
if v := os.Getenv("OTEL_METRIC_EXPORT_INTERVAL"); v != "" {
@@ -437,14 +472,6 @@ func (pm *ProxyManager) Stop() error {
pm.udpConns = append(pm.udpConns[:i], pm.udpConns[i+1:]...)
}
// // Clear the target maps
// for k := range pm.tcpTargets {
// delete(pm.tcpTargets, k)
// }
// for k := range pm.udpTargets {
// delete(pm.udpTargets, k)
// }
// Give active connections a chance to close gracefully
time.Sleep(100 * time.Millisecond)
@@ -498,7 +525,7 @@ func (pm *ProxyManager) handleTCPProxy(listener net.Listener, targetAddr string)
if !pm.running {
return
}
if ne, ok := err.(net.Error); ok && !ne.Temporary() {
if errors.Is(err, net.ErrClosed) {
logger.Info("TCP listener closed, stopping proxy handler for %v", listener.Addr())
return
}
@@ -564,7 +591,9 @@ func (pm *ProxyManager) handleTCPProxy(listener net.Listener, targetAddr string)
}
func (pm *ProxyManager) handleUDPProxy(conn *gonet.UDPConn, targetAddr string) {
buffer := make([]byte, 65507) // Max UDP packet size
bufPtr := getUDPBuffer()
defer putUDPBuffer(bufPtr)
buffer := *bufPtr
clientConns := make(map[string]*net.UDPConn)
var clientsMutex sync.RWMutex
@@ -583,7 +612,7 @@ func (pm *ProxyManager) handleUDPProxy(conn *gonet.UDPConn, targetAddr string) {
}
// Check for connection closed conditions
if err == io.EOF || strings.Contains(err.Error(), "use of closed network connection") {
if errors.Is(err, io.EOF) || errors.Is(err, net.ErrClosed) {
logger.Info("UDP connection closed, stopping proxy handler")
// Clean up existing client connections
@@ -632,6 +661,9 @@ func (pm *ProxyManager) handleUDPProxy(conn *gonet.UDPConn, targetAddr string) {
telemetry.IncProxyAccept(context.Background(), pm.currentTunnelID, "udp", "failure", classifyProxyError(err))
continue
}
// Prevent idle UDP client goroutines from living forever and
// retaining large per-connection buffers.
_ = targetConn.SetReadDeadline(time.Now().Add(pm.udpIdleTimeout))
tunnelID := pm.currentTunnelID
telemetry.IncProxyAccept(context.Background(), tunnelID, "udp", "success", "")
telemetry.IncProxyConnectionEvent(context.Background(), tunnelID, "udp", telemetry.ProxyConnectionOpened)
@@ -647,7 +679,10 @@ func (pm *ProxyManager) handleUDPProxy(conn *gonet.UDPConn, targetAddr string) {
go func(clientKey string, targetConn *net.UDPConn, remoteAddr net.Addr, tunnelID string) {
start := time.Now()
result := "success"
bufPtr := getUDPBuffer()
defer func() {
// Return buffer to pool first
putUDPBuffer(bufPtr)
// Always clean up when this goroutine exits
clientsMutex.Lock()
if storedConn, exists := clientConns[clientKey]; exists && storedConn == targetConn {
@@ -662,10 +697,18 @@ func (pm *ProxyManager) handleUDPProxy(conn *gonet.UDPConn, targetAddr string) {
telemetry.IncProxyConnectionEvent(context.Background(), tunnelID, "udp", telemetry.ProxyConnectionClosed)
}()
buffer := make([]byte, 65507)
buffer := *bufPtr
for {
n, _, err := targetConn.ReadFromUDP(buffer)
if err != nil {
var netErr net.Error
if errors.As(err, &netErr) && netErr.Timeout() {
return
}
// Connection closed is normal during cleanup
if errors.Is(err, net.ErrClosed) || errors.Is(err, io.EOF) {
return // defer will handle cleanup, result stays "success"
}
logger.Error("Error reading from target: %v", err)
result = "failure"
return // defer will handle cleanup
@@ -704,6 +747,8 @@ func (pm *ProxyManager) handleUDPProxy(conn *gonet.UDPConn, targetAddr string) {
delete(clientConns, clientKey)
clientsMutex.Unlock()
} else if pm.currentTunnelID != "" && written > 0 {
// Extend idle timeout whenever client traffic is observed.
_ = targetConn.SetReadDeadline(time.Now().Add(pm.udpIdleTimeout))
if pm.asyncBytes {
if e := pm.getEntry(pm.currentTunnelID); e != nil {
e.bytesInUDP.Add(uint64(written))

testing/ws_client.py Normal file

@@ -0,0 +1,60 @@
import asyncio
import sys
import websockets
# Argument parsing: Check if HOST and PORT are provided
if len(sys.argv) < 3 or len(sys.argv) > 4:
print("Usage: python ws_client.py <HOST_IP> <HOST_PORT> [ws|wss]")
# Example: python ws_client.py 127.0.0.1 8765
# Example: python ws_client.py 127.0.0.1 8765 wss
sys.exit(1)
HOST = sys.argv[1]
try:
PORT = int(sys.argv[2])
except ValueError:
print("Error: HOST_PORT must be an integer.")
sys.exit(1)
if len(sys.argv) == 4:
SCHEME = sys.argv[3].lower()
if SCHEME not in ("ws", "wss"):
print("Error: scheme must be 'ws' or 'wss'.")
sys.exit(1)
else:
SCHEME = "ws"
URI = f"{SCHEME}://{HOST}:{PORT}"
# The message to send to the server
MESSAGE = "Hello WebSocket Server! How are you?"
async def main():
print(f"Connecting to {URI}...")
try:
async with websockets.connect(URI) as websocket:
print("Connected to server.")
print(f"Sending message: '{MESSAGE}'")
await websocket.send(MESSAGE)
response = await websocket.recv()
print("-" * 30)
print("Received response from server:")
print(f"-> Data: '{response}'")
except ConnectionRefusedError:
print(f"Error: Connection to {URI} was refused. Is the server running?")
except websockets.exceptions.InvalidMessage as e:
print(f"Error: Server did not respond with a valid WebSocket handshake: {e}")
except Exception as e:
print(f"Error during communication: {e}")
print("-" * 30)
print("Client finished.")
asyncio.run(main())

testing/ws_server.py Normal file

@@ -0,0 +1,49 @@
import asyncio
import sys
import websockets
# Optionally take in a positional arg for the port
if len(sys.argv) > 1:
try:
PORT = int(sys.argv[1])
except ValueError:
print("Invalid port number. Using default port 8765.")
PORT = 8765
else:
PORT = 8765
# Define the server host
HOST = "0.0.0.0"
async def handle_client(websocket):
client_address = websocket.remote_address
print(f"Client connected: {client_address[0]}:{client_address[1]}")
try:
async for message in websocket:
print("-" * 30)
print(f"Received message from {client_address[0]}:{client_address[1]}:")
print(f"-> Data: '{message}'")
response = f"Hello client! Server received: '{message.upper()}'"
await websocket.send(response)
print("Sent response back to client.")
except websockets.exceptions.ConnectionClosedOK:
print(f"Client {client_address[0]}:{client_address[1]} disconnected cleanly.")
except websockets.exceptions.ConnectionClosedError as e:
print(f"Client {client_address[0]}:{client_address[1]} disconnected with error: {e}")
async def main():
print(f"WebSocket Server listening on {HOST}:{PORT}")
async with websockets.serve(handle_client, HOST, PORT):
await asyncio.Future() # Run forever
try:
asyncio.run(main())
except KeyboardInterrupt:
print("\nServer stopped.")

View File

@@ -42,16 +42,18 @@ type Client struct {
	onTokenUpdate     func(token string)
	writeMux          sync.Mutex
	clientType        string // Type of client (e.g., "newt", "olm")
	configFilePath    string // Optional override for the config file path
	tlsConfig         TLSConfig
	metricsCtxMu      sync.RWMutex
	metricsCtx        context.Context
	configNeedsSave   bool // Flag to track if config needs to be saved
	serverVersion     string
	configVersion     int64 // Latest config version received from server
	configVersionMux  sync.RWMutex
	processingMessage bool           // Flag to track if a message is currently being processed
	processingMux     sync.RWMutex   // Protects processingMessage
	processingWg      sync.WaitGroup // WaitGroup to wait for message processing to complete
	justProvisioned   bool // Set to true when provisionIfNeeded exchanges a key for permanent credentials
}

type ClientOption func(*Client)
@@ -77,6 +79,12 @@ func WithBaseURL(url string) ClientOption {
}

// WithConfigFile sets an override path for the config file
func WithConfigFile(path string) ClientOption {
	return func(c *Client) {
		c.configFilePath = path
	}
}

// WithTLSConfig sets the TLS configuration for the client
func WithTLSConfig(config TLSConfig) ClientOption {
	return func(c *Client) {
		c.tlsConfig = config
@@ -95,6 +103,16 @@ func (c *Client) OnTokenUpdate(callback func(token string)) {
	c.onTokenUpdate = callback
}

// WasJustProvisioned reports whether the client exchanged a provisioning key
// for permanent credentials during the most recent connection attempt. It
// consumes the flag: subsequent calls return false until provisioning occurs
// again (which, in practice, never happens once credentials are persisted).
func (c *Client) WasJustProvisioned() bool {
	v := c.justProvisioned
	c.justProvisioned = false
	return v
}
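The consume-on-read semantics above can be seen with a stripped-down stand-in for the flag (a toy type for illustration, not the real `Client`, which carries many more fields):

```go
package main

import "fmt"

// flagClient is a minimal stand-in holding only the provisioning flag.
type flagClient struct {
	justProvisioned bool
}

// WasJustProvisioned mirrors the method above: it returns the flag's
// value and clears it, so only the first caller observes true.
func (c *flagClient) WasJustProvisioned() bool {
	v := c.justProvisioned
	c.justProvisioned = false
	return v
}

func main() {
	c := &flagClient{justProvisioned: true}
	fmt.Println(c.WasJustProvisioned()) // true: first read consumes the flag
	fmt.Println(c.WasJustProvisioned()) // false: already consumed
}
```

A caller that only needs to react once to a fresh provisioning (say, to log the new identity) can rely on this without tracking its own seen/unseen state.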
func (c *Client) metricsContext() context.Context {
	c.metricsCtxMu.RLock()
	defer c.metricsCtxMu.RUnlock()
@@ -253,13 +271,17 @@ func (c *Client) SendMessageInterval(messageType string, data interface{}, inter
	stopChan := make(chan struct{})
	go func() {
		count := 0
		maxAttempts := 16

		c.reconnectMux.RLock()
		connected := c.isConnected
		c.reconnectMux.RUnlock()

		err := c.SendMessage(messageType, data) // Send immediately
		if err != nil {
			logger.Error("Failed to send initial message: %v", err)
		} else if connected {
			count++
		}

		ticker := time.NewTicker(interval)
		defer ticker.Stop()
@@ -270,11 +292,15 @@ func (c *Client) SendMessageInterval(messageType string, data interface{}, inter
				logger.Info("SendMessageInterval timed out after %d attempts for message type: %s", maxAttempts, messageType)
					return
				}
				c.reconnectMux.RLock()
				connected = c.isConnected
				c.reconnectMux.RUnlock()

				err = c.SendMessage(messageType, data)
				if err != nil {
					logger.Error("Failed to send message: %v", err)
				} else if connected {
					count++
				}
			case <-stopChan:
				return
			}
@@ -481,6 +507,11 @@ func (c *Client) connectWithRetry() {
func (c *Client) establishConnection() error {
	ctx := context.Background()

	// Exchange provisioning key for permanent credentials if needed.
	if err := c.provisionIfNeeded(); err != nil {
		return fmt.Errorf("failed to provision newt credentials: %w", err)
	}

	// Get token for authentication
	token, err := c.getToken()
	if err != nil {
@@ -684,6 +715,10 @@ func (c *Client) sendPing() {
	}

	c.writeMux.Lock()
	if c.conn == nil {
		c.writeMux.Unlock()
		return
	}
	err := c.conn.WriteJSON(pingMsg)
	if err == nil {
		telemetry.IncWSMessage(c.metricsContext(), "out", "ping")
@@ -809,7 +844,7 @@ func (c *Client) readPumpWithDisconnectDetection(started time.Time) {
			logger.Error("WebSocket failed to parse message: %v", err)
			continue
		}

		c.setConfigVersion(msg.ConfigVersion)

		c.handlersMux.RLock()
@@ -836,10 +871,12 @@ func (c *Client) readPumpWithDisconnectDetection(started time.Time) {
func (c *Client) reconnect() {
	c.setConnected(false)
	telemetry.SetWSConnectionState(false)

	c.writeMux.Lock()
	if c.conn != nil {
		c.conn.Close()
		c.conn = nil
	}
	c.writeMux.Unlock()

	// Only reconnect if we're not shutting down
	select {

View File

@@ -1,16 +1,29 @@
package websocket

import (
	"bytes"
	"context"
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
	"regexp"
	"runtime"
	"strings"
	"time"

	"github.com/fosrl/newt/logger"
)

func getConfigPath(clientType string, overridePath string) string {
	if overridePath != "" {
		return overridePath
	}
	configFile := os.Getenv("CONFIG_FILE")
	if configFile == "" {
		var configDir string
@@ -36,7 +49,7 @@ func getConfigPath(clientType string) string {
func (c *Client) loadConfig() error {
	originalConfig := *c.config // Store original config to detect changes
	configPath := getConfigPath(c.clientType, c.configFilePath)

	if c.config.ID != "" && c.config.Secret != "" && c.config.Endpoint != "" {
		logger.Debug("Config already provided, skipping loading from file")
@@ -58,6 +71,11 @@ func (c *Client) loadConfig() error {
		}
		return err
	}

	if len(bytes.TrimSpace(data)) == 0 {
		logger.Info("Config file at %s is empty, will initialize it with provided values", configPath)
		c.configNeedsSave = true
		return nil
	}

	var config Config
	if err := json.Unmarshal(data, &config); err != nil {
@@ -83,6 +101,14 @@ func (c *Client) loadConfig() error {
		c.config.Endpoint = config.Endpoint
		c.baseURL = config.Endpoint
	}

	// Load the provisioning key from the file if not already set
	if c.config.ProvisioningKey == "" {
		c.config.ProvisioningKey = config.ProvisioningKey
	}

	// Load the name from the file if not already set
	if c.config.Name == "" {
		c.config.Name = config.Name
	}

	// Check if CLI args provided values that override file values
	if (!fileHadID && originalConfig.ID != "") ||
@@ -105,7 +131,7 @@ func (c *Client) saveConfig() error {
		return nil
	}

	configPath := getConfigPath(c.clientType, c.configFilePath)

	data, err := json.MarshalIndent(c.config, "", " ")
	if err != nil {
		return err
@@ -118,3 +144,139 @@ func (c *Client) saveConfig() error {
	}
	return err
}

// interpolateString replaces {{env.VAR}} tokens in s with the corresponding
// environment variable values. Tokens that do not match a supported scheme are
// left unchanged, mirroring the blueprint interpolation logic.
func interpolateString(s string) string {
	re := regexp.MustCompile(`\{\{([^}]+)\}\}`)
	return re.ReplaceAllStringFunc(s, func(match string) string {
		inner := strings.TrimSpace(match[2 : len(match)-2])
		if strings.HasPrefix(inner, "env.") {
			varName := strings.TrimPrefix(inner, "env.")
			return os.Getenv(varName)
		}
		return match
	})
}
// provisionIfNeeded checks whether a provisioning key is present and, if so,
// exchanges it for a newt ID and secret by calling the registration endpoint.
// On success the config is updated in-place and flagged for saving so that
// subsequent runs use the permanent credentials directly.
func (c *Client) provisionIfNeeded() error {
	if c.config.ProvisioningKey == "" {
		return nil
	}

	// If we already have both credentials there is nothing to provision.
	if c.config.ID != "" && c.config.Secret != "" {
		logger.Debug("Credentials already present, skipping provisioning")
		return nil
	}

	logger.Info("Provisioning key found; exchanging for newt credentials...")

	baseURL, err := url.Parse(c.baseURL)
	if err != nil {
		return fmt.Errorf("failed to parse base URL for provisioning: %w", err)
	}
	baseEndpoint := strings.TrimRight(baseURL.String(), "/")

	// Interpolate any {{env.VAR}} tokens in the name before sending.
	name := interpolateString(c.config.Name)

	reqBody := map[string]interface{}{
		"provisioningKey": c.config.ProvisioningKey,
	}
	if name != "" {
		reqBody["name"] = name
	}
	jsonData, err := json.Marshal(reqBody)
	if err != nil {
		return fmt.Errorf("failed to marshal provisioning request: %w", err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	req, err := http.NewRequestWithContext(
		ctx,
		"POST",
		baseEndpoint+"/api/v1/auth/newt/register",
		bytes.NewBuffer(jsonData),
	)
	if err != nil {
		return fmt.Errorf("failed to create provisioning request: %w", err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-CSRF-Token", "x-csrf-protection")

	// Mirror the TLS setup used by getToken so mTLS / self-signed CAs work.
	var tlsCfg *tls.Config
	if c.tlsConfig.ClientCertFile != "" || c.tlsConfig.ClientKeyFile != "" ||
		len(c.tlsConfig.CAFiles) > 0 || c.tlsConfig.PKCS12File != "" {
		tlsCfg, err = c.setupTLS()
		if err != nil {
			return fmt.Errorf("failed to setup TLS for provisioning: %w", err)
		}
	}
	if os.Getenv("SKIP_TLS_VERIFY") == "true" {
		if tlsCfg == nil {
			tlsCfg = &tls.Config{}
		}
		tlsCfg.InsecureSkipVerify = true
		logger.Debug("TLS certificate verification disabled for provisioning via SKIP_TLS_VERIFY")
	}
	httpClient := &http.Client{}
	if tlsCfg != nil {
		httpClient.Transport = &http.Transport{TLSClientConfig: tlsCfg}
	}

	resp, err := httpClient.Do(req)
	if err != nil {
		return fmt.Errorf("provisioning request failed: %w", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	logger.Debug("Provisioning response body: %s", string(body))
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("provisioning endpoint returned status %d: %s", resp.StatusCode, string(body))
	}

	var provResp ProvisioningResponse
	if err := json.Unmarshal(body, &provResp); err != nil {
		return fmt.Errorf("failed to decode provisioning response: %w", err)
	}
	if !provResp.Success {
		return fmt.Errorf("provisioning failed: %s", provResp.Message)
	}
	if provResp.Data.NewtID == "" || provResp.Data.Secret == "" {
		return fmt.Errorf("provisioning response is missing newt ID or secret")
	}

	logger.Info("Successfully provisioned newt ID: %s", provResp.Data.NewtID)

	// Persist the returned credentials and clear the one-time provisioning key
	// so subsequent runs authenticate normally.
	c.config.ID = provResp.Data.NewtID
	c.config.Secret = provResp.Data.Secret
	c.config.ProvisioningKey = ""
	c.config.Name = ""
	c.configNeedsSave = true
	c.justProvisioned = true

	// Save immediately so that if the subsequent connection attempt fails the
	// provisioning key is already gone from disk and the next retry uses the
	// permanent credentials instead of trying to provision again.
	if err := c.saveConfig(); err != nil {
		logger.Error("Failed to save config after provisioning: %v", err)
	}
	return nil
}

35
websocket/config_test.go Normal file
View File

@@ -0,0 +1,35 @@
package websocket

import (
	"os"
	"path/filepath"
	"testing"
)

func TestLoadConfig_EmptyFileMarksConfigForSave(t *testing.T) {
	t.Setenv("CONFIG_FILE", "")
	tmpDir := t.TempDir()
	configPath := filepath.Join(tmpDir, "config.json")
	if err := os.WriteFile(configPath, []byte(""), 0o644); err != nil {
		t.Fatalf("failed to create empty config file: %v", err)
	}

	client := &Client{
		config: &Config{
			Endpoint:        "https://example.com",
			ProvisioningKey: "spk-test",
		},
		clientType:     "newt",
		configFilePath: configPath,
	}

	if err := client.loadConfig(); err != nil {
		t.Fatalf("loadConfig returned error for empty file: %v", err)
	}
	if !client.configNeedsSave {
		t.Fatal("expected empty config file to mark configNeedsSave")
	}
}

View File

@@ -1,10 +1,12 @@
package websocket

type Config struct {
	ID              string `json:"id"`
	Secret          string `json:"secret"`
	Endpoint        string `json:"endpoint"`
	TlsClientCert   string `json:"tlsClientCert"`
	ProvisioningKey string `json:"provisioningKey,omitempty"`
	Name            string `json:"name,omitempty"`
}
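With the two new optional fields, a freshly written config file might look like this (a sketch: the values are placeholders and `SITE_NAME` is a hypothetical variable; the field names come from the struct tags above):

```json
{
  "id": "",
  "secret": "",
  "endpoint": "https://pangolin.example.com",
  "tlsClientCert": "",
  "provisioningKey": "spk-test",
  "name": "newt-{{env.SITE_NAME}}"
}
```

After a successful exchange, `provisionIfNeeded` fills in `id` and `secret` and clears `provisioningKey` and `name` before saving, so later runs authenticate with the permanent credentials; `omitempty` then keeps the cleared fields out of the saved file.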
type TokenResponse struct {
@@ -16,8 +18,17 @@ type TokenResponse struct {
	Message string `json:"message"`
}

type ProvisioningResponse struct {
	Data struct {
		NewtID string `json:"newtId"`
		Secret string `json:"secret"`
	} `json:"data"`
	Success bool   `json:"success"`
	Message string `json:"message"`
}

type WSMessage struct {
	Type          string      `json:"type"`
	Data          interface{} `json:"data"`
	ConfigVersion int64       `json:"configVersion,omitempty"`
}