[Feature] Salt Resources #68888
Description
Salt Resources
Salt Resources are a first-class mechanism for targeting and executing commands
against entities that are managed through a Salt minion rather than running
a Salt minion themselves. Resources solve the scalability and operational
problems of the proxy and deltaproxy architecture without breaking any existing
functionality.
A working implementation exists on the
resources branch.
Problem Statement
Salt's proxy and deltaproxy architecture does not scale to high-density or
dynamic environments. Three specific constraints drive this:
Memory
Deltaproxy consolidates OS processes but not memory. Each managed device
requires a full, independent Salt loader with its own copy of __salt__,
__grains__, __pillar__, and the rest of the dunder namespace. Memory usage
grows linearly per device. A minion managing 500 network devices in a single
deltaproxy process can easily require several gigabytes of RAM.
Measured on a standard Salt installation:
- Baseline minion_mods loader: ~130 MB RSS
- Each additional per-device loader: ~10 MB RSS
For 1,000 devices that is ~10 GB of loader memory alone, before any connection
state is considered.
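The scaling arithmetic can be checked directly from the two figures measured above (which are observed RSS values on one installation, not guarantees):

```python
# Loader-memory arithmetic using the RSS figures measured above.
BASELINE_MB = 130    # base minion_mods loader
PER_DEVICE_MB = 10   # each additional per-device loader

def deltaproxy_loader_mb(devices: int) -> int:
    """Approximate loader memory for a deltaproxy managing `devices` devices."""
    return BASELINE_MB + devices * PER_DEVICE_MB

print(deltaproxy_loader_mb(500))    # 5130 MB -> several GB
print(deltaproxy_loader_mb(1000))   # 10130 MB -> ~10 GB
```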
PKI Overhead
Every proxied device requires a unique RSA key pair. Restarting a deltaproxy
process triggers a simultaneous re-authentication wave across all managed
devices that can overwhelm the master's key signing capacity. Stale keys from
decommissioned devices accumulate on the master indefinitely, creating
operational burden and cluttering salt-key -L.
Dynamic Infrastructure
The proxy architecture is connection-oriented and assumes static, long-lived
devices. Adding or removing a device requires manual Pillar updates and a
process reload. This model is incompatible with ephemeral workloads like
Kubernetes pods, auto-scaling cloud instances, or dynamically discovered
network devices.
Core Concepts
Resource
A Resource is any entity that Salt can target and execute commands against,
managed through an existing minion. Resources do not run their own minion
process and do not require their own key pair.
Salt Resource Name (SRN)
Every resource is identified by a Salt Resource Name combining a type and an
ID:
<type>:<id>
Examples: ssh:web-01, network_device:core-router, k8s_pod:nginx-abc123
Resource IDs are globally unique across all types and all minions.
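Since IDs are globally unique, an SRN can be split unambiguously. A minimal parsing sketch (the rule that only the first colon separates type from ID is an assumption, not stated in this proposal):

```python
def parse_srn(srn: str) -> tuple[str, str]:
    """Split a Salt Resource Name '<type>:<id>' into (type, id).

    Assumes only the first ':' separates type from ID, so IDs may
    themselves contain colons.
    """
    rtype, sep, rid = srn.partition(":")
    if not sep or not rtype or not rid:
        raise ValueError(f"invalid SRN: {srn!r}")
    return rtype, rid

print(parse_srn("ssh:web-01"))                  # ('ssh', 'web-01')
print(parse_srn("network_device:core-router"))  # ('network_device', 'core-router')
```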
Resource Type
A resource type defines the category of a resource and maps to a module in
salt/resource/. All resources of the same type on a given minion share a
single Salt execution-module loader. This is the key memory saving over
deltaproxy:
| Scenario | Proxy/Deltaproxy | Salt Resources |
|---|---|---|
| 10 devices, 1 type | ~100 MB | ~10 MB |
| 1,000 devices, 1 type | ~10 GB | ~10 MB |
| 1,000 devices, 5 types | ~10 GB | ~50 MB |
Resource Registry
The master-side registry records which resources exist and which minion manages
each one. It is backed by Salt's existing pluggable cache
(salt.cache.factory(opts)), using a new resources bank alongside the
existing grains and pillar banks. No new storage infrastructure is
required.
bank: "grains", key: "<id>" → {grain_dict}
bank: "pillar", key: "<id>" → {pillar_dict}
bank: "resources", key: "<id>" → {"type": "<type>", "managing_minions": [...]}
Resource IDs are globally unique, so the type is never part of the cache key.
The registry is populated two ways:
- Reported by minions — minions discover and report their managed resources
  via `saltutil.refresh_resources`, analogous to grain reporting today.
- Statically configured — resources defined in Pillar are registered with
  the master on minion connect.
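The registry's bank/key layout can be sketched with an in-memory stand-in. A real implementation would call `salt.cache.factory(opts)` and use its `store`/`fetch` methods against the `resources` bank; the dict here only illustrates the record shape:

```python
# In-memory stand-in for the master-side registry, mirroring the
# bank/key layout above: bank "resources", key <id> -> record.
class ResourceRegistry:
    def __init__(self):
        self._banks = {"resources": {}}

    def register(self, rid, rtype, minion_id):
        """Record that `minion_id` manages resource `rid` of type `rtype`."""
        entry = self._banks["resources"].setdefault(
            rid, {"type": rtype, "managing_minions": []}
        )
        if minion_id not in entry["managing_minions"]:
            entry["managing_minions"].append(minion_id)

    def lookup(self, rid):
        # The type is never part of the key, since IDs are globally unique.
        return self._banks["resources"].get(rid)

reg = ResourceRegistry()
reg.register("web-01", "ssh", "gateway-01")
reg.register("web-01", "ssh", "gateway-02")  # redundant managing minion
print(reg.lookup("web-01"))
# {'type': 'ssh', 'managing_minions': ['gateway-01', 'gateway-02']}
```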
Managing Minion
The minion responsible for executing jobs against a resource. A resource may
have more than one managing minion (e.g. for redundancy). When a job targets
T@ssh:web-01, all managing minions receive the job and each executes it
against web-01.
Targeting
Resources are first-class targeting citizens. All existing targeting methods
continue to work unchanged. Two new targeting engines are added:
| Engine | Form | Meaning |
|---|---|---|
| T@ | T@&lt;type&gt; | All resources of a given type |
| T@ | T@&lt;type&gt;:&lt;id&gt; | One specific resource by SRN |
| M@ | M@&lt;minion_id&gt; | The named managing minion directly |
```
salt '*'                                               # all minions AND all managed resources
salt -C 'T@ssh' test.ping                              # all SSH resources
salt -C 'T@ssh:web-01' cmd.run 'uptime'                # one specific SSH resource
salt -C 'T@ssh and G@os:Debian' pkg.install vim        # compound targeting
salt -C 'M@gateway-01 and T@network_device' test.ping  # resources on one managing minion
```

When salt '*' is used, all registered resources are automatically included
alongside traditional minions.
T@ and M@ are evaluated on the minion side by two new matcher modules:

- `salt/matchers/resource_match.py` — evaluates `T@` by checking
  `opts["resources"]`, analogous to how `grain_match` reads `opts["grains"]`.
- `salt/matchers/managing_minion_match.py` — evaluates `M@` by comparing
  the target against the minion's own ID.
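A minimal sketch of the T@ matcher's logic, assuming `opts["resources"]` maps resource IDs to their type (the exact in-memory shape is an assumption of this sketch):

```python
# Sketch of a minion-side T@ matcher. Assumes opts["resources"] is a
# {resource_id: resource_type} mapping maintained by the minion.
def resource_match(tgt: str, opts: dict) -> bool:
    resources = opts.get("resources", {})
    if ":" in tgt:                        # T@<type>:<id> — one specific resource
        rtype, rid = tgt.split(":", 1)
        return resources.get(rid) == rtype
    return tgt in resources.values()      # T@<type> — any resource of that type

opts = {"resources": {"web-01": "ssh", "core-router": "network_device"}}
print(resource_match("ssh", opts))         # True
print(resource_match("ssh:web-01", opts))  # True
print(resource_match("ssh:db-01", opts))   # False
```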
Architecture
salt/resource/
Resource modules live in salt/resource/, following the same loadable-module
pattern as salt/proxy/. A resource module defines how to connect to and
interact with a specific resource type. Required interface:
```python
def init(opts):
    """One-time setup for this resource type on this minion."""

def ping():
    """Return True if the current resource is reachable."""

def grains():
    """Return a grains dict for the current resource."""
```

The __resource__ dunder provides per-call context — the id and type of
the resource currently being operated on. It is injected by the minion
dispatcher before each execution, keeping all resource module functions
stateless with respect to which specific resource they operate on.
Loader model
The minion maintains two sets of loaders for resources:
- `self.resource_funcs` — one `LazyLoader` for all `salt/resource/*.py`
  modules, created by `salt.loader.resource()`. Packed as `__resource_funcs__`
  into resource execution modules so they can call back into the resource
  connection layer.
- `self.resource_loaders` — a `{type: LazyLoader}` dict, one entry per
  managed resource type, created by
  `salt.loader.resource_modules(opts, resource_type)`. Each loader has
  `opts["resource_type"]` set so execution modules can gate their
  `__virtual__` on it.

Both are initialised in gen_modules() alongside self.functions,
self.returners, etc.
Execution module overrides
Resource-type-specific execution modules live in salt/modules/ and follow
the naming convention <type>resource_<module>.py. They declare
__virtualname__ = "<module>" and gate their __virtual__ on
opts["resource_type"] == "<type>" so they only load into the per-type loader.
Standard modules that have resource-specific overrides yield from their own
__virtual__ when opts["resource_type"] is set:
```python
# salt/modules/test.py
def __virtual__():
    if __opts__.get("resource_type"):
        return False, "test: not loaded in resource-type loaders"
    return True
```

This ensures:

- Resource jobs use the resource-type loader (which has the override).
- Regular minion jobs use `self.functions` (the standard module).
- Functions with no resource override fail loudly rather than silently
  executing on the managing minion.
Execution flow
1. User runs: salt -C 'T@ssh:web-01' cmd.run 'uptime'
2. Master resolves T@ssh:web-01 → resource ID 'web-01'
and publishes the job; managing minion self-selects via T@ matcher
3. Managing minion receives the job
4. _handle_payload detects resource_targets and dispatches sub-jobs
5. Each sub-job runs in its own thread/process with:
- functions_to_use = self.resource_loaders["ssh"]
- __resource__ = {"id": "web-01", "type": "ssh"}
- __resource_funcs__ = self.resource_funcs
6. sshresource_cmd.run() is called, delegates to ssh.cmd_run()
7. Return is keyed by resource ID ("web-01"), not managing minion ID
The return ID remapping happens on the master after the transport security
check, so CLI output shows web-01: ... rather than the managing minion name.
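Steps 4–7 can be sketched as a dispatch loop. Real sub-jobs run in their own thread or process; this illustration runs them inline, with a plain dict standing in for the per-type `LazyLoader`:

```python
# Sketch of minion-side dispatch: one sub-job per targeted resource, with
# returns keyed by resource ID rather than the managing minion's ID.
def dispatch(resource_targets, resource_loaders, fun, *args):
    returns = {}
    for rid, rtype in resource_targets.items():
        funcs = resource_loaders[rtype]             # per-type loader (step 5)
        resource_ctx = {"id": rid, "type": rtype}   # injected as __resource__
        returns[rid] = funcs[fun](resource_ctx, *args)
    return returns                                  # keyed by resource ID (step 7)

# Stand-in "loader": a dict of callables taking the injected context.
loaders = {"ssh": {"cmd.run": lambda res, cmd: f"ran {cmd!r} on {res['id']}"}}

out = dispatch({"web-01": "ssh"}, loaders, "cmd.run", "uptime")
print(out)  # {'web-01': "ran 'uptime' on web-01"}
```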
Configuration
Resources are configured via Pillar. The Pillar key defaults to resources
and is configurable via resource_pillar_key in the minion config:
```yaml
# pillar
resources:
  ssh:
    hosts:
      web-01:
        host: 192.168.1.10
        user: root
        priv: /etc/salt/ssh_keys/web-01
      web-02:
        host: 192.168.1.11
        user: deploy
        passwd: secretpassword
  network_device:
    driver: napalm
    hosts:
      core-router:
        host: 10.0.0.1
        username: admin
        password: {{ pillar['vault:network:password'] }}
```

This follows the same pattern as proxy minion configuration today.
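Expanding such a Pillar structure into SRN-keyed entries is straightforward. A sketch under the assumption that each type's `hosts` mapping holds per-resource connection config (the helper name and exact traversal are illustrative):

```python
# Sketch: expand a resources Pillar structure into {SRN: connection config}.
pillar = {
    "resources": {
        "ssh": {
            "hosts": {
                "web-01": {"host": "192.168.1.10", "user": "root"},
                "web-02": {"host": "192.168.1.11", "user": "deploy"},
            }
        },
        "network_device": {
            "driver": "napalm",
            "hosts": {"core-router": {"host": "10.0.0.1"}},
        },
    }
}

def expand(pillar, key="resources"):
    """Flatten the Pillar tree into SRN -> per-host connection config."""
    out = {}
    for rtype, conf in pillar.get(key, {}).items():
        for rid, host_conf in conf.get("hosts", {}).items():
            out[f"{rtype}:{rid}"] = host_conf
    return out

print(sorted(expand(pillar)))
# ['network_device:core-router', 'ssh:web-01', 'ssh:web-02']
```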