Add registration admission control for httpd #495

Draft
pablomh wants to merge 4 commits into theforeman:master from pablomh:httpd-registration-admission-control

Conversation

Contributor

@pablomh pablomh commented May 5, 2026

Summary

Add registration admission control for /rhsm and /register endpoints via an Apache mod_proxy_balancer pool. When httpd_registration_admission_max > 0, requests beyond that limit are queued by Apache instead of overwhelming Puma during registration bursts.

Builds on top of #483 (MPM event module configuration).

Problem

Under high-concurrency registration (e.g., 912+ simultaneous hosts), all requests arrive at Puma at once, creating a deep backlog in which the last hosts in the queue exceed subscription-manager's 180s timeout. Without admission control, registration pass rates drop sharply above 760 concurrent hosts.

Solution

A <Proxy balancer://foreman-registration> block with a max= limit on the BalancerMember directive limits how many registration requests reach Puma concurrently. Excess requests are held in Apache's event loop (non-blocking) until a slot opens.

The balancer block is injected conditionally in both foreman-ssl-vhost.conf.j2 and foreman-vhost.conf.j2 before the catch-all ProxyPass /, so /rhsm and /register match first while all other traffic (Web UI, API, Pulp) is unaffected.
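The injected block could look roughly like this (a sketch only: the backend member URL and exact ProxyPass targets are assumptions, not copied from the templates; Foreman deployments often proxy to Puma over a unix socket instead):

```apache
# Sketch of the conditional balancer block (member URL is hypothetical).
<Proxy "balancer://foreman-registration">
    # max= caps how many connections Apache opens to Puma for this pool;
    # excess requests wait in the event MPM until a slot frees up.
    BalancerMember "http://127.0.0.1:3000" max=600
</Proxy>

# Registration endpoints route through the throttled pool; these rules
# must appear before the catch-all so they match first.
ProxyPass "/rhsm"     "balancer://foreman-registration/rhsm"
ProxyPass "/register" "balancer://foreman-registration/register"

# Catch-all for all other traffic (Web UI, API, Pulp) stays unthrottled.
ProxyPass "/" "http://127.0.0.1:3000/"
```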

Configuration

Disabled by default (`httpd_registration_admission_max: 0`). Tuning profiles set values based on `puma_workers * threads * 5`:

| Profile | Value |
| --- | --- |
| default | 0 (disabled) |
| medium | 300 |
| large | 600 |
| extra-large | 1200 |
| extra-extra-large | 2400 |
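As a concrete illustration, a tuning profile's variables file might carry the override like this (the file layout is an assumption; the variable name and values come from this PR):

```yaml
# Hypothetical vars file for the "large" tuning profile.
# Per the formula puma_workers * threads * 5, the large profile's
# workers * threads product is 120, so 120 * 5 = 600.
httpd_registration_admission_max: 600
```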

Results (16-CPU large Satellite)

| Concurrent hosts | Pass rate without throttle | Pass rate with throttle (max=600) |
| --- | --- | --- |
| 912 | 72.8% | 100% |
| 1064 | 71.8% | 99.8% |
| 1368 | 72.5% | 97.0% |

Note

A similar mechanism could be implemented for foreman-installer deployments via puppet-foreman using the existing `foreman::config::apache::fragment` mechanism.

How to test

  1. Deploy with a tuning profile: `foremanctl deploy --tuning large`
  2. Verify the throttle block is rendered: `grep foreman-registration /etc/httpd/conf.d/foreman-ssl.conf`
  3. Verify config syntax: `httpd -t`
  4. Register hosts concurrently and observe improved pass rates at high concurrency.

pablomh and others added 4 commits May 4, 2026 22:23
Deploy event.conf template for the Apache MPM event module with
configurable parameters via role defaults. This establishes foremanctl
management of ServerLimit, MaxRequestWorkers, ListenBacklog, and
related parameters.

Defaults: ServerLimit=25, ThreadsPerChild=16. MaxRequestWorkers is
only rendered when explicitly set, allowing Apache to derive it as
ServerLimit * ThreadsPerChild by default.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
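Under the defaults described above, the rendered event.conf would look roughly like this (a sketch; indentation and any additional directives such as ListenBacklog are illustrative, not taken from the commit):

```apache
<IfModule mpm_event_module>
    # Role defaults from this commit.
    ServerLimit      25
    ThreadsPerChild  16
    # MaxRequestWorkers is deliberately not rendered here, so Apache
    # derives it as ServerLimit * ThreadsPerChild = 25 * 16 = 400.
</IfModule>
```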
Add httpd ServerLimit and MaxRequestWorkers overrides to each tuning
profile, matching the values from foreman-installer custom-hiera
tuning sizes.

All tuning profiles (medium, large, extra-large, extra-extra-large):
  ServerLimit=64, MaxRequestWorkers=1024

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Verify that event.conf is deployed with the expected directives
and that httpd config syntax is valid after deployment.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Limit concurrent registration connections to Puma for /rhsm and
/register endpoints via an Apache balancer pool. When
httpd_registration_admission_max > 0, requests beyond that limit
are queued by Apache instead of overwhelming Puma during bursts.

Disabled by default (httpd_registration_admission_max: 0). Tuning
profiles set values based on puma_workers * threads * 5:
medium=300, large=600, extra-large=1200, extra-extra-large=2400.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@pablomh pablomh marked this pull request as draft May 5, 2026 22:24