Background
This issue continues the discussion from #345, which was closed by #350. That PR added documentation guidance for managing multiple gRPC connections, but left an actual in-library implementation as future work. This PR delivers that implementation.
@Nezteb (the original author of #345) and I had been planning to work on this together for a very long time. The implementation is now ready, and I'm submitting it for review.
What this adds
A supervised connection pool for gRPC channels built directly into the client, with:
Multiple concurrent HTTP/2 connections managed under an Erlang/OTP supervisor tree
Load distribution across connections, addressing the :max_concurrent_streams bottleneck that #345 highlighted (Mint's default cap of 100 concurrent streams per channel)
A legacy mode opt-out flag so existing users are not affected by the change; the pool is strictly opt-in (see the sketch after this list)
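To make the approach concrete, here is a minimal sketch of the general technique (several channels opened up front, handed out round-robin so streams are spread across connections). It is built only on the library's existing GRPC.Stub.connect/1; the module name, option names, and pool size are hypothetical and are not the API introduced by this PR, and reconnection/health monitoring is omitted for brevity.

```elixir
defmodule MyApp.GRPCChannelPool do
  @moduledoc """
  Illustrative sketch only: a round-robin pool of gRPC channels.
  Not the API added by this PR; names and options are hypothetical.
  """
  use GenServer

  # Start under a supervision tree, e.g.:
  #   {MyApp.GRPCChannelPool, target: "localhost:50051", size: 4}
  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  # Returns one of the pooled channels, rotating on each call so load
  # (and therefore HTTP/2 stream usage) is spread across connections.
  def checkout do
    GenServer.call(__MODULE__, :checkout)
  end

  @impl true
  def init(opts) do
    target = Keyword.fetch!(opts, :target)
    size = Keyword.get(opts, :size, 4)

    # Open N independent HTTP/2 connections using the existing public API.
    channels =
      for _ <- 1..size do
        {:ok, channel} = GRPC.Stub.connect(target)
        channel
      end

    {:ok, %{channels: channels, index: 0}}
  end

  @impl true
  def handle_call(:checkout, _from, %{channels: channels, index: index} = state) do
    channel = Enum.at(channels, rem(index, length(channels)))
    {:reply, channel, %{state | index: index + 1}}
  end
end
```

A caller would fetch a channel with MyApp.GRPCChannelPool.checkout() and pass it to a generated stub as usual. The in-library pool in this PR additionally supervises the connections and monitors their status, which this sketch deliberately leaves out.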
Motivation
The original issue correctly identified that a single reused channel becomes a bottleneck at scale. The recommended approach from that discussion, "keeping multiple connections open, monitoring their status and spreading the load across them", is exactly what this implements, natively in the library rather than requiring users to roll their own solution or reach for external libraries like conn_grpc.
Disclaimer
The core pooling logic predates this PR. It has been running in production as a standalone internal library for approximately two years. The integration work to fit it cleanly into the existing elixir-grpc codebase was assisted by AI tooling, but all original pooling logic and design decisions were written by a human. I want to be transparent about this so reviewers can weigh the maturity of the pooling logic separately from the integration code, and give the integration code a bit more attention during review.
PR Ref: #523