Bump default API QPS limits for Kubelet #116121
Conversation
@@ -96,10 +96,10 @@ func SetDefaults_KubeletConfiguration(obj *kubeletconfigv1beta1.KubeletConfigura
 		obj.RegistryBurst = 10
 	}
 	if obj.EventRecordQPS == nil {
-		obj.EventRecordQPS = utilpointer.Int32Ptr(5)
+		obj.EventRecordQPS = utilpointer.Int32Ptr(1000)
This is obviously not the right way of doing that, as we can't simply change defaults within an existing API version.
The question is how exactly we should do it:
- we seem to already have a v1 API, but it doesn't even have any defaulting:
https://github.com/kubernetes/kubernetes/tree/master/pkg/kubelet/apis/config/v1 - given the config is already in v1, what options do we have to change it?
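For readers without the surrounding file open: the change above sits in the kubelet's config defaulting, which only fills in fields the user left unset. Below is a minimal, self-contained sketch of that pattern; the struct, helper, and field names are hypothetical stand-ins rather than the real kubeletconfigv1beta1 types, and the 50/100 literals reflect the values discussed later in this thread rather than this commit's 1000.

```go
package main

import "fmt"

// exampleConfig stands in for a versioned config type. Optional numeric
// fields are pointers so that "unset" can be distinguished from an explicit 0.
type exampleConfig struct {
	EventRecordQPS *int32
	KubeAPIQPS     *int32
	KubeAPIBurst   *int32
}

func int32Ptr(i int32) *int32 { return &i }

// setDefaults fills in only the fields the user left unset. Changing the
// literals here silently changes behavior for every cluster that relied on
// the old defaults, which is exactly what the review discussion is about.
func setDefaults(obj *exampleConfig) {
	if obj.EventRecordQPS == nil {
		obj.EventRecordQPS = int32Ptr(50)
	}
	if obj.KubeAPIQPS == nil {
		obj.KubeAPIQPS = int32Ptr(50)
	}
	if obj.KubeAPIBurst == nil {
		obj.KubeAPIBurst = int32Ptr(100)
	}
}

func main() {
	cfg := &exampleConfig{KubeAPIQPS: int32Ptr(10)} // explicitly set values are kept
	setDefaults(cfg)
	fmt.Println(*cfg.EventRecordQPS, *cfg.KubeAPIQPS, *cfg.KubeAPIBurst) // 50 10 100
}
```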
I'm not sure this is so terrible in a config type?
Do you want to remove the limit or increase it a lot? At 1k you might as well remove it completely?
In that case an alternative could be to add a "UseRateLimit" field and default it to false?
I'd get kubelet authors to weigh in first.
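A rough sketch of what that alternative could look like, assuming a hypothetical UseRateLimit-style toggle wired to client-go's flowcontrol helpers; this is not code from the PR, just an illustration of the idea.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

// buildRateLimiter illustrates the hypothetical "UseRateLimit" idea: when the
// toggle is off, no client-side limiting is applied and the kubelet relies on
// server-side protection (APF); when it is on, a token bucket enforces QPS/burst.
func buildRateLimiter(useRateLimit bool, qps float32, burst int) flowcontrol.RateLimiter {
	if !useRateLimit {
		// Never blocks, i.e. effectively no client-side limit.
		return flowcontrol.NewFakeAlwaysRateLimiter()
	}
	return flowcontrol.NewTokenBucketRateLimiter(qps, burst)
}

func main() {
	rl := buildRateLimiter(true, 50, 100)
	fmt.Println("request admitted without waiting:", rl.TryAccept())
}
```

(As the thread below notes, simply raising the existing defaults was ultimately preferred over a new toggle, since previously specified flags keep working as their authors expected.)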
I don't know why we wouldn't go to 25 or 50 as a first step, then jump another bit a release later after some soak time. I struggle to think of kubelet even on 512 pod nodes being able to saturate 50qps anyway
I struggle to think of kubelet even on 512 pod nodes being able to saturate 50qps anyway
For an extended period of time - I agree. As a spike - I actually can: let's say some large pod just finished there and we now have room to start 30 new small pods, with 10 secrets each. That gives O(400) API calls...
That said - I would be fine with doing it in steps, as long as it isn't a big overhead like having to create a new config version to avoid changing defaults within a single (group, version) or something.
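Rough arithmetic behind that spike estimate (an illustrative breakdown, not spelled out in the comment): 30 pods × 10 secrets is already ~300 secret reads, and each pod startup adds a handful of further calls (status updates, configmap reads, events), which lands the burst comfortably in the O(400) range.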
When I looked at this for EKS, I found that I got most of the benefit just by doubling the QPS/burst. We've used 10 QPS / 20 burst as the default kubelet values for EKS AMIs since K8s 1.22. I'm happy to see the defaults increased.
The graph below (3k pods across 30 nodes) shows time to pod readiness.
The other situation where it helps is auto-scaling: e.g. new pods are created, causing new nodes to be launched, and large numbers of pods then schedule to the same node at roughly the same time as it becomes ready.
Refs:
At 1k you might as well remove it completely?
I think having a safeguard at a very high, but not extremely high, value would protect the API server from errors in kubelet. Virtually unlimited normal operation sounds reasonable to me.
@wojtek-t when I looked yesterday I thought the limit increased to 100/500. I might not remember correctly since I don't see a force push there =). For me 100/500 was the good first step, as @smarterclayton suggested.
One risk of 1k is that we can potentially make kubelet become a noisy neighbor at the moment of rescheduling (like @wojtek-t described in #116121 (comment) with 400 requests). It may be a problem when there are other API server clients on the node, or when the network is very limited and close to being saturated... But we are keeping the configuration settings, and for setups with limited network connectivity one can change the defaults back.
I'd vote for 100/500 as a first step in 1.27.
This seems better than --use-rate-limit that is false by default, because at least if someone previously specified these flags they would continue to work as the author expected.
I think the question for sig-node is more about whether we can change the default at all. And to us (api-machinery) about how confident we are in disabling the client-side rate limiter. We (redhat) have not yet tried disabling the rate limiter for kubelets.
@wojtek-t when I looked yesterday I thought the limit increased to 100/500. I might not remember correctly since I don't see a force push there =). For me 100/500 was the good first step, as @smarterclayton suggested.
I didn't change that in the meantime.
FWIW - Clayton suggested even smaller, but for now I switched to 50/100 - if we're ok with changing the defaults at the config level, this isn't much work, so it's for sure ok to go gradually.
One risk of 1k is that we can potentially make kubelet become a noisy neighbor
Agree - but the fact that we're allowing clients to send requests doesn't mean that kube-apiserver won't reject them anyway. So it's on kube-apiserver to decide whether it has capacity to process them, and APF should do the job here.
This seems better than --use-rate-limit that is false by default, because at least if someone previously specified these flags they would continue to work as the author expected.
Agree - this is why I also started with this WIP PR.
And to us (api-machinery) about how confident we are in disabling the client-side rate limiter. We (redhat) have not yet tried disabling the rate limiter for kubelets.
While I don't have full confidence in other components yet, I think we're pretty confident in bumping kubelets at this point from the Google side.
@SergeyKanzhelev - with whom from the node team should I talk about this?
Some related info: I doubled the default kubelet QPS/burst for both the EKS AL2 and Bottlerocket AMIs last year for the same reasons. There are some metrics/numbers in the BR PR at bottlerocket-os/bottlerocket#2436
I think it is a very good improvement. I cannot comment from the backend side on whether there will be scalability issues, but for the majority of clusters these new defaults should simply work. Maybe the release notes on this PR could be expanded to note that action is required (review the new defaults and adjust if needed) and provide a link back to the docs.
From the PR mechanics side, we also have these values documented in a field description:
It is not a backward-compatible change, but I don't think it breaks anything, and I'd suggest we take it with the appropriate release notes.
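The field descriptions mentioned above are the doc comments on the versioned config type. The sketch below paraphrases their shape (the type name and comment wording are illustrative, not the exact upstream text) to show what has to be kept in sync with the defaulting change:

```go
package config

// ExampleKubeletConfiguration paraphrases the shape of the versioned kubelet
// config type; the comment text is illustrative, not the upstream wording.
type ExampleKubeletConfiguration struct {
	// EventRecordQPS is the maximum number of event creations per second.
	// A value of 0 means there is no limit.
	// Default: 5  <- this documented default must be bumped together with
	// the defaulting code changed in this PR, or docs and behavior drift apart.
	// +optional
	EventRecordQPS *int32 `json:"eventRecordQPS,omitempty"`
}
```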
I have a question. Is it ok to increase the default limit when running against a two-versions-older (v1.25) kube-apiserver?
Version skew backwards is not supported; the API server must be at the same or a newer version than the kubelet.
Force-pushed from cec4184 to 10ead07.
@SergeyKanzhelev - thanks for comments, PTAL
This PR may require API review. If so, when the changes are ready, complete the pre-review checklist and request an API review. Status of requested reviews is tracked in the API Review project.
I saw the original proposed 1000 reset back to 50, and burst is 100 from 5. I am ok starting from here, especially since there are no concerns from API Machinery. The original configuration was chosen largely due to limitations on the API Machinery side. /lgtm
LGTM label has been added. Git tree hash: ec365c9044cd8410100dbe00e66411174c01332e
Happy to see these increased.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: lavalamp, tzneal, wojtek-t. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Thanks Dawn and Daniel! /retest
/triage accepted
/label api-review
We discussed the mechanics of it on SIG Node and agreed it is acceptable to change this default.
This helps in our clusters, and the first case for throttling that we meet is
Ref kubernetes/enhancements#1040
Based on different experiments with APF, we believe that we're ready to start our journey towards getting rid of client-side rate-limiting. Kubelet is the best first candidate, because:
/kind feature
/priority important-longterm
/sig node
/assign @lavalamp @deads2k
/cc @tkashem @MikeSpreitzer