Description
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Tell us about your request
I would love to see Auto Mode support for Security Groups Per Pod.
Which service(s) is this request for?
EKS (Auto Mode)
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We use Security Groups Per Pod (SGPP) on Fargate-scheduled workloads today to manage "north-south" access to AWS resources in our VPC that support Security Group rule management, such as Amazon RDS instances or VPC Interface Endpoints. (Since Fargate doesn't support Kubernetes NetworkPolicy, we also use Security Groups to manage "east-west" traffic between Pods scheduled on Fargate in our EKS cluster.)
As described in this document (https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html), we create a SecurityGroupPolicy for the Pods that we schedule on Fargate. I've observed that when we attempt to schedule a Pod on Auto Mode-managed compute, the resource request for "vpc.amazonaws.com/pod-eni" cannot be met by Karpenter. And indeed, the AWS docs are clear that Auto Mode does not support SGPP (https://docs.aws.amazon.com/eks/latest/userguide/auto-networking.html).
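For reference, our current setup per workload looks roughly like the following sketch. The SecurityGroupPolicy CRD and its fields come from the AWS doc linked above; the names, labels, and the security group ID are placeholders:

```yaml
# SecurityGroupPolicy (vpcresources.k8s.aws CRD) attaching a security
# group to Pods matching a label selector. Works on Fargate today, but
# the resulting vpc.amazonaws.com/pod-eni request is not satisfiable
# on Auto Mode-managed compute.
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: my-workload-sgp        # placeholder
  namespace: my-namespace      # placeholder
spec:
  podSelector:
    matchLabels:
      app: my-workload         # placeholder
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0   # placeholder security group ID
```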
With our need to apply Security Groups to our workloads, the inability to use SGPP in Auto Mode might push us toward the securityGroupSelectorTerms/podSecurityGroupSelectorTerms config described here (https://docs.aws.amazon.com/eks/latest/userguide/create-node-class.html). However, this seems to imply we'd need to maintain a NodeClass and NodePool per workload, in addition to the taint/toleration/nodeAffinity/nodeSelector config necessary to schedule each workload onto its own Nodes.
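To illustrate the per-workload maintenance burden I'm describing, here's my best reading of what each workload would require under that approach. This is a sketch based on the NodeClass doc linked above; all names, tags, and taint keys are hypothetical placeholders:

```yaml
# Per-workload NodeClass selecting a dedicated security group by tag.
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: workload-a-nodeclass            # placeholder
spec:
  securityGroupSelectorTerms:
    - tags:
        Name: workload-a-sg             # placeholder tag selector
---
# Per-workload NodePool, tainted so only workload-a Pods land on it.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: workload-a-nodepool             # placeholder
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: workload-a-nodeclass
      taints:
        - key: workload-a               # placeholder taint
          effect: NoSchedule
```

Each workload's Pod spec would then also need a matching toleration (and nodeSelector or nodeAffinity), multiplying the config surface by the number of workloads.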
Are you currently working around this issue?
Currently, this issue has led us to stay on Fargate for Clusters where we deploy multiple workloads. We would like to migrate our use of Fargate to EKS Auto Mode.
Additional context
I'll be the first to admit that I'm still maturing in my EKS and Kubernetes usage, so I would welcome insight into workarounds or better approaches. One ultimate solution would be to outgrow our use of Fargate and Auto Mode, but we're not quite there yet, and I'd suppose other AWS customers are in this situation.