
Need IAM permissions for Session Manager added to Boskos-managed AWS accounts #984

Open
randomvariable opened this issue Jun 25, 2020 · 6 comments
Labels: lifecycle/frozen · priority/backlog · sig/k8s-infra

Comments

@randomvariable
Member

Porting over from kubernetes/test-infra#17190 by @detiber

What would you like to be added:

We recently removed the default creation of a bastion host from cluster-api-provider-aws, and we would prefer to avoid creating one just for testing. Instead, we would rather use SSH proxied over AWS Session Manager to connect to remote instances when retrieving logs during testing.
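The SSH-over-Session-Manager approach described above is typically wired up through an SSH `ProxyCommand`. A minimal sketch of the client-side configuration (this assumes the AWS CLI and the Session Manager plugin are installed on the client; it is not specified in this issue):

```
# ~/.ssh/config — route SSH to EC2 instance IDs through Session Manager
Host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```

With this in place, `ssh ec2-user@i-0123456789abcdef0` opens an SSM session instead of a direct TCP connection, so no bastion or open port 22 is needed.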

One of the following will need to be done for the IAM user in each AWS subaccount that is registered as an aws-account resource in Boskos:

- Attach the AmazonSSMFullAccess managed policy, or
- Create a policy with the permissions outlined in docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-restrict-access.html and attach it, or add it as an inline policy.
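For the first option, a sketch of how the managed policy might be attached with the AWS CLI (the user name below is a placeholder, and this assumes credentials for the relevant subaccount are configured):

```shell
# Illustrative only: attach the AWS-managed AmazonSSMFullAccess policy
# to the Boskos-registered IAM user in a subaccount.
# "boskos-user" is a placeholder, not the actual user name.
aws iam attach-user-policy \
  --user-name boskos-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess
```

The second, more restrictive option would instead use `aws iam put-user-policy` with a policy document scoped per the linked Session Manager guide.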

@detiber
Member

detiber commented Aug 12, 2020

/assign @justinsb

@randomvariable
Member Author

Is there anything we can do to get this moving? We missed an upgrade bug for Kubernetes 1.19 for a few reasons, and the problem was compounded by not having the logs that SSM would have provided.

@justinsb
Member

I added the permission to #1016 (and it's in the latest accounts that I just created, but we should wait till #1016 merges before applying to boskos)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 17, 2021
@detiber
Member

detiber commented Mar 8, 2021

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 8, 2021
@spiffxp
Member

spiffxp commented Sep 2, 2021

/priority backlog
I have lost track of where we're at with AWS account management

@k8s-ci-robot k8s-ci-robot added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Sep 2, 2021
@k8s-ci-robot k8s-ci-robot added sig/k8s-infra Categorizes an issue or PR as relevant to SIG K8s Infra. and removed wg/k8s-infra labels Sep 29, 2021
Projects: Status: Backlog

6 participants