From 705d0b2ca084213b38c32ff8b60f0c6c683bdc41 Mon Sep 17 00:00:00 2001 From: CFS Docs Date: Tue, 26 Nov 2019 14:02:45 +0000 Subject: [PATCH] Derek Poindexter: Update run_travis.sh --- cs_access_reference.md | 58 +++++++++++++++++++------------------- cs_cluster_plan_ha.md | 10 +++---- cs_cluster_scaling.md | 10 +++---- cs_cluster_update.md | 42 +++++++++++++-------------- cs_clusters.md | 38 ++++++++++++------------- cs_encrypt.md | 6 ++-- cs_ingress.md | 22 +++++++-------- cs_ingress_about.md | 30 ++++++++++---------- cs_ingress_settings.md | 6 ++-- cs_ingress_user_managed.md | 8 +++--- cs_limitations.md | 10 +++---- cs_loadbalancer.md | 6 ++-- cs_loadbalancer_about.md | 8 +++--- cs_loadbalancer_dns.md | 8 +++--- cs_network_cluster.md | 8 +++--- cs_network_policy.md | 8 +++--- cs_nodeport.md | 6 ++-- cs_overview.md | 8 +++--- cs_secure.md | 28 +++++++++--------- cs_storage_cos.md | 6 ++-- cs_subnets.md | 10 +++---- cs_troubleshoot_storage.md | 10 +++---- cs_users.md | 44 ++++++++++++++--------------- cs_why.md | 6 ++-- cs_worker_add.md | 12 ++++---- cs_worker_plan.md | 32 ++++++++++----------- release_notes.md | 10 +++---- 27 files changed, 225 insertions(+), 225 deletions(-) diff --git a/cs_access_reference.md b/cs_access_reference.md index 0f8106d01..a86e9ea57 100644 --- a/cs_access_reference.md +++ b/cs_access_reference.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks, infrastructure, rbac, policy @@ -26,7 +26,7 @@ subcollection: containers # User access permissions {: #access_reference} -When you [assign cluster permissions](/docs/containers?topic=containers-users), it can be hard to judge which role you need to assign to a user. Use the tables in the following sections to determine the minimum level of permissions that are required to perform common tasks in {{site.data.keyword.containerlong}}. +When you [assign cluster permissions](/docs/containers?topic=containers-users), it can be hard to judge which role you need to assign to a user. Use the tables in the following sections to determine the minimum level of permissions that are required to perform common tasks in {{site.data.keyword.containerlong}}. {: shortdesc} ## {{site.data.keyword.cloud_notm}} IAM platform roles @@ -59,7 +59,7 @@ The following table shows the permissions granted by each {{site.data.keyword.cl | Deprecated: List the available regions. | [`ibmcloud ks region ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_regions) | [`GET /v1/regions`](https://containers.cloud.ibm.com/global/swagger-global-api/#/util/GetRegions) | | View a list of supported locations in {{site.data.keyword.containerlong_notm}}. | [`ibmcloud ks supported-locations`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_supported-locations) | [`GET /v1/locations`](https://containers.cloud.ibm.com/global/swagger-global-api/#/util/ListLocations) | | View a list of supported versions in {{site.data.keyword.containerlong_notm}}. | [`ibmcloud ks versions`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_versions_command) | - | -| View a list of available zones that you can create a cluster in. | [`ibmcloud ks zone ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_datacenters) | | +| View a list of available zones that you can create a cluster in. 
| [`ibmcloud ks zone ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_datacenters) | | {: class="simple-tab-table"} {: caption="Overview of permissions required for CLI commands and API calls in {{site.data.keyword.containerlong_notm}}." caption-side="top"} {: #accessreftabtablenone} @@ -68,14 +68,14 @@ The following table shows the permissions granted by each {{site.data.keyword.cl | Action | CLI command | API call | |----|----|----| -| View information for an Ingress ALB. | [`ibmcloud ks alb get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_alb_get) | | +| View information for an Ingress ALB. | [`ibmcloud ks alb get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_alb_get) | | | View ALB types that are supported in the region. | [`ibmcloud ks alb types`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_alb_types) | [`GET /v1/albtypes`](https://containers.cloud.ibm.com/global/swagger-global-api/#/util/GetAvailableALBTypes) | -| List all Ingress ALBs in a cluster. | [`ibmcloud ks alb ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_albs) | | +| List all Ingress ALBs in a cluster. | [`ibmcloud ks alb ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_albs) | | | View the name and email address for the owner of the {{site.data.keyword.cloud_notm}} IAM API key for a resource group and region. | [`ibmcloud ks api-key info`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_api_key_info) | [`GET /v1/logging/{idOrName}/clusterkeyowner`](https://containers.cloud.ibm.com/global/swagger-global-api/#/logging/GetClusterKeyOwner) | | Download Kubernetes configuration data and certificates to connect to your cluster and run kubectl commands. | [`ibmcloud ks cluster config`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_config) | [`GET /v1/clusters/{idOrName}/config`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/GetClusterConfig) | -| View information for a cluster. | [`ibmcloud ks cluster get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_get) | | +| View information for a cluster. | [`ibmcloud ks cluster get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_get) | | | List all services in all namespaces that are bound to a cluster. | [`ibmcloud ks cluster service ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_services) | [`GET /v1/clusters/{idOrName}/services`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/ListServicesForAllNamespaces) | -| List all clusters. | [`ibmcloud ks cluster ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_clusters) | | +| List all clusters. | [`ibmcloud ks cluster ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_clusters) | | | Get the infrastructure credentials that are set for the {{site.data.keyword.cloud_notm}} account to access a different IBM Cloud infrastructure portfolio. 
| [`ibmcloud ks credential get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_credential_get) | [`GET /v1/credentials`](https://containers.cloud.ibm.com/global/swagger-global-api/#/accounts/GetUserCredentials) | | Check whether the credentials that allow access to the IBM Cloud infrastructure portfolio for the targeted region and resource group are missing suggested or required infrastructure permissions. | [`ibmcloud ks infra-permissions get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#infra_permissions_get) | [`GET /v1/infra-permissions`](https://containers.cloud.ibm.com/global/swagger-global-api/#/accounts/GetInfraPermissions) | | View the status for automatic updates of the Fluentd add-on. | [`ibmcloud ks logging autoupdate get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_log_autoupdate_get) | [`GET /v1/logging/{idOrName}/updatepolicy`](https://containers.cloud.ibm.com/global/swagger-global-api/#/logging/GetUpdatePolicy) | @@ -86,15 +86,15 @@ The following table shows the permissions granted by each {{site.data.keyword.cl | List all services that are bound to a specific namespace. | - | [`GET /v1/clusters/{idOrName}/services/{namespace}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/ListServicesInNamespace) | | List all IBM Cloud infrastructure subnets that are bound to a cluster. | - | [`GET /v1/clusters/{idOrName}/subnets`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/GetClusterSubnets) | | List all user-managed subnets that are bound to a cluster. | - | [`GET /v1/clusters/{idOrName}/usersubnets`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/GetClusterUserSubnet) | -| List available subnets. | [`ibmcloud ks subnets`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_subnets) | | +| List available subnets. | [`ibmcloud ks subnets`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_subnets) | | | View the VLAN spanning status for the infrastructure account. | [`ibmcloud ks vlan spanning get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_vlan_spanning_get) | [`GET /v1/subnets/vlan-spanning`](https://containers.cloud.ibm.com/global/swagger-global-api/#/accounts/GetVlanSpanning) | -| When set for one cluster: List VLANs that the cluster is connected to in a zone.
When set for all clusters in the account: List all available VLANs in a zone. | [`ibmcloud ks vlan ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_vlans) | [`GET /v1/datacenters/{datacenter}/vlans`](https://containers.cloud.ibm.com/global/swagger-global-api/#/properties/GetDatacenterVLANs) | -| List all VPCs in the targeted resource group. | [`ibmcloud ks vpcs`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_vpcs) | [`GET /v2​/vpc​/getVPCs`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/getVPCs) | +| When set for one cluster: List VLANs that the cluster is connected to in a zone.
When set for all clusters in the account: List all available VLANs in a zone. | [`ibmcloud ks vlan ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_vlans) | [`GET /v1/datacenters/{datacenter}/vlans`](https://containers.cloud.ibm.com/global/swagger-global-api/#/properties/GetDatacenterVLANs) | +| List all VPCs in the targeted resource group. | [`ibmcloud ks vpcs`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_vpcs) | [`GET /v2​/vpc​/getVPCs`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/getVPCs) | | List all webhooks for a cluster. | - | [`GET /v1/clusters/{idOrName}/webhooks`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/GetClusterWebhooks) | -| View information for a worker node. | [`ibmcloud ks worker get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_get) | | +| View information for a worker node. | [`ibmcloud ks worker get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_get) | | | View information for a worker pool. | [`ibmcloud ks worker-pool get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_pool_get) | | | List all worker pools in a cluster. | [`ibmcloud ks worker-pool ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_pools) | | -| List all worker nodes in a cluster. | [`ibmcloud ks worker ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_workers) | | +| List all worker nodes in a cluster. | [`ibmcloud ks worker ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_workers) | | {: class="simple-tab-table"} {: caption="Overview of permissions required for CLI commands and API calls in {{site.data.keyword.containerlong_notm}}." caption-side="top"} {: #accessreftabtableview} @@ -106,8 +106,8 @@ The following table shows the permissions granted by each {{site.data.keyword.cl | Disable automatic updates for the Ingress ALB add-on. | [`ibmcloud ks alb autoupdate disable`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_alb_autoupdate_disable) | [`PUT /v1/clusters/{idOrName}/updatepolicy`](https://containers.cloud.ibm.com/global/swagger-global-api/#/alb/ChangeUpdatePolicy) | | Enable automatic updates for the Ingress ALB add-on. | [`ibmcloud ks alb autoupdate enable`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_alb_autoupdate_enable) | [`PUT /v1/clusters/{idOrName}/updatepolicy`](https://containers.cloud.ibm.com/global/swagger-global-api/#/alb/ChangeUpdatePolicy) | | Check whether automatic updates for the Ingress ALB add-on are enabled. | [`ibmcloud ks alb autoupdate get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_alb_autoupdate_get) | [`GET /v1/clusters/{idOrName}/updatepolicy`](https://containers.cloud.ibm.com/global/swagger-global-api/#/alb/GetUpdatePolicy) | -| Enable or disable an Ingress ALB in a classic cluster. | [`ibmcloud ks alb configure classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_alb_configure) | [`POST /v1/albs`](https://containers.cloud.ibm.com/global/swagger-global-api/#/alb/EnableALB) and [`DELETE /v1/albs/{albId}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/) | -| Enable or disable an Ingress ALB in a VPC cluster. 
| [`ibmcloud ks alb configure vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_alb_configure_vpc_classic) | [`POST /v2/alb/vpc/enableAlb`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/VpcEnableALB) and [`POST /v2/alb/vpc/disableAlb`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/VpcDisableALB) | +| Enable or disable an Ingress ALB in a classic cluster. | [`ibmcloud ks alb configure classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_alb_configure) | [`POST /v1/albs`](https://containers.cloud.ibm.com/global/swagger-global-api/#/alb/EnableALB) and [`DELETE /v1/albs/{albId}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/) | +| Enable or disable an Ingress ALB in a VPC cluster. | [`ibmcloud ks alb configure vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_alb_configure_vpc_classic) | [`POST /v2/alb/vpc/enableAlb`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/VpcEnableALB) and [`POST /v2/alb/vpc/disableAlb`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/VpcDisableALB) | | Roll back the Ingress ALB add-on update to the build that your ALB pods were previously running. | [`ibmcloud ks alb rollback`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_alb_rollback) | [`PUT /v1/clusters/{idOrName}/updaterollback`](https://containers.cloud.ibm.com/global/swagger-global-api/#/alb/RollbackUpdate) | | Force a one-time update of your ALB pods by manually updating the Ingress ALB add-on. | [`ibmcloud ks alb update`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_alb_update) | [`PUT /v1/clusters/{idOrName}/update`](https://containers.cloud.ibm.com/global/swagger-global-api/#/alb/UpdateALBs) | | Create an API server audit webhook. | [`ibmcloud ks cluster master audit-webhook set`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_apiserver_config_set) | [`PUT /v1/clusters/{idOrName}/apiserverconfigs/auditwebhook`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/apiserverconfigs/UpdateAuditWebhook) | @@ -124,12 +124,12 @@ The following table shows the permissions granted by each {{site.data.keyword.cl | Delete all logging filter configurations for the Kubernetes cluster. | - | [`DELETE /v1/logging/{idOrName}/filterconfigs`](https://containers.cloud.ibm.com/global/swagger-global-api/#/filter/DeleteFilterConfigs) | | Update a log filtering configuration. | [`ibmcloud ks logging filter update`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_log_filter_update) | [`PUT /v1/logging/{idOrName}/filterconfigs/{id}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/filter/UpdateFilterConfig) | | Add one NLB IP address to an existing NLB subdomain. | [`ibmcloud ks nlb-dns add`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-add) | [`PUT /v1/clusters/{idOrName}/add`](https://containers.cloud.ibm.com/global/swagger-global-api/#/nlb-dns/UpdateDNSWithIP) | -| Create a DNS subdomain to register an NLB IP address. | [`ibmcloud ks nlb-dns create classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-create) | [`POST /v1/clusters/{idOrName}/register`](https://containers.cloud.ibm.com/global/swagger-global-api/#/nlb-dns/RegisterDNSWithIP) | -| Create a DNS subdomain to register a VPC load balancer hostname. 
| [`ibmcloud ks nlb-dns create vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-create-vpc) | [`POST /v2/nlb-dns/vpc/createNlbDNS`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/CreateNlbDNS) | -| List NLB subdomains and either the NLB IP addresses (classic clusters) or the load balancer hostnames (VPC clusters) that are registered with the DNS provider for each NLB subdomain. | [`ibmcloud ks nlb-dns ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-ls) | | -| Replace the VPC load balancer hostname for a subdomain. | [`ibmcloud ks nlb-dns replace`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-replace) | [`POST /v2/nlb-dns/vpc/replaceLBHostname`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/ReplaceLBHostname) | -| Remove an NLB IP address from a subdomain. | [`ibmcloud ks nlb-dns rm classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-rm) | [`DELETE /v1/clusters/{idOrName}/host/{nlbHost}/ip/{nlbIP}/remove`](https://containers.cloud.ibm.com/global/swagger-global-api/#/nlb-dns/UnregisterDNSWithIP) | -| Remove a VPC load balancer hostname from a subdomain. | [`ibmcloud ks nlb-dns rm vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-rm-vpc) | [`POST /v2/nlb-dns/vpc/removeLBHostname`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/RemoveLBHostname) | +| Create a DNS subdomain to register an NLB IP address. | [`ibmcloud ks nlb-dns create classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-create) | [`POST /v1/clusters/{idOrName}/register`](https://containers.cloud.ibm.com/global/swagger-global-api/#/nlb-dns/RegisterDNSWithIP) | +| Create a DNS subdomain to register a VPC load balancer hostname. | [`ibmcloud ks nlb-dns create vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-create-vpc) | [`POST /v2/nlb-dns/vpc/createNlbDNS`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/CreateNlbDNS) | +| List NLB subdomains and either the NLB IP addresses (classic clusters) or the load balancer hostnames (VPC clusters) that are registered with the DNS provider for each NLB subdomain. | [`ibmcloud ks nlb-dns ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-ls) | | +| Replace the VPC load balancer hostname for a subdomain. | [`ibmcloud ks nlb-dns replace`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-replace) | [`POST /v2/nlb-dns/vpc/replaceLBHostname`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/ReplaceLBHostname) | +| Remove an NLB IP address from a subdomain. | [`ibmcloud ks nlb-dns rm classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-rm) | [`DELETE /v1/clusters/{idOrName}/host/{nlbHost}/ip/{nlbIP}/remove`](https://containers.cloud.ibm.com/global/swagger-global-api/#/nlb-dns/UnregisterDNSWithIP) | +| Remove a VPC load balancer hostname from a subdomain. | [`ibmcloud ks nlb-dns rm vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-rm-vpc) | [`POST /v2/nlb-dns/vpc/removeLBHostname`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/RemoveLBHostname) | | Configure and optionally enable a health check monitor for an existing NLB subdomain in a cluster. 
| [`ibmcloud ks nlb-dns monitor configure`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-monitor-configure) | [`POST /v1/health/clusters/{idOrName}/config`](https://containers.cloud.ibm.com/global/swagger-global-api/#/nlb-health-monitor/AddNlbDNSHealthMonitor) | | View the settings for an existing health check monitor. | [`ibmcloud ks nlb-dns monitor get`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-monitor-get) | [`GET /v1/health/clusters/{idOrName}/host/{nlbHost}/config`](https://containers.cloud.ibm.com/global/swagger-global-api/#/nlb-health-monitor/GetNlbDNSHealthMonitor) | | Disable an existing health check monitor for a subdomain in a cluster. | [`ibmcloud ks nlb-dns monitor disable`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_nlb-dns-monitor-disable) | [`PUT /v1/clusters/{idOrName}/health`](https://containers.cloud.ibm.com/global/swagger-global-api/#/nlb-health-monitor/UpdateNlbDNSHealthMonitor) | @@ -154,18 +154,18 @@ The following table shows the permissions granted by each {{site.data.keyword.cl | Add a user-managed subnet to a cluster. | [`ibmcloud ks cluster user-subnet add`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_user_subnet_add) | [`POST /v1/clusters/{idOrName}/usersubnets`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/AddClusterUserSubnet) | | Remove a user-managed subnet from a cluster. | [`ibmcloud ks cluster user-subnet rm`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_user_subnet_rm) | [`DELETE /v1/clusters/{idOrName}/usersubnets/{subnetId}/vlans/{vlanId}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/RemoveClusterUserSubnet) | | Add worker nodes. | [`ibmcloud ks worker add (deprecated)`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_add) | [`POST /v1/clusters/{idOrName}/workers`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/AddClusterWorkers) | -| Create a worker pool in a classic cluster. | [`ibmcloud ks worker-pool create classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_pool_create) | [`POST /v1/clusters/{idOrName}/workerpools`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/CreateWorkerPool) | -| Create a worker pool in a VPC cluster. | [`ibmcloud ks worker-pool create vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_worker_pool_create_vpc_classic) | [`POST ​/v2​/vpc​/createWorkerPool`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/vpcCreateWorkerPool) | +| Create a worker pool in a classic cluster. | [`ibmcloud ks worker-pool create classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_pool_create) | [`POST /v1/clusters/{idOrName}/workerpools`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/CreateWorkerPool) | +| Create a worker pool in a VPC cluster. | [`ibmcloud ks worker-pool create vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_worker_pool_create_vpc_classic) | [`POST ​/v2​/vpc​/createWorkerPool`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/vpcCreateWorkerPool) | | Rebalance a worker pool. 
| [`ibmcloud ks worker-pool rebalance`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_rebalance) | [`PATCH /v1/clusters/{idOrName}/workerpools/{poolidOrName}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/PatchWorkerPool) | | Resize a worker pool. | [`ibmcloud ks worker-pool resize`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_pool_resize) | [`PATCH /v1/clusters/{idOrName}/workerpools/{poolidOrName}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/PatchWorkerPool) | | Delete a worker pool. | [`ibmcloud ks worker-pool rm`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_pool_rm) | [`DELETE /v1/clusters/{idOrName}/workerpools/{poolidOrName}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/RemoveWorkerPool) | | Reboot a worker node. | [`ibmcloud ks worker reboot`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_reboot) | [`PUT /v1/clusters/{idOrName}/workers/{workerId}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/UpdateClusterWorker) | | Reload a worker node. | [`ibmcloud ks worker reload`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_reload) | [`PUT /v1/clusters/{idOrName}/workers/{workerId}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/UpdateClusterWorker) | -| Replace a worker node. | [`ibmcloud ks worker replace`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_worker_replace) | | +| Replace a worker node. | [`ibmcloud ks worker replace`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_worker_replace) | | | Remove a worker node. | [`ibmcloud ks worker rm`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_rm) | [`DELETE /v1/clusters/{idOrName}/workers/{workerId}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/RemoveClusterWorker) | | Update a worker node. | [`ibmcloud ks worker update`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_update) | [`PUT /v1/clusters/{idOrName}/workers/{workerId}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/UpdateClusterWorker) | -| Add a zone to a worker pool. | [`ibmcloud ks zone add classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_zone_add) | [`POST /v1/clusters/{idOrName}/workerpools/{poolidOrName}/zones`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/AddWorkerPoolZone) | -| Add a zone to a worker pool. | [`ibmcloud ks zone add vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_zone-add-vpc-classic) | [`POST ​/v2​/vpc​/createWorkerPoolZone`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/vpcCreateWorkerPoolZone) | +| Add a zone to a worker pool. | [`ibmcloud ks zone add classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_zone_add) | [`POST /v1/clusters/{idOrName}/workerpools/{poolidOrName}/zones`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/AddWorkerPoolZone) | +| Add a zone to a worker pool. 
| [`ibmcloud ks zone add vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_zone-add-vpc-classic) | [`POST /v2/vpc/createWorkerPoolZone`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/vpcCreateWorkerPoolZone) |
| Update the network configuration for a given zone in a worker pool. | [`ibmcloud ks zone network-set`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_zone_network_set) | [`PATCH /v1/clusters/{idOrName}/workerpools/{poolidOrName}/zones/{zoneid}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/AddWorkerPoolZoneNetwork) |
| Remove a zone from a worker pool. | [`ibmcloud ks zone rm`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_zone_rm) | [`DELETE /v1/clusters/{idOrName}/workerpools/{poolidOrName}/zones/{zoneid}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/RemoveWorkerPoolZone) |
{: class="simple-tab-table"}
@@ -184,8 +184,8 @@ The following table shows the permissions granted by each {{site.data.keyword.cl
| Disable a managed add-on, such as Istio or Knative, in a cluster. | [`ibmcloud ks cluster addon disable`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_addon_disable) | [`PATCH /v1/clusters/{idOrName}/addons`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/ManageClusterAddons) |
| Enable a managed add-on, such as Istio or Knative, in a cluster. | [`ibmcloud ks cluster addon enable`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_addon_enable) | [`PATCH /v1/clusters/{idOrName}/addons`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/ManageClusterAddons) |
| List managed add-ons, such as Istio or Knative, that are enabled in a cluster. | [`ibmcloud ks cluster addon ls`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_addons) | [`GET /v1/clusters/{idOrName}/addons`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/GetClusterAddons) |
| Create a free or standard cluster on classic infrastructure. **Note**: The Administrator platform role for {{site.data.keyword.registrylong_notm}} and the Super User infrastructure role are also required. 
| [`ibmcloud ks cluster create classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_create) | [`POST /v1/clusters`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/CreateCluster) |
| Create a classic cluster in your Virtual Private Cloud (VPC). **Note**: The Administrator platform role for VPC Infrastructure, the Administrator platform role for {{site.data.keyword.registrylong_notm}} at the account level, and the Writer or Manager service role for {{site.data.keyword.containerlong_notm}} are also required. | [`ibmcloud ks cluster create vpc-classic`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_cluster-create-vpc-classic) | [`POST /v2/vpc/createCluster`](https://containers.cloud.ibm.com/global/swagger-global-api/#/v2/vpcCreateCluster) |
| Disable a specified feature for a cluster, such as the public service endpoint for the cluster master. | [`ibmcloud ks cluster feature disable`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_feature_disable) | - |
| Enable a specified feature for a cluster, such as the private service endpoint for the cluster master. | [`ibmcloud ks cluster feature enable`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_feature_enable) | - |
| Delete a cluster. | [`ibmcloud ks cluster rm`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_rm) | [`DELETE /v1/clusters/{idOrName}`](https://containers.cloud.ibm.com/global/swagger-global-api/#/clusters/RemoveCluster) |
@@ -579,10 +579,10 @@

A user with the **Super User** infrastructure access role [sets the API key for a region and resource group](/docs/containers?topic=containers-users#api_key) so that infrastructure actions can be performed (or more rarely, [manually sets different account credentials](/docs/containers?topic=containers-users#credentials)). Then, the infrastructure actions that other users in the account can perform are authorized through {{site.data.keyword.cloud_notm}} IAM platform roles. You do not need to edit the other users' classic infrastructure permissions. Use the following table to customize users' classic infrastructure permissions only when you can't assign **Super User** to the user who sets the API key. For instructions to assign permissions, see [Customizing infrastructure permissions](/docs/containers?topic=containers-users#infra_access).
{: shortdesc}
- 
+
Classic infrastructure permissions apply only to classic clusters. For VPC clusters, see [Assigning role-based access to VPC resources](/docs/vpc-on-classic?topic=vpc-on-classic-setting-up-access-to-your-classic-infrastructure-from-vpc).
{: note}
- 
+
Need to check that the API key or manually-set credentials have the required and suggested infrastructure permissions? Use the `ibmcloud ks infra-permissions get` [command](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#infra_permissions_get). 
{: tip} diff --git a/cs_cluster_plan_ha.md b/cs_cluster_plan_ha.md index 83ec476d9..fc8e2efdc 100644 --- a/cs_cluster_plan_ha.md +++ b/cs_cluster_plan_ha.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, multi az, multi-az, szr, mzr @@ -42,10 +42,10 @@ Your users are less likely to experience downtime when you distribute your apps {: #single_zone} Single zone clusters can be created in one of the supported [single zone cities or multizone metro locations](/docs/containers?topic=containers-regions-and-zones#zones). To improve availability for your app and to allow failover for the case that one worker node is not available in your cluster, add additional worker nodes to your single zone cluster. -{: shortdesc} +{: shortdesc} VPC infrastructure provider icon VPC clusters are supported only in [multizone metro locations](/docs/containers?topic=containers-regions-and-zones#zones). If your cluster must reside in one of the single zone cities, create a classic cluster instead. -{: note} +{: note} High availability for clusters in a single zone @@ -81,10 +81,10 @@ Let's say you need a worker node with six cores to handle the workload for your When you create a cluster in a [multizone metro location](/docs/containers?topic=containers-regions-and-zones#zones), a highly available master is automatically deployed and three replicas are spread across the zones of the metro. For example, if the cluster is in `dal10`, `dal12`, or `dal13` zones, the replicas of the master are spread across each zone in the Dallas multizone metro. **Do I have to do anything so that the master can communicate with the workers across zones?**
-If you created a VPC multizone cluster, the subnets in each zone are automatically set up with Access Control Lists (ACLs) that allow communication between the master and the worker nodes across zones. In classic clusters, if you have multiple VLANs for your cluster, multiple subnets on the same VLAN, or a multizone classic cluster, you must enable a [Virtual Router Function (VRF)](/docs/resources?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud) for your IBM Cloud infrastructure account so your worker nodes can communicate with each other on the private network. To enable VRF, [contact your IBM Cloud infrastructure account representative](/docs/infrastructure/direct-link?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud#how-you-can-initiate-the-conversion). To check whether a VRF is already enabled, use the `ibmcloud account show` command. If you cannot or do not want to enable VRF, enable [VLAN spanning](/docs/infrastructure/vlans?topic=vlans-vlan-spanning#vlan-spanning). To perform this action, you need the **Network > Manage Network VLAN Spanning** [infrastructure permission](/docs/containers?topic=containers-users#infra_access), or you can request the account owner to enable it. To check whether VLAN spanning is already enabled, use the `ibmcloud ks vlan spanning get --region ` [command](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_vlan_spanning_get). +If you created a VPC multizone cluster, the subnets in each zone are automatically set up with Access Control Lists (ACLs) that allow communication between the master and the worker nodes across zones. In classic clusters, if you have multiple VLANs for your cluster, multiple subnets on the same VLAN, or a multizone classic cluster, you must enable a [Virtual Router Function (VRF)](/docs/resources?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud) for your IBM Cloud infrastructure account so your worker nodes can communicate with each other on the private network. To enable VRF, [contact your IBM Cloud infrastructure account representative](/docs/infrastructure/direct-link?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud#how-you-can-initiate-the-conversion). To check whether a VRF is already enabled, use the `ibmcloud account show` command. If you cannot or do not want to enable VRF, enable [VLAN spanning](/docs/infrastructure/vlans?topic=vlans-vlan-spanning#vlan-spanning). To perform this action, you need the **Network > Manage Network VLAN Spanning** [infrastructure permission](/docs/containers?topic=containers-users#infra_access), or you can request the account owner to enable it. To check whether VLAN spanning is already enabled, use the `ibmcloud ks vlan spanning get --region ` [command](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_vlan_spanning_get). **Can I convert my single zone cluster to a multizone cluster?**
-To convert a single zone cluster to a multizone cluster, your cluster must be set up in one of the supported [multizone metro locations](/docs/containers?topic=containers-regions-and-zones#zones). VPC clusters can be set up only in multizone metro locations, and as such can always be converted from a single zone cluster to a multizone cluster. Classic clusters that are set up in a single zone data center cannot be converted to a multizone cluster. To convert a single zone cluster to a multizone cluster, see [Adding worker nodes by adding a zone to a worker pool](/docs/containers?topic=containers-add_workers#add_zone). +To convert a single zone cluster to a multizone cluster, your cluster must be set up in one of the supported [multizone metro locations](/docs/containers?topic=containers-regions-and-zones#zones). VPC clusters can be set up only in multizone metro locations, and as such can always be converted from a single zone cluster to a multizone cluster. Classic clusters that are set up in a single zone data center cannot be converted to a multizone cluster. To convert a single zone cluster to a multizone cluster, see [Adding worker nodes by adding a zone to a worker pool](/docs/containers?topic=containers-add_workers#add_zone). diff --git a/cs_cluster_scaling.md b/cs_cluster_scaling.md index 73f103c15..5ee58d076 100644 --- a/cs_cluster_scaling.md +++ b/cs_cluster_scaling.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, node scaling, ca, autoscaler @@ -695,18 +695,18 @@ To limit a pod deployment to a specific worker pool that is managed by the clust **To limit pods to run on certain autoscaled worker pools**: -1. Create the worker pool with the label that you want to use. For example, your label might be `app: nginx`. - **For classic clusters**: +1. Create the worker pool with the label that you want to use. For example, your label might be `app: nginx`. + **For classic clusters**: ``` ibmcloud ks worker-pool create classic --name --cluster --machine-type --size-per-zone --label = ``` - {: pre} + {: pre} **For VPC clusters**: ``` ibmcloud ks worker-pool create vpc-classic --name --cluster --flavor --size-per-zone --label = ``` {: pre} - + 2. [Add the worker pool to the cluster autoscaler configuration](#ca_cm). 3. In your pod spec template, match the `nodeSelector` or `nodeAffinity` to the label that you used in your worker pool. diff --git a/cs_cluster_update.md b/cs_cluster_update.md index d2cc729e5..1ce437642 100644 --- a/cs_cluster_update.md +++ b/cs_cluster_update.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, upgrade, version @@ -83,21 +83,21 @@ To update the Kubernetes master _major_ or _minor_ version: 4. Install the version of the [`kubectl cli`](/docs/containers?topic=containers-cs_cli_install#kubectl) that matches the API server version that runs in the master. [Kubernetes does not support ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/setup/release/version-skew-policy/) `kubectl` client versions that are two or more versions apart from the server version (n +/- 2). -When the master update is complete, you can update your worker nodes, depending on the type of cluster infrastructure provider that you have. +When the master update is complete, you can update your worker nodes, depending on the type of cluster infrastructure provider that you have. 
* [Updating classic worker nodes](#worker_node). -* [Updating VPC worker nodes](#vpc_worker_node). +* [Updating VPC worker nodes](#vpc_worker_node).
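For example, before you start either update path, you can compare the master version with the versions that your worker nodes run. This is a quick sketch, not a required step; replace `<cluster_name_or_ID>` with your own cluster name or ID.
```
ibmcloud ks cluster get --cluster <cluster_name_or_ID>
ibmcloud ks worker ls --cluster <cluster_name_or_ID>
```
{: pre}

Worker nodes that report an older version than the master are candidates for the update steps that follow.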
-## Updating classic worker nodes +## Updating classic worker nodes {: #worker_node} -You received a notification to update your worker nodes in a [classic infrastructure](/docs/containers?topic=containers-infrastructure_providers) cluster. What does that mean? As security updates and patches are put in place for the API server and other master components, you must be sure that the worker nodes remain in sync. -{: shortdesc} +You received a notification to update your worker nodes in a [classic infrastructure](/docs/containers?topic=containers-infrastructure_providers) cluster. What does that mean? As security updates and patches are put in place for the API server and other master components, you must be sure that the worker nodes remain in sync. +{: shortdesc} Classic infrastructure provider icon Applies to only classic clusters. Have a VPC cluster? See [Updating VPC worker nodes](#vpc_worker_node) instead. -{: note} +{: note} **What happens to my apps during an update?**
If you run apps as part of a deployment on worker nodes that you update, the apps are rescheduled onto other worker nodes in the cluster. These worker nodes might be in a different worker pool, or if you have stand-alone worker nodes, apps might be scheduled onto stand-alone worker nodes. To avoid downtime for your app, you must ensure that you have enough capacity in the cluster to carry the workload. @@ -113,7 +113,7 @@ When the config map is not defined, the default is used. By default, a maximum o ### Prerequisites {: #worker-up-prereqs} -Before you update your classic infrastructure worker nodes, review the prerequisite steps. +Before you update your classic infrastructure worker nodes, review the prerequisite steps. {: shortdesc} Updates to worker nodes can cause downtime for your apps and services. Your worker node machine is reimaged, and data is deleted if not [stored outside the pod](/docs/containers?topic=containers-storage_planning#persistent_storage_overview). @@ -126,10 +126,10 @@ Updates to worker nodes can cause downtime for your apps and services. Your work - Consider [adding more worker nodes](/docs/containers?topic=containers-add_workers) so that your cluster has enough capacity to rescheduling your workloads during the update. - Make sure that you have the [**Operator** or **Administrator** {{site.data.keyword.cloud_notm}} IAM platform role](/docs/containers?topic=containers-users#platform). -### Updating classic worker nodes in the CLI with a configmap +### Updating classic worker nodes in the CLI with a configmap {: #worker-up-configmap} -Set up a configmap to perform a rolling update of your classic worker nodes. +Set up a configmap to perform a rolling update of your classic worker nodes. {: shortdesc} 1. Complete the [prerequisite steps](#worker-up-prereqs). @@ -279,7 +279,7 @@ Next steps: - Inform developers who work in the cluster to update their `kubectl` CLI to the version of the Kubernetes master. - If the Kubernetes dashboard does not display utilization graphs, [delete the `kube-dashboard` pod](/docs/containers?topic=containers-cs_troubleshoot_health#cs_dashboard_graphs). -### Updating classic worker nodes in the console +### Updating classic worker nodes in the console {: #worker_up_console} After you set up the config map for the first time, you can then update worker nodes by using the {{site.data.keyword.cloud_notm}} console. @@ -293,7 +293,7 @@ To update worker nodes from the console: 5. From the action bar, click **Update**.
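After the update completes, you can confirm the result from the CLI. A minimal sketch, assuming you replace the placeholder with your own cluster name or ID:
```
ibmcloud ks worker ls --cluster <cluster_name_or_ID>
```
{: pre}

A worker node is updated when its **Version** column shows the target version.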
- + ## Updating VPC worker nodes {: #vpc_worker_node} @@ -385,7 +385,7 @@ You can update your VPC worker nodes in the console. Before you begin, consider 5. From the action bar, click **Update**.
- +
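To run the same VPC worker update from the CLI instead of the console, the operation is a worker replacement. A sketch, assuming that your CLI version supports the `--update` option of `ibmcloud ks worker replace` (check `ibmcloud ks worker replace --help` for the exact flags):
```
ibmcloud ks worker ls --cluster <cluster_name_or_ID>
ibmcloud ks worker replace --cluster <cluster_name_or_ID> --worker <worker_ID> --update
```
{: pre}

The worker node is removed, and a new worker node that runs the updated version is provisioned in the same zone.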
## Updating flavors (machine types) {: #machine_type} @@ -441,17 +441,17 @@ To update flavors: 3. Create a worker node with the new machine type. - **For worker nodes in a worker pool**: - 1. Create a worker pool with the number of worker nodes that you want to replace. - * Classic clusters: + 1. Create a worker pool with the number of worker nodes that you want to replace. + * Classic clusters: ``` ibmcloud ks worker-pool create classic --name --cluster --machine-type --size-per-zone ``` - {: pre} + {: pre} * VPC clusters: ``` ibmcloud ks worker-pool create vpc-classic --cluster --flavor --size-per-zone ``` - {: pre} + {: pre} 2. Verify that the worker pool is created. ``` @@ -459,17 +459,17 @@ To update flavors: ``` {: pre} - 3. Add the zone to your worker pool that you retrieved earlier. When you add a zone, the worker nodes that are defined in your worker pool are provisioned in the zone and considered for future workload scheduling. If you want to spread your worker nodes across multiple zones, choose a [multizone-capable zone](/docs/containers?topic=containers-regions-and-zones#zones). - * Classic clusters: + 3. Add the zone to your worker pool that you retrieved earlier. When you add a zone, the worker nodes that are defined in your worker pool are provisioned in the zone and considered for future workload scheduling. If you want to spread your worker nodes across multiple zones, choose a [multizone-capable zone](/docs/containers?topic=containers-regions-and-zones#zones). + * Classic clusters: ``` ibmcloud ks zone add classic --zone --cluster --worker-pool --private-vlan --public-vlan ``` - {: pre} + {: pre} * VPC clusters: ``` ibmcloud ks zone add vpc-classic --zone --cluster --worker-pool --subnet-id ``` - {: pre} + {: pre} - **Deprecated: For stand-alone worker nodes**: ``` diff --git a/cs_clusters.md b/cs_clusters.md index 766ad4510..1e66faf13 100644 --- a/cs_clusters.md +++ b/cs_clusters.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, clusters, worker nodes, worker pools @@ -34,7 +34,7 @@ After [getting started](/docs/containers?topic=containers-getting-started), you 1. [Prepare your account to create clusters](/docs/containers?topic=containers-clusters#cluster_prepare). This step includes creating a billable account, setting up an API key with infrastructure permissions, making sure that you have Administrator access in {{site.data.keyword.cloud_notm}} IAM, planning resource groups, and setting up account networking. 2. [Decide on your cluster setup](/docs/containers?topic=containers-clusters#prepare_cluster_level). This step includes planning cluster network and HA setup, estimating costs, and if applicable, allowing network traffic through a firewall. -3. Create your [VPC](#clusters_vpc_standard) or [classic](#clusters_standard) cluster by following the steps in the {{site.data.keyword.cloud_notm}} console or CLI. +3. Create your [VPC](#clusters_vpc_standard) or [classic](#clusters_standard) cluster by following the steps in the {{site.data.keyword.cloud_notm}} console or CLI.
@@ -51,7 +51,7 @@ ibmcloud ks cluster create classic --name my_cluster ``` {: pre} -**Classic clusters**: +**Classic clusters**: * Classic cluster, shared virtual machine: ``` ibmcloud ks cluster create classic --name my_cluster --zone dal10 --machine-type b3c.4x16 --hardware shared --workers 3 --public-vlan --private-vlan @@ -78,7 +78,7 @@ ibmcloud ks cluster create classic --name my_cluster ``` {: pre} - + **VPC clusters**: * VPC Gen 1 compute cluster: @@ -91,7 +91,7 @@ ibmcloud ks cluster create classic --name my_cluster ibmcloud ks zone add vpc-classic --zone --cluster --worker-pool --subnet-id ``` {: pre} - +
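After any of these create commands, the cluster takes some time to provision. When it is ready, a common smoke test is to download the cluster context and list the worker nodes. A sketch, assuming the cluster is named `my_cluster` as in the examples above:
```
ibmcloud ks cluster config --cluster my_cluster
kubectl get nodes
```
{: pre}

Depending on your CLI version, `cluster config` either updates your kubeconfig directly or prints an `export KUBECONFIG=...` command that you must run first.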
@@ -112,8 +112,8 @@ Prepare your {{site.data.keyword.cloud_notm}} account for {{site.data.keyword.co * [**Administrator** platform role](/docs/containers?topic=containers-users#platform) for Container Registry at the account level. If your account predates 4 October 2018, you need to [enable {{site.data.keyword.cloud_notm}} IAM policies for {{site.data.keyword.registryshort_notm}}](/docs/services/Registry?topic=registry-user#existing_users). With IAM policies, you can control access to resources such as registry namespaces. **Infrastructure**: - * Classic clusters only: **Super User** role or the [minimum required permissions](/docs/containers?topic=containers-access_reference#infra) for classic infrastructure. - * VPC clusters only: [**Administrator** platform role for VPC Infrastructure](/docs/vpc-on-classic?topic=vpc-on-classic-managing-user-permissions-for-vpc-resources). + * Classic clusters only: **Super User** role or the [minimum required permissions](/docs/containers?topic=containers-access_reference#infra) for classic infrastructure. + * VPC clusters only: [**Administrator** platform role for VPC Infrastructure](/docs/vpc-on-classic?topic=vpc-on-classic-managing-user-permissions-for-vpc-resources). 3. Verify that you as a user (not just the API key) have the **Administrator** platform role for {{site.data.keyword.containerlong_notm}}. To allow your cluster to pull images from the private registry, you also need the **Administrator** platform role for {{site.data.keyword.registrylong_notm}}. If you are the account owner, you already have these permissions. 1. From the [{{site.data.keyword.cloud_notm}} console ![External link icon](../icons/launch-glyph.svg "External link icon")](https://cloud.ibm.com/) menu bar, click **Manage > Access (IAM)**. @@ -127,10 +127,10 @@ Prepare your {{site.data.keyword.cloud_notm}} account for {{site.data.keyword.co * You cannot change a cluster's resource group. Furthermore, if you need to use the `ibmcloud ks cluster service bind` [command](/docs/containers-cli-plugin?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_service_bind) to [integrate with an {{site.data.keyword.cloud_notm}} service](/docs/containers?topic=containers-service-binding#bind-services), that service must be in the same resource group as the cluster. Services that do not use resource groups like {{site.data.keyword.registrylong_notm}} or that do not need service binding like {{site.data.keyword.la_full_notm}} work even if the cluster is in a different resource group. * Consider giving clusters unique names across resource groups and regions in your account to avoid naming conflicts. You cannot rename a cluster. -5. **Standard clusters**: Plan your cluster network setup so that your cluster meets the needs of your workloads and environment. Then, set up your IBM Cloud infrastructure networking to allow worker-to-master and user-to-master communication. Your cluster network setup varies with the infrastructure provider that you choose (classic or VPC). For more information, see [Planning your cluster network setup](/docs/containers?topic=containers-plan_clusters). +5. **Standard clusters**: Plan your cluster network setup so that your cluster meets the needs of your workloads and environment. Then, set up your IBM Cloud infrastructure networking to allow worker-to-master and user-to-master communication. Your cluster network setup varies with the infrastructure provider that you choose (classic or VPC). 
For more information, see [Planning your cluster network setup](/docs/containers?topic=containers-plan_clusters). * **VPC clusters only**: Your VPC clusters are created with a public and a private service endpoint by default. 1. [Enable your {{site.data.keyword.cloud_notm}} account to use service endpoints](/docs/account?topic=account-vrf-service-endpoint#service-endpoint). - 2. Optional: If you want your VPC clusters to communicate with classic clusters over the private network interface, you can choose to set up classic infrastructure access from the VPC that your cluster is in. Note that you can set up classic infrastructure access for only one VPC per region and [Virtual Routing and Forwarding (VRF)](/docs/resources?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud) is required in your {{site.data.keyword.cloud_notm}} account. For more information, see [Setting up access to your Classic Infrastructure from VPC](/docs/vpc-on-classic?topic=vpc-on-classic-setting-up-access-to-your-classic-infrastructure-from-vpc). + 2. Optional: If you want your VPC clusters to communicate with classic clusters over the private network interface, you can choose to set up classic infrastructure access from the VPC that your cluster is in. Note that you can set up classic infrastructure access for only one VPC per region and [Virtual Routing and Forwarding (VRF)](/docs/resources?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud) is required in your {{site.data.keyword.cloud_notm}} account. For more information, see [Setting up access to your Classic Infrastructure from VPC](/docs/vpc-on-classic?topic=vpc-on-classic-setting-up-access-to-your-classic-infrastructure-from-vpc). * **Classic clusters only, VRF and service endpoint enabled accounts**: You must set up your account to use VRF and service endpoints to support scenarios such as running internet-facing workloads and extending your on-premises data center. After you set up the account, your VPC and classic clusters are created with a public and a private service endpoint by default. 1. Enable [VRF](/docs/resources?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud) in your IBM Cloud infrastructure account. To check whether a VRF is already enabled, use the `ibmcloud account show` command. @@ -157,7 +157,7 @@ After you set up your account to create clusters, decide on the setup for your c - + This image walks you through choosing the setup that you want for your cluster. Free and standard cluster comparison @@ -171,12 +171,12 @@ After you set up your account to create clusters, decide on the setup for your c Classic firewall VPC ACLs and firewall Estimate costs (cluster create page) - +
- + ## Creating a standard classic cluster {: #clusters_standard} @@ -184,9 +184,9 @@ After you set up your account to create clusters, decide on the setup for your c Use the {{site.data.keyword.cloud_notm}} CLI or the {{site.data.keyword.cloud_notm}} console to create a fully-customizable standard cluster with your choice of hardware isolation and access to features like multiple worker nodes for a highly available environment. {: shortdesc} + - -### Creating a standard classic cluster in the console +### Creating a standard classic cluster in the console {: #clusters_ui} Create your single zone or multizone classic Kubernetes cluster by using the {{site.data.keyword.cloud_notm}} console. @@ -237,7 +237,7 @@ Create your single zone or multizone classic Kubernetes cluster by using the {{s
-### Creating a standard classic cluster in the CLI +### Creating a standard classic cluster in the CLI {: #clusters_cli_steps} Create your single zone or multizone classic cluster by using the {{site.data.keyword.cloud_notm}} CLI. @@ -599,7 +599,7 @@ When you enable a gateway on a classic cluster, the cluster is created with a `c
- + ## Creating a standard VPC Gen 1 compute cluster {: #clusters_vpc_standard} @@ -773,7 +773,7 @@ Create your single zone or multizone VPC Generation 1 compute cluster by using t
- +
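If you later want to spread this VPC cluster across more zones, you need the VPC ID and a subnet ID in the target zone. A sketch that combines the commands shown earlier; replace the placeholders with your own values:
```
ibmcloud ks vpcs
ibmcloud ks zone add vpc-classic --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <worker_pool_name> --subnet-id <subnet_ID>
```
{: pre}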
## Next steps {: #next_steps} @@ -792,10 +792,10 @@ Then, you can check out the following network configuration steps for your clust * Expose your apps with [public networking services](/docs/containers?topic=containers-cs_network_planning#public_access) or [private networking services](/docs/containers?topic=containers-cs_network_planning#private_access). * Connect your cluster with services in private networks outside of your {{site.data.keyword.cloud_notm}} account by setting up [{{site.data.keyword.cloud_notm}} Direct Link](/docs/infrastructure/direct-link?topic=direct-link-get-started-with-ibm-cloud-direct-link) or the [strongSwan IPSec VPN service](/docs/containers?topic=containers-vpn). * Create Calico host network policies to isolate your cluster on the [public network](/docs/containers?topic=containers-network_policies#isolate_workers_public) and on the [private network](/docs/containers?topic=containers-network_policies#isolate_workers). - * If you use a gateway appliance, such as a Virtual Router Appliance (VRA), [open up the required ports and IP addresses](/docs/containers?topic=containers-firewall#firewall_inbound) in the public firewall to permit inbound traffic to networking services. If you also have a firewall on the private network, [allow communication between worker nodes and let your cluster access infrastructure resources over the private network](/docs/containers?topic=containers-firewall#firewall_private). + * If you use a gateway appliance, such as a Virtual Router Appliance (VRA), [open up the required ports and IP addresses](/docs/containers?topic=containers-firewall#firewall_inbound) in the public firewall to permit inbound traffic to networking services. If you also have a firewall on the private network, [allow communication between worker nodes and let your cluster access infrastructure resources over the private network](/docs/containers?topic=containers-firewall#firewall_private). * VPC clusters: * Expose your apps with [public networking services](/docs/containers?topic=containers-cs_network_planning#public_access) or [private networking services](/docs/containers?topic=containers-cs_network_planning#private_access). * [Connect your cluster with services in private networks outside of your {{site.data.keyword.cloud_notm}} account](/docs/containers?topic=containers-vpc-vpnaas) by setting up the {{site.data.keyword.cloud_notm}} VPC VPN or the strongSwan IPSec VPN service. - * [Create access control lists (ACLs)](/docs/containers?topic=containers-vpc-network-policy) to control ingress and egress traffic to your VPC subnets. + * [Create access control lists (ACLs)](/docs/containers?topic=containers-vpc-network-policy) to control ingress and egress traffic to your VPC subnets. diff --git a/cs_encrypt.md b/cs_encrypt.md index 1382e278d..b2bb6de9f 100644 --- a/cs_encrypt.md +++ b/cs_encrypt.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks, encrypt, security, kms, root key, crk @@ -265,10 +265,10 @@ Before you begin: [Log in to your account. If applicable, target the appropriate {: #datashield} {{site.data.keyword.datashield_short}} is integrated with Intel® Software Guard Extensions (SGX) and Fortanix® technology so that the app code and data of your containerized workloads are protected in use. 
The app code and data run in CPU-hardened enclaves: trusted areas of memory on the worker node that protect critical aspects of the app, which helps to keep the code and data confidential and unmodified.
-{: shortdesc}
+{: shortdesc}

Classic infrastructure provider icon Applies only to classic clusters. VPC clusters cannot have bare metal worker nodes, which are required to use {{site.data.keyword.datashield_short}}.
-{: note}
+{: note}

When it comes to protecting your data, encryption is one of the most popular and effective controls. But the data must be encrypted at each step of its lifecycle for your data to be protected. During its lifecycle, data has three phases: it can be at rest, in motion, or in use. Data at rest and in motion are generally the areas of focus when you think of securing your data. But after an application starts to run, data that is in use by CPU and memory is vulnerable to various attacks. The attacks might include malicious insiders, root users, credential compromise, OS zero-day, network intruders, and others. Taking that protection one step further, you can now encrypt data in use.

diff --git a/cs_ingress.md b/cs_ingress.md
index 509c8d624..2e5769247 100644
--- a/cs_ingress.md
+++ b/cs_ingress.md
@@ -2,7 +2,7 @@

copyright:
  years: 2014, 2019
-lastupdated: "2019-11-25"
+lastupdated: "2019-11-26"

keywords: kubernetes, iks, nginx, ingress controller

@@ -123,7 +123,7 @@ Before you get started with Ingress, review the following prerequisites.

- Ingress is available for standard clusters only and requires at least two worker nodes per zone to ensure high availability and that periodic updates are applied. If you have only one worker in a zone, the ALB cannot receive automatic updates. When automatic updates are rolled out to ALB pods, the pod is reloaded. However, ALB pods have anti-affinity rules to ensure that only one pod is scheduled to each worker node for high availability. Because there is only one ALB pod on one worker, the pod is not restarted so that traffic is not interrupted. The ALB pod is updated to the latest version only when you delete the old pod manually so that the new, updated pod can be scheduled.
- If a zone fails, you might see intermittent failures in requests to the Ingress ALB in that zone.
- If you restrict network traffic to edge worker nodes, ensure that at least two [edge worker nodes](/docs/containers?topic=containers-edge) are enabled in each zone so that ALBs deploy uniformly.
-* Classic clusters: Enable a [Virtual Router Function (VRF)](/docs/resources?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud) for your IBM Cloud infrastructure account.
To enable VRF, [contact your IBM Cloud infrastructure account representative](/docs/infrastructure/direct-link?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud#how-you-can-initiate-the-conversion). To check whether VRF is already enabled, use the `ibmcloud account show` command. If you cannot or do not want to enable VRF, enable [VLAN spanning](/docs/infrastructure/vlans?topic=vlans-vlan-spanning#vlan-spanning). When a VRF or VLAN spanning is enabled, the ALB can route packets to various subnets in the account.
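A quick way to run that check follows. The exact output fields vary by CLI version, but look for an indication that VRF is enabled for the account:

```
ibmcloud account show
```
{: pre}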
@@ -526,8 +526,8 @@ Forward requests directly to the IP address of your external service by setting Before you begin: * Review the Ingress [prerequisites](#config_prereqs). -* Ensure that the external app that you want to include into the cluster load balancing can be accessed by using a public IP address. -* VPC clusters: In order to forward requests to the public external endpoint of your app, your VPC subnets must have a public gateway attached. +* Ensure that the external app that you want to include into the cluster load balancing can be accessed by using a public IP address. +* VPC clusters: In order to forward requests to the public external endpoint of your app, your VPC subnets must have a public gateway attached. * [Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.](/docs/containers?topic=containers-cs_cli_install#cs_cli_configure) To expose apps that are outside of your cluster to the public: @@ -623,8 +623,8 @@ Route requests through the Ingress ALB to your external service by using the `pr Before you begin: -* Review the Ingress [prerequisites](#config_prereqs). -* VPC clusters: In order to forward requests to the public external endpoint of your app, your VPC subnets must have a public gateway attached. +* Review the Ingress [prerequisites](#config_prereqs). +* VPC clusters: In order to forward requests to the public external endpoint of your app, your VPC subnets must have a public gateway attached. * [Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.](/docs/containers?topic=containers-cs_cli_install#cs_cli_configure) To expose apps that are outside of your cluster to the public: @@ -691,13 +691,13 @@ To expose apps that are outside of your cluster to the public:
-## Classic clusters: Exposing apps to a private network 
+## Classic clusters: Exposing apps to a private network
{: #ingress_expose_private}

-Expose apps to a private network by using the private Ingress ALBs in a classic cluster. 
+Expose apps to a private network by using the private Ingress ALBs in a classic cluster.
{:shortdesc}

-To use a private ALB, you must first enable the private ALB. Because private VLAN-only classic clusters are not assigned an IBM-provided Ingress subdomain, no Ingress secret is created during cluster setup. To expose your apps to the private network, you must register your ALB with a custom domain and, optionally, import your own TLS certificate. 
+To use a private ALB, you must first enable it. Because private VLAN-only classic clusters are not assigned an IBM-provided Ingress subdomain, no Ingress secret is created during cluster setup. To expose your apps to the private network, you must register your ALB with a custom domain and, optionally, import your own TLS certificate.

Before you begin:
* Review the Ingress [prerequisites](#config_prereqs).

@@ -984,7 +984,7 @@ For a comprehensive tutorial on how to secure microservice-to-microservice commu

{: tip}
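To find the IDs and current status of the private ALBs that you can enable, you can list the ALBs in your cluster. The cluster name is illustrative:

```
ibmcloud ks alb ls --cluster my-classic-cluster
```
{: pre}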
- + ## VPC clusters: Exposing apps to a private network {: #ingress_expose_vpc_private} @@ -1273,7 +1273,7 @@ http://./ ``` {: codeblock} - + diff --git a/cs_ingress_about.md b/cs_ingress_about.md index 01cafae8b..e7ded0ada 100644 --- a/cs_ingress_about.md +++ b/cs_ingress_about.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-22" +lastupdated: "2019-11-26" keywords: kubernetes, iks, nginx, ingress controller @@ -32,7 +32,7 @@ Ingress is a Kubernetes service that balances network traffic workloads in your ## What comes with Ingress? {: #ingress_components} -Ingress consists of three components: Ingress resources, application load balancers (ALBs), and the multizone load balancer (MZLB) for classic clusters or the VPC load balancer for VPC clusters. +Ingress consists of three components: Ingress resources, application load balancers (ALBs), and the multizone load balancer (MZLB) for classic clusters or the VPC load balancer for VPC clusters. {: shortdesc} ### Ingress resource @@ -57,23 +57,23 @@ For more information, see [Planning networking for single or multiple namespaces The application load balancer (ALB) is an external load balancer that listens for incoming HTTP, HTTPS, or TCP service requests. The ALB then forwards requests to the appropriate app pod according to the rules defined in the Ingress resource. {: shortdesc} -When you create a standard cluster, {{site.data.keyword.containerlong_notm}} automatically creates a highly available ALB in each zone where you have worker nodes and assigns a unique public route which all public ALBs share. You can find the public route for your cluster by running `ibmcloud ks cluster get --cluster ` and looking for the **Ingress subdomain** in the format `mycluster--0001.us-south.containers.appdomain.cloud`. One default private ALB is also automatically created in each zone of your cluster, but the private ALBs are not automatically enabled and do not use the Ingress subdomain. Note that classic clusters with workers that are connected to private VLANs only are not assigned an IBM-provided Ingress subdomain. +When you create a standard cluster, {{site.data.keyword.containerlong_notm}} automatically creates a highly available ALB in each zone where you have worker nodes and assigns a unique public route which all public ALBs share. You can find the public route for your cluster by running `ibmcloud ks cluster get --cluster ` and looking for the **Ingress subdomain** in the format `mycluster--0001.us-south.containers.appdomain.cloud`. One default private ALB is also automatically created in each zone of your cluster, but the private ALBs are not automatically enabled and do not use the Ingress subdomain. Note that classic clusters with workers that are connected to private VLANs only are not assigned an IBM-provided Ingress subdomain. -**Classic clusters: ALB IP addresses** +**Classic clusters: ALB IP addresses** -In classic clusters, the Ingress subdomain for your cluster is linked to the public ALB IP addresses. You can find the IP address of each public ALB by running `ibmcloud ks alb ls --cluster ` and looking for the **ALB IP** field. The portable public and private ALB IP addresses are provisioned into your IBM Cloud infrastructure account during cluster creation and are static floating IPs that do not change for the life of the cluster. If the worker node is removed, a `Keepalived` daemon that constantly monitors the IP automatically reschedules the ALB pods that were on that worker to another worker node in that zone. 
The rescheduled ALB pods retain the same static IP address. However, if you remove a zone from a cluster, then the ALB IP address for that zone is removed. +In classic clusters, the Ingress subdomain for your cluster is linked to the public ALB IP addresses. You can find the IP address of each public ALB by running `ibmcloud ks alb ls --cluster ` and looking for the **ALB IP** field. The portable public and private ALB IP addresses are provisioned into your IBM Cloud infrastructure account during cluster creation and are static floating IPs that do not change for the life of the cluster. If the worker node is removed, a `Keepalived` daemon that constantly monitors the IP automatically reschedules the ALB pods that were on that worker to another worker node in that zone. The rescheduled ALB pods retain the same static IP address. However, if you remove a zone from a cluster, then the ALB IP address for that zone is removed. **VPC clusters: ALB hostnames** -When you create a VPC cluster, one public VPC load balancer is automatically created outside of your cluster in your VPC. The public VPC load balancer puts the public IP addresses of your public ALBs behind one hostname. In VPC clusters, a hostname is assigned to the ALBs because the ALB IP addresses are not static and might change over time. You can find the hostname that is assigned to your public ALBs and the hostname that is assigned to your private ALBs by running `ibmcloud ks alb ls --cluster ` and looking for the **Load Balancer Hostname** field. Because the private ALBs are disabled by default, a private VPC load balancer that puts four private ALBs behind one hostname is created only when you enable your private ALBs. +When you create a VPC cluster, one public VPC load balancer is automatically created outside of your cluster in your VPC. The public VPC load balancer puts the public IP addresses of your public ALBs behind one hostname. In VPC clusters, a hostname is assigned to the ALBs because the ALB IP addresses are not static and might change over time. You can find the hostname that is assigned to your public ALBs and the hostname that is assigned to your private ALBs by running `ibmcloud ks alb ls --cluster ` and looking for the **Load Balancer Hostname** field. Because the private ALBs are disabled by default, a private VPC load balancer that puts four private ALBs behind one hostname is created only when you enable your private ALBs. -### Multizone load balancer (MZLB) or Load Balancer for VPC +### Multizone load balancer (MZLB) or Load Balancer for VPC {: #mzlb} -Depending on whether you have a classic or VPC cluster, a Cloudflare multizone load balancer (MZLB) or a Load Balancer for VPC health checks your ALBs. -{: shortdesc} +Depending on whether you have a classic or VPC cluster, a Cloudflare multizone load balancer (MZLB) or a Load Balancer for VPC health checks your ALBs. +{: shortdesc} -**Classic clusters: Multizone load balancer (MZLB)** +**Classic clusters: Multizone load balancer (MZLB)** Whenever you create a multizone cluster or [add a zone to a single zone cluster](/docs/containers?topic=containers-add_workers#add_zone), a Cloudflare multizone load balancer (MZLB) is automatically created and deployed so that 1 MZLB exists for each region. The MZLB puts the IP addresses of your ALBs behind the same subdomain and enables health checks on these IP addresses to determine whether they are available or not. 
@@ -84,7 +84,7 @@ In rare cases, some DNS resolvers or client apps might continue to use the unhea The MZLB load balances for public ALBs that use the IBM-provided Ingress subdomain only. If you use only private ALBs, you must manually check the health of the ALBs and update DNS lookup results. If you use public ALBs that use a custom domain, you can include the ALBs in MZLB load balancing by creating a CNAME in your DNS entry to forward requests from your custom domain to the IBM-provided Ingress subdomain for your cluster. If you use Calico pre-DNAT network policies to block all incoming traffic to Ingress services, you must also whitelist [Cloudflare's IPv4 IPs ![External link icon](../icons/launch-glyph.svg "External link icon")](https://www.cloudflare.com/ips/) that are used to check the health of your ALBs. For steps on how to create a Calico pre-DNAT policy to whitelist these IPs, see [Lesson 3 of the Calico network policy tutorial](/docs/containers?topic=containers-policy_tutorial#lesson3). -{: note} +{: note} **VPC clusters: Load Balancer for VPC** @@ -94,12 +94,12 @@ The Ingress subdomain for your cluster is automatically linked to the VPC load b Before forwarding traffic to ALBs, the VPC load balancer also health checks the public ALB IP addresses that are behind the hostname to determine whether the ALBs are available or not. Every 5 seconds, the VPC load balancer health checks the floating public ALB IPs for your cluster and keeps the DNS lookup results updated based on these health checks. When a user sends a request to your app by using the cluster's Ingress subdomain and app route, such as `mycluster--0001.us-south.containers.appdomain.cloud/myapp`, the VPC load balancer receives the request. If all ALBs are healthy, a normal operation DNS lookup of your Ingress subdomain returns all of the floating IPs, 1 of which the client accesses at random. However, if one IP becomes unavailable for any reason, then the health check for that IP fails after 2 retries. The VPC load balancer removes the failed IP from the subdomain, and the DNS lookup returns only the healthy IPs while a new floating IP address is generated. -Note that the VPC load balancer health checks only public ALBs and updates DNS lookup results for the Ingress subdomain. If you use only private ALBs, you must manually health check them and update DNS lookup results. +Note that the VPC load balancer health checks only public ALBs and updates DNS lookup results for the Ingress subdomain. If you use only private ALBs, you must manually health check them and update DNS lookup results.
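If you need to check the DNS lookup results for your Ingress subdomain yourself, for example when you use only private ALBs, a quick query might look like the following. The subdomain is illustrative:

```
dig +short mycluster-<hash>-0001.us-south.containers.appdomain.cloud
```
{: pre}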
-## How does a request get to my app with Ingress in a classic cluster? +## How does a request get to my app with Ingress in a classic cluster? {: #architecture-classic} ### Single-zone cluster @@ -165,7 +165,7 @@ This diagram shows the traffic flow through a single-zone, gateway-enabled clust 6. The app returns a response to the client. Equal Cost Multipath (ECMP) routing is used to balance the response traffic through a gateway on one of the gateway worker nodes to the client.
- + ## How does a request get to my app with Ingress in a VPC cluster? {: #architecture-vpc} @@ -183,7 +183,7 @@ image Expose an app in a V
 
 4. The VPC load balancer service routes the request to an ALB. Each ALB routes requests to the app instances in its own zone and to app instances in other zones. Additionally, if multiple app instances are deployed in one zone, the ALB routes the requests between the app pods in the zone.
 
-5. The ALB checks if a routing rule for the `myapp` path in the cluster exists. If a matching rule is found, the request is proxied according to the rules that you defined in the Ingress resource to the pod where the app is deployed. The source IP address of the package is changed to the public IP address of the worker node where the app pod runs. If multiple app instances are deployed in the cluster, the ALB load balances the requests between app pods across all zones.
+5. The ALB checks whether a routing rule for the `myapp` path in the cluster exists. If a matching rule is found, the request is proxied, according to the rules that you defined in the Ingress resource, to the pod where the app is deployed. The source IP address of the packet is changed to the public IP address of the worker node where the app pod runs. If multiple app instances are deployed in the cluster, the ALB load balances the requests between app pods across all zones. A minimal example of such an Ingress resource follows these steps.
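The following is a minimal sketch of an Ingress resource that defines such a routing rule. The host, service name, and port are illustrative, and the API version depends on your cluster's Kubernetes version:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: mycluster-<hash>-0001.us-south.containers.appdomain.cloud
      http:
        paths:
          - path: /myapp
            backend:
              serviceName: myapp-service    # Kubernetes service that exposes your app pods
              servicePort: 8080
```
{: codeblock}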
 
 
 
diff --git a/cs_ingress_settings.md b/cs_ingress_settings.md
index 6e11e642a..758fc1f80 100644
--- a/cs_ingress_settings.md
+++ b/cs_ingress_settings.md
@@ -2,7 +2,7 @@
 
 copyright:
   years: 2014, 2019
-lastupdated: 
+lastupdated: "2019-11-26"

Classic infrastructure provider icon The source IP address for client requests can be preserved in classic clusters only, and cannot be preserved in VPC clusters.
-{: note}
+{: note}

By default, the source IP address of the client request is not preserved. When a client request to your app is sent to your cluster, the request is routed to a pod for the load balancer service that exposes the ALB. If no app pod exists on the same worker node as the load balancer service pod, the load balancer forwards the request to an app pod on a different worker node. The source IP address of the packet is changed to the public IP address of the worker node where the app pod runs.
{: shortdesc}

diff --git a/cs_ingress_user_managed.md b/cs_ingress_user_managed.md
index b8b1ca0a2..5cd790fed 100644
--- a/cs_ingress_user_managed.md
+++ b/cs_ingress_user_managed.md
@@ -2,7 +2,7 @@

copyright:
  years: 2014, 2019
-lastupdated: "2019-11-20"
+lastupdated: "2019-11-26"

keywords: kubernetes, nginx, iks multiple ingress controllers, byo controller

@@ -31,7 +31,7 @@ Bring your own Ingress controller to run on {{site.data.keyword.cloud_notm}} and

The IBM-provided Ingress application load balancers (ALBs) are based on NGINX controllers that you can configure by using [custom {{site.data.keyword.cloud_notm}} annotations](/docs/containers?topic=containers-ingress_annotation). Depending on what your app requires, you might want to configure your own custom Ingress controller. When you bring your own Ingress controller instead of using the IBM-provided Ingress ALB, you are responsible for supplying the controller image, maintaining and updating the controller, and applying any security-related updates to keep your Ingress controller free from vulnerabilities.

-## Classic clusters: Exposing your Ingress controller by creating an NLB and a hostname 
+## Classic clusters: Exposing your Ingress controller by creating an NLB and a hostname
{: #user_managed_nlb}

Create a network load balancer (NLB) to expose your custom Ingress controller deployment, and then create a hostname for the NLB IP address.

@@ -116,7 +116,7 @@ In classic clusters, bringing your own Ingress controller is supported only for
   ```
   https:///
   ```
-   {: codeblock} 
+   {: codeblock}

## VPC clusters: Exposing your Ingress controller by creating a VPC load balancer and subdomain
{: #user_managed_vpc}

@@ -221,7 +221,7 @@ Expose your custom Ingress controller deployment to the public or to the private
   ```
   https:///
   ```
-   {: codeblock} 
+   {: codeblock}

diff --git a/cs_limitations.md b/cs_limitations.md
index b4d6c71d9..29dfad510 100644
--- a/cs_limitations.md
+++ b/cs_limitations.md
@@ -2,7 +2,7 @@

copyright:
  years: 2014, 2019
-lastupdated: "2019-11-25"
+lastupdated: "2019-11-26"

keywords: kubernetes, iks, infrastructure, rbac, policy

@@ -35,10 +35,10 @@ If you anticipate reaching any of the following {{site.data.keyword.containerlon

## Service limitations
{: #tech_limits}

-{{site.data.keyword.containerlong_notm}} comes with the following service limitations that apply to all clusters, independent of what infrastructure provider you plan to use. 
+{{site.data.keyword.containerlong_notm}} comes with the following service limitations that apply to all clusters, independent of what infrastructure provider you plan to use.
{: shortdesc}

-In addition to the service limitations, make sure to also review the limitations for [classic](#classic_limits) or [VPC](#vpc_ks_limits) clusters.
+In addition to the service limitations, make sure to also review the limitations for [classic](#classic_limits) or [VPC](#vpc_ks_limits) clusters. {: note} @@ -95,7 +95,7 @@ Classic infrastructure clusters in {{site.data.keyword.containerlong_notm}} are -
You can have a total of 250 IBM Cloud infrastructure file and block storage volumes per account. If you mount more than this amount, you might see an "out of capacity" message when you provision persistent volumes and need to contact your IBM Cloud infrastructure representative. For more FAQs, see the [file](/docs/infrastructure/FileStorage?topic=FileStorage-file-storage-faqs#how-many-volumes-can-i-provision-) and [block](/docs/infrastructure/BlockStorage?topic=BlockStorage-block-storage-faqs#how-many-instances-can-share-the-use-of-a-block-storage-volume-) storage docs.
+
## VPC cluster limitations
{: #vpc_ks_limits}

@@ -152,6 +152,6 @@ VPC Generation 1 compute clusters in {{site.data.keyword.containerlong_notm}} ar

VPC clusters use the [{{site.data.keyword.containerlong_notm}} v2 API](/docs/containers?topic=containers-cs_api_install#api_about). The v2 API is currently under development, with only a limited number of API operations available. You can run certain v1 API operations against the VPC cluster, such as `GET /v1/clusters` or `ibmcloud ks cluster ls`, but not all of the information that a classic cluster has is returned, or you might experience unexpected results. For supported VPC v2 operations, see the [CLI reference topic for VPC commands](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_classic_vpc_about).

- 
+

diff --git a/cs_loadbalancer.md b/cs_loadbalancer.md
index 54f00caa7..453e20aed 100644
--- a/cs_loadbalancer.md
+++ b/cs_loadbalancer.md
@@ -2,7 +2,7 @@

copyright:
  years: 2014, 2019
-lastupdated: "2019-11-25"
+lastupdated: "2019-11-26"

keywords: kubernetes, iks, lb1.0, nlb

@@ -23,7 +23,7 @@ subcollection: containers

{:download: .download}
{:preview: .preview}

-# Classic: Setting up basic load balancing with an NLB 1.0 
+# Classic: Setting up basic load balancing with an NLB 1.0
{: #loadbalancer}

Classic infrastructure provider icon Version 1.0 NLBs can be created in classic clusters only, and cannot be created in VPC clusters. To load balance in VPC clusters, see [Exposing apps with load balancers for VPC](/docs/containers?topic=containers-vpc-lbaas).

@@ -46,7 +46,7 @@ kubectl expose deploy my-app --port=80 --target-port=8080 --type=LoadBalancer --

* Enable a [Virtual Router Function (VRF)](/docs/resources?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud) for your IBM Cloud infrastructure account. To enable VRF, [contact your IBM Cloud infrastructure account representative](/docs/infrastructure/direct-link?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud#how-you-can-initiate-the-conversion). To check whether a VRF is already enabled, use the `ibmcloud account show` command. If you cannot or do not want to enable VRF, enable [VLAN spanning](/docs/infrastructure/vlans?topic=vlans-vlan-spanning#vlan-spanning). When a VRF or VLAN spanning is enabled, the NLB 1.0 can route packets to various subnets in the account.
* Ensure that you have the [**Writer** or **Manager** {{site.data.keyword.cloud_notm}} IAM service role](/docs/containers?topic=containers-users#platform) for the `default` namespace.
* Ensure that you have the required number of worker nodes:
- * Classic clusters: If you restrict network traffic to edge worker nodes, ensure that at least two [edge worker nodes](/docs/containers?topic=containers-edge#edge) are enabled in each zone so that NLBs deploy uniformly. 
+ * Classic clusters: If you restrict network traffic to edge worker nodes, ensure that at least two [edge worker nodes](/docs/containers?topic=containers-edge#edge) are enabled in each zone so that NLBs deploy uniformly.
  * Gateway-enabled classic clusters: Ensure that at least two gateway worker nodes in the `gateway` worker pool are enabled in each zone so that NLBs deploy uniformly.
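As an alternative to the `kubectl expose` command shown in the hunk context earlier, an equivalent minimal LoadBalancer service definition follows. The names, label, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app    # label on your app pods
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```
{: codeblock}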
To set up an NLB 1.0 service in a multizone cluster: diff --git a/cs_loadbalancer_about.md b/cs_loadbalancer_about.md index 6aa76dcfa..4e86cbb8f 100644 --- a/cs_loadbalancer_about.md +++ b/cs_loadbalancer_about.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, lb2.0, nlb @@ -23,11 +23,11 @@ subcollection: containers {:download: .download} {:preview: .preview} -# Classic: About network load balancers (NLBs) -{: #loadbalancer-about} +# Classic: About network load balancers (NLBs) +{: #loadbalancer-about} Classic infrastructure provider icon Network load balancers can be created in classic clusters only. To load balance in VPC clusters, see [Exposing apps with load balancers for VPC](/docs/containers?topic=containers-vpc-lbaas). -{: note} +{: note} When you create a standard cluster, {{site.data.keyword.containerlong}} automatically provisions a portable public subnet and a portable private subnet. {: shortdesc} diff --git a/cs_loadbalancer_dns.md b/cs_loadbalancer_dns.md index a06b49f29..4845e4dd3 100644 --- a/cs_loadbalancer_dns.md +++ b/cs_loadbalancer_dns.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, lb2.0, nlb, health check, dns, hostname, subdomain @@ -23,11 +23,11 @@ subcollection: containers {:download: .download} {:preview: .preview} -# Classic: Registering a DNS subdomain for an NLB -{: #loadbalancer_hostname} +# Classic: Registering a DNS subdomain for an NLB +{: #loadbalancer_hostname} VPC infrastructure provider icon This content is specific to NLBs in classic clusters. For VPC clusters, see [Registering a VPC load balancer hostname with a DNS subdomain](/docs/containers?topic=containers-vpc-lbaas#vpc_dns). -{: note} +{: note} After you set up network load balancers (NLBs), you can create DNS entries for the NLB IPs by creating subdomains. You can also set up TCP/HTTP(S) monitors to health check the NLB IP addresses behind each subdomain. {: shortdesc} diff --git a/cs_network_cluster.md b/cs_network_cluster.md index 6784c2038..0c1e772c1 100644 --- a/cs_network_cluster.md +++ b/cs_network_cluster.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, vlan @@ -23,14 +23,14 @@ subcollection: containers {:download: .download} {:preview: .preview} -# Changing service endpoints or VLAN connections for classic clusters +# Changing service endpoints or VLAN connections for classic clusters {: #cs_network_cluster} After you initially set up your network when you [create a cluster](/docs/containers?topic=containers-clusters), you can change the service endpoints that your Kubernetes master is accessible through or change the VLAN connections for your worker nodes. -{: shortdesc} +{: shortdesc} Classic infrastructure provider icon The content on this page is specific to classic clusters. For information about VPC clusters, see [Understanding network basics of VPC clusters](/docs/containers?topic=containers-plan_clusters#vpc_basics). 
-{: note} +{: note} ## Setting up the private service endpoint {: #set-up-private-se} diff --git a/cs_network_policy.md b/cs_network_policy.md index b0963663a..dbab03c06 100644 --- a/cs_network_policy.md +++ b/cs_network_policy.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, calico, egress, rules @@ -24,10 +24,10 @@ subcollection: containers {:preview: .preview} # Controlling traffic with network policies -{: #network_policies} +{: #network_policies} Classic infrastructure provider icon This network policy information is specific to classic clusters. For network policy information for VPC clusters, see [Controlling traffic with VPC access control lists](/docs/containers?topic=containers-vpc-network-policy). -{: note} +{: note} Every {{site.data.keyword.containerlong}} cluster is set up with a network plug-in called Calico. Default network policies are set up to secure the public network interface of every worker node in the cluster. {: shortdesc} @@ -102,7 +102,7 @@ Review the following default Calico host policies that are automatically applied -A default Kubernetes policy that limits access to the Kubernetes Dashboard is also created. Kubernetes policies don't apply to the host endpoint, but to the `kube-dashboard` pod instead. This policy applies to all classic clusters. +A default Kubernetes policy that limits access to the Kubernetes Dashboard is also created. Kubernetes policies don't apply to the host endpoint, but to the `kube-dashboard` pod instead. This policy applies to all classic clusters. diff --git a/cs_nodeport.md b/cs_nodeport.md index b798fa7f6..4c48dcbab 100644 --- a/cs_nodeport.md +++ b/cs_nodeport.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-10-30" +lastupdated: "2019-11-26" keywords: kubernetes, iks, app access @@ -59,10 +59,10 @@ The public IP address of the worker node is not permanent. When a worker node is {: #nodeport_config} You can expose your app as a Kubernetes NodePort service for free or standard clusters. -{:shortdesc} +{:shortdesc} In VPC clusters, you can access an app through a NodePort only if you are connected to your private VPC network, such as through a VPN connection. To access an app from the internet, you must use a [VPC load balancer](/docs/containers?topic=containers-vpc-lbaas) or [Ingress](/docs/containers?topic=containers-ingress-about) service instead. -{: note} +{: note} If you do not already have an app ready, you can use a Kubernetes example app called [Guestbook ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/kubernetes/examples/blob/master/guestbook/all-in-one/guestbook-all-in-one.yaml). diff --git a/cs_overview.md b/cs_overview.md index 8507b250b..b75594054 100644 --- a/cs_overview.md +++ b/cs_overview.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, infrastructure, rbac, policy @@ -45,11 +45,11 @@ Kubernetes is an open source platform for managing containerized workloads and s Containers provide a standard way to package your application's code, configurations, and dependencies into a single unit that can run as a resource-isolated process on a compute server. To run your app in Kubernetes on {{site.data.keyword.containerlong_notm}}, you must first containerize your app by creating a container image that you store in a container registry. 
For an overview of key Docker concepts and benefits, see [Docker containers](#docker_containers). To dive deeper into Docker, see the [Docker documentation](https://docs.docker.com/). **What compute host infrastructure does the service offer?**
-With {{site.data.keyword.containerlong_notm}}, you can create your cluster of compute hosts on classic {{site.data.keyword.cloud_notm}} infrastructure, or VPC Gen 1 compute infrastructure. +With {{site.data.keyword.containerlong_notm}}, you can create your cluster of compute hosts on classic {{site.data.keyword.cloud_notm}} infrastructure, or VPC Gen 1 compute infrastructure. -[Classic clusters](/docs/containers?topic=containers-getting-started) are created on your choice of virtual or bare metal worker nodes that are connected to VLANs. If you require additional local disks, you can also choose one of the bare metal flavors that are designed for software-defined storage solutions, such as Portworx. Depending on the level of hardware isolation that you need, virtual worker nodes can be set up as shared or dedicated nodes, whereas bare metal machines are always set up as dedicated nodes. +[Classic clusters](/docs/containers?topic=containers-getting-started) are created on your choice of virtual or bare metal worker nodes that are connected to VLANs. If you require additional local disks, you can also choose one of the bare metal flavors that are designed for software-defined storage solutions, such as Portworx. Depending on the level of hardware isolation that you need, virtual worker nodes can be set up as shared or dedicated nodes, whereas bare metal machines are always set up as dedicated nodes. -[VPC clusters](/docs/containers?topic=containers-getting-started#vpc-classic-gs) are created in your own Virtual Private Cloud that gives you the security of a private cloud environment with the dynamic scalability of a public cloud. You use network access control lists to protect the subnets that your worker nodes are connected to. VPC clusters can be provisioned on shared virtual infrastructure only. +[VPC clusters](/docs/containers?topic=containers-getting-started#vpc-classic-gs) are created in your own Virtual Private Cloud that gives you the security of a private cloud environment with the dynamic scalability of a public cloud. You use network access control lists to protect the subnets that your worker nodes are connected to. VPC clusters can be provisioned on shared virtual infrastructure only. For more information, see [Overview of Classic and VPC infrastructure providers](/docs/containers?topic=containers-infrastructure_providers). diff --git a/cs_secure.md b/cs_secure.md index c75f4da04..f952aa852 100644 --- a/cs_secure.md +++ b/cs_secure.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, containers @@ -109,8 +109,8 @@ The following image shows the default cluster security settings that address aut - + @@ -139,7 +139,7 @@ The following image shows the default cluster security settings that address aut **What else can I do to secure my Kubernetes API server?**
You can decide how you want your master and worker nodes to communicate and how your cluster users can access the Kubernetes API server by enabling the private service endpoint only, the public service endpoint only, or the public and private service endpoints. -For more information about service endpoints, see worker-to-master and user-to-master communication in [classic clusters](/docs/containers?topic=containers-plan_clusters#workeruser-master) and [VPC clusters](/docs/containers?topic=containers-plan_clusters#vpc-workeruser-master). +For more information about service endpoints, see worker-to-master and user-to-master communication in [classic clusters](/docs/containers?topic=containers-plan_clusters#workeruser-master) and [VPC clusters](/docs/containers?topic=containers-plan_clusters#vpc-workeruser-master).
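For example, enabling the private service endpoint on an existing classic cluster might look like the following sketch. The exact command syntax depends on your CLI plug-in version, and a master refresh is typically required afterward; the cluster name is a placeholder:

```
ibmcloud ks cluster feature enable private-service-endpoint --cluster <cluster_name_or_ID>
ibmcloud ks cluster master refresh --cluster <cluster_name_or_ID>
```
{: pre}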
@@ -152,11 +152,11 @@ Worker nodes carry the deployments and services that make up your app. When you {: shortdesc} **Who owns the worker node and am I responsible to secure it?**
-The ownership of a worker node depends on the type of cluster that you create and the infrastructure provider that you choose. 
+The ownership of a worker node depends on the type of cluster that you create and the infrastructure provider that you choose.

-- **Free classic clusters**: Worker nodes are provisioned in to the {{site.data.keyword.cloud_notm}} account that is owned by IBM. You can deploy apps to the worker node but cannot change settings or install extra software on the worker node. Due to limited capacity and limited {{site.data.keyword.containerlong_notm}} features, do not run production workloads on free classic clusters. Consider using standard classic or standard VPC clusters for your production workloads. 
+- **Free classic clusters**: Worker nodes are provisioned into the {{site.data.keyword.cloud_notm}} account that is owned by IBM. You can deploy apps to the worker node but cannot change settings or install extra software on the worker node. Due to limited capacity and limited {{site.data.keyword.containerlong_notm}} features, do not run production workloads on free classic clusters. Consider using standard classic or standard VPC clusters for your production workloads.
- **Standard classic clusters**: Worker nodes are provisioned into your {{site.data.keyword.cloud_notm}} account. The worker nodes are dedicated to you, and you are responsible for requesting timely updates to the worker nodes to ensure that the worker node OS and {{site.data.keyword.containerlong_notm}} components apply the latest security updates and patches.
-- **Standard VPC clusters**: Worker nodes are provisioned in to an {{site.data.keyword.cloud_notm}} account that is owned by IBM to enable monitoring of malicious activities and apply security updates. You cannot access your worker nodes by using the VPC dashboard. However, you can manage your worker nodes by using the {{site.data.keyword.containerlong_notm}} console, CLI, or API. The virtual machines that make up your worker nodes are dedicated to you and you are responsible to request timely updates so that your worker node OS and {{site.data.keyword.containerlong_notm}} components apply the latest security updates and patches. 
+- **Standard VPC clusters**: Worker nodes are provisioned into an {{site.data.keyword.cloud_notm}} account that is owned by IBM to enable monitoring of malicious activities and to apply security updates. You cannot access your worker nodes by using the VPC dashboard. However, you can manage your worker nodes by using the {{site.data.keyword.containerlong_notm}} console, CLI, or API. The virtual machines that make up your worker nodes are dedicated to you, and you are responsible for requesting timely updates so that your worker node OS and {{site.data.keyword.containerlong_notm}} components apply the latest security updates and patches.

@@ -192,11 +192,11 @@ The image does not include components that ensure secure end-to-end communicatio

- + - - + +

@@ -261,7 +261,7 @@ The more apps or worker nodes that you expose publicly, the more steps you must

{: caption="Private services and worker node options" caption-side="top"}

**What if I want to connect my cluster to an on-prem data center?**
-To connect your worker nodes and apps to an on-prem data center, you can configure a [VPN IPSec endpoint with a strongSwan service, a Virtual Router Appliance, or with a Fortigate Security Appliance](/docs/containers?topic=containers-vpn#vpn). 
+To connect your worker nodes and apps to an on-prem data center, you can configure a [VPN IPSec endpoint with a strongSwan service, a Virtual Router Appliance, or a Fortigate Security Appliance](/docs/containers?topic=containers-vpn#vpn).

### Network segmentation and privacy for VPC clusters
{: #network_segmentation_vpc}

@@ -295,7 +295,7 @@ The more apps or worker nodes that you expose publicly, the more steps you must

{: caption="VPC network security options" caption-side="top"}

**What if I want to connect my cluster to other networks, like other VPCs, an on-prem data center, or IBM Cloud classic resources?**
-Depending on the network that you want to connect your worker nodes to, you can [choose a VPN solution](/docs/containers?topic=containers-vpc-vpnaas#options). +Depending on the network that you want to connect your worker nodes to, you can [choose a VPN solution](/docs/containers?topic=containers-vpc-vpnaas#options).
### Expose apps with LoadBalancer and Ingress services {: #network_lb_ingress} @@ -304,9 +304,9 @@ You can use network load balancer (NLB) and Ingress application load balancer (A {: shortdesc} **Can I use security groups to manage my cluster's network traffic?**
-Classic clusters: {{site.data.keyword.cloud_notm}} [security groups](/docs/infrastructure/security-groups?topic=security-groups-about-ibm-security-groups#about-ibm-security-groups) are applied to the network interface of a single virtual server to filter traffic at the hypervisor level. If you want to manage traffic for each worker node, you can use security groups. When you create a security group, you must allow the VRRP protocol, which {{site.data.keyword.containerlong_notm}} uses to manage NLB IP addresses. To uniformly manage traffic for your cluster across all of your worker nodes, use [Calico and Kubernetes policies](/docs/containers?topic=containers-network_policies).

-VPC clusters: Use [access control lists (ACLs) and Kubernetes network policies](/docs/containers?topic=containers-vpc-network-policy) to manage network traffic into and out of your cluster. You cannot use [VPC security groups](/docs/infrastructure/security-groups?topic=security-groups-about-ibm-security-groups#about-ibm-security-groups) to control traffic for your cluster. VPC security groups are applied to the network interface of a single virtual server to filter traffic at the hypervisor level. However, the worker nodes of your VPC cluster exist in a service account and are not listed in the VPC infrastructure dashboard. You cannot attach a security group to your worker nodes instances. 
+VPC clusters: Use [access control lists (ACLs) and Kubernetes network policies](/docs/containers?topic=containers-vpc-network-policy) to manage network traffic into and out of your cluster. You cannot use [VPC security groups](/docs/infrastructure/security-groups?topic=security-groups-about-ibm-security-groups#about-ibm-security-groups) to control traffic for your cluster. VPC security groups are applied to the network interface of a single virtual server to filter traffic at the hypervisor level. However, the worker nodes of your VPC cluster exist in a service account and are not listed in the VPC infrastructure dashboard. You cannot attach a security group to your worker node instances.

**How can I secure the source IP within the cluster?**
In version 2.0 NLBs, the source IP address of the client request is preserved by default. However, in version 1.0 NLBs and in all Ingress ALBs, the source IP address of the client request is not preserved. When a client request to your app is sent to your cluster, the request is routed to a pod for the NLB 1.0 or ALB. If no app pod exists on the same worker node as the load balancer service pod, the NLB or ALB forwards the request to an app pod on a different worker node. The source IP address of the packet is changed to the public IP address of the worker node where the app pod runs.

diff --git a/cs_storage_cos.md b/cs_storage_cos.md
index 6200f2a4c..ef8c4610c 100644
--- a/cs_storage_cos.md
+++ b/cs_storage_cos.md
@@ -2,7 +2,7 @@

copyright:
  years: 2014, 2019
-lastupdated: "2019-11-25"
+lastupdated: "2019-11-26"

keywords: kubernetes, iks

@@ -152,8 +152,8 @@ Looking for instructions for how to update or remove the {{site.data.keyword.cos

{: tip}

Before you begin:
- [Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.](/docs/containers?topic=containers-cs_cli_install#cs_cli_configure) 
- If you plan to install the {{site.data.keyword.cos_full_notm}} plug-in in a VPC cluster, you must enable VRF in your {{site.data.keyword.cloud_notm}} account by running `ibmcloud account update --service-endpoint-enable true`. This command output prompts you to open a support case to enable your account to use VRF and service endpoints. When VRF is enabled, any system that is connected to any of the private VLANs in the same {{site.data.keyword.cloud_notm}} account can communicate with the cluster worker nodes. You can isolate your cluster from other systems on the private network by applying [Calico private network policies](/docs/containers?topic=containers-network_policies#isolate_workers). 
+ [Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.](/docs/containers?topic=containers-cs_cli_install#cs_cli_configure)
+ If you plan to install the {{site.data.keyword.cos_full_notm}} plug-in in a VPC cluster, you must enable VRF in your {{site.data.keyword.cloud_notm}} account by running `ibmcloud account update --service-endpoint-enable true`. The command output prompts you to open a support case to enable your account to use VRF and service endpoints. When VRF is enabled, any system that is connected to any of the private VLANs in the same {{site.data.keyword.cloud_notm}} account can communicate with the cluster worker nodes. You can isolate your cluster from other systems on the private network by applying [Calico private network policies](/docs/containers?topic=containers-network_policies#isolate_workers).

To install the plug-in:

diff --git a/cs_subnets.md b/cs_subnets.md
index 0d0d4ddff..83eafbb06 100644
--- a/cs_subnets.md
+++ b/cs_subnets.md
@@ -2,7 +2,7 @@

copyright:
  years: 2014, 2019
-lastupdated: "2019-11-25"
+lastupdated: "2019-11-26"

keywords: kubernetes, iks, subnets, ips, vlans, networking

@@ -23,14 +23,14 @@ subcollection: containers

{:download: .download}
{:preview: .preview}

-# Configuring subnets and IP addresses for classic clusters 
+# Configuring subnets and IP addresses for classic clusters
{: #subnets}

Change the pool of available portable public or private IP addresses for network load balancer (NLB) services by adding subnets to your {{site.data.keyword.containerlong}} cluster.
-{:shortdesc}
+{:shortdesc}

Classic infrastructure provider icon The content on this page is specific to classic clusters. For information about VPC clusters, see [Understanding network basics of VPC clusters](/docs/containers?topic=containers-plan_clusters#vpc_basics).
-{: note}
+{: note}

## Overview of networking in {{site.data.keyword.containerlong_notm}}
{: #basics}

@@ -81,7 +81,7 @@ In {{site.data.keyword.containerlong_notm}}, VLANs have a limit of 40 subnets. I

{: note}

**Do the IP addresses for my worker nodes change?**
Your worker node is assigned an IP address on the public or private VLANs that your cluster uses. After the worker node is provisioned, the worker node IP address persists across `reboot` and `update` operations, but the worker node IP address changes after a `replace` operation. Additionally, the private IP address of the worker node is used for the worker node identity in most `kubectl` commands. If you change the VLANs that the worker pool uses, new worker nodes that are provisioned in that pool use the new VLANs for their IP addresses. Existing worker node IP addresses do not change, but you can choose to remove the worker nodes that use the old VLANs.

### Network segmentation
{: #basics_segmentation}

diff --git a/cs_troubleshoot_storage.md b/cs_troubleshoot_storage.md
index 8ac847ad6..94b08d43e 100644
--- a/cs_troubleshoot_storage.md
+++ b/cs_troubleshoot_storage.md
@@ -2,7 +2,7 @@

copyright:
  years: 2014, 2019
-lastupdated: "2019-11-25"
+lastupdated: "2019-11-26"

keywords: kubernetes, iks, help, debug

@@ -72,7 +72,7 @@ Review the options to debug persistent storage and find the root causes for fail

3. For block storage, object storage, and Portworx only: Make sure that you [installed the Helm server Tiller with a Kubernetes service account](/docs/containers?topic=containers-helm#public_helm_install).

-4. For classic block storage, object storage, and Portworx only: Make sure that you installed the latest Helm chart version for the plug-in. 
+4. For classic block storage, object storage, and Portworx only: Make sure that you installed the latest Helm chart version for the plug-in.

   **Block and object storage**:

   1. ``` {: pre}

   2. List the Helm charts in the repository.
      - **For classic block storage**: 
      + **For classic block storage**:
      ```
      helm search iks-charts | grep block-storage-plugin
      ```

@@ -162,7 +162,7 @@ Review the options to debug persistent storage and find the root causes for fail

   {: pre}

   3. Review common errors that can occur during the PVC creation.
      - [File storage and classic block storage: PVC remains in a pending state](#file_pvc_pending) 
      + - [File storage and classic block storage: PVC remains in a pending state](#file_pvc_pending)
      - [Object storage: PVC remains in a pending state](#cos_pvc_pending)

7. Check whether the pod that mounts your storage instance is successfully deployed.

@@ -1325,7 +1325,7 @@ Start by verifying that the information that you entered in the {{site.data.keyw

If you entered the correct information on the {{site.data.keyword.cloud_notm}} catalog page, verify that your cluster is correctly set up for Portworx.
{: shortdesc}

-1. Verify that you selected a classic {{site.data.keyword.containerlong_notm}} cluster. VPC on Classic clusters are not supported in Portworx. 
+1. Verify that you selected a classic {{site.data.keyword.containerlong_notm}} cluster. VPC on Classic clusters are not supported by Portworx.
2. Verify that the cluster that you want to use meets the [minimum hardware requirements for Portworx ![External link icon](../icons/launch-glyph.svg "External link icon")](https://docs.portworx.com/start-here-installation/).
3. If you want to use a virtual machine cluster, make sure that you [added raw, unformatted, and unmounted block storage](/docs/containers?topic=containers-portworx#create_block_storage) to your cluster so that Portworx can include the disks into the Portworx storage layer.
4. Verify that your cluster is set up with public network connectivity. For more information, see [Understanding network basics of classic clusters](/docs/containers?topic=containers-plan_clusters#plan_basics).

diff --git a/cs_users.md b/cs_users.md
index c75446ad1..c65d51fae 100644
--- a/cs_users.md
+++ b/cs_users.md
@@ -2,7 +2,7 @@

copyright:
  years: 2014, 2019
-lastupdated: "2019-11-25"
+lastupdated: "2019-11-26"

keywords: kubernetes, iks, access, permissions, api key

@@ -67,7 +67,7 @@ To see the specific {{site.data.keyword.containerlong_notm}} permissions by each

Example actions that are permitted by RBAC roles are creating objects such as pods or reading pod logs.
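One way to check what an assigned RBAC role lets you do is the `kubectl auth can-i` subcommand. The namespace is illustrative:

```
kubectl auth can-i create pods --namespace default
kubectl auth can-i get pods --subresource=log --namespace default
```
{: pre}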
Classic infrastructure
Classic infrastructure roles enable access to your IBM Cloud infrastructure resources. Set up a user with the **Super User** infrastructure role, and store this user's infrastructure credentials in an API key. Then, set the API key in each region and resource group that you want to create clusters in. After you set up the API key, other users that you grant access to {{site.data.keyword.containerlong_notm}} do not need infrastructure roles, because the API key is shared for all users within the region. Instead, {{site.data.keyword.cloud_notm}} IAM platform roles determine the infrastructure actions that users are allowed to perform. If you don't set up the API key with the full **Super User** infrastructure role, or if you need to grant specific device access to users, you can [customize infrastructure permissions](#infra_access).

-Example actions that are permitted by infrastructure roles are viewing the details of cluster worker node machines or editing networking and storage resources.

VPC clusters do not need classic infrastructure permissions. Instead, you assign **Administrator** platform access to the **VPC Infrastructure** service in {{site.data.keyword.cloud_notm}}. Then, these credentials are stored in the API key for each region and resource group that you create clusters in.

+Example actions that are permitted by infrastructure roles are viewing the details of cluster worker node machines or editing networking and storage resources.

VPC clusters do not need classic infrastructure permissions. Instead, you assign **Administrator** platform access to the **VPC Infrastructure** service in {{site.data.keyword.cloud_notm}}. Then, these credentials are stored in the API key for each region and resource group that you create clusters in.

Cloud Foundry
Not all services can be managed with {{site.data.keyword.cloud_notm}} IAM. If you're using one of these services, you can continue to use Cloud Foundry user roles to control access to those services. Cloud Foundry roles grant access to organizations and spaces within the account. To see the list of Cloud Foundry-based services in {{site.data.keyword.cloud_notm}}, run `ibmcloud service list`.

Example actions that are permitted by Cloud Foundry roles are creating a new Cloud Foundry service instance or binding a Cloud Foundry service instance to a cluster. To learn more, see the available [org and space roles](/docs/iam?topic=iam-cfaccess) or the steps for [managing Cloud Foundry access](/docs/iam?topic=iam-mngcf) in the {{site.data.keyword.cloud_notm}} IAM documentation.
@@ -193,7 +193,7 @@ To successfully provision and work with clusters, you must ensure that your {{si {: #understand_infra} Determine whether your account has access to the IBM Cloud infrastructure portfolio and learn about how {{site.data.keyword.containerlong_notm}} uses the API key to access the portfolio. -{: shortdesc} +{: shortdesc} **Does the classic or VPC infrastructure provider for my cluster affect what access I need to the portfolio?**
For classic clusters, you create your resources in a classic infrastructure account and must have certain [classic infrastructure roles](/docs/containers?topic=containers-access_reference#infra) that authorize access to compute, storage, and networking resources.

@@ -203,7 +203,7 @@ For VPC clusters, you must have the {{site.data.keyword.cloud_notm}} IAM **Admin

For both [classic and VPC clusters](/docs/containers?topic=containers-infrastructure_providers), these infrastructure credentials are stored in an API key for the region and resource group of the cluster. To create and manage clusters after the infrastructure permissions are set, assign users IAM access roles to the {{site.data.keyword.containerlong_notm}}.

Unlike classic, VPC does not support manually setting infrastructure credentials (`ibmcloud ks credential set`) to use another IBM Cloud infrastructure account to provision worker nodes. You must use your {{site.data.keyword.cloud_notm}} account's linked infrastructure portfolio.
-{: important}
+{: important}
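For example, after the user's permissions are in place, that user can store their credentials in the API key for a region by logging in and running a command like the following sketch; the exact syntax depends on your CLI plug-in version, and the region is illustrative:

```
ibmcloud ks api-key reset --region us-south
```
{: pre}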
**Does my account already have access to the IBM Cloud infrastructure portfolio?**
Default Kubernetes policies for each cluster
Fine-grained access control
As the account administrator you can [grant access to other users for {{site.data.keyword.containerlong_notm}}](/docs/containers?topic=containers-users#users) by using {{site.data.keyword.cloud_notm}} Identity and Access Management (IAM). {{site.data.keyword.cloud_notm}} IAM provides secure authentication with the {{site.data.keyword.cloud_notm}} platform, {{site.data.keyword.containerlong_notm}}, and all the resources in your account. Setting up proper user roles and permissions is key to limiting who can access your resources and to limiting the damage that a user can do when legitimate permissions are misused.

You can select from the following pre-defined user roles that determine the set of actions that the user can perform:
  • Platform roles: Determine the cluster and worker node management-related actions that a user can perform in {{site.data.keyword.containerlong_notm}}.
  • Service access roles: Determine the [Kubernetes RBAC role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) that is assigned to the user and the actions that a user can run against the Kubernetes API server. With RBAC roles, users can create Kubernetes resources, such as creating app deployments, adding namespaces, or setting up configmaps. For more information about the corresponding RBAC roles that are assigned to a user and associated permissions, see [{{site.data.keyword.cloud_notm}} IAM service roles](/docs/containers?topic=containers-access_reference#service). A quick way to verify these permissions is shown in the `kubectl` sketch after this list.
  • Classic infrastructure: Enables access to your classic {{site.data.keyword.cloud_notm}} infrastructure resources. Example actions that are permitted by classic infrastructure roles are viewing the details of cluster worker node machines or editing networking and storage resources.
  • VPC infrastructure: Enables access to VPC infrastructure resources. Example actions that are permitted by VPC infrastructure roles are creating a VPC, adding subnets, changing floating IP addresses, and creating VPC Block Storage instances.

For more information about access control in a cluster, see [Assigning cluster access](/docs/openshift?topic=openshift-users).
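To spot-check what the service access roles in the preceding list permit against the Kubernetes API server, a short `kubectl` sketch follows; the resources and namespace are arbitrary examples.

```
# After downloading the cluster configuration, check what the synchronized RBAC role permits.
kubectl auth can-i create deployments --namespace default
kubectl auth can-i delete namespaces

# Inspect the role bindings in a namespace.
kubectl get rolebindings --namespace default
```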
Admission controllers
Compute isolation
Worker nodes are dedicated to a cluster and do not host workloads of other clusters. When you create a standard classic cluster, you can choose to provision your worker nodes as [physical machines (bare metal) or as virtual machines](/docs/containers?topic=containers-planning_worker_nodes#planning_worker_nodes) that run on shared or dedicated physical hardware. Worker nodes in a free classic cluster or in a standard VPC Gen 1 compute cluster can be provisioned as virtual machines on shared infrastructure only.
Option to deploy bare metal on classic
If you create a standard classic cluster, you can choose to provision your worker nodes on bare metal physical servers (instead of virtual server instances). With bare metal machines, you have additional control over the compute host, such as the memory or CPU. This setup eliminates the virtual machine hypervisor that allocates physical resources to virtual machines that run on the host. Instead, all of a bare metal machine's resources are dedicated exclusively to the worker, so you don't need to worry about "noisy neighbors" sharing resources or slowing down performance. Bare metal servers are dedicated to you, with all of their resources available for cluster usage.

Bare metal machines are not supported in VPC Gen 1 compute clusters.

Encrypted disks
**Subscription accounts** are not set up with access to the IBM Cloud infrastructure portfolio.

Option 1: [Create a new Pay-As-You-Go account](/docs/account?topic=account-accounts#paygo) that is set up with access to the IBM Cloud infrastructure portfolio. When you choose this option, you have two separate {{site.data.keyword.cloud_notm}} accounts with separate billing.

If you want to continue using your Subscription account, you can use your new Pay-As-You-Go account to generate an API key in IBM Cloud infrastructure. Then, you must manually set the IBM Cloud infrastructure API key for your Subscription account. Keep in mind that IBM Cloud infrastructure resources are billed through your new Pay-As-You-Go account.

Option 2: If you already have an existing IBM Cloud infrastructure account that you want to use, you can manually set IBM Cloud infrastructure credentials for your {{site.data.keyword.cloud_notm}} account.

When you manually link to an IBM Cloud infrastructure account, the credentials are used for every IBM Cloud infrastructure specific action in your {{site.data.keyword.cloud_notm}} account. You must ensure that the API key that you set has [sufficient infrastructure permissions](/docs/containers?topic=containers-users#infra_access) so that your users can create and work with clusters. You can manually set credentials for only classic clusters, not VPC clusters.
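For Option 2, a minimal sketch of manually linking, and later unlinking, an infrastructure account follows. The username, API key, and region are placeholders, and the `--infrastructure-username` and `--infrastructure-api-key` option names are assumptions to verify with `ibmcloud ks credential set --help`.

```
# Link a different classic infrastructure account for clusters in this region (values are placeholders).
ibmcloud ks credential set --infrastructure-username my_infra_user --infrastructure-api-key my_infra_api_key --region us-south

# Later, revert to the default Pay-As-You-Go account credentials for the region.
ibmcloud ks credential unset --region us-south
```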

**IBM Cloud infrastructure accounts**, no {{site.data.keyword.cloud_notm}} account

[Create an {{site.data.keyword.cloud_notm}} Pay-As-You-Go account](/docs/account?topic=account-accounts#paygo). You have two separate IBM Cloud infrastructure accounts and billing.

By default, your new {{site.data.keyword.cloud_notm}} account uses the new infrastructure account. To continue using the old infrastructure account, manually set the credentials. You can manually set credentials for only classic clusters, not VPC clusters.

{{site.data.keyword.containerlong_notm}} accesses the IBM Cloud infrastructure portfolio by using an API key. The API key impersonates, or stores the credentials of, a user with access to an IBM Cloud infrastructure account. API keys are set by region within a resource group, and are shared by users in that region.

To enable all users to access the infrastructure portfolio, the user whose credentials are stored in the API key must have the appropriate permissions to the [infrastructure provider](/docs/containers?topic=containers-infrastructure_providers).
* Classic clusters: **Super User** role or the [minimum required permissions](/docs/containers?topic=containers-access_reference#infra) for classic infrastructure.
* VPC clusters: [**Administrator** platform role for VPC Infrastructure](/docs/vpc-on-classic?topic=vpc-on-classic-managing-user-permissions-for-vpc-resources).
* [**Administrator** platform role](/docs/containers?topic=containers-users#platform) for {{site.data.keyword.containerlong_notm}} at the account level.
* [**Writer** or **Manager** service role](/docs/containers?topic=containers-users#platform) for {{site.data.keyword.containerlong_notm}}.
* [**Administrator** platform role](/docs/containers?topic=containers-users#platform) for Container Registry at the account level.

Then, let that user perform the first admin action in a region and resource group. The user's infrastructure credentials are stored in an API key for that region and resource group. Other users within the account share the API key for accessing the infrastructure. When users log in to the {{site.data.keyword.cloud_notm}} account, an {{site.data.keyword.cloud_notm}} IAM token that is based on the API key is generated for the CLI session and enables infrastructure-related commands to be run in a cluster.

3. To make sure that all infrastructure-related actions in your cluster can be successfully performed, verify that the user has the correct infrastructure access policies.
   1. From the menu bar, select **Manage > Access (IAM)**.
   2. Select the **Users** tab, and click the user. The required infrastructure permissions vary depending on what type of [cluster infrastructure provider](/docs/containers?topic=containers-infrastructure_providers) you use, classic or VPC.
      * **For classic clusters**:
        1. In the **API keys** pane, verify that the user has a **Classic infrastructure API key**, or click **Create an IBM Cloud API key**. For more information, see [Managing classic infrastructure API keys](/docs/iam?topic=iam-classic_keys#classic_keys).
        2. Click the **Classic infrastructure** tab and then click the **Permissions** tab.
        3. If the user doesn't have each category checked, you can use the **Permission sets** drop-down list to assign the **Super User** role. Or you can expand each category and give the user the required [infrastructure permissions](/docs/containers?topic=containers-access_reference#infra).
      * **For VPC clusters**: Assign the user the [**Administrator** platform role for VPC Infrastructure](/docs/vpc-on-classic?topic=vpc-on-classic-managing-user-permissions-for-vpc-resources).

### Accessing the infrastructure portfolio with your default {{site.data.keyword.cloud_notm}} Pay-As-You-Go account
{: #default_account}

To set the API key to access the IBM Cloud infrastructure portfolio:

{: #credentials}

Instead of using the default linked IBM Cloud infrastructure account to order infrastructure for clusters within a region, you might want to use a different IBM Cloud infrastructure account that you already have. You can link this infrastructure account to your {{site.data.keyword.cloud_notm}} account by using the [`ibmcloud ks credential set`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_credentials_set) command. The IBM Cloud infrastructure credentials are used instead of the default Pay-As-You-Go account's credentials that are stored for the region.
{: shortdesc}

You can manually set infrastructure credentials to a different account only for classic clusters, not for VPC clusters.
{: note}

The IBM Cloud infrastructure credentials set by the `ibmcloud ks credential set` command persist after your session ends. If you remove IBM Cloud infrastructure credentials that were manually set with the [`ibmcloud ks credential unset --region`](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_credentials_unset) command, the default Pay-As-You-Go account credentials are used. However, this change in infrastructure account credentials might cause [orphaned clusters](/docs/containers?topic=containers-cs_troubleshoot_clusters#orphaned).
{: important}

{: #infra_access}

When you assign the **Super User** infrastructure role to the admin who sets the API key or whose infrastructure credentials are set, other users within the account share the API key or credentials for performing infrastructure actions. You can then control which infrastructure actions the users can perform by assigning the appropriate [{{site.data.keyword.cloud_notm}} IAM platform role](#platform). You don't need to edit the user's IBM Cloud infrastructure permissions.
{: shortdesc}

Classic infrastructure permissions apply only to classic clusters. For VPC clusters, see [Assigning role-based access to VPC resources](/docs/vpc-on-classic?topic=vpc-on-classic-setting-up-access-to-your-classic-infrastructure-from-vpc).
{: note}

For compliance, security, or billing reasons, you might not want to give the **Super User** infrastructure role to the user who sets the API key or whose credentials are set with the `ibmcloud ks credential set` command. However, if this user doesn't have the **Super User** role, then infrastructure-related actions, such as creating a cluster or reloading a worker node, can fail. Instead of using {{site.data.keyword.cloud_notm}} IAM platform roles to control users' infrastructure access, you must set specific IBM Cloud infrastructure permissions for users.

You can grant classic infrastructure access through the [console](#infra_console) or [CLI](#infra_cli).

### Assigning infrastructure access through the console
{: #infra_console}

Classic infrastructure permissions apply only to classic clusters. For VPC clusters, see [Assigning role-based access to VPC resources](/docs/vpc-on-classic?topic=vpc-on-classic-setting-up-access-to-your-classic-infrastructure-from-vpc).
{: note}

1. Log in to the [{{site.data.keyword.cloud_notm}} console ![External link icon](../icons/launch-glyph.svg "External link icon")](https://cloud.ibm.com). From the menu bar, select **Manage > Access (IAM)**.
2. Click the **Users** page, and then click the name of the user that you want to set permissions for.

Downgrading permissions? The action can take a few minutes to complete.
{: tip}

### Assigning infrastructure access through the CLI
{: #infra_cli}

Classic infrastructure permissions apply only to classic clusters. For VPC clusters, see [Assigning role-based access to VPC resources](/docs/vpc-on-classic?topic=vpc-on-classic-setting-up-access-to-your-classic-infrastructure-from-vpc).
{: note}

1. Check whether the credentials for classic infrastructure access for {{site.data.keyword.containerlong_notm}} in the region and resource group have any missing required or suggested permissions.
   ```
   ibmcloud ks infra-permissions get --region <region>
   ```

{: #remove_infra}

You can remove IBM Cloud infrastructure permissions for a user by using the {{site.data.keyword.cloud_notm}} console.
{: shortdesc}

Classic infrastructure permissions apply only to classic clusters. For VPC clusters, see [Assigning role-based access to VPC resources](/docs/vpc-on-classic?topic=vpc-on-classic-setting-up-access-to-your-classic-infrastructure-from-vpc).
{: note}

1. Log in to the [{{site.data.keyword.cloud_notm}} console ![External link icon](../icons/launch-glyph.svg "External link icon")](https://cloud.ibm.com/). From the menu bar, select **Manage > Access (IAM)**.
2. Click the **Users** page, and then click the name of the user that you want to remove permissions from.

diff --git a/cs_why.md b/cs_why.md

|Benefit|Description|
|-------|-----------|
|Choice of container platform provider |
  • Deploy clusters with **OpenShift** or community **Kubernetes** installed as the container platform orchestrator.
  • Choose the developer experience that fits your company, or run workloads across both OpenShift or community Kubernetes clusters.
  • Built-in integrations from the {{site.data.keyword.cloud_notm}} console to the Kubernetes dashboard or OpenShift web console.
  • Single view and management experience of all your OpenShift or community Kubernetes clusters from {{site.data.keyword.cloud_notm}}.
  • For more information, see [Comparison between OpenShift and community Kubernetes clusters](#openshift_kubernetes).
| |Single-tenant Kubernetes clusters with compute, network, and storage infrastructure isolation|
  • Create your own customized infrastructure that meets the requirements of your organization.
  • Choose between [{{site.data.keyword.cloud_notm}} Classic or VPC infrastructure providers](/docs/containers?topic=containers-infrastructure_providers).
  • Provision a dedicated and secured Kubernetes master, worker nodes, virtual networks, and storage by using the resources provided by IBM Cloud infrastructure.
  • Fully managed Kubernetes master that is continuously monitored and updated by {{site.data.keyword.IBM_notm}} to keep your cluster available.
  • Option to provision worker nodes as bare metal servers for compute-intensive workloads such as GPU.
  • Store persistent data, share data between Kubernetes pods, and restore data when needed with the integrated and secure volume service.
  • Benefit from full support for all native Kubernetes APIs.
| | Multizone clusters to increase high availability |
  • Easily manage worker nodes of the same flavor (CPU, memory, virtual or physical) with worker pools.
  • Guard against zone failure by spreading nodes evenly across select multizones and by using anti-affinity pod deployments for your apps.
  • Decrease your costs by using multizone clusters instead of duplicating the resources in a separate cluster.
  • Benefit from automatic load balancing across apps with the multizone load balancer (MZLB) that is set up automatically for you in each zone of the cluster.
| | Highly available masters |
  • Reduce cluster downtime such as during master updates with highly available masters that are provisioned automatically when you create a cluster.
  • Spread your masters across zones in a [multizone cluster](/docs/containers?topic=containers-ha_clusters#multizone) to protect your cluster from zonal failures.
| |Image security compliance with Vulnerability Advisor|
  • Set up your own repo in our secured Docker private image registry where images are stored and shared by all users in the organization.
  • Benefit from automatic scanning of images in your private {{site.data.keyword.cloud_notm}} registry.
  • Review recommendations specific to the operating system used in the image to fix potential vulnerabilities.
|

Both OpenShift and community Kubernetes clusters are production-ready container platforms.

|Consistent container orchestration across hybrid cloud providers|Feature available|Feature available|
|Access to {{site.data.keyword.cloud_notm}} services such as AI|Feature available|Feature available|
|Software-defined storage Portworx solution available for multizone data use cases|Feature available|Feature available|
|Create a cluster in an IBM Virtual Private Cloud (VPC)|Feature available|Feature available|
|Ability to create free clusters|Feature available| |
|Latest community Kubernetes distribution|Feature available| |
|Cluster on only the private network|Feature available| |

diff --git a/cs_worker_add.md b/cs_worker_add.md
## Adding worker nodes in VPC clusters
{: #vpc_pools}

Add worker nodes to your classic cluster.
{: shortdesc}

### Creating a new worker pool
{: #add_pool}

You can add worker nodes to your classic cluster by creating a new worker pool.
{: shortdesc}
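As a rough sketch of the classic worker pool workflow that this section and the next one describe; the cluster name, pool name, flavor, and zones are placeholder assumptions.

```
# Review the flavors that are available in a zone.
ibmcloud ks flavors --zone dal10

# Create a classic worker pool with two worker nodes per zone (names and flavor are placeholders).
ibmcloud ks worker-pool create classic --name my_pool --cluster my_cluster --flavor b3c.4x16 --size-per-zone 2

# Add a zone to the pool; depending on your setup, public and private VLAN flags might also be required.
ibmcloud ks zone add classic --zone dal10 --cluster my_cluster --worker-pool my_pool
```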
### Adding a zone to a worker pool
{: #add_zone}

You can span your classic cluster across multiple zones within one region by adding a zone to your existing worker pool.
{: shortdesc}

When you add a zone to a worker pool, the worker nodes that are defined in your worker pool are provisioned in the new zone and considered for future workload scheduling. {{site.data.keyword.containerlong_notm}} automatically adds the `failure-domain.beta.kubernetes.io/region` label for the region and the `failure-domain.beta.kubernetes.io/zone` label for the zone to each worker node. The Kubernetes scheduler uses these labels to spread pods across zones within the same region.

diff --git a/cs_worker_plan.md b/cs_worker_plan.md

## Available hardware for worker nodes
{: #shared_dedicated_node}

The worker node flavors and isolation levels that are available to you depend on your container platform, cluster type, the infrastructure provider that you want to use, and the {{site.data.keyword.containerlong_notm}} location where you want to create your cluster.
{: shortdesc}

Hardware options for worker nodes in a standard cluster

**What flavors are available to me?**
Classic standard clusters can be created on [virtual](#vm) and [bare metal](#bm) worker nodes. If you require additional local disks, you can also choose one of the bare metal flavors that are designed for [software-defined storage](#sds) solutions, such as Portworx. Depending on the level of hardware isolation that you need, virtual worker nodes can be set up as shared or dedicated nodes, whereas bare metal machines are always set up as dedicated nodes. If you create a free classic cluster, your cluster is provisioned with the smallest virtual worker node flavor on shared infrastructure.

VPC Gen 1 compute clusters can be provisioned as standard clusters on shared [virtual](#vm) worker nodes only, and must be created in one of the supported [multizone-capable metro cities](/docs/containers?topic=containers-regions-and-zones#zones). Free VPC clusters are not supported.

Gateway-enabled classic clusters are created with a `compute` pool of compute worker nodes and a `gateway` pool of gateway worker nodes by default. During cluster creation you can specify the isolation and flavor for the compute worker nodes, but by default the gateway worker nodes are created on shared virtual machines with the `u3c.2x4` flavor. If you want to change the isolation and flavor of the gateway worker nodes, you can [create a new gateway worker pool](/docs/containers?topic=containers-add_workers#gateway_replace) to replace the `gateway` worker pool.

See [updating flavors](/docs/containers?topic=containers-update#machine_type).

**How do I manage my worker nodes?**
Worker nodes in classic clusters are provisioned into your {{site.data.keyword.cloud_notm}} account. You can manage your worker nodes by using {{site.data.keyword.containerlong_notm}}, but you can also use the [classic infrastructure dashboard](https://cloud.ibm.com/classic/) in the {{site.data.keyword.cloud_notm}} console to work with your worker nodes directly.

Unlike classic clusters, the worker nodes of your VPC Gen 1 compute cluster are not listed in the [VPC infrastructure dashboard](https://cloud.ibm.com/vpc/overview). Instead, you manage your worker nodes with {{site.data.keyword.containerlong_notm}} only. However, your worker nodes might be connected to other VPC infrastructure resources, such as VPC subnets or VPC Block Storage. These resources are included in the VPC infrastructure dashboard and can be managed separately from there.

**What limitations do I need to be aware of?**
Kubernetes limits the maximum number of worker nodes that you can have in a cluster. Review [worker node and pod quotas ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/setup/best-practices/cluster-large/) for more information.

**Do I want to use shared or dedicated hardware?**
When you create a standard classic cluster, you must choose whether you want the underlying hardware to be shared by multiple {{site.data.keyword.IBM_notm}} customers (multi tenancy) or to be dedicated to you only (single tenancy). VPC standard clusters can be provisioned on shared infrastructure (multi tenancy) only.

* **In a multi-tenant, shared hardware setup**: Physical resources, such as CPU and memory, are shared across all virtual machines that are deployed to the same physical hardware. To ensure that every virtual machine can run independently, a virtual machine monitor, also referred to as the hypervisor, segments the physical resources into isolated entities and allocates them as dedicated resources to a virtual machine (hypervisor isolation).
* **In a single-tenant, dedicated hardware setup**: All physical resources are dedicated to you only. You can deploy multiple worker nodes as virtual machines on the same physical host. Similar to the multi-tenant setup, the hypervisor assures that every worker node gets its share of the available physical resources.

Shared nodes are usually less costly than dedicated nodes because the costs for the underlying hardware are shared among multiple customers. However, when you decide between shared and dedicated nodes, you might want to check with your legal department to discuss the level of infrastructure isolation and compliance that your app environment requires.

Some classic worker node flavors are available for only one type of tenancy setup. For example, `m3c` VMs can be provisioned in a shared tenancy setup only. Additionally, VPC clusters are available as only shared virtual machines.
{: note}

**How does storage work for VMs?**
Every VM comes with an attached disk for storage of information that the VM needs to run, such as the OS file system, container runtime, and the `kubelet`. Local storage on the worker node is for short-term processing only, and the storage disks are wiped when you delete, reload, replace, or update the worker node. For persistent storage solutions for your apps, see [Planning highly available persistent storage](/docs/containers?topic=containers-storage_planning#storage_planning). Additionally, classic and VPC infrastructure differ in the disk setup.

* **Classic VMs**: Classic VMs have two attached disks. The primary storage disk has 25 GB for the OS file system, and the secondary storage disk has 100 GB for data such as the container runtime and the `kubelet`. For reliability, the primary and secondary storage volumes are local disks instead of storage area networking (SAN). Reliability benefits include higher throughput when serializing bytes to the local disk and reduced file system degradation due to network failures. The secondary disk is encrypted by default.
* **VPC Gen 1 compute VMs**: VPC VMs have one primary disk that is a block storage volume that is attached via the network. The storage layer is not separated from the other networking layers, and both network and storage traffic are routed on the same network. To account for network latency, the storage disks have a maximum of up to 3000 IOPS. The primary storage disk is used for storing data such as the OS file system, container runtime, and `kubelet`, and is [encrypted by default](/docs/vpc-on-classic-block-storage?topic=vpc-on-classic-block-storage-block-storage-about#encryption).

**What virtual machine flavors are available?**
The following table shows available worker node flavors for classic and VPC Gen 1 compute clusters. Worker node flavors vary by cluster type, the zone where you want to create the cluster, the container platform, and the infrastructure provider that you want to use. To see the flavors available in your zone, run `ibmcloud ks flavors --zone <zone>`.

If your classic cluster has deprecated `x1c` or older Ubuntu 16 `x2c` worker node flavors, you can [update your cluster to have Ubuntu 18 `x3c` worker nodes](/docs/containers?topic=containers-update#machine_type).
{: tip}

| Name and use case | Cores/ Memory | Primary disk | Network speed `*` |
|:-----------------|:-----------------|:------------------|:-------------|

`*` For more information about network performance caps for VPC virtual machines, see [Profiles](/docs/vpc-on-classic-vsi?topic=vpc-on-classic-vsi-profiles).
{: note}

## Physical machines (bare metal)
{: #bm}

You can provision your worker node as a single-tenant physical server, also referred to as bare metal.
{: shortdesc}

Physical machines are available for classic clusters only and are not supported in VPC Gen 1 compute clusters.
{: note}

**How is bare metal different than VMs?**
Bare metal gives you direct access to the physical resources on the machine, such as the memory or CPU. This setup eliminates the virtual machine hypervisor that allocates physical resources to virtual machines that run on the host. Instead, all of a bare metal machine's resources are dedicated exclusively to the worker, so you don't need to worry about "noisy neighbors" sharing resources or slowing down performance. Physical flavors have more local storage than virtual, and some have RAID to increase data availability. Local storage on the worker node is for short-term processing only, and the primary and secondary disks are wiped when you update or reload the worker node. For persistent storage solutions, see [Planning highly available persistent storage](/docs/containers?topic=containers-storage_planning#storage_planning).

Software-defined storage (SDS) flavors are physical machines that are provisioned with additional raw disks for physical local storage. Unlike the primary and secondary local disks, these raw disks are not wiped during a worker node update or reload. Because data is co-located with the compute node, SDS machines are suited for high-performance workloads.
{: shortdesc}

Software-defined storage flavors are available for classic clusters only and are not supported in VPC Gen 1 compute clusters.
{: note}

**When do I use SDS flavors?**
You typically use SDS machines in the following cases:

diff --git a/release_notes.md b/release_notes.md

25 November 2019
  • Cluster autoscaling for VPC clusters: You can [set up the cluster autoscaler](/docs/containers?topic=containers-ca#ca_helm) on clusters that run on the first generation of compute for Virtual Private Cloud (VPC).
    • Version changelog: Worker node patch updates are available for Kubernetes [1.16.3_1518](/docs/containers?topic=containers-changelog#1163_1518_worker), [1.15.6_1525](/docs/containers?topic=containers-changelog#1156_1525_worker), [1.14.9_1541](/docs/containers?topic=containers-changelog#1149_1541_worker), and [1.13.12_1544](/docs/containers?topic=containers-changelog#11312_1544_worker).
22 November 2019
      • Bring your own DNS for load balancers: Added steps for bringing your own custom domain for [NLBs](/docs/containers?topic=containers-loadbalancer_hostname#loadbalancer_hostname_dns) in classic clusters and [VPC load balancers](/docs/containers?topic=containers-vpc-lbaas#vpc_lb_dns) in VPC clusters.
      • Gateway appliance firewalls: Updated the [required IP addresses and ports](/docs/containers?topic=containers-firewall#vyatta_firewall) that you must open in a public gateway device firewall.
      • Ingress ALB subdomain format: [Changes are made to the Ingress subdomain](/docs/containers?topic=containers-ingress-about#ingress-resource). New clusters are assigned an Ingress subdomain in the format `.-0001..containers.appdomain.cloud` and an Ingress secret in the format `.-0001`. Any existing clusters that use the `..containers.mybluemix.net` subdomain are assigned a CNAME record that maps to a `..containers.appdomain.cloud` subdomain.
      @@ -58,7 +58,7 @@ Use the release notes to learn about the latest changes to the {{site.data.keywo
    • Ingress ALB changelog: Updated the [ALB `nginx-ingress` image to 597 and the `ingress-auth` image to build 353](/docs/containers?topic=containers-cluster-add-ons-changelog#alb_changelog).
    • Version changelog: Master patch updates are available for Kubernetes [1.16.3_1518](/docs/containers?topic=containers-changelog#1163_1518), [1.15.6_1525](/docs/containers?topic=containers-changelog#1156_1525), [1.14.9_1541](/docs/containers?topic=containers-changelog#1149_1541), and [1.13.12_1544](/docs/containers?topic=containers-changelog#11312_1544).
19 November 2019
      @@ -66,7 +66,7 @@ Use the release notes to learn about the latest changes to the {{site.data.keywo
    • Fluentd component changes: The Fluentd component is created for your cluster only if you [create a logging configuration to forward logs to a syslog server](/docs/containers?topic=containers-health#configuring). If no logging configurations for syslog exist in your cluster, the Fluentd component is removed automatically. If you do not forward logs to syslog and want to ensure that the Fluentd component is removed from your cluster, [automatic updates to Fluentd must be enabled](/docs/containers?topic=containers-update#logging-up).
    • Bringing your own Ingress controller in VPC clusters: Added [steps](/docs/containers?topic=containers-ingress-user_managed#user_managed_vpc) for exposing your Ingress controller by creating a VPC load balancer and subdomain.
14 November 2019
  • New! Diagnostics and Debug Tool add-on: The [{{site.data.keyword.containerlong_notm}} Diagnostics and Debug Tool](/docs/containers?topic=containers-cs_troubleshoot#debug_utility) is now available as a cluster add-on.