diff --git a/cs_access_reference.md b/cs_access_reference.md index a1591d43a..9e940290b 100644 --- a/cs_access_reference.md +++ b/cs_access_reference.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # User access permissions {: #access_reference} @@ -32,7 +32,7 @@ When you [assign cluster permissions](/docs/containers?topic=contai ## {{site.data.keyword.cloud_notm}} IAM platform roles {: #iam_platform} -{{site.data.keyword.containerlong_notm}} is configured to use {{site.data.keyword.cloud_notm}} Identity and Access Management (IAM) roles. {{site.data.keyword.cloud_notm}} IAM platform roles determine the actions that users can perform on {{site.data.keyword.cloud_notm}} resources such as clusters, worker nodes, and Ingress application load balancers (ALBs). {{site.data.keyword.cloud_notm}} IAM platform roles also automatically set basic infrastructure permissions for users. To set platform roles, see [Assigning {{site.data.keyword.cloud_notm}} IAM platform permissions](/docs/containers?topic=containers-users#platform). +{{site.data.keyword.containerlong_notm}} is configured to use {{site.data.keyword.cloud_notm}} Identity and Access Management (IAM) roles. {{site.data.keyword.cloud_notm}} IAM platform roles determine the actions that users can perform on {{site.data.keyword.cloud_notm}} resources such as clusters, worker nodes, and Ingress application load balancers (ALBs). {{site.data.keyword.cloud_notm}} IAM platform roles also automatically set basic infrastructure permissions for users. To set platform roles, see [Assigning {{site.data.keyword.cloud_notm}} IAM platform permissions](/docs/containers?topic=containers-users#platform). {: shortdesc}

Do not assign {{site.data.keyword.cloud_notm}} IAM platform roles at the same time as a service role. You must assign platform and service roles separately.
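Assigning the roles separately means creating two distinct IAM policies. The following sketch shows one possible way to do that with the {{site.data.keyword.cloud_notm}} CLI; the user email is a placeholder, and the `containers-kubernetes` service name and the exact flags are assumptions to verify with `ibmcloud iam user-policy-create --help`.

```sh
# Assign a platform role (for example, Administrator) in one policy.
# Service name and flags are assumptions; verify with `ibmcloud iam user-policy-create --help`.
ibmcloud iam user-policy-create user@example.com \
  --roles Administrator \
  --service-name containers-kubernetes

# Assign a service role (for example, Manager) in a separate policy,
# because platform and service roles must not be combined in one assignment.
ibmcloud iam user-policy-create user@example.com \
  --roles Manager \
  --service-name containers-kubernetes
```
{: pre}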

diff --git a/cs_annotations.md b/cs_annotations.md index e98650e8f..5714fbb2e 100644 --- a/cs_annotations.md +++ b/cs_annotations.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, ingress @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Customizing Ingress routing with annotations {: #ingress_annotation} @@ -411,7 +411,7 @@ metadata: ### Location snippets (`location-snippets`) {: #location-snippets} -Add a custom location block configuration for a service. +Add a custom location block configuration for a service. {:shortdesc} **Description**
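As a quick illustration of the `location-snippets` annotation that the hunk above describes, an Ingress resource might look like the following sketch. The snippet directives, service name, and host are placeholders, and the `serviceName=` and `<EOS>` delimiters are assumed to follow the same snippet format that these docs use for server and location snippets.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    ingress.bluemix.net/location-snippets: |
      serviceName=myservice
      # Example custom directives for this service's location block (illustrative only)
      proxy_request_buffering off;
      rewrite_log on;
      proxy_set_header "x-additional-test-header" "location-snippet-header";
      <EOS>
spec:
  rules:
  - host: mycluster-<hash>-0001.us-south.containers.appdomain.cloud
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 8080
```
{: codeblock}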
diff --git a/cs_api_install.md b/cs_api_install.md index ae4c98115..eb0bc176c 100644 --- a/cs_api_install.md +++ b/cs_api_install.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, ibmcloud, ic, ks, kubectl, api @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Setting up the API {: #cs_api_install} @@ -103,7 +103,7 @@ You can use the version two (`v2`) API to manage both classic and VPC clusters. ## Automating cluster deployments with the API {: #cs_api} -You can use the {{site.data.keyword.containerlong_notm}} API to automate the creation, deployment, and management of your Kubernetes clusters. +You can use the {{site.data.keyword.containerlong_notm}} API to automate the creation, deployment, and management of your Kubernetes clusters. {:shortdesc} The {{site.data.keyword.containerlong_notm}} API requires header information that you must provide in your API request and that can vary depending on the API that you want to use. To determine what header information is needed for your API, see the [{{site.data.keyword.containerlong_notm}} API documentation ![External link icon](../icons/launch-glyph.svg "External link icon")](https://us-south.containers.cloud.ibm.com/swagger-api). diff --git a/cs_app.md b/cs_app.md index 83fc744dc..4e24c75a3 100644 --- a/cs_app.md +++ b/cs_app.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, node.js, js, java, .net, go, flask, react, python, swift, rails, ruby, spring boot, angular @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Deploying Kubernetes-native apps in clusters {: #app} diff --git a/cs_app_knative.md b/cs_app_knative.md index 2db4aa475..fee95a053 100644 --- a/cs_app_knative.md +++ b/cs_app_knative.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, knative @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Deploying serverless apps with Knative diff --git a/cs_at_events.md b/cs_at_events.md index 273b04e23..8ff028281 100644 --- a/cs_at_events.md +++ b/cs_at_events.md @@ -2,7 +2,7 @@ copyright: years: 2017, 2019 -lastupdated: "2019-11-08" +lastupdated: "2019-11-26" keywords: kubernetes, iks, audit @@ -21,12 +21,12 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # {{site.data.keyword.at_full_notm}} events {: #at_events} -You can view, manage, and audit user-initiated activities in your {{site.data.keyword.containerlong}} community Kubernetes or OpenShift cluster by using the {{site.data.keyword.at_full}} service. +You can view, manage, and audit user-initiated activities in your {{site.data.keyword.containerlong}} community Kubernetes or OpenShift cluster by using the {{site.data.keyword.at_full}} service. {: shortdesc} {{site.data.keyword.containerlong_notm}} automatically generates cluster management events and forwards these event logs to {{site.data.keyword.at_full_notm}}. 
To access these logs, you must [provision an instance of {{site.data.keyword.at_full_notm}}](/docs/services/Activity-Tracker-with-LogDNA?topic=logdnaat-getting-started). diff --git a/cs_cli_changelog.md b/cs_cli_changelog.md index 5bb3f9738..74fc0f7a2 100644 --- a/cs_cli_changelog.md +++ b/cs_cli_changelog.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # CLI changelog {: #cs_cli_changelog} diff --git a/cs_cli_install.md b/cs_cli_install.md index 6fc800192..0758c8606 100644 --- a/cs_cli_install.md +++ b/cs_cli_install.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Setting up the CLI {: #cs_cli_install} diff --git a/cs_cluster_access.md b/cs_cluster_access.md index 2461503d0..7bbd448f9 100644 --- a/cs_cluster_access.md +++ b/cs_cluster_access.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, clusters @@ -19,7 +19,7 @@ subcollection: containers {:tip: .tip} {:note: .note} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Accessing Kubernetes clusters diff --git a/cs_cluster_plan_ha.md b/cs_cluster_plan_ha.md index 91ef47551..e2fac1d70 100644 --- a/cs_cluster_plan_ha.md +++ b/cs_cluster_plan_ha.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Planning your cluster for high availability {: #ha_clusters} @@ -35,7 +35,7 @@ Your users are less likely to experience downtime when you distribute your apps 1. A [single zone cluster](#single_zone) with multiple worker nodes in a worker pool. 2. A [multizone cluster](#multizone) that spreads worker nodes across zones within one region. -3. **Clusters with public network connectivity**: [Multiple clusters](#multiple_clusters) that are set up across zones or regions and that are connected via a global load balancer. +3. **Clusters with public network connectivity**: [Multiple clusters](#multiple_clusters) that are set up across zones or regions and that are connected via a global load balancer. ## Single zone cluster {: #single_zone} diff --git a/cs_cluster_plan_network.md b/cs_cluster_plan_network.md index 1d6f48b4f..279b12ebc 100644 --- a/cs_cluster_plan_network.md +++ b/cs_cluster_plan_network.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Planning your cluster network setup @@ -532,7 +532,7 @@ Your worker nodes can automatically, securely communicate with other {{site.data **External communication to apps that run on worker nodes** -To provide private access to an app in your cluster, you can create a private network load balancer (NLB) or Ingress application load balancer (ALB). These Kubernetes network services expose your app to the private network only so that any on-premises system with a connection to the subnet that the NLB IP is on can access the app. +To provide private access to an app in your cluster, you can create a private network load balancer (NLB) or Ingress application load balancer (ALB). 
These Kubernetes network services expose your app to the private network only so that any on-premises system with a connection to the subnet that the NLB IP is on can access the app. Ready to get started with a cluster for this scenario? After you plan your [high availability](/docs/containers?topic=containers-ha_clusters) and [worker node](/docs/containers?topic=containers-planning_worker_nodes) setups, see [Creating clusters](/docs/containers?topic=containers-clusters). diff --git a/cs_cluster_scaling.md b/cs_cluster_scaling.md index 594babdb0..47c5bb8cd 100644 --- a/cs_cluster_scaling.md +++ b/cs_cluster_scaling.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:gif: data-image-type='gif'} # Autoscaling clusters @@ -914,7 +914,7 @@ Before you begin: [Log in to your account. If applicable, target the appropriate ## Using the cluster autoscaler for a private network-only cluster {: #ca_private_cluster} -The cluster autoscaler is available for standard clusters that are set up with public network connectivity. If your cluster cannot access the public network, such as a private cluster behind a firewall or a cluster with only the private service endpoint enabled, you must temporarily open the required ports or temporarily enable the public service endpoint to install, update, or customize the cluster autoscaler. After the cluster autoscaler is installed, you can close the ports or disable the public service endpoint. +The cluster autoscaler is available for standard clusters that are set up with public network connectivity. If your cluster cannot access the public network, such as a private cluster behind a firewall or a cluster with only the private service endpoint enabled, you must temporarily open the required ports or temporarily enable the public service endpoint to install, update, or customize the cluster autoscaler. After the cluster autoscaler is installed, you can close the ports or disable the public service endpoint. {: shortdesc} If your account is not enabled for VRF and service endpoints, you can [open the required ports](/docs/containers?topic=containers-firewall#vyatta_firewall) to allow public network connectivity in your cluster. diff --git a/cs_cluster_update.md b/cs_cluster_update.md index ff139dca8..5d9df9732 100644 --- a/cs_cluster_update.md +++ b/cs_cluster_update.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Updating clusters, worker nodes, and cluster components {: #update} @@ -568,7 +568,7 @@ When you create a logging configuration for a source in your cluster to forward As of 14 November 2019, a Fluentd component is created for your cluster only if you [create a logging configuration to forward logs to a syslog server](/docs/containers?topic=containers-health#configuring). If no logging configurations for syslog exist in your cluster, the Fluentd component is removed automatically. If you do not forward logs to syslog and want to ensure that the Fluentd component is removed from your cluster, automatic updates to Fluentd must be enabled. {: important} -You can manage automatic updates of the Fluentd component in the following ways. **Note**: To run the following commands, you must have the [**Administrator** {{site.data.keyword.cloud_notm}} IAM platform role](/docs/containers?topic=containers-users#platform) for the cluster. 
+You can manage automatic updates of the Fluentd component in the following ways. **Note**: To run the following commands, you must have the [**Administrator** {{site.data.keyword.cloud_notm}} IAM platform role](/docs/containers?topic=containers-users#platform) for the cluster. * Check whether automatic updates are enabled by running the `ibmcloud ks logging autoupdate get --cluster ` [command](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_log_autoupdate_get). * Disable automatic updates by running the `ibmcloud ks logging autoupdate disable` [command](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_log_autoupdate_disable). diff --git a/cs_clusters.md b/cs_clusters.md index 4db8fffcc..95af22294 100644 --- a/cs_clusters.md +++ b/cs_clusters.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:gif: data-image-type='gif'} # Creating clusters @@ -780,7 +780,7 @@ Create your single zone or multizone VPC Generation 1 compute cluster by using t {: #next_steps} When the cluster is up and running, you can check out the following cluster administration tasks: -- If you created the cluster in a multizone capable zone, [spread worker nodes by adding a zone to your cluster](/docs/containers?topic=containers-add_workers). +- If you created the cluster in a multizone capable zone, [spread worker nodes by adding a zone to your cluster](/docs/containers?topic=containers-add_workers). - [Deploy an app in your cluster.](/docs/containers?topic=containers-app#app_cli) - [Set up your own private registry in {{site.data.keyword.cloud_notm}} to store and share Docker images with other users.](/docs/services/Registry?topic=registry-getting-started) - [Set up the cluster autoscaler](/docs/containers?topic=containers-ca#ca) to automatically add or remove worker nodes from your worker pools based on your workload resource requests. diff --git a/cs_dedicated.md b/cs_dedicated.md index d678b7c07..9657b3ce4 100644 --- a/cs_dedicated.md +++ b/cs_dedicated.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Deprecated: Dedicated cloud diff --git a/cs_dns.md b/cs_dns.md index abc948948..431c408c9 100644 --- a/cs_dns.md +++ b/cs_dns.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-04" +lastupdated: "2019-11-26" keywords: kubernetes, iks, coredns, kubedns, dns @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Configuring the cluster DNS provider for classic clusters diff --git a/cs_edge.md b/cs_edge.md index e1245d386..aa1d2249a 100644 --- a/cs_edge.md +++ b/cs_edge.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks, affinity, taint @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Restricting network traffic to edge worker nodes {: #edge} @@ -195,7 +195,7 @@ Trying out a gateway-enabled cluster? See [Isolating networking workloads to edg {: tip} Before you begin: -- Ensure you that have the [**Manager** {{site.data.keyword.cloud_notm}} IAM service role for all namespaces](/docs/containers?topic=containers-users#platform). 
+- Ensure that you have the [**Manager** {{site.data.keyword.cloud_notm}} IAM service role for all namespaces](/docs/containers?topic=containers-users#platform). - [Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.](/docs/containers?topic=containers-cs_cli_install#cs_cli_configure)
To prevent other workloads from running on edge worker nodes: diff --git a/cs_encrypt.md b/cs_encrypt.md index 911c4fa35..ac2a13d1c 100644 --- a/cs_encrypt.md +++ b/cs_encrypt.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:external: target="_blank" .external} # Protecting sensitive information in your cluster @@ -244,7 +244,7 @@ Before you begin: [Log in to your account. If applicable, target the appropriate etcdCACertFile: '/Users//.bluemix/plugins/container-service/clusters/-admin/ca--.pem' ``` {: screen} -5. Confirm that the Kubernetes secrets for the cluster are encrypted. Replace the `cluster_name`, `etcdEndpoints`, `etcdCACertFile`, `etcdKeyFile`, and `etcdCertFile` fields with the values that you previously retrieved. The output is unreadable and scrambled, indicating that the secrets are encrypted. +5. Confirm that the Kubernetes secrets for the cluster are encrypted. Replace the `cluster_name`, `etcdEndpoints`, `etcdCACertFile`, `etcdKeyFile`, and `etcdCertFile` fields with the values that you previously retrieved. The output is unreadable and scrambled, indicating that the secrets are encrypted. ``` etcdctl get /registry/secrets/default/ --endpoints --cacert="" --key="" --cert="" ``` diff --git a/cs_firewall.md b/cs_firewall.md index 1b0713617..c3b672e91 100644 --- a/cs_firewall.md +++ b/cs_firewall.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, firewall, vyatta, ips @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Opening required ports and IP addresses in your firewall {: #firewall} @@ -550,7 +550,7 @@ If you want to access services that run inside or outside {{site.data.keyword.cl 1. [Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.](/docs/containers?topic=containers-cs_cli_install#cs_cli_configure) 2. Get the worker node subnets or the worker node IP addresses. - * **Worker node subnets**: If you anticipate changing the number of worker nodes in your cluster frequently, such as if you enable the [cluster autoscaler](/docs/containers?topic=containers-ca#ca), you might not want to update your firewall for each new worker node. Instead, you can whitelist the VLAN subnets that the cluster uses. Keep in mind that the VLAN subnet might be shared by worker nodes in other clusters. + * **Worker node subnets**: If you anticipate changing the number of worker nodes in your cluster frequently, such as if you enable the [cluster autoscaler](/docs/containers?topic=containers-ca#ca), you might not want to update your firewall for each new worker node. Instead, you can whitelist the VLAN subnets that the cluster uses. Keep in mind that the VLAN subnet might be shared by worker nodes in other clusters.

The **primary public subnets** that {{site.data.keyword.containerlong_notm}} provisions for your cluster come with 14 available IP addresses, and can be shared by other clusters on the same VLAN. When you have more than 14 worker nodes, another subnet is ordered, so the subnets that you need to whitelist can change. To reduce the frequency of change, create worker pools with worker node flavors of higher CPU and memory resources so that you don't need to add worker nodes as often.

1. List the worker nodes in your cluster. ``` diff --git a/cs_ha.md b/cs_ha.md index 8490ebbcf..409aa8618 100644 --- a/cs_ha.md +++ b/cs_ha.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-19" +lastupdated: "2019-11-26" keywords: kubernetes, iks, disaster recovery, dr, ha, hadr @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} @@ -41,7 +41,7 @@ You can achieve high availability on different levels in your IT infrastructure The {{site.data.keyword.containerlong_notm}} architecture and infrastructure is designed to ensure reliability, low processing latency, and a maximum uptime of the service. However, failures can happen. Depending on the service that you host in {{site.data.keyword.cloud_notm}}, you might not be able to tolerate failures, even if failures last for only a few minutes. {: shortdesc} -{{site.data.keyword.containerlong_notm}} provides several approaches to add more availability to your cluster by adding redundancy and anti-affinity. Review the following image to learn about potential points of failure and how to eliminate them. +{{site.data.keyword.containerlong_notm}} provides several approaches to add more availability to your cluster by adding redundancy and anti-affinity. Review the following image to learn about potential points of failure and how to eliminate them. Overview of fault domains in a high availability cluster within an {{site.data.keyword.cloud_notm}} region. diff --git a/cs_health.md b/cs_health.md index 5262f3f2c..ee9c910d5 100644 --- a/cs_health.md +++ b/cs_health.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, logmet, logs, metrics @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:external: target="_blank" .external} # Logging and monitoring diff --git a/cs_hybrid.md b/cs_hybrid.md index 4c2b68ebe..40a025a6c 100644 --- a/cs_hybrid.md +++ b/cs_hybrid.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-09-13" +lastupdated: "2019-11-26" keywords: kubernetes, iks, vpn, private cloud, icp @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:tsSymptoms: .tsSymptoms} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} diff --git a/cs_images.md b/cs_images.md index 847bbe3a6..f234f45ff 100644 --- a/cs_images.md +++ b/cs_images.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-19" +lastupdated: "2019-11-26" keywords: kubernetes, iks, registry, pull secret, secrets @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Building containers from images {: #images} @@ -75,7 +75,7 @@ You can build containers from trusted images that are signed and stored in {{sit ## Deploying containers from an {{site.data.keyword.registryshort_notm}} image to the `default` Kubernetes namespace {: #namespace} -You can deploy containers to your cluster from an IBM-provided public image or a private image that is stored in your {{site.data.keyword.registryshort_notm}} namespace. 
For more information about how your cluster accesses registry images, see [Understanding how your cluster is authorized to pull images from {{site.data.keyword.registrylong_notm}}](#cluster_registry_auth). +You can deploy containers to your cluster from an IBM-provided public image or a private image that is stored in your {{site.data.keyword.registryshort_notm}} namespace. For more information about how your cluster accesses registry images, see [Understanding how your cluster is authorized to pull images from {{site.data.keyword.registrylong_notm}}](#cluster_registry_auth). {:shortdesc} Before you begin: diff --git a/cs_ingress.md b/cs_ingress.md index a54fdf7b7..1ac2b2246 100644 --- a/cs_ingress.md +++ b/cs_ingress.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Setting up Ingress {: #ingress} @@ -1124,7 +1124,7 @@ When you enable the private ALBs, one private VPC load balancer is automatically Ingress resources define the routing rules that the ALB uses to route traffic to your app service. {: shortdesc} -If your cluster has multiple namespaces where apps are exposed, one Ingress resource is required for each namespace. The Ingress resource determines the host that is appended to your app and that builds the URL to access your app. The Ingress host must be unique in each Ingress resource that you create. The DNS subdomain that you created in the previous step is registered as a wildcard domain. You can use this domain to build multiple Ingress hosts for your Ingress resource. For example, if your subdomain is `mycluster-a1b2cdef345678g9hi012j3kl4567890-0001.us-south.containers.appdomain.cloud`, you can build multiple Ingress hosts by adding a custom value as a prefix to the subdomain, such as `example1.mycluster-a1b2cdef345678g9hi012j3kl4567890-0001.us-south.containers.appdomain.cloud`. +If your cluster has multiple namespaces where apps are exposed, one Ingress resource is required for each namespace. The Ingress resource determines the host that is appended to your app and that builds the URL to access your app. The Ingress host must be unique in each Ingress resource that you create. The DNS subdomain that you created in the previous step is registered as a wildcard domain. You can use this domain to build multiple Ingress hosts for your Ingress resource. For example, if your subdomain is `mycluster-a1b2cdef345678g9hi012j3kl4567890-0001.us-south.containers.appdomain.cloud`, you can build multiple Ingress hosts by adding a custom value as a prefix to the subdomain, such as `example1.mycluster-a1b2cdef345678g9hi012j3kl4567890-0001.us-south.containers.appdomain.cloud`. {: note} 1. Open your preferred editor and create an Ingress configuration file that is named, for example, `myingressresource.yaml`. diff --git a/cs_ingress_about.md b/cs_ingress_about.md index 0010f19d2..0e1e2d10d 100644 --- a/cs_ingress_about.md +++ b/cs_ingress_about.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # About Ingress ALBs {: #ingress-about} @@ -38,7 +38,7 @@ Ingress consists of three components: Ingress resources, application load balanc ### Ingress resource {: #ingress-resource} -To expose an app by using Ingress, you must create a Kubernetes service for your app and register this service with Ingress by defining an Ingress resource. 
The Ingress resource is a Kubernetes resource that defines the rules for how to route incoming requests for apps. +To expose an app by using Ingress, you must create a Kubernetes service for your app and register this service with Ingress by defining an Ingress resource. The Ingress resource is a Kubernetes resource that defines the rules for how to route incoming requests for apps. {: shortdesc} The Ingress resource also specifies the path to your app services. When you create a standard cluster, an Ingress subdomain is registered by default for your cluster in the format `.-0001..containers.appdomain.cloud`. The paths to your app services are appended to the public route to form a unique app URL such as `mycluster-a1b2cdef345678g9hi012j3kl4567890-0001.us-south.containers.appdomain.cloud/myapp1`. diff --git a/cs_ingress_health.md b/cs_ingress_health.md index afa981b1c..31dc60236 100644 --- a/cs_ingress_health.md +++ b/cs_ingress_health.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-22" +lastupdated: "2019-11-26" keywords: kubernetes, iks, ingress, alb, health, prometheus @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Logging and monitoring Ingress {: #ingress_health} @@ -175,7 +175,7 @@ Before you begin, ensure that you have the [**Writer** or **Manager** {{site.dat - For example, your log format might contain the following variables: + For example, your log format might contain the following variables: ``` apiVersion: v1 data: diff --git a/cs_ingress_settings.md b/cs_ingress_settings.md index 02f2329c6..534f81d26 100644 --- a/cs_ingress_settings.md +++ b/cs_ingress_settings.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Modifying default Ingress behavior @@ -36,7 +36,7 @@ After you expose your apps by creating an Ingress resource, you can further conf By default, only ports 80 and 443 are exposed in the Ingress ALB. To expose other ports, you can edit the `ibm-cloud-provider-ingress-cm` configmap resource. {: shortdesc} -1. Edit the configuration file for the `ibm-cloud-provider-ingress-cm` configmap resource. +1. Edit the configuration file for the `ibm-cloud-provider-ingress-cm` configmap resource. ``` kubectl edit cm ibm-cloud-provider-ingress-cm -n kube-system ``` diff --git a/cs_ingress_user_managed.md b/cs_ingress_user_managed.md index fd5cb1362..a8f92e475 100644 --- a/cs_ingress_user_managed.md +++ b/cs_ingress_user_managed.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Bringing your own Ingress controller {: #ingress-user_managed} @@ -34,7 +34,7 @@ The IBM-provided Ingress application load balancers (ALBs) are based on NGINX co ## Classic clusters: Exposing your Ingress controller by creating an NLB and a hostname {: #user_managed_nlb} -Create a network load balancer (NLB) to expose your custom Ingress controller deployment, and then create a hostname for the NLB IP address. +Create a network load balancer (NLB) to expose your custom Ingress controller deployment, and then create a hostname for the NLB IP address. 
{: shortdesc} In classic clusters, bringing your own Ingress controller is supported only for providing public external access to your apps and is not supported for providing private external access. diff --git a/cs_integrations_addons.md b/cs_integrations_addons.md index f02749ab5..a6cdc7ac2 100644 --- a/cs_integrations_addons.md +++ b/cs_integrations_addons.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-09-17" +lastupdated: "2019-11-26" keywords: kubernetes, iks, helm @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Adding services by using managed add-ons {: #managed-addons} diff --git a/cs_integrations_helm.md b/cs_integrations_helm.md index 775e4d63a..a275020fd 100644 --- a/cs_integrations_helm.md +++ b/cs_integrations_helm.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks, helm, without tiller, private cluster tiller, integrations, helm chart @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} diff --git a/cs_integrations_ibm_third_party.md b/cs_integrations_ibm_third_party.md index ab0b50291..73a1aac49 100644 --- a/cs_integrations_ibm_third_party.md +++ b/cs_integrations_ibm_third_party.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks, helm @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:external: target="_blank" .external} diff --git a/cs_integrations_overview.md b/cs_integrations_overview.md index 048596b58..c3c1814a1 100644 --- a/cs_integrations_overview.md +++ b/cs_integrations_overview.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-11" +lastupdated: "2019-11-26" keywords: kubernetes, iks, helm @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Supported IBM Cloud and third-party integrations diff --git a/cs_integrations_partnerships.md b/cs_integrations_partnerships.md index c906e4d5c..5447cb555 100644 --- a/cs_integrations_partnerships.md +++ b/cs_integrations_partnerships.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-10-17" +lastupdated: "2019-11-26" keywords: kubernetes, iks, helm @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # IBM Cloud Kubernetes Service partners diff --git a/cs_integrations_service_binding.md b/cs_integrations_service_binding.md index c082eb21f..7d88ba096 100644 --- a/cs_integrations_service_binding.md +++ b/cs_integrations_service_binding.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-09-24" +lastupdated: "2019-11-26" keywords: kubernetes, iks, helm, without tiller, private cluster tiller, integrations, helm chart @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} diff --git a/cs_istio.md b/cs_istio.md index 8f19496aa..7c73d9624 100644 --- a/cs_istio.md +++ b/cs_istio.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: 
"2019-11-19" +lastupdated: "2019-11-26" keywords: kubernetes, iks, envoy, sidecar, mesh, bookinfo @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Using the managed Istio add-on {: #istio} diff --git a/cs_kube_strategy.md b/cs_kube_strategy.md index 2e597ea6d..62b3ab9c2 100644 --- a/cs_kube_strategy.md +++ b/cs_kube_strategy.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks, containers @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Defining your Kubernetes strategy diff --git a/cs_limitations.md b/cs_limitations.md index 39238bd51..fd320ee90 100644 --- a/cs_limitations.md +++ b/cs_limitations.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Service limitations {: #limitations} @@ -37,10 +37,10 @@ If you anticipate reaching any of the following {{site.data.keyword.containerlon {: #tech_limits}
-{{site.data.keyword.containerlong_notm}} comes with the following service limitations that apply to all clusters, independent of what infrastructure provider you plan to use. +{{site.data.keyword.containerlong_notm}} comes with the following service limitations that apply to all clusters, independent of what infrastructure provider you plan to use. {: shortdesc} -In addition to the service limitations, make sure to also review the limitations for [classic](#classic_limits) or [VPC](#vpc_ks_limits) clusters. +In addition to the service limitations, make sure to also review the limitations for [classic](#classic_limits) or [VPC](#vpc_ks_limits) clusters. {: note} @@ -97,7 +97,7 @@ Classic infrastructure clusters in {{site.data.keyword.containerlong_notm}} are -
You can have a total of 250 IBM Cloud infrastructure file and block storage volumes per account. If you mount more than this amount, you might see an "out of capacity" message when you provision persistent volumes and need to contact your IBM Cloud infrastructure representative. For more FAQs, see the [file](/docs/infrastructure/FileStorage?topic=FileStorage-file-storage-faqs#how-many-volumes-can-i-provision-) and [block](/docs/infrastructure/BlockStorage?topic=BlockStorage-block-storage-faqs#how-many-instances-can-share-the-use-of-a-block-storage-volume-) storage docs.
+ ## VPC cluster limitations {: #vpc_ks_limits} @@ -154,6 +154,6 @@ VPC Generation 1 compute clusters in {{site.data.keyword.containerlong_notm}} ar VPC clusters use the [{{site.data.keyword.containerlong_notm}} v2 API](/docs/containers?topic=containers-cs_api_install#api_about). The v2 API is currently under development, with only a limited number of API operations currently available. You can run certain v1 API operations against the VPC cluster, such as `GET /v1/clusters` or `ibmcloud ks cluster ls`, but not all the information that a Classic cluster has is returned or you might experience unexpected results. For supported VPC v2 operations, see the [CLI reference topic for VPC commands](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cli_classic_vpc_about). - + diff --git a/cs_loadbalancer.md b/cs_loadbalancer.md index 749c921e7..9352ed57a 100644 --- a/cs_loadbalancer.md +++ b/cs_loadbalancer.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Classic: Setting up basic load balancing with an NLB 1.0 {: #loadbalancer} @@ -29,7 +29,7 @@ subcollection: containers Classic infrastructure provider icon Version 1.0 NLBs can be created in classic clusters only, and cannot be created in VPC clusters. To load balance in VPC clusters, see [Exposing apps with load balancers for VPC](/docs/containers?topic=containers-vpc-lbaas). {: note} -Expose a port and use a portable IP address for a Layer 4 network load balancer (NLB) to expose a containerized app. For information about version 1.0 NLBs, see [Components and architecture of an NLB 1.0](/docs/containers?topic=containers-loadbalancer-about#v1_planning). +Expose a port and use a portable IP address for a Layer 4 network load balancer (NLB) to expose a containerized app. For information about version 1.0 NLBs, see [Components and architecture of an NLB 1.0](/docs/containers?topic=containers-loadbalancer-about#v1_planning). {:shortdesc} To quickly get started, you can run the following command to create a load balancer 1.0: diff --git a/cs_loadbalancer_about.md b/cs_loadbalancer_about.md index 81d2d2c92..1d3f2e592 100644 --- a/cs_loadbalancer_about.md +++ b/cs_loadbalancer_about.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Classic: About network load balancers (NLBs) {: #loadbalancer-about} @@ -54,7 +54,7 @@ Version 1.0 and 2.0 NLBs are both Layer 4 load balancers that exist in the Linux **How are versions 1.0 and 2.0 NLBs different?** -When a client sends a request to your app, the NLB routes request packets to the worker node IP address where an app pod exists. Version 1.0 NLBs use network address translation (NAT) to rewrite the request packet's source IP address to the IP of worker node where a load balancer pod exists. When the worker node returns the app response packet, it uses that worker node IP where the NLB exists. The NLB must then send the response packet to the client. To prevent the IP address from being rewritten, you can [enable source IP preservation](/docs/containers?topic=containers-loadbalancer#node_affinity_tolerations). However, source IP preservation requires load balancer pods and app pods to run on the same worker so that the request doesn't have to be forwarded to another worker. You must add node affinity and tolerations to app pods. 
For more information about basic load balancing with version 1.0 NLBs, see [Components and architecture of an NLB 1.0](#v1_planning). +When a client sends a request to your app, the NLB routes request packets to the worker node IP address where an app pod exists. Version 1.0 NLBs use network address translation (NAT) to rewrite the request packet's source IP address to the IP of worker node where a load balancer pod exists. When the worker node returns the app response packet, it uses that worker node IP where the NLB exists. The NLB must then send the response packet to the client. To prevent the IP address from being rewritten, you can [enable source IP preservation](/docs/containers?topic=containers-loadbalancer#node_affinity_tolerations). However, source IP preservation requires load balancer pods and app pods to run on the same worker so that the request doesn't have to be forwarded to another worker. You must add node affinity and tolerations to app pods. For more information about basic load balancing with version 1.0 NLBs, see [Components and architecture of an NLB 1.0](#v1_planning). As opposed to version 1.0 NLBs, version 2.0 NLBs don't use NAT when forwarding requests to app pods on other workers. When an NLB 2.0 routes a client request, it uses IP over IP (IPIP) to encapsulate the original request packet into another packet. This encapsulating IPIP packet has a source IP of the worker node where the load balancer pod is, which allows the original request packet to preserve the client IP as its source IP address. The worker node then uses direct server return (DSR) to send the app response packet to the client IP. The response packet skips the NLB and is sent directly to the client, decreasing the amount of traffic that the NLB must handle. For more information about DSR load balancing with version 2.0 NLBs, see [Components and architecture of an NLB 2.0 (beta)](#planning_ipvs). diff --git a/cs_loadbalancer_dns.md b/cs_loadbalancer_dns.md index c597ec4c5..b9cf27a9f 100644 --- a/cs_loadbalancer_dns.md +++ b/cs_loadbalancer_dns.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Classic: Registering a DNS subdomain for an NLB {: #loadbalancer_hostname} @@ -29,7 +29,7 @@ subcollection: containers VPC infrastructure provider icon This content is specific to NLBs in classic clusters. For VPC clusters, see [Registering a VPC load balancer hostname with a DNS subdomain](/docs/containers?topic=containers-vpc-lbaas#vpc_dns). {: note} -After you set up network load balancers (NLBs), you can create DNS entries for the NLB IPs by creating subdomains. You can also set up TCP/HTTP(S) monitors to health check the NLB IP addresses behind each subdomain. +After you set up network load balancers (NLBs), you can create DNS entries for the NLB IPs by creating subdomains. You can also set up TCP/HTTP(S) monitors to health check the NLB IP addresses behind each subdomain. {: shortdesc}
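To make the subdomain and health-monitor flow concrete, the following sketch shows one way it might look from the CLI. The service name, cluster name, and IP are placeholders, and the `nlb-dns` subcommands and flags are assumptions; confirm them with `ibmcloud ks nlb-dns --help`.

```sh
# Find the external IP address of the NLB service that exposes your app.
kubectl get svc my-nlb-service -o wide

# Register a DNS subdomain for that NLB IP in a classic cluster
# (subcommand and flags are assumptions; check `ibmcloud ks nlb-dns --help`).
ibmcloud ks nlb-dns create classic --cluster mycluster --ip 169.xx.xxx.xxx

# Optionally configure and enable an HTTP health monitor for the subdomain (flag names assumed).
ibmcloud ks nlb-dns monitor configure --cluster mycluster \
  --nlb-host mycluster-<hash>-0001.us-south.containers.appdomain.cloud \
  --type HTTP --enable
```
{: pre}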
diff --git a/cs_loadbalancer_v2.md b/cs_loadbalancer_v2.md index ac8445d1a..460299ff5 100644 --- a/cs_loadbalancer_v2.md +++ b/cs_loadbalancer_v2.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, lb2.0, nlb @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Classic: Setting up DSR load balancing with an NLB 2.0 (beta) {: #loadbalancer-v2} diff --git a/cs_locations.md b/cs_locations.md index 88c494ae2..9c00871b4 100644 --- a/cs_locations.md +++ b/cs_locations.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-15" +lastupdated: "2019-11-26" keywords: kubernetes, iks, mzr, szr, multizone, multi az @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Locations {: #regions-and-zones} @@ -33,7 +33,7 @@ You can deploy {{site.data.keyword.containerlong}} clusters worldwide. When you _{{site.data.keyword.containerlong_notm}} locations_ -{{site.data.keyword.cloud_notm}} resources used to be organized into regions that were accessed via [region-specific endpoints](#bluemix_regions). Use the [global endpoint](#endpoint) instead. +{{site.data.keyword.cloud_notm}} resources used to be organized into regions that were accessed via [region-specific endpoints](#bluemix_regions). Use the [global endpoint](#endpoint) instead. {: deprecated} ## {{site.data.keyword.containerlong_notm}} locations diff --git a/cs_network_cluster.md b/cs_network_cluster.md index 3b7e468b6..927a64283 100644 --- a/cs_network_cluster.md +++ b/cs_network_cluster.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Changing service endpoints or VLAN connections for classic clusters {: #cs_network_cluster} @@ -35,7 +35,7 @@ After you initially set up your network when you [create a cluster](/docs/contai ## Setting up the private service endpoint {: #set-up-private-se} -Enable or disable the private service endpoint for your cluster. +Enable or disable the private service endpoint for your cluster. 
{: shortdesc} diff --git a/cs_network_planning.md b/cs_network_planning.md index b9cf3123f..ba3d69e36 100644 --- a/cs_network_planning.md +++ b/cs_network_planning.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, networking @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Planning in-cluster and external networking for apps {: #cs_network_planning} diff --git a/cs_network_policy.md b/cs_network_policy.md index a90328a0e..0408c7ddb 100644 --- a/cs_network_policy.md +++ b/cs_network_policy.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Controlling traffic with network policies {: #network_policies} @@ -506,7 +506,7 @@ To protect your cluster on the public network by using Calico policies: You can isolate your cluster from other systems on the private network by applying [Calico private network policies ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/IBM-Cloud/kube-samples/tree/master/calico-policies/private-network-isolation/calico-v3). {: shortdesc} -This set of Calico policies and host endpoints can isolate the private network traffic of a cluster from other resources in the account's private network, while allowing communication on the private network that is necessary for the cluster to function. For example, when you enable [VRF or VLAN spanning](/docs/containers?topic=containers-plan_clusters#worker-worker) to allow worker nodes to communicate with each other on the private network, any instance that is connected to any of the private VLANs in the same {{site.data.keyword.cloud_notm}} account can communicate with your worker nodes. +This set of Calico policies and host endpoints can isolate the private network traffic of a cluster from other resources in the account's private network, while allowing communication on the private network that is necessary for the cluster to function. For example, when you enable [VRF or VLAN spanning](/docs/containers?topic=containers-plan_clusters#worker-worker) to allow worker nodes to communicate with each other on the private network, any instance that is connected to any of the private VLANs in the same {{site.data.keyword.cloud_notm}} account can communicate with your worker nodes. To see a list of the ports that are opened by these policies and a list of the policies that are included, see the [README for the Calico public network policies ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/IBM-Cloud/kube-samples/blob/master/calico-policies/private-network-isolation/README.md). diff --git a/cs_nodeport.md b/cs_nodeport.md index 2ad7d4301..2576bedcb 100644 --- a/cs_nodeport.md +++ b/cs_nodeport.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Testing access to apps with NodePorts {: #nodeport} @@ -32,7 +32,7 @@ Make your containerized app available to internet access by using the public IP ## Managing network traffic by using NodePorts {: #nodeport_planning} -Expose a public port on your worker node and use the public IP address of the worker node to access your service in the cluster publicly from the internet. 
+Expose a public port on your worker node and use the public IP address of the worker node to access your service in the cluster publicly from the internet. {:shortdesc} diff --git a/cs_overview.md b/cs_overview.md index 0c6f78d4f..b8a926676 100644 --- a/cs_overview.md +++ b/cs_overview.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Overview {: #overview} @@ -52,7 +52,7 @@ With {{site.data.keyword.containerlong_notm}}, you can create your cluster of co [VPC clusters](/docs/containers?topic=containers-getting-started#vpc-classic-gs) are created in your own Virtual Private Cloud that gives you the security of a private cloud environment with the dynamic scalability of a public cloud. You use network access control lists to protect the subnets that your worker nodes are connected to. VPC clusters can be provisioned on shared virtual infrastructure only. -For more information, see [Overview of Classic and VPC infrastructure providers](/docs/containers?topic=containers-infrastructure_providers). +For more information, see [Overview of Classic and VPC infrastructure providers](/docs/containers?topic=containers-infrastructure_providers). **Where can I learn more about the service?**
Review the following links to find out more about the benefits and responsibilities when you use {{site.data.keyword.containerlong_notm}}. diff --git a/cs_performance.md b/cs_performance.md index 87819c529..11c7b6844 100644 --- a/cs_performance.md +++ b/cs_performance.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-01" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Tuning performance diff --git a/cs_pod_priority.md b/cs_pod_priority.md index 89f6c1614..fcc95455d 100644 --- a/cs_pod_priority.md +++ b/cs_pod_priority.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-20" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Setting pod priority @@ -31,7 +31,7 @@ With pod priority and preemption, you can configure priority classes to indicate {: shortdesc} **Why do I set pod priority?**
-As a cluster administrator, you want to control which pods are more critical to your cluster workload. Priority classes can help you control the Kubernetes scheduler decisions to favor higher priority pods over lower priority pods. The Kubernetes scheduler can even preempt (remove) lower priority pods that are running so that pending higher priority pods can be scheduled. +As a cluster administrator, you want to control which pods are more critical to your cluster workload. Priority classes can help you control the Kubernetes scheduler decisions to favor higher priority pods over lower priority pods. The Kubernetes scheduler can even preempt (remove) lower priority pods that are running so that pending higher priority pods can be scheduled. By setting pod priority, you can help prevent lower priority workloads from impacting critical workloads in your cluster, especially in cases where the cluster starts to reach its resource capacity. diff --git a/cs_providers.md b/cs_providers.md index 168ace294..557760464 100644 --- a/cs_providers.md +++ b/cs_providers.md @@ -3,7 +3,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-19" +lastupdated: "2019-11-26" keywords: kubernetes, iks, vpc, classic @@ -22,7 +22,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Supported infrastructure providers {: #infrastructure_providers} diff --git a/cs_psp.md b/cs_psp.md index 46b55fe2f..80b6ebdc9 100644 --- a/cs_psp.md +++ b/cs_psp.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-04" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Configuring pod security policies diff --git a/cs_remove.md b/cs_remove.md index 051501b9c..556e510af 100644 --- a/cs_remove.md +++ b/cs_remove.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-09-27" +lastupdated: "2019-11-26" keywords: kubernetes, iks, clusters, worker nodes, worker pools, delete @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:gif: data-image-type='gif'} # Removing clusters @@ -36,7 +36,7 @@ No backups are created of your cluster or your data in your persistent storage. Before you begin: * Note your cluster ID. You might need the cluster ID to investigate and remove related IBM Cloud infrastructure resources that are not automatically deleted with your cluster. -* If you want to delete the data in your persistent storage, [understand the delete options](/docs/containers?topic=containers-cleanup#cleanup). +* If you want to delete the data in your persistent storage, [understand the delete options](/docs/containers?topic=containers-cleanup#cleanup). * Make sure that you have the [**Administrator** {{site.data.keyword.cloud_notm}} IAM platform role](/docs/containers?topic=containers-users#platform). 
To remove a cluster: diff --git a/cs_responsibilities.md b/cs_responsibilities.md index 2d41ec631..2d4c40667 100644 --- a/cs_responsibilities.md +++ b/cs_responsibilities.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Your responsibilities with using {{site.data.keyword.containerlong_notm}} {: #responsibilities_iks} diff --git a/cs_secure.md b/cs_secure.md index 9b0f9fb28..aac6ba162 100644 --- a/cs_secure.md +++ b/cs_secure.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Security for {{site.data.keyword.containerlong_notm}} @@ -34,7 +34,7 @@ You can use built-in security features in {{site.data.keyword.containerlong}} fo ## Overview of security threats for your cluster {: #threats} -To protect your cluster from being compromised, you must understand potential security threats for your cluster and what you can do to reduce the exposure to vulnerabilities. +To protect your cluster from being compromised, you must understand potential security threats for your cluster and what you can do to reduce the exposure to vulnerabilities. {: shortdesc} Security threats for your cluster diff --git a/cs_sitemap.md b/cs_sitemap.md index 22d42f2de..4722d2b7a 100644 --- a/cs_sitemap.md +++ b/cs_sitemap.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" --- @@ -17,7 +17,7 @@ lastupdated: "2019-11-25" {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Site map diff --git a/cs_storage_basics.md b/cs_storage_basics.md index c4845de34..09ad6b746 100644 --- a/cs_storage_basics.md +++ b/cs_storage_basics.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Understanding Kubernetes storage basics {: #kube_concepts} @@ -144,7 +144,7 @@ Not finding what you are looking for? You can also create your own customized st If you cannot use one of the provided storage classes, you can create your own customized storage class. You might want to customize a storage class to specify configurations such as the zone, file system type, server type, or [volume binding mode ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode) options (block storage only). {: shortdesc} -1. Create a customized storage class. You can start by using one of the pre-defined storage classes, or check out our sample customized storage classes. +1. Create a customized storage class. You can start by using one of the pre-defined storage classes, or check out our sample customized storage classes. 
- Pre-defined storage classes: - [Classic File Storage](/docs/containers?topic=containers-file_storage#file_storageclass_reference) - [Classic Block Storage](/docs/containers?topic=containers-block_storage#block_storageclass_reference) diff --git a/cs_storage_block.md b/cs_storage_block.md index fb1721958..f2ead3430 100644 --- a/cs_storage_block.md +++ b/cs_storage_block.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Storing data on classic IBM Cloud Block Storage {: #block_storage} @@ -41,29 +41,6 @@ Install the {{site.data.keyword.cloud_notm}} Block Storage plug-in with a Helm c Before you begin: [Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.](/docs/containers?topic=containers-cs_cli_install#cs_cli_configure) -1. Make sure that your worker node applies the latest patch for your minor version to run your worker node with the latest security settings. The patch version also ensures that the root password on the worker node is renewed. - - If you did not apply updates or reload your worker node within the last 90 days, your root password on the worker node expires and the installation of the storage plug-in might fail. - {: note} - 1. List the current patch version of your worker nodes. - ``` - ibmcloud ks worker ls --cluster - ``` - {: pre} - - Example output: - ``` - OK - ID Public IP Private IP Machine Type State Status Zone Version - kube-dal10-crb1a23b456789ac1b20b2nc1e12b345ab-w26 169.xx.xxx.xxx 10.xxx.xx.xxx b3c.4x16.encrypted normal Ready dal10 1.14.9_1523* - ``` - {: screen} - - If your worker node does not apply the latest patch version, you see an asterisk (`*`) in the **Version** column of your CLI output. - - 2. Review the [version changelog](/docs/containers?topic=containers-changelog) to find the changes that are included in the latest patch version. - - 3. Apply the latest patch version by reloading your worker node. Follow the instructions in the [ibmcloud ks worker reload command](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_worker_reload) to gracefully reschedule any running pods on your worker node before you reload your worker node. Note that during the reload, your worker node machine is updated with the latest image and data is deleted if not [stored outside the worker node](/docs/containers?topic=containers-storage_planning#persistent_storage_overview). 1. [Follow the instructions](/docs/containers?topic=containers-helm#public_helm_install){: new_window} to install the Helm client on your local machine, and install the Helm server (Tiller) with a service account in your cluster. @@ -205,7 +182,7 @@ Before you begin: [Log in to your account. If applicable, target the appropriate ``` {: pre} -3. Find the name of the block storage Helm chart that you installed in your cluster. +3. Find the name of the block storage Helm chart that you installed in your cluster. 
``` helm ls | grep ibmcloud-block-storage-plugin ``` @@ -1593,7 +1570,7 @@ The following examples create a storage class that provisions block storage with ## Removing persistent storage from a cluster {: #cleanup} -When you set up persistent storage in your cluster, you have three main components: the Kubernetes persistent volume claim (PVC) that requests storage, the Kubernetes persistent volume (PV) that is mounted to a pod and described in the PVC, and the IBM Cloud infrastructure instance, such as classic file or block storage. Depending on how you created your storage, you might need to delete all three components separately. +When you set up persistent storage in your cluster, you have three main components: the Kubernetes persistent volume claim (PVC) that requests storage, the Kubernetes persistent volume (PV) that is mounted to a pod and described in the PVC, and the IBM Cloud infrastructure instance, such as classic file or block storage. Depending on how you created your storage, you might need to delete all three components separately. {:shortdesc} ### Understanding your storage removal options diff --git a/cs_storage_block_vpc.md b/cs_storage_block_vpc.md index 15ef60948..f4d8dad0c 100644 --- a/cs_storage_block_vpc.md +++ b/cs_storage_block_vpc.md @@ -3,7 +3,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks, vpc @@ -22,7 +22,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Storing data on {{site.data.keyword.block_storage_is_short}} (Gen 1 compute) {: #vpc-block} @@ -863,7 +863,7 @@ Use one of the IBM-provided storage classes as a basis to create your own custom ### Storing your custom PVC settings in a Kubernetes secret {: #vpc-block-storageclass-secret} -Specify your PVC settings in a Kubernetes secret and reference this secret in a customized storage class. Then, use the customized storage class to create a PVC with the custom parameters that you set in your secret. +Specify your PVC settings in a Kubernetes secret and reference this secret in a customized storage class. Then, use the customized storage class to create a PVC with the custom parameters that you set in your secret. {: shortdesc} **What options do I have to use the Kubernetes secret?**
@@ -1102,7 +1102,7 @@ To back up or restore data, choose between the following options: ## Removing persistent storage from a cluster {: #cleanup} -When you set up persistent storage in your cluster, you have three main components: the Kubernetes persistent volume claim (PVC) that requests storage, the Kubernetes persistent volume (PV) that is mounted to a pod and described in the PVC, and the IBM Cloud infrastructure instance, such as classic file or block storage. Depending on how you created your storage, you might need to delete all three components separately. +When you set up persistent storage in your cluster, you have three main components: the Kubernetes persistent volume claim (PVC) that requests storage, the Kubernetes persistent volume (PV) that is mounted to a pod and described in the PVC, and the IBM Cloud infrastructure instance, such as classic file or block storage. Depending on how you created your storage, you might need to delete all three components separately. {:shortdesc} ### Understanding your storage removal options diff --git a/cs_storage_cos.md b/cs_storage_cos.md index 254344ed6..5c4bc2455 100644 --- a/cs_storage_cos.md +++ b/cs_storage_cos.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Storing data on IBM Cloud Object Storage {: #object_storage} @@ -485,7 +485,7 @@ To install the plug-in: You can upgrade the existing {{site.data.keyword.cos_full_notm}} plug-in to the latest version. {: shortdesc} -1. If you previously installed version 1.0.4 or earlier of the Helm chart that is named `ibmcloud-object-storage-plugin`, remove this Helm installation from your cluster. Then, reinstall the Helm chart. +1. If you previously installed version 1.0.4 or earlier of the Helm chart that is named `ibmcloud-object-storage-plugin`, remove this Helm installation from your cluster. Then, reinstall the Helm chart. 1. Check whether the old version of the {{site.data.keyword.cos_full_notm}} Helm chart is installed in your cluster. ``` helm ls | grep ibmcloud-object-storage-plugin diff --git a/cs_storage_file.md b/cs_storage_file.md index 193919542..9a40a4bdc 100644 --- a/cs_storage_file.md +++ b/cs_storage_file.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-13" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,12 +21,12 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Storing data on classic IBM Cloud File Storage {: #file_storage} -{{site.data.keyword.cloud_notm}} File Storage is persistent, fast, and flexible network-attached, NFS-based file storage that you can add to your apps by using Kubernetes persistent volumes (PVs). You can choose between predefined storage tiers with GB sizes and IOPS that meet the requirements of your workloads. To find out if {{site.data.keyword.cloud_notm}} File Storage is the right storage option for you, see [Choosing a storage solution](/docs/containers?topic=containers-storage_planning#choose_storage_solution). For pricing information, see [Billing](/docs/infrastructure/FileStorage?topic=FileStorage-about#billing). +{{site.data.keyword.cloud_notm}} File Storage is persistent, fast, and flexible network-attached, NFS-based file storage that you can add to your apps by using Kubernetes persistent volumes (PVs). 
You can choose between predefined storage tiers with GB sizes and IOPS that meet the requirements of your workloads. To find out if {{site.data.keyword.cloud_notm}} File Storage is the right storage option for you, see [Choosing a storage solution](/docs/containers?topic=containers-storage_planning#choose_storage_solution). For pricing information, see [Billing](/docs/infrastructure/FileStorage?topic=FileStorage-about#billing). {: shortdesc} {{site.data.keyword.cloud_notm}} File Storage is available only in classic {{site.data.keyword.containerlong_notm}} clusters, and is not supported for VPC on Classic clusters. To use file storage in a private cluster that is set up without public network access, your cluster must run Kubernetes version 1.13 or higher. NFS file storage instances are specific to a single zone. If you have a multizone cluster, consider [multizone persistent storage options](/docs/containers?topic=containers-storage_planning#persistent_storage_overview). @@ -1538,7 +1538,7 @@ The following customized storage class lets you define the NFS version that you ## Removing persistent storage from a cluster {: #cleanup} -When you set up persistent storage in your cluster, you have three main components: the Kubernetes persistent volume claim (PVC) that requests storage, the Kubernetes persistent volume (PV) that is mounted to a pod and described in the PVC, and the IBM Cloud infrastructure instance, such as classic file or block storage. Depending on how you created your storage, you might need to delete all three components separately. +When you set up persistent storage in your cluster, you have three main components: the Kubernetes persistent volume claim (PVC) that requests storage, the Kubernetes persistent volume (PV) that is mounted to a pod and described in the PVC, and the IBM Cloud infrastructure instance, such as classic file or block storage. Depending on how you created your storage, you might need to delete all three components separately. {:shortdesc} ### Understanding your storage removal options diff --git a/cs_storage_planning.md b/cs_storage_planning.md index 00d4d4983..68b8b3220 100644 --- a/cs_storage_planning.md +++ b/cs_storage_planning.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-10-30" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Planning highly available persistent storage {: #storage_planning} @@ -30,7 +30,7 @@ subcollection: containers ## Choosing a storage solution {: #choose_storage_solution} -Before you can decide what type of storage is the right solution for your {{site.data.keyword.containerlong}} clusters, you must understand the {{site.data.keyword.cloud_notm}} infrastructure provider, your app requirements, the type of data that you want to store, and how often you want to access this data. +Before you can decide what type of storage is the right solution for your {{site.data.keyword.containerlong}} clusters, you must understand the {{site.data.keyword.cloud_notm}} infrastructure provider, your app requirements, the type of data that you want to store, and how often you want to access this data. 1. Decide whether your data must be permanently stored, or if your data can be removed at any time. - **Persistent storage:** Your data must still be available, even if the container, the worker node, or the cluster is removed. 
Use persistent storage in the following scenarios: diff --git a/cs_storage_portworx.md b/cs_storage_portworx.md index b8ee2afb8..4d9377bcf 100644 --- a/cs_storage_portworx.md +++ b/cs_storage_portworx.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-19" +lastupdated: "2019-11-26" keywords: openshift, roks, rhoks, rhos @@ -21,12 +21,12 @@ subcollection: openshift {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Storing data on software-defined storage (SDS) with Portworx {: #portworx} -[Portworx ![External link icon](../icons/launch-glyph.svg "External link icon")](https://portworx.com/products/introduction/) is a highly available software-defined storage solution that you can use to manage local persistent storage for your containerized databases and other stateful apps, or to share data between pods across multiple zones. +[Portworx ![External link icon](../icons/launch-glyph.svg "External link icon")](https://portworx.com/products/introduction/) is a highly available software-defined storage solution that you can use to manage local persistent storage for your containerized databases and other stateful apps, or to share data between pods across multiple zones. {: shortdesc} diff --git a/cs_storage_portworx_gs.md b/cs_storage_portworx_gs.md index aad7eba83..c23f63b07 100644 --- a/cs_storage_portworx_gs.md +++ b/cs_storage_portworx_gs.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-10-09" +lastupdated: "2019-11-26" keywords: kubernetes, iks, local persistent storage @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Getting started with Portworx diff --git a/cs_storage_utilities.md b/cs_storage_utilities.md index b66fc8e97..70d54ba1a 100644 --- a/cs_storage_utilities.md +++ b/cs_storage_utilities.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, local persistent storage @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} diff --git a/cs_subnets.md b/cs_subnets.md index 8740dcd30..7613196bb 100644 --- a/cs_subnets.md +++ b/cs_subnets.md @@ -21,12 +21,12 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Configuring subnets and IP addresses for classic clusters {: #subnets} -Change the pool of available portable public or private IP addresses for network load balancer (NLB) services by adding subnets to your {{site.data.keyword.containerlong}} cluster. +Change the pool of available portable public or private IP addresses for network load balancer (NLB) services by adding subnets to your {{site.data.keyword.containerlong}} cluster. {:shortdesc} Classic infrastructure provider icon The content on this page is specific to classic clusters. For information about VPC clusters, see [Understanding network basics of VPC clusters](/docs/containers?topic=containers-plan_clusters#vpc_basics). 
diff --git a/cs_tech.md b/cs_tech.md index d6fb159e3..0c5469e4f 100644 --- a/cs_tech.md +++ b/cs_tech.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-19" +lastupdated: "2019-11-26" keywords: kubernetes, iks, docker, containers @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Service architecture diff --git a/cs_troubleshoot.md b/cs_troubleshoot.md index a6d92183f..aa4d7cc0b 100644 --- a/cs_troubleshoot.md +++ b/cs_troubleshoot.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, help, debug @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:tsSymptoms: .tsSymptoms} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} diff --git a/cs_troubleshoot_clusters.md b/cs_troubleshoot_clusters.md index 1ab21911f..5bc400270 100644 --- a/cs_troubleshoot_clusters.md +++ b/cs_troubleshoot_clusters.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, ImagePullBackOff, registry, image, failed to pull image, debug @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:tsSymptoms: .tsSymptoms} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} diff --git a/cs_troubleshoot_debug_ingress.md b/cs_troubleshoot_debug_ingress.md index 9a445bd01..ecb1fa4c9 100644 --- a/cs_troubleshoot_debug_ingress.md +++ b/cs_troubleshoot_debug_ingress.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-22" +lastupdated: "2019-11-26" keywords: kubernetes, iks, nginx, ingress controller, help @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:tsSymptoms: .tsSymptoms} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} diff --git a/cs_troubleshoot_health.md b/cs_troubleshoot_health.md index 3f2043616..1937380c7 100644 --- a/cs_troubleshoot_health.md +++ b/cs_troubleshoot_health.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-10-01" +lastupdated: "2019-11-26" keywords: kubernetes, iks, logging, help, debug @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:tsSymptoms: .tsSymptoms} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} diff --git a/cs_troubleshoot_network.md b/cs_troubleshoot_network.md index 58fd9a36e..f0c62562f 100644 --- a/cs_troubleshoot_network.md +++ b/cs_troubleshoot_network.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-22" +lastupdated: "2019-11-26" keywords: kubernetes, iks, help, network, connectivity @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:tsSymptoms: .tsSymptoms} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} diff --git a/cs_troubleshoot_storage.md b/cs_troubleshoot_storage.md index 862dd1df3..dadffc55d 100644 --- a/cs_troubleshoot_storage.md +++ b/cs_troubleshoot_storage.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: 
.preview} +{:preview: .preview} {:tsSymptoms: .tsSymptoms} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} @@ -470,7 +470,7 @@ When you include an [init container![External link icon](../icons/launch-glyph.s ``` {: pre} -6. Verify that the volume is successfully mounted to your pod. Note the pod name and **Containers/Mounts** path. +6. Verify that the volume is successfully mounted to your pod. Note the pod name and **Containers/Mounts** path. ``` kubectl describe pod ``` diff --git a/cs_tutorial_vpc.md b/cs_tutorial_vpc.md index 3b276ee12..d6535243b 100644 --- a/cs_tutorial_vpc.md +++ b/cs_tutorial_vpc.md @@ -3,7 +3,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, vpc @@ -22,7 +22,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Creating a classic cluster in your Virtual Private Cloud (VPC) {: #vpc_ks_tutorial} diff --git a/cs_tutorials.md b/cs_tutorials.md index 290b6ccc3..5c3fa0412 100644 --- a/cs_tutorials.md +++ b/cs_tutorials.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} diff --git a/cs_tutorials_byoc.md b/cs_tutorials_byoc.md index b23db2f24..44b21a207 100644 --- a/cs_tutorials_byoc.md +++ b/cs_tutorials_byoc.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Set up a DevOps delivery pipeline for your app diff --git a/cs_tutorials_cf.md b/cs_tutorials_cf.md index 82c61fdb6..71368fdeb 100644 --- a/cs_tutorials_cf.md +++ b/cs_tutorials_cf.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Migrating an app from Cloud Foundry to a cluster diff --git a/cs_tutorials_ov.md b/cs_tutorials_ov.md index 88e2377d9..2ab907af8 100644 --- a/cs_tutorials_ov.md +++ b/cs_tutorials_ov.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Tutorial overview diff --git a/cs_tutorials_policies.md b/cs_tutorials_policies.md index ebee07ae1..6e6635c55 100644 --- a/cs_tutorials_policies.md +++ b/cs_tutorials_policies.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Using Calico network policies to block traffic diff --git a/cs_tutorials_starter.md b/cs_tutorials_starter.md index 327d3fa34..6e3e3b10f 100644 --- a/cs_tutorials_starter.md +++ b/cs_tutorials_starter.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-10-22" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: 
.deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Deploy a starter kit app to a Kubernetes cluster {: #tutorial-starterkit-kube} diff --git a/cs_uc_finance.md b/cs_uc_finance.md index 5d467dd16..c2855536b 100644 --- a/cs_uc_finance.md +++ b/cs_uc_finance.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-15" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Financial services use cases for {{site.data.keyword.cloud_notm}} {: #cs_uc_finance} @@ -33,7 +33,7 @@ take advantage of high availability, high-performance compute, easy spin-up of c ## Mortgage company trims costs and accelerates regulatory compliance {: #uc_mortgage} -A Risk Management VP for a residential mortgage company processes 70 million records a day, but the on-premises system was slow and also inaccurate. IT expenses soared because hardware quickly went out of date and wasn't utilized fully. While they waited for hardware provisioning, their regulatory compliance slowed. +A Risk Management VP for a residential mortgage company processes 70 million records a day, but the on-premises system was slow and also inaccurate. IT expenses soared because hardware quickly went out of date and wasn't utilized fully. While they waited for hardware provisioning, their regulatory compliance slowed. {: shortdesc} Why {{site.data.keyword.cloud_notm}}: To improve risk analysis, the company looked to {{site.data.keyword.containerlong_notm}} and IBM Cloud Analytic services to reduce costs, increase worldwide availability, and ultimately accelerate regulatory compliance. With {{site.data.keyword.containerlong_notm}} in multiple regions, their analysis apps can be containerized and deployed across the globe, improving availability and addressing local regulations. Those deployments are accelerated with familiar open source tools, already part of {{site.data.keyword.containerlong_notm}}. diff --git a/cs_uc_gov.md b/cs_uc_gov.md index 04394c2cf..2b2da9615 100644 --- a/cs_uc_gov.md +++ b/cs_uc_gov.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-08-23" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Government use cases for {{site.data.keyword.cloud_notm}} {: #cs_uc_gov} @@ -32,7 +32,7 @@ These use cases highlight how workloads on {{site.data.keyword.containerlong}} b ## Regional government improves collaboration and velocity with community Developers who combine public-private data {: #uc_data_mashup} -An Open-Government Data Program Executive needs to share public data with the community and private sector, but the data is locked in an on-premises monolithic system. +An Open-Government Data Program Executive needs to share public data with the community and private sector, but the data is locked in an on-premises monolithic system. {: shortdesc} Why {{site.data.keyword.cloud_notm}}: With {{site.data.keyword.containerlong_notm}}, the Exec delivers the transformative value of combined public-private data. Likewise, the service provides the public cloud platform to refactor and expose microservices from monolithic on-premises apps. 
Also, the public cloud allows government and the public partnerships to use external cloud services and collaboration-friendly open-source tools. diff --git a/cs_uc_health.md b/cs_uc_health.md index 7544bba1e..47a770622 100644 --- a/cs_uc_health.md +++ b/cs_uc_health.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-09-12" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Healthcare use cases for {{site.data.keyword.cloud_notm}} {: #cs_uc_health} @@ -33,7 +33,7 @@ These use cases highlight how workloads on {{site.data.keyword.containerlong}} b ## Healthcare provider migrates workloads from inefficient VMs to Ops-friendly containers for reporting and patient systems {: #uc_migrate} -An IT Exec for a healthcare provider has business reporting and patient systems on-premises. Those systems go through slow enhancement cycles, which leads to stagnant patient service levels. +An IT Exec for a healthcare provider has business reporting and patient systems on-premises. Those systems go through slow enhancement cycles, which leads to stagnant patient service levels. {: shortdesc} Why {{site.data.keyword.cloud_notm}}: To improve patient service, the provider looked to {{site.data.keyword.containerlong_notm}} and {{site.data.keyword.contdelivery_full}} to reduce IT spend and accelerate development, all on a secure platform. The provider’s high-use SaaS systems, which held both patient record systems and business report apps, needed updates frequently. Yet the on-premises environment hindered agile development. The provider also wanted to counteract increasing labor costs and a decreasing budget. diff --git a/cs_uc_intro.md b/cs_uc_intro.md index 21e7abfc8..096214d2e 100644 --- a/cs_uc_intro.md +++ b/cs_uc_intro.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-07-31" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} diff --git a/cs_uc_retail.md b/cs_uc_retail.md index 158dc53a7..73793c95c 100644 --- a/cs_uc_retail.md +++ b/cs_uc_retail.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-08-22" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Retail use cases for {{site.data.keyword.cloud_notm}} {: #cs_uc_retail} @@ -32,7 +32,7 @@ These use cases highlight how workloads on {{site.data.keyword.containerlong}} c ## Brick-and-mortar retailer shares data, by using APIs with global business partners to drive omnichannel sales {: #uc_data-share} -A Line-of-Business (LOB) Exec needs to increase sales channels, but the retail system is closed off in an on-premises data center. The competition has global business partners to cross-sell and upsell permutations of their goods: across brick-and-mortar and online sites. +A Line-of-Business (LOB) Exec needs to increase sales channels, but the retail system is closed off in an on-premises data center. The competition has global business partners to cross-sell and upsell permutations of their goods: across brick-and-mortar and online sites. 
{: shortdesc} Why {{site.data.keyword.cloud_notm}}: {{site.data.keyword.containerlong_notm}} provides a public-cloud ecosystem, where containers enable new business partners and other external players to co-develop apps and data, through APIs. Now that the retail system is on the public cloud, APIs also streamline data sharing and jump-start new app development. App deployments increase when Developers experiment easily, pushing changes to Development and Test systems quickly with toolchains. diff --git a/cs_uc_transport.md b/cs_uc_transport.md index 0c296de4a..6ef61808d 100644 --- a/cs_uc_transport.md +++ b/cs_uc_transport.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-08-23" +lastupdated: "2019-11-26" keywords: kubernetes, iks @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Transportation use cases for {{site.data.keyword.cloud_notm}} {: #cs_uc_transport} @@ -34,7 +34,7 @@ take advantage of toolchains for rapid app updates and multiregion deployments a ## Shipping company increases availability of worldwide systems for business partner ecosystem {: #uc_shipping} -An IT Exec has worldwide shipping routing and scheduling systems that partners interact with. Partners require up-to-the-minute information from these systems that access IoT device data. But, these systems were unable to scale across the globe with sufficient HA. +An IT Exec has worldwide shipping routing and scheduling systems that partners interact with. Partners require up-to-the-minute information from these systems that access IoT device data. But, these systems were unable to scale across the globe with sufficient HA. {: shortdesc} Why {{site.data.keyword.cloud_notm}}: {{site.data.keyword.containerlong_notm}} scales containerized apps with five 9s of availability to meet growing demands. App deployments occur 40 times daily when Developers experiment easily, pushing changes to Development and Test systems quickly. The IoT Platform makes access to IoT data easy. diff --git a/cs_users.md b/cs_users.md index fddd9fda3..84e22fa9f 100644 --- a/cs_users.md +++ b/cs_users.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Assigning cluster access {: #users} @@ -1094,7 +1094,7 @@ Before you begin: * Make sure that you are the account owner or have **Super User** and all device access. You can't grant a user access that you don't have. * Review the [required and suggested classic infrastructure permissions](/docs/containers?topic=containers-access_reference#infra). -You can grant classic infrastructure access through the [console](#infra_console) or [CLI](#infra_cli). +You can grant classic infrastructure access through the [console](#infra_console) or [CLI](#infra_cli). 
### Assigning infrastructure access through the console {: #infra_console} diff --git a/cs_versions_addons.md b/cs_versions_addons.md index 05697d330..5579285c2 100644 --- a/cs_versions_addons.md +++ b/cs_versions_addons.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, nginx, ingress controller, fluentd @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Fluentd and Ingress ALB changelog {: #cluster-add-ons-changelog} diff --git a/cs_versions_changelog.md b/cs_versions_changelog.md index c3d5fbd67..29ba6cbd0 100644 --- a/cs_versions_changelog.md +++ b/cs_versions_changelog.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, versions, update, upgrade, BOM, bill of materials, versions, patch @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:external: target="_blank" .external} # Version changelog diff --git a/cs_vpn.md b/cs_vpn.md index e5ec20339..bba2f2731 100644 --- a/cs_vpn.md +++ b/cs_vpn.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-19" +lastupdated: "2019-11-26" keywords: kubernetes, iks, vyatta, strongswan, ipsec, on-prem @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Setting up VPN connectivity {: #vpn} @@ -34,7 +34,7 @@ subcollection: containers With VPN connectivity, you can securely connect apps in a Kubernetes cluster on {{site.data.keyword.containerlong}} to an on-premises network. You can also connect apps that are external to your cluster to an app that runs inside your cluster. {:shortdesc} -To connect your worker nodes and apps to an on-premises data center, you can configure one of the following options. +To connect your worker nodes and apps to an on-premises data center, you can configure one of the following options. - **strongSwan IPSec VPN Service**: You can set up a [strongSwan IPSec VPN service ![External link icon](../icons/launch-glyph.svg "External link icon")](https://www.strongswan.org/about.html) that securely connects your Kubernetes cluster with an on-premises network. The strongSwan IPSec VPN service provides a secure end-to-end communication channel over the internet that is based on the industry-standard Internet Protocol Security (IPSec) protocol suite. To set up a secure connection between your cluster and an on-premises network, [configure and deploy the strongSwan IPSec VPN service](#vpn-setup) directly in a pod in your cluster. diff --git a/cs_why.md b/cs_why.md index 8d05beb91..e1a346ac6 100644 --- a/cs_why.md +++ b/cs_why.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Benefits and service offerings {: #cs_ov} @@ -130,7 +130,7 @@ If you have a free cluster and want to upgrade to a standard cluster, you can [c ## Comparison between OpenShift and community Kubernetes clusters {: #openshift_kubernetes} -Both OpenShift and community Kubernetes clusters are production-ready container platforms that are tailored for enterprise workloads. 
The following table compares and contrasts some common characteristics that can help you choose which container platform is right for your use case. +Both OpenShift and community Kubernetes clusters are production-ready container platforms that are tailored for enterprise workloads. The following table compares and contrasts some common characteristics that can help you choose which container platform is right for your use case. {: shortdesc} |Characteristics|Community Kubernetes clusters|OpenShift clusters| diff --git a/cs_worker_add.md b/cs_worker_add.md index 7ec0984ef..da3b1b2e4 100644 --- a/cs_worker_add.md +++ b/cs_worker_add.md @@ -21,13 +21,13 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:gif: data-image-type='gif'} # Adding worker nodes and zones to clusters {: #add_workers} -To increase the availability of your apps, you can add worker nodes to an existing zone or multiple existing zones in your cluster. To help protect your apps from zone failures, you can add zones to your cluster. +To increase the availability of your apps, you can add worker nodes to an existing zone or multiple existing zones in your cluster. To help protect your apps from zone failures, you can add zones to your cluster. {:shortdesc} When you create a cluster, the worker nodes are provisioned in a worker pool. After cluster creation, you can add more worker nodes to a pool by resizing it or by adding more worker pools. By default, the worker pool exists in one zone. Clusters that have a worker pool in only one zone are called single zone clusters. When you add more zones to the cluster, the worker pool exists across the zones. Clusters that have a worker pool that is spread across more than one zone are called multizone clusters. diff --git a/cs_worker_plan.md b/cs_worker_plan.md index 58d6842c3..534108df8 100644 --- a/cs_worker_plan.md +++ b/cs_worker_plan.md @@ -21,12 +21,12 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Planning your worker node setup {: #planning_worker_nodes} -{{site.data.keyword.containerlong}} provides different worker node flavors and isolation levels so that you can choose the flavor and isolation that best meet the requirements of the workloads that you want to run in the cloud. +{{site.data.keyword.containerlong}} provides different worker node flavors and isolation levels so that you can choose the flavor and isolation that best meet the requirements of the workloads that you want to run in the cloud. {:shortdesc} A worker node flavor describes the compute resources, such as CPU, memory, and disk capacity that you get when you provision your worker node. Worker nodes of the same flavor are grouped in worker node pools. The total number of worker nodes in a cluster determine the compute capacity that is available to your apps in the cluster. 
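To make the worker-pool mechanics above concrete, the following sketch shows how a resize might look from the CLI. Treat the command and flag names as assumptions for this plug-in version; verify them with `ibmcloud ks worker-pool resize --help`, and see `ibmcloud ks zone add classic --help` for spreading a pool across more zones.

```
# Run 3 worker nodes in each zone that the worker pool spans
ibmcloud ks worker-pool resize --cluster <cluster_name_or_ID> --worker-pool <pool_name> --size-per-zone 3
```
{: pre}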
diff --git a/faqs.md b/faqs.md index 4d4b8a54a..046b4cdac 100644 --- a/faqs.md +++ b/faqs.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, compliance, security standards, faq, kubernetes pricing, kubernetes service pricing, ibm cloud kubernetes service pricing, iks pricing, kubernetes charges, kubernetes service charges, ibm cloud kubernetes service charges, iks charges, kubernetes price, kubernetes service price, ibm cloud kubernetes service price, iks price, kubernetes billing, kubernetes service billing, ibm cloud kubernetes service billing, iks billing, kubernetes costs, kubernetes service costs, ibm cloud kubernetes service costs, iks costs @@ -19,7 +19,7 @@ subcollection: containers {:tip: .tip} {:note: .note} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:faq: data-hd-content-type='faq'} diff --git a/getting-started.md b/getting-started.md index 851c8f330..ab76878ac 100644 --- a/getting-started.md +++ b/getting-started.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, containers @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} diff --git a/kubernetes-service-cli.md b/kubernetes-service-cli.md index 7147db120..e28fd0335 100644 --- a/kubernetes-service-cli.md +++ b/kubernetes-service-cli.md @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # {{site.data.keyword.containerlong_notm}} CLI {: #kubernetes-service-cli} @@ -1812,7 +1812,7 @@ workerNum: <number_workers> diskEncryption: false
--hardware HARDWARE
-The level of hardware isolation for your worker node. Use `dedicated` so that available physical resources are dedicated to you only, or `shared` to allow physical resources to be shared with other IBM customers. The default is `shared`. This value is optional. For bare metal flavors, specify `dedicated`.
+The level of hardware isolation for your worker node. Use `dedicated` so that available physical resources are dedicated to you only, or `shared` to allow physical resources to be shared with other IBM customers. The default is `shared`. This value is optional. For bare metal flavors, specify `dedicated`.
--machine-type FLAVOR
Choose a machine type, or flavor, for your worker nodes. You can deploy your worker nodes as virtual machines on shared or dedicated hardware, or as physical machines on bare metal. Available physical and virtual machine types vary by the zone in which you deploy the cluster. For more information, see the documentation for the `ibmcloud ks flavors (machine-types)` [command](#cs_machine_types). This value is required for standard clusters and is not available for free clusters.
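As a sketch of how the `--hardware` and `--machine-type` options fit into a cluster creation request, consider the following example. The `cluster create classic` command form, the zone, and the flavor are assumptions for illustration only; confirm the exact syntax for your plug-in version with `ibmcloud ks cluster create classic --help`.

```
# Create a standard classic cluster with 3 dedicated virtual worker nodes in one zone
ibmcloud ks cluster create classic --name my_cluster --zone dal10 --machine-type b3c.4x16 --hardware dedicated --workers 3
```
{: pre}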
diff --git a/release_notes.md b/release_notes.md index afe8fed84..a4578abf2 100644 --- a/release_notes.md +++ b/release_notes.md @@ -21,12 +21,12 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Release notes {: #iks-release} -Use the release notes to learn about the latest changes to the {{site.data.keyword.containerlong}} documentation that are grouped by month. +Use the release notes to learn about the latest changes to the {{site.data.keyword.containerlong}} documentation that are grouped by month. {:shortdesc} ## November 2019 diff --git a/vpc_dns.md b/vpc_dns.md index 15b76cc63..6c923c475 100644 --- a/vpc_dns.md +++ b/vpc_dns.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-25" +lastupdated: "2019-11-26" keywords: kubernetes, iks, coredns, dns @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Configuring CoreDNS for VPC clusters diff --git a/vpc_firewall.md b/vpc_firewall.md index 5df56e5ff..a23282ac4 100644 --- a/vpc_firewall.md +++ b/vpc_firewall.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, firewall, ips @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Opening required ports and IP addresses in your firewall {: #vpc-firewall} @@ -249,7 +249,7 @@ Although Calico policies are supported in VPC clusters, you can remain VPC-nativ ## Whitelisting your cluster in other services' firewalls or in on-premises firewalls {: #vpc-whitelist_workers} -If you want to access services that run inside or outside {{site.data.keyword.cloud_notm}} or on-premises and that are protected by a firewall, you can add the IP addresses of your worker nodes in that firewall to allow outbound network traffic to your cluster. For example, you might want to read data from an {{site.data.keyword.cloud_notm}} database that is protected by a firewall, or whitelist your worker node subnets in an on-premises firewall to allow network traffic from your cluster. +If you want to access services that run inside or outside {{site.data.keyword.cloud_notm}} or on-premises and that are protected by a firewall, you can add the IP addresses of your worker nodes in that firewall to allow outbound network traffic to your cluster. For example, you might want to read data from an {{site.data.keyword.cloud_notm}} database that is protected by a firewall, or whitelist your worker node subnets in an on-premises firewall to allow network traffic from your cluster. {:shortdesc} 1. [Log in to your account. If applicable, target the appropriate resource group. 
Set the context for your cluster.](/docs/containers?topic=containers-cs_cli_install#cs_cli_configure) diff --git a/vpc_lbaas.md b/vpc_lbaas.md index e2bf58f04..a1d5a3378 100644 --- a/vpc_lbaas.md +++ b/vpc_lbaas.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, vpc lbaas, @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # VPC: Exposing apps with VPC load balancers {: #vpc-lbaas} @@ -34,7 +34,7 @@ Set up a Load Balancer for VPC to expose your app on the public or private netwo ## About VPC load balancing in {{site.data.keyword.containerlong_notm}} {: #lbaas_about} -When you create a Kubernetes `LoadBalancer` service for an app in your cluster, a layer 7 [Load Balancer for VPC](/docs/vpc-on-classic-network?topic=vpc-on-classic-network---using-load-balancers-in-ibm-cloud-vpc) is automatically created in your VPC outside of your cluster. The load balancer is multizonal and routes requests for your app through the private NodePorts that are automatically opened on your worker nodes. +When you create a Kubernetes `LoadBalancer` service for an app in your cluster, a layer 7 [Load Balancer for VPC](/docs/vpc-on-classic-network?topic=vpc-on-classic-network---using-load-balancers-in-ibm-cloud-vpc) is automatically created in your VPC outside of your cluster. The load balancer is multizonal and routes requests for your app through the private NodePorts that are automatically opened on your worker nodes. {: shortdesc} The VPC load balancer serves as the external entry point for incoming requests for the app. diff --git a/vpc_network_policy.md b/vpc_network_policy.md index 1f07d8a05..e14fe2df5 100644 --- a/vpc_network_policy.md +++ b/vpc_network_policy.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-21" +lastupdated: "2019-11-26" keywords: kubernetes, iks, firewall, acl, acls, access control list, rules, security group @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Controlling traffic with VPC ACLs and network policies {: #vpc-network-policy} @@ -29,7 +29,7 @@ subcollection: containers VPC infrastructure provider icon This ACL information is specific to VPC clusters. For network policy information for classic clusters, see [Controlling traffic with network policies](/docs/containers?topic=containers-network_policies). {: note} -If you have unique security requirements, you can control traffic to and from your cluster with VPC access control lists (ACLs) and traffic between pods in your cluster with Kubernetes network policies. +If you have unique security requirements, you can control traffic to and from your cluster with VPC access control lists (ACLs) and traffic between pods in your cluster with Kubernetes network policies. {: shortdesc}
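For the pod-to-pod side of the traffic controls described above, standard Kubernetes network policies apply. The following is a minimal sketch that assumes hypothetical `app: frontend` and `app: backend` labels and a backend port of 8080; adjust the selectors and ports to match your workloads.

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  # Select the pods that this policy protects
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  # Allow ingress only from pods labeled app: frontend, on TCP port 8080
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```
{: codeblock}

Apply the policy with `kubectl apply -f <filename>.yaml`. After the backend pods are selected by this policy, ingress from pods other than the frontend is denied, while VPC ACLs continue to govern traffic that enters or leaves the cluster's subnets.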
diff --git a/vpc_ts.md b/vpc_ts.md index fda1b370d..5a3ad9a70 100644 --- a/vpc_ts.md +++ b/vpc_ts.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks, vpc @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} {:tsSymptoms: .tsSymptoms} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} @@ -36,7 +36,7 @@ Review some known issues or common error messages that you might encounter when {: #vpc_ts_lb} {: tsSymptoms} -You publicly exposed your app by creating a Kubernetes `LoadBalancer` service in your VPC cluster. When you try to connect to your app by using the hostname that is assigned to the Kubernetes `LoadBalancer`, the connection fails or times out. +You publicly exposed your app by creating a Kubernetes `LoadBalancer` service in your VPC cluster. When you try to connect to your app by using the hostname that is assigned to the Kubernetes `LoadBalancer`, the connection fails or times out. When you run `kubectl describe svc `, you see a warning message similar to one of the following in the **Events** section: ``` diff --git a/vpc_vpn.md b/vpc_vpn.md index 294d107b3..6f46706ba 100644 --- a/vpc_vpn.md +++ b/vpc_vpn.md @@ -2,7 +2,7 @@ copyright: years: 2014, 2019 -lastupdated: "2019-11-14" +lastupdated: "2019-11-26" keywords: kubernetes, iks, strongswan, ipsec, on-prem, vpnaas, direct link @@ -21,7 +21,7 @@ subcollection: containers {:important: .important} {:deprecated: .deprecated} {:download: .download} -{:preview: .preview} +{:preview: .preview} # Setting up VPC VPN connectivity {: #vpc-vpnaas} @@ -29,7 +29,7 @@ subcollection: containers VPC infrastructure provider icon This VPN information is specific to VPC clusters. For VPN information for classic clusters, see [Setting up VPN connectivity](/docs/containers?topic=containers-vpn). {: note} -With VPN connectivity, you can securely connect apps and services in a VPC cluster in {{site.data.keyword.containerlong}} to on-premises networks, other VPCs, and {{site.data.keyword.cloud_notm}} classic infrastructure resources. You can also connect apps that are external to your cluster to an app that runs inside your cluster. +With VPN connectivity, you can securely connect apps and services in a VPC cluster in {{site.data.keyword.containerlong}} to on-premises networks, other VPCs, and {{site.data.keyword.cloud_notm}} classic infrastructure resources. You can also connect apps that are external to your cluster to an app that runs inside your cluster. {: shortdesc} ## Choosing a VPN solution