Commit 705d0b2

Derek Poindexter: Update run_travis.sh

alchemyDocs committed Nov 26, 2019
1 parent eae8a3f commit 705d0b2
Showing 27 changed files with 225 additions and 225 deletions.
58 changes: 29 additions & 29 deletions cs_access_reference.md

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions cs_cluster_plan_ha.md
@@ -2,7 +2,7 @@

copyright:
years: 2014, 2019
-lastupdated: "2019-11-25"
+lastupdated: "2019-11-26"

keywords: kubernetes, iks, multi az, multi-az, szr, mzr

@@ -42,10 +42,10 @@ Your users are less likely to experience downtime when you distribute your apps
{: #single_zone}

Single zone clusters can be created in one of the supported [single zone cities or multizone metro locations](/docs/containers?topic=containers-regions-and-zones#zones). To improve availability for your app and to allow failover in case a worker node becomes unavailable, add more worker nodes to your single zone cluster.
-{: shortdesc}
+{: shortdesc}<roks311-vpc>

<img src="images/icon-vpc.png" alt="VPC infrastructure provider icon" width="15" style="width:15px; border-style: none"/> VPC clusters are supported only in [multizone metro locations](/docs/containers?topic=containers-regions-and-zones#zones). If your cluster must reside in one of the single zone cities, create a classic cluster instead.
-{: note}
+{: note}</roks311-vpc>
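
For example, to add more worker nodes to an existing pool in a single zone cluster (a sketch; the pool name `default` and the exact `ibmcloud ks worker-pool resize` flags are assumptions to verify against your CLI plug-in version):
```
ibmcloud ks worker-pool resize --cluster <cluster_name_or_ID> --worker-pool default --size-per-zone 3
```
{: pre}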

<img src="images/cs_cluster_singlezone.png" alt="High availability for clusters in a single zone" width="230" style="width:230px; border-style: none"/>

@@ -81,10 +81,10 @@ Let's say you need a worker node with six cores to handle the workload for your
When you create a cluster in a [multizone metro location](/docs/containers?topic=containers-regions-and-zones#zones), a highly available master is automatically deployed and three replicas are spread across the zones of the metro. For example, if the cluster is in `dal10`, `dal12`, or `dal13` zones, the replicas of the master are spread across each zone in the Dallas multizone metro.
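
To verify that a highly available master was deployed, you can review the master details of the cluster (a sketch; field names in the output vary by CLI version):
```
ibmcloud ks cluster get --cluster <cluster_name_or_ID>
```
{: pre}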

**Do I have to do anything so that the master can communicate with the workers across zones?**</br>
-If you created a VPC multizone cluster, the subnets in each zone are automatically set up with Access Control Lists (ACLs) that allow communication between the master and the worker nodes across zones. In classic clusters, if you have multiple VLANs for your cluster, multiple subnets on the same VLAN, or a multizone classic cluster, you must enable a [Virtual Router Function (VRF)](/docs/resources?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud) for your IBM Cloud infrastructure account so your worker nodes can communicate with each other on the private network. To enable VRF, [contact your IBM Cloud infrastructure account representative](/docs/infrastructure/direct-link?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud#how-you-can-initiate-the-conversion). To check whether a VRF is already enabled, use the `ibmcloud account show` command. If you cannot or do not want to enable VRF, enable [VLAN spanning](/docs/infrastructure/vlans?topic=vlans-vlan-spanning#vlan-spanning). To perform this action, you need the **Network > Manage Network VLAN Spanning** [infrastructure permission](/docs/containers?topic=containers-users#infra_access), or you can request the account owner to enable it. To check whether VLAN spanning is already enabled, use the `ibmcloud ks vlan spanning get --region <region>` [command](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_vlan_spanning_get).
+<roks311-vpc>If you created a VPC multizone cluster, the subnets in each zone are automatically set up with Access Control Lists (ACLs) that allow communication between the master and the worker nodes across zones. </roks311-vpc>In classic clusters, if you have multiple VLANs for your cluster, multiple subnets on the same VLAN, or a multizone classic cluster, you must enable a [Virtual Router Function (VRF)](/docs/resources?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud) for your IBM Cloud infrastructure account so your worker nodes can communicate with each other on the private network. To enable VRF, [contact your IBM Cloud infrastructure account representative](/docs/infrastructure/direct-link?topic=direct-link-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud#how-you-can-initiate-the-conversion). To check whether a VRF is already enabled, use the `ibmcloud account show` command. If you cannot or do not want to enable VRF, enable [VLAN spanning](/docs/infrastructure/vlans?topic=vlans-vlan-spanning#vlan-spanning). To perform this action, you need the **Network > Manage Network VLAN Spanning** [infrastructure permission](/docs/containers?topic=containers-users#infra_access), or you can request the account owner to enable it. To check whether VLAN spanning is already enabled, use the `ibmcloud ks vlan spanning get --region <region>` [command](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_vlan_spanning_get).
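
For example, to run the two checks named above (output formats vary by CLI version):
```
ibmcloud account show
```
{: pre}
```
ibmcloud ks vlan spanning get --region <region>
```
{: pre}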

**Can I convert my single zone cluster to a multizone cluster?**</br>
-To convert a single zone cluster to a multizone cluster, your cluster must be set up in one of the supported [multizone metro locations](/docs/containers?topic=containers-regions-and-zones#zones). VPC clusters can be set up only in multizone metro locations, and as such can always be converted from a single zone cluster to a multizone cluster. Classic clusters that are set up in a single zone data center cannot be converted to a multizone cluster. To convert a single zone cluster to a multizone cluster, see [Adding worker nodes by adding a zone to a worker pool](/docs/containers?topic=containers-add_workers#add_zone).
+To convert a single zone cluster to a multizone cluster, your cluster must be set up in one of the supported [multizone metro locations](/docs/containers?topic=containers-regions-and-zones#zones). <roks311-vpc>VPC clusters can be set up only in multizone metro locations, and as such can always be converted from a single zone cluster to a multizone cluster. </roks311-vpc>C<roks311-vpc>lassic c</roks311-vpc>lusters that are set up in a single zone data center cannot be converted to a multizone cluster. To convert a single zone cluster to a multizone cluster, see [Adding worker nodes by adding a zone to a worker pool](/docs/containers?topic=containers-add_workers#add_zone).
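
For example, to start the conversion by adding a second zone to a classic cluster's worker pool (a sketch that reuses the `ibmcloud ks zone add classic` syntax shown later in this diff; the VLAN IDs are placeholders for IDs in the target zone):
```
ibmcloud ks zone add classic --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <pool_name> --private-vlan <private_VLAN_ID> --public-vlan <public_VLAN_ID>
```
{: pre}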



10 changes: 5 additions & 5 deletions cs_cluster_scaling.md
@@ -2,7 +2,7 @@

copyright:
years: 2014, 2019
-lastupdated: "2019-11-25"
+lastupdated: "2019-11-26"

keywords: kubernetes, iks, node scaling, ca, autoscaler

@@ -695,18 +695,18 @@ To limit a pod deployment to a specific worker pool that is managed by the cluster

**To limit pods to run on certain autoscaled worker pools**:

-1. Create the worker pool with the label that you want to use. For example, your label might be `app: nginx`.
-**For classic clusters**:
+1. Create the worker pool with the label that you want to use. For example, your label might be `app: nginx`.<roks311-vpc>
+**For classic clusters**:</roks311-vpc>
```
ibmcloud ks worker-pool create classic --name <name> --cluster <cluster_name_or_ID> --machine-type <flavor> --size-per-zone <number_of_worker_nodes> --label <key>=<value>
```
-{: pre}
+{: pre}<roks311-vpc>
**For VPC clusters**:
```
ibmcloud ks worker-pool create vpc-classic --name <name> --cluster <cluster_name_or_ID> --flavor <flavor> --size-per-zone <number_of_worker_nodes> --label <key>=<value>
```
{: pre}
-
+</roks311-vpc>
2. [Add the worker pool to the cluster autoscaler configuration](#ca_cm).
3. In your pod spec template, match the `nodeSelector` or `nodeAffinity` to the label that you used in your worker pool.
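
For instance, a deployment spec that targets worker nodes labeled `app: nginx` from step 1 might look like this (a sketch; the `nodeSelector` field is the part that matters here):
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        app: nginx   # must match the --label key=value that you set on the worker pool
      containers:
      - name: nginx
        image: nginx
```
{: codeblock}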

42 changes: 21 additions & 21 deletions cs_cluster_update.md
@@ -2,7 +2,7 @@

copyright:
years: 2014, 2019
-lastupdated: "2019-11-25"
+lastupdated: "2019-11-26"

keywords: kubernetes, iks, upgrade, version

@@ -83,21 +83,21 @@ To update the Kubernetes master _major_ or _minor_ version:

4. Install the version of the [`kubectl cli`](/docs/containers?topic=containers-cs_cli_install#kubectl) that matches the API server version that runs in the master. [Kubernetes does not support ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/setup/release/version-skew-policy/) `kubectl` client versions that are two or more versions apart from the server version (n +/- 2).
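
To compare your client version against the cluster's API server version (a standard `kubectl` check; the `--short` flag is available in `kubectl` versions of this era):
```
kubectl version --short
```
{: pre}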

-When the master update is complete, you can update your worker nodes, depending on the type of cluster infrastructure provider that you have.
+When the master update is complete, you can update your worker nodes<roks311-vpc>, depending on the type of cluster infrastructure provider that you have.
* [Updating classic worker nodes](#worker_node).
-* [Updating VPC worker nodes](#vpc_worker_node).
+* [Updating VPC worker nodes](#vpc_worker_node)</roks311-vpc>.

<br />


-## Updating classic worker nodes
+## Updating<roks311-vpc> classic</roks311-vpc> worker nodes
{: #worker_node}

-You received a notification to update your worker nodes in a [classic infrastructure](/docs/containers?topic=containers-infrastructure_providers) cluster. What does that mean? As security updates and patches are put in place for the API server and other master components, you must be sure that the worker nodes remain in sync.
-{: shortdesc}
+You received a notification to update your worker nodes<roks311-vpc> in a [classic infrastructure](/docs/containers?topic=containers-infrastructure_providers) cluster</roks311-vpc>. What does that mean? As security updates and patches are put in place for the API server and other master components, you must be sure that the worker nodes remain in sync.
+{: shortdesc}<roks311-vpc>

<img src="images/icon-classic.png" alt="Classic infrastructure provider icon" width="15" style="width:15px; border-style: none"/> Applies to only classic clusters. Have a VPC cluster? See [Updating VPC worker nodes](#vpc_worker_node) instead.
-{: note}
+{: note}</roks311-vpc>
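
To see which worker nodes run an older version than the master (a sketch; the exact columns of the output vary by CLI version):
```
ibmcloud ks workers --cluster <cluster_name_or_ID>
```
{: pre}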

**What happens to my apps during an update?**</br>
If you run apps as part of a deployment on worker nodes that you update, the apps are rescheduled onto other worker nodes in the cluster. These worker nodes might be in a different worker pool, or if you have stand-alone worker nodes, apps might be scheduled onto stand-alone worker nodes. To avoid downtime for your app, you must ensure that you have enough capacity in the cluster to carry the workload.
@@ -113,7 +113,7 @@ When the config map is not defined, the default is used. By default, a maximum o
### Prerequisites
{: #worker-up-prereqs}

-Before you update your classic infrastructure worker nodes, review the prerequisite steps.
+Before you update your<roks311-vpc> classic infrastructure</roks311-vpc> worker nodes, review the prerequisite steps.
{: shortdesc}

Updates to worker nodes can cause downtime for your apps and services. Your worker node machine is reimaged, and data is deleted if not [stored outside the pod](/docs/containers?topic=containers-storage_planning#persistent_storage_overview).
@@ -126,10 +126,10 @@ Updates to worker nodes can cause downtime for your apps and services. Your work
- Consider [adding more worker nodes](/docs/containers?topic=containers-add_workers) so that your cluster has enough capacity to reschedule your workloads during the update.
- Make sure that you have the [**Operator** or **Administrator** {{site.data.keyword.cloud_notm}} IAM platform role](/docs/containers?topic=containers-users#platform).

-### Updating classic worker nodes in the CLI with a configmap
+### Updating<roks311-vpc> classic</roks311-vpc> worker nodes in the CLI with a configmap
{: #worker-up-configmap}

-Set up a configmap to perform a rolling update of your classic worker nodes.
+Set up a configmap to perform a rolling update of your<roks311-vpc> classic</roks311-vpc> worker nodes.
{: shortdesc}

1. Complete the [prerequisite steps](#worker-up-prereqs).
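A later step in this section (folded out of this diff) triggers the update per worker node; as a minimal sketch of that call, assuming the `ibmcloud ks worker update` syntax of this CLI generation:
```
ibmcloud ks worker update --cluster <cluster_name_or_ID> --worker <worker_ID>
```
{: pre}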
@@ -279,7 +279,7 @@ Next steps:
- Inform developers who work in the cluster to update their `kubectl` CLI to the version of the Kubernetes master.
- If the Kubernetes dashboard does not display utilization graphs, [delete the `kube-dashboard` pod](/docs/containers?topic=containers-cs_troubleshoot_health#cs_dashboard_graphs).
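
For the dashboard fix, deleting the pod forces it to be re-created (a sketch; the `k8s-app=kubernetes-dashboard` label is the upstream default and an assumption here):
```
kubectl delete pod -n kube-system -l k8s-app=kubernetes-dashboard
```
{: pre}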

-### Updating classic worker nodes in the console
+### Updating<roks311-vpc> classic</roks311-vpc> worker nodes in the console
{: #worker_up_console}

After you set up the config map for the first time, you can then update worker nodes by using the {{site.data.keyword.cloud_notm}} console.
@@ -293,7 +293,7 @@ To update worker nodes from the console:
5. From the action bar, click **Update**.

<br />
-
+<roks311-vpc>

## Updating VPC worker nodes
{: #vpc_worker_node}
@@ -385,7 +385,7 @@ You can update your VPC worker nodes in the console. Before you begin, consider
5. From the action bar, click **Update**.

<br />
-
+</roks311-vpc>

## Updating flavors (machine types)
{: #machine_type}
@@ -441,35 +441,35 @@ To update flavors:

3. Create a worker node with the new machine type.
- **For worker nodes in a worker pool**:
-1. Create a worker pool with the number of worker nodes that you want to replace.
-* Classic clusters:
+1. Create a worker pool with the number of worker nodes that you want to replace.<roks311-vpc>
+* Classic clusters:</roks311-vpc>
```
ibmcloud ks worker-pool create classic --name <pool_name> --cluster <cluster_name_or_ID> --machine-type <flavor> --size-per-zone <number_of_workers_per_zone>
```
-{: pre}
+{: pre}<roks311-vpc>
* VPC clusters:
```
ibmcloud ks worker-pool create vpc-classic <pool_name> --cluster <cluster_name_or_ID> --flavor <flavor> --size-per-zone <number_of_workers_per_zone>
```
-{: pre}
+{: pre}</roks311-vpc>

2. Verify that the worker pool is created.
```
ibmcloud ks worker-pool ls --cluster <cluster_name_or_ID>
```
{: pre}

-3. Add the zone to your worker pool that you retrieved earlier. When you add a zone, the worker nodes that are defined in your worker pool are provisioned in the zone and considered for future workload scheduling. If you want to spread your worker nodes across multiple zones, choose a [multizone-capable zone](/docs/containers?topic=containers-regions-and-zones#zones).
-* Classic clusters:
+3. Add the zone to your worker pool that you retrieved earlier. When you add a zone, the worker nodes that are defined in your worker pool are provisioned in the zone and considered for future workload scheduling. If you want to spread your worker nodes across multiple zones, choose a [multizone-capable zone](/docs/containers?topic=containers-regions-and-zones#zones).<roks311-vpc>
+* Classic clusters:</roks311-vpc>
```
ibmcloud ks zone add classic --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <pool_name> --private-vlan <private_VLAN_ID> --public-vlan <public_VLAN_ID>
```
-{: pre}
+{: pre}<roks311-vpc>
* VPC clusters:
```
ibmcloud ks zone add vpc-classic --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <pool_name> --subnet-id <vpc_subnet_id>
```
-{: pre}
+{: pre}</roks311-vpc>

- **Deprecated: For stand-alone worker nodes**:
```