Derived from Spring PetClinic Cloud.
For general information about the PetClinic sample application, see https://spring-petclinic.github.io/
The basic idea is that spring-petclinic contains a great deal of "cruft" spread across many files, all related to Spring Cloud: configuration for service discovery, load balancing, routing, retries, resilience, and so on. None of that is an application's concern once you move to Istio, so we can get rid of it all. Little by little, our apps become sane again.
On a Mac running Rancher Desktop, make sure your VM is given plenty of CPU and memory. I suggest you give your VM 16GB of memory and 6 CPUs.
Deploy a local K3D Kubernetes cluster with a local registry:
k3d cluster create my-istio-cluster \
  --api-port 6443 \
  --k3s-arg "--disable=traefik@server:0" \
  --port 80:80@loadbalancer \
  --registry-create my-cluster-registry:0.0.0.0:5010
Above, we:
- Disable the default traefik load balancer and configure local port 80 to instead forward to the "istio-ingressgateway" load balancer.
- Create a registry we can push to locally on port 5010, which is accessible from inside the Kubernetes cluster at "my-cluster-registry:5000".
Deploy Istio:
istioctl install -f istio-install-manifest.yaml
The manifest enables proxying of the MySQL databases in addition to the REST services.
Label the default namespace for sidecar injection:
kubectl label ns default istio-injection=enabled
Deployment Decisions:
- We use MySQL, installed with Helm from the Bitnami chart repository.
- We deploy a separate database statefulset for each service
- Inside each statefulset we name the database "service_instance_db"
- Apps use the root username "root"
- The helm installation will generate a root user password in a secret
- The applications reference the secret name to get at the db credentials
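To make the last point concrete, here is a sketch of how a Deployment's container spec might reference the chart-generated secret. The env var name SPRING_DATASOURCE_PASSWORD is an assumption for illustration, not copied from the manifests; the secret name and key follow the Bitnami MySQL chart's conventions.

```yaml
# Hypothetical fragment of a container spec in one of the Deployments.
env:
  - name: SPRING_DATASOURCE_PASSWORD   # assumed env var name
    valueFrom:
      secretKeyRef:
        name: vets-db-mysql            # secret created by the helm release
        key: mysql-root-password       # key used by the Bitnami MySQL chart
```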
Add the helm repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
Update it:
helm repo update
Now we're ready to deploy the databases, with a helm install command for each app/service:
Vets:
helm install vets-db-mysql bitnami/mysql -f mysql-values.yaml
Visits:
helm install visits-db-mysql bitnami/mysql -f mysql-values.yaml
Customers:
helm install customers-db-mysql bitnami/mysql -f mysql-values.yaml
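The mysql-values.yaml file referenced by each install lives in the repo and isn't reproduced here. As a rough sketch, a minimal values file consistent with the decisions listed earlier (a database named "service_instance_db" in each instance) might look like this; auth.database is a real Bitnami MySQL chart value, but the repo's actual file may contain more.

```yaml
# Hypothetical sketch of mysql-values.yaml; the repo's actual contents may differ.
auth:
  database: service_instance_db
```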
Wait for the pods to be ready (2/2 containers).
Compile the apps and run the tests:
mvn clean package
Build the images:
mvn spring-boot:build-image
Push the images to the local registry:
./push-images.sh
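The push-images.sh script itself is not shown here. As a hypothetical dry-run sketch, such a script would retag each image for the local registry on port 5010 (matching the k3d flag above) and push it; the service and image names below are assumptions, so this version only prints the commands it would run.

```shell
# Hypothetical sketch: print the retag/push commands a push script might run.
# Image names and registry port are assumptions, not taken from the repo.
REGISTRY=localhost:5010
for svc in vets visits customers frontend; do
  src="spring-petclinic-${svc}-service"
  dst="${REGISTRY}/spring-petclinic-${svc}-service"
  echo "docker tag ${src} ${dst} && docker push ${dst}"
done
```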
The deployment manifests are located in the folder named manifests.
- The services are vets, visits, customers, and the frontend. For each service we create a Kubernetes Service Account, a Deployment, and a ClusterIP service.
- routes.yaml configures the Istio ingress gateway (which replaces Spring Cloud Gateway) to route requests to the application's API endpoints.
- sleep.yaml defines a plain client Pod that can be used to send direct calls (for testing purposes) to specific microservices from within the Kubernetes cluster.
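The real routing rules are in routes.yaml in the repo. As a rough sketch only (the Gateway name, URI prefix, and port below are assumptions, not copied from the repo), an Istio VirtualService that routes an API prefix to one of the services has this shape:

```yaml
# Hypothetical sketch of a single route; the actual routes.yaml defines
# the full rule set and the Gateway binding.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: petclinic-routes
spec:
  hosts:
    - "*"
  gateways:
    - petclinic-gateway        # assumed Gateway name
  http:
    - match:
        - uri:
            prefix: /api/vet   # assumed API prefix
      route:
        - destination:
            host: vets-service
            port:
              number: 8080
```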
To deploy the app:
kubectl apply -f manifests/
Wait for the pods to be ready (2/2 containers).
To see the running PetClinic application, open a browser tab and visit http://localhost/.
Connect directly to the vets-db-mysql database:
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default vets-db-mysql -o jsonpath="{.data.mysql-root-password}" | base64 -d)
kubectl run vets-db-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.31-debian-11-r10 --namespace default --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash
mysql -h vets-db-mysql.default.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
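Note that Kubernetes stores Secret values base64-encoded, which is why the first command above pipes the jsonpath output through base64 -d. A standalone sketch of that decoding step, using a made-up value rather than a real secret:

```shell
# Made-up example value; in the steps above the encoded string comes from
# `kubectl get secret ... -o jsonpath=...` instead.
encoded="cGFzc3dvcmQxMjM="
MYSQL_ROOT_PASSWORD=$(printf '%s' "$encoded" | base64 -d)
echo "$MYSQL_ROOT_PASSWORD"   # prints: password123
```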
Call the "Vets" controller endpoint:
kubectl exec sleep -- curl vets-service.default.svc.cluster.local:8080/vets | jq
Here are a couple of customers-service endpoints to test:
kubectl exec sleep -- curl customers-service.default.svc.cluster.local:8080/owners | jq
kubectl exec sleep -- curl customers-service.default.svc.cluster.local:8080/owners/1/pets/1 | jq
Test one of the visits-service endpoints:
kubectl exec sleep -- curl visits-service.default.svc.cluster.local:8080/pets/visits\?petId=1 | jq