
Add master and slave redis services #627

Merged
merged 3 commits into spotahome:master on Aug 9, 2023

Conversation

hashemi-soroush
Contributor

Fixes #626.

Changes proposed in the PR:

  • Adds Redis Master Kubernetes service
  • Adds Redis Slave Kubernetes service

@hashemi-soroush hashemi-soroush requested a review from a team as a code owner June 23, 2023 16:17
@ebuildy
Contributor

ebuildy commented Jul 27, 2023

Good one again!

Does it follow the master pod if the master changes?

@hashemi-soroush
Contributor Author

Good one again!

Does it follow the master pod if the master changes?

It does. Both the Master and Replica services use label selectors to pick their corresponding pods, so when the master changes, the pods they select change too. As for the clients using these services: since these aren't headless services, clients won't notice any change in the IPs of the master or replica pods, and as long as the services select the right pods, clients won't confuse the master and the replicas with each other.
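To make the mechanism concrete, here is a minimal sketch of such selector-based Services. The names, labels, and ports are assumptions for illustration only, not the exact manifests added by this PR; the labels the operator actually applies may differ.

```yaml
# Hypothetical sketch: a Service that always resolves to whichever pod
# currently carries the "master" role label, so clients keep using one
# stable address across failovers.
apiVersion: v1
kind: Service
metadata:
  name: rfr-myredis-master              # assumed name, illustrative only
spec:
  selector:
    app.kubernetes.io/component: redis  # assumed label
    redisfailovers-role: master         # assumed role label managed by the operator
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
---
# Companion Service selecting the replica (slave) pods.
apiVersion: v1
kind: Service
metadata:
  name: rfr-myredis-slave
spec:
  selector:
    app.kubernetes.io/component: redis
    redisfailovers-role: slave
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
```

Because neither Service is headless, each keeps its own stable virtual IP while the endpoints behind it change as the operator relabels pods during a failover.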

@ebuildy
Contributor

ebuildy commented Jul 28, 2023

I am not sure the labels change, see #608

@hashemi-soroush
Contributor Author

I am not sure the labels change, see #608

The functionality is definitely implemented. Nonetheless, I will try reproducing it and share the results on that issue thread.

@ese
Member

ese commented Aug 9, 2023

Thanks!

@ese ese merged commit d4c9947 into spotahome:master Aug 9, 2023
6 checks passed
@ese
Member

ese commented Aug 17, 2023

@hashemi-soroush was it on purpose to use LoadBalancer services? I think that can cause unexpected hassle for existing clusters due to the creation of external load balancers, in addition to security issues.
We should probably change them to internal services before release.
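For reference, the change being suggested essentially comes down to the Service's `type` field; a hedged sketch (the values below are illustrative, not the PR's actual manifests):

```yaml
spec:
  # As merged (per this discussion): on most cloud providers this
  # provisions an external load balancer in front of the Service.
  type: LoadBalancer
  # Suggested internal alternative: a cluster-internal virtual IP only.
  # ClusterIP is also the default when type is omitted.
  # type: ClusterIP
```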

@hashemi-soroush
Contributor Author

hashemi-soroush commented Aug 17, 2023

@ese there are a number of reasons I chose the LoadBalancer type.

I assume the users of this operator fall into two groups:

  1. Those who set up Redis for themselves, for example a team that sets up and uses Redis itself
  2. Those who set up Redis for others, for example the infrastructure teams of companies, or companies that provide Redis as a service.

The first group probably starts its infrastructure with a single Kubernetes cluster but, in my experience, quickly realises Kubernetes isn't a good fit for hosting stateful software off the shelf. It needs several configuration customisations which most likely make the cluster less well suited for stateless software. So it's best to have at least two clusters, one for stateful software and the other for stateless. Obviously you then need to enable communication between the two clusters, so every database must be exposed externally one way or another. Also, to keep the database safe from weak load balancing by its clients, which can result in downtime of the database, it's better to expose it with a load-balancer layer in the middle. Hence the LoadBalancer type.

As for the second group of users: if it's the infrastructure team of a company, the reasoning is exactly the same as above; and if it's a separate company providing Redis as a service in its own cloud, it has its own Kubernetes cluster and the rest of the reasoning is the same as above.

@ese
Member

ese commented Aug 17, 2023

All that makes sense. From my point of view, if we had configuration options to control the external visibility of these endpoints, I would have no problem with this, since users could set things up for the use case they want. Making them LoadBalancer services and exposing the cluster externally by default is probably an issue for the users that don't want it.
It would be ideal to have this option in the spec.
Given the current behavior, I'm not comfortable forcing all users to have load balancers exposing their Redis endpoints. For now, anyone who needs these services can always create them by other means; they don't need any maintenance, since the redis operator takes care of the pod labeling.

@hashemi-soroush
Contributor Author

I understand that there are users who might not prefer the current behaviour (yet). So what exactly is your suggestion? A simple flag in the RedisFailover CRD? Something more detailed?
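Purely as a strawman to make the question concrete, such a flag might look something like this in the RedisFailover spec; the `masterServiceType`/`replicaServiceType` fields below are hypothetical and not part of the actual CRD:

```yaml
apiVersion: databases.spotahome.com/v1
kind: RedisFailover
metadata:
  name: myredis
spec:
  redis:
    replicas: 3
    # Hypothetical fields: let users choose how the master/replica
    # Services are exposed, defaulting to an internal ClusterIP.
    masterServiceType: ClusterIP     # e.g. ClusterIP | NodePort | LoadBalancer
    replicaServiceType: ClusterIP
```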

@nastynaz

nastynaz commented Aug 22, 2023

I found this repo while looking for a way to easily define and deploy HA Redis for apps in my cluster. The automatic load-balancer setup means the current version isn't a good fit for my use case, and I assume it might be the same for others.

It'd be great if this were off by default but available opt-in via a flag, for three main reasons:

  1. It's secure by default. That matters when deploying Redis across many namespaces/apps, where it's easy to lose track of which apps need to be private and which don't.
  2. It doesn't create extra resources. Depending on the cloud provider, load balancers can add extra cost.
  3. It doesn't accidentally leak private data. Users can't always rely on security rules to block ingress (sometimes we allow all ingress from 0.0.0.0/0), so if a new load balancer is created with external access, it's automatically reachable from everywhere. While this isn't the most secure approach, it would be beneficial if 'enable external access' were explicit rather than implicit (i.e. external load balancers need to be opt-in).

@hashemi-soroush
Contributor Author

hashemi-soroush commented Aug 26, 2023

@nastynaz you can find the latest updates on the issue you're raising in issue #647.
