add rudimentary support for service configurables#249

Open
features-not-bugs wants to merge 2 commits into akyriako:main from features-not-bugs:main
Conversation

@features-not-bugs

Added some basic configuration options to the service spec for a Typesense cluster.

I haven't worked with kubebuilder before, so there may be an anti-pattern or two in here.

@akyriako
Owner

akyriako commented Mar 16, 2026

Hey @features-not-bugs , thanks for the PR. Please give me some context on why you need these changes, because they are clashing with some design decisions of the operator:

  1. Type corev1.ServiceType: the service is exposed via Ingress or Gateway API and is always created as ClusterIP
  2. Annotations map[string]string: already covered by ServiceAnnotations map[string]string in TypesenseClusterSpec
  3. Labels map[string]string: no plans to allow custom labels, unless there is a very good reason.
  4. InternalTrafficPolicy & ExternalTrafficPolicy: due to the stateful nature of Typesense and the volatility of raft, restricting internal traffic to Local would be counter-productive. For external traffic, on the other hand, as long as the service is always created as type ClusterIP, some options of ExternalTrafficPolicy make no sense because they are meant to be combined with LoadBalancer or NodePort service types.
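
The current behaviour described in points 1 and 2 can be sketched roughly as follows. This is a simplified, self-contained illustration: the `Service` and `TypesenseClusterSpec` structs and the `buildService` function are stand-ins for the operator's real `corev1`/kubebuilder types, not its actual code.

```go
package main

import "fmt"

// Simplified stand-in for the operator's cluster spec: annotations for the
// Service come from a single ServiceAnnotations field.
type TypesenseClusterSpec struct {
	ServiceAnnotations map[string]string
}

// Simplified stand-in for corev1.Service.
type Service struct {
	Type        string
	Annotations map[string]string
}

// buildService mirrors the described design: the Service is always created
// as ClusterIP (exposure happens via Ingress or Gateway API), and annotations
// are copied from the spec's ServiceAnnotations field.
func buildService(spec TypesenseClusterSpec) Service {
	return Service{
		Type:        "ClusterIP",
		Annotations: spec.ServiceAnnotations,
	}
}

func main() {
	svc := buildService(TypesenseClusterSpec{
		ServiceAnnotations: map[string]string{"example.com/team": "search"},
	})
	fmt.Println(svc.Type) // ClusterIP
}
```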

@features-not-bugs
Author

features-not-bugs commented Mar 16, 2026

Hey @akyriako,

Thanks for getting back to this so fast. The overall idea here was to allow direct connection via the load balancer rather than requiring an Ingress/Gateway API stack to be deployed.

For example, in my specific situation, I have a 4-host Kubernetes cluster deployed in an offsite datacenter running Cilium with full BGP meshing to a pair of routers. Each of these routers has a link to AWS, where we run another Kubernetes cluster (mission critical). The way the network is set up, we can access the resources of our datacenter Kubernetes cluster from AWS by simply connecting to the load balancer IPs.

Adding an Ingress/Gateway API stack for a service that will only be consumed from within our attached networks is just an extra step that isn't required in our use case.

For point 2, I added this because it made sense to group all service-specific configuration in the service object, with the idea that the root-level service annotations field could be deprecated should you wish.
For point 3, the labels addition was simply an "if we're doing annotations, why not labels, what can it hurt?" decision.
For point 4, I agree that internalTrafficPolicy is a redundant option in this case, although externalTrafficPolicy can be beneficial when upstream load balancers/routers are already able to direct traffic to the correct host.

Potentially going forward we can:

  • remove additional annotations declaration
  • remove labels declaration
  • restrict the ServiceType to ClusterIP and LoadBalancer
  • remove InternalTrafficPolicy
  • only allow ExternalTrafficPolicy to be configured if ServiceType = LoadBalancer
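
The last two proposed restrictions could be enforced in a validating webhook along these lines. A minimal, self-contained sketch under stated assumptions: `ServiceSpec` and `ValidateServiceSpec` are hypothetical names using plain strings in place of `corev1` types, not the operator's actual API.

```go
package main

import "fmt"

// Hypothetical, simplified service spec: real code would use
// corev1.ServiceType and corev1.ServiceExternalTrafficPolicy.
type ServiceSpec struct {
	Type                  string // "ClusterIP" or "LoadBalancer"
	ExternalTrafficPolicy string // "", "Cluster", or "Local"
}

// ValidateServiceSpec sketches the proposed rules: only ClusterIP and
// LoadBalancer are allowed, and externalTrafficPolicy may only be set
// when the type is LoadBalancer.
func ValidateServiceSpec(s ServiceSpec) error {
	switch s.Type {
	case "ClusterIP", "LoadBalancer":
		// allowed under the proposal
	default:
		return fmt.Errorf("unsupported service type %q: only ClusterIP and LoadBalancer are allowed", s.Type)
	}
	if s.ExternalTrafficPolicy != "" && s.Type != "LoadBalancer" {
		return fmt.Errorf("externalTrafficPolicy may only be set when type is LoadBalancer")
	}
	return nil
}

func main() {
	ok := ServiceSpec{Type: "LoadBalancer", ExternalTrafficPolicy: "Local"}
	bad := ServiceSpec{Type: "ClusterIP", ExternalTrafficPolicy: "Local"}
	fmt.Println(ValidateServiceSpec(ok) == nil)  // true
	fmt.Println(ValidateServiceSpec(bad) != nil) // true
}
```

In a real kubebuilder project, the simpler constraints (the allowed-type enum) could instead be expressed declaratively with CRD validation markers, keeping only the cross-field rule in webhook code.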

Let me know, thanks!
