I’m not with @dneves on this one. There’s no real reason for Talaria to be reachable directly from the outside; the industry best practice is to have reverse proxies do their thing (mTLS, role-based API auth and the like), and you wouldn’t offload that onto talaria itself, would you?
So we won’t get rid of reverse proxies, and we’ll still have to handle the tuning and monitoring of those ingress pieces of software. It’s only a question of giving petasos a nice way to send a redirect with the correct Location header, I mean, with the public ingress FQDN of the selected talaria instance instead of the internal k8s FQDN.
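To show what I mean, here’s a minimal Go sketch of that redirect. Every name in it (namespace, domains, function names) is a placeholder I’ll reuse in the examples below, not petasos’s actual code:

```go
// Hypothetical sketch, not the real petasos code.
package petasos

import (
	"net/http"
	"strings"
)

// Illustrative names: the internal suffix comes from the headless
// service, the public domain from the ingress configuration.
const (
	internalSuffix = ".talaria.xmidt.svc.cluster.local"
	publicDomain   = "xmidt.example.com"
)

// toPublicFQDN maps an internal pod FQDN such as
// talaria-0.talaria.xmidt.svc.cluster.local to its public ingress
// address, e.g. talaria-0.xmidt.example.com.
func toPublicFQDN(internal string) string {
	pod := strings.TrimSuffix(internal, internalSuffix)
	return pod + "." + publicDomain
}

// redirectToTalaria sends the client to the selected talaria instance,
// with the public FQDN in the Location header instead of the internal
// k8s one.
func redirectToTalaria(w http.ResponseWriter, r *http.Request, selected string) {
	target := "https://" + toPublicFQDN(selected) + r.URL.RequestURI()
	http.Redirect(w, r, target, http.StatusTemporaryRedirect)
}
```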
In practical terms: say you have two talaria instances in your cluster, resolved through a headless service that assigns each pod its own FQDN.
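Assuming a StatefulSet named talaria behind a headless service of the same name, in a namespace called xmidt (placeholder names, as above), those would look like:

```
talaria-0.talaria.xmidt.svc.cluster.local
talaria-1.talaria.xmidt.svc.cluster.local
```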
Those FQDNs are the ones the whole internal ecosystem should use when it needs to talk to talaria, but that only works for workloads running inside the cluster[1]. If a client wants to reach those talarias from the public side, it’ll have to use FQDNs like:
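Sticking with my placeholder names, and assuming the ingress serves a public zone such as xmidt.example.com:

```
talaria-0.xmidt.example.com
talaria-1.xmidt.example.com
```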
And, still under my own [1], workloads running outside the cluster BUT on the local network would need a third way of reaching these talarias, as the .svc.cluster.local names wouldn’t resolve for them. If you have a DNS server you can manage on that network, you can carve out a DNS zone that points at all the ingress nodes; then it’s just a matter of deciding on an FQDN that exposes those talarias through the cluster’s ingress (see the sketch below). If you have no DNS you can manage, you’d have to get creative, like using node ports in a predictable way or even (god forbid) host ports.
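For the managed-DNS case, a wildcard record in a zone you control would be enough; the zone name and addresses below are purely illustrative:

```
; in zone k8s.corp.example: point the xmidt names at the ingress nodes
*.xmidt.k8s.corp.example.  IN  A  10.0.0.11
*.xmidt.k8s.corp.example.  IN  A  10.0.0.12
```

With that, talaria-0.xmidt.k8s.corp.example and friends resolve to the ingress from anywhere on the local network.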
[1]: this assumes a standard Kubernetes setup with an overlay network like Canal or Flannel; all of this changes with any other CNI and would have to be adapted to that specific case.