apis/v1alpha2: collapse ServiceImport spec and status fields to root #52
Conversation
```go
// ServiceImportStatus describes derived state of an imported service.
type ServiceImportStatus struct {
	// +listType=atomic
	Ports []v1.ServicePort `json:"ports"`
```
In v1alpha1, `ServicePort` is redefined locally here. Switching to the core v1 type appeared to be safe, and it eliminates the need for helper functions like https://github.com/Azure/fleet-networking/blob/fe3bdb555f2ed49c7cccae91cff5829f8c9dcb3b/api/v1alpha1/serviceimport_types.go#L75-L84
This feature is exactly what I am looking forward to having; waiting for this PR to be merged.
`pkg/apis/v1alpha2/serviceimport.go` (Outdated)
```go
// cluster is the name of the exporting cluster. Must be a valid RFC-1123 DNS
// label.
Cluster string `json:"cluster"`
```
We may want to centrally define this type as something more specific than `string` and add CEL validation to align with the suggested constraints in https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/2149-clusterid and https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/4322-cluster-inventory#cluster-name
First pass at this in the latest commit, pulling from prior art in Gateway API's SectionName validation. I don't think we can capture the contextual constraints from KEP 2149 here, but it should at least be limited to RFC 1123 DNS labels now. I'm unsure whether we can/should hoist this up somewhere to be reusable by e.g. the ClusterProfile API, but this feels sufficient initially.
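For readers unfamiliar with the constraint being discussed, here is a rough standalone sketch of RFC 1123 DNS label validation (the function name and regexp are mine for illustration; the actual change applies an equivalent pattern as CRD validation rather than Go code):

```go
package main

import (
	"fmt"
	"regexp"
)

// rfc1123Label mirrors the RFC 1123 DNS label constraints discussed above:
// lowercase alphanumerics and '-', starting and ending with an alphanumeric.
var rfc1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// isValidClusterName reports whether name is a valid RFC 1123 DNS label,
// including the 63-character length limit.
func isValidClusterName(name string) bool {
	return len(name) <= 63 && rfc1123Label.MatchString(name)
}

func main() {
	for _, name := range []string{"cluster-a", "Cluster_A", "-bad", ""} {
		fmt.Printf("%q valid: %v\n", name, isValidClusterName(name))
	}
}
```

As the comment notes, this captures only the syntactic constraints; the contextual uniqueness requirements from KEP 2149 cannot be expressed this way.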
`pkg/apis/v1alpha2/serviceimport.go` (Outdated)
```go
// type defines the type of this service.
// Must be "ClusterSetIP" or "Headless".
// The "ClusterSetIP" type reflects exported Service(s) with type ClusterIP
// and the "Headless" type reflects exported Service(s) with type Headless.
// A ServiceImport with type ClusterSetIP SHOULD populate `.status.ips`
// with virtual IP address(es) where the service can be reached from within
// the importing cluster.
// If exported Services of the same name and namespace in a given ClusterSet
// have differing types, a "Conflict" status condition SHOULD be reported in
// ServiceExport status.
// +kubebuilder:validation:Enum=ClusterSetIP;Headless
Type ServiceImportType `json:"type"`
```
The docs here reflect my expectation of how this should work. Would appreciate feedback if this is inaccurate, e.g. if it should actually be possible to create a ClusterSetIP-type ServiceImport for underlying exported Headless-type Services.
The Cilium implementation is still WIP (a mere few weeks, maybe a month, from completion), and right now we are planning to have ServiceImport auto-created/deleted only by our controllers, as suggested by the KEP, so from a Cilium perspective this looks good to me!
Per @skitt's comment in office hours, Liqo is also an MCS API stakeholder. cc @mikemorris
Looking into this a bit more, I'm not sure Liqo actually implements the MCS API.
Hi, for what it's worth, per the convo in SIG-MC the docs for GKE are wrong and
Could you share a link to that implementation? Is it complete?
Sure! It's mainly here https://github.com/cilium/cilium/tree/main/pkg/clustermesh/mcsapi and there are some other interesting bits here https://github.com/cilium/cilium/tree/main/pkg/clustermesh/endpointslicesync. It's not fully done yet, unfortunately, but very soon! The last major piece (the service import controller) is in a PR being reviewed/improved, and then we still need to do a few things here and there (like integrating the conformance/e2e tests into our CI) before our implementation is in a working state. The next Cilium release is at the beginning of next year, so we still have some time to iron out the last details anyway!
[APPROVALNOTIFIER] This PR is **NOT APPROVED**

This pull-request has been approved by: mikemorris

The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Force-pushed from 8101218 to 09f8c2c (Compare)
Proposal, diff, and PR description have been updated per this discussion; please take another look, as this should now be ready for review!
Running
@mikemorris does this help? With the introduction of a second version, we'll need to mark one (v1alpha2, if I understand how this works correctly) w/ the
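For context, the truncated marker above is presumably kubebuilder's storage-version marker (my assumption; the comment cuts off before naming it). It goes on the Go type of the single version that the API server persists to etcd, roughly like:

```go
// +kubebuilder:storageversion
// (assumed usage: placed on the v1alpha2 ServiceImport type definition so
// that v1alpha2 becomes the version stored in etcd, while other served
// versions are converted on read/write)
type ServiceImport struct {
	// ...existing fields...
}
```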
```go
// Package v1alpha1 contains API schema definitions for the Multi-Cluster
// Services v1alpha1 API group.
```
Suggested change:

```diff
-// Package v1alpha1 contains API schema definitions for the Multi-Cluster
-// Services v1alpha1 API group.
+// Package v1alpha2 contains API schema definitions for the Multi-Cluster
+// Services v1alpha2 API group.
```
I can confirm this works. This also needs a few changes to the build infrastructure to support generating code for both packages:

```diff
--- a/hack/update-codegen.sh
+++ b/hack/update-codegen.sh
@@ -28,7 +28,7 @@ gobin="${GOBIN:-$(go env GOPATH)/bin}"
 OUTPUT_PKG=sigs.k8s.io/mcs-api/pkg/client
 OUTPUT_DIR=$SCRIPT_ROOT/pkg/client
-FQ_APIS=sigs.k8s.io/mcs-api/pkg/apis/v1alpha1
+FQ_APIS=(sigs.k8s.io/mcs-api/pkg/apis/v1alpha1 sigs.k8s.io/mcs-api/pkg/apis/v1alpha2)
 CLIENTSET_NAME=versioned
 CLIENTSET_PKG_NAME=clientset
@@ -44,22 +44,22 @@ fi
 COMMON_FLAGS="--go-header-file ${SCRIPT_ROOT}/hack/boilerplate.go.txt"
 echo "Generating clientset at ${OUTPUT_PKG}/${CLIENTSET_PKG_NAME}"
-"${gobin}/client-gen" --clientset-name "${CLIENTSET_NAME}" --input-base "" --input "${FQ_APIS}" --output-pkg "${OUTPUT_PKG}/${CLIENTSET_PKG_NAME}" --output-dir "$OUTPUT_DIR/$CLIENTSET_PKG_NAME" ${COMMON_FLAGS}
+"${gobin}/client-gen" --clientset-name "${CLIENTSET_NAME}" --input-base "" "${FQ_APIS[@]/#/--input=}" --output-pkg "${OUTPUT_PKG}/${CLIENTSET_PKG_NAME}" --output-dir "$OUTPUT_DIR/$CLIENTSET_PKG_NAME" ${COMMON_FLAGS}
 echo "Generating listers at ${OUTPUT_PKG}/listers"
-"${gobin}/lister-gen" "${FQ_APIS}" --output-pkg "${OUTPUT_PKG}/listers" --output-dir "${OUTPUT_DIR}/listers" ${COMMON_FLAGS}
+"${gobin}/lister-gen" "${FQ_APIS[@]}" --output-pkg "${OUTPUT_PKG}/listers" --output-dir "${OUTPUT_DIR}/listers" ${COMMON_FLAGS}
 echo "Generating informers at ${OUTPUT_PKG}/informers"
 "${gobin}/informer-gen" \
-  "${FQ_APIS}" \
+  "${FQ_APIS[@]}" \
   --versioned-clientset-package "${OUTPUT_PKG}/${CLIENTSET_PKG_NAME}/${CLIENTSET_NAME}" \
   --listers-package "${OUTPUT_PKG}/listers" \
   --output-pkg "${OUTPUT_PKG}/informers" \
   --output-dir "${OUTPUT_DIR}/informers" \
   ${COMMON_FLAGS}
-echo "Generating register at ${FQ_APIS}"
-"${gobin}/register-gen" ${FQ_APIS} --output-file zz_generated.register.go ${COMMON_FLAGS}
+echo "Generating register at" "${FQ_APIS[@]}"
+"${gobin}/register-gen" "${FQ_APIS[@]}" --output-file zz_generated.register.go ${COMMON_FLAGS}
 if [[ "${VERIFY_CODEGEN:-}" == "true" ]]; then
```
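The one trick worth calling out in that patch is the bash array expansion `"${FQ_APIS[@]/#/--input=}"`, which prefixes every array element with `--input=` so that client-gen receives one flag per API package. A quick standalone demonstration:

```shell
#!/usr/bin/env bash
# "${arr[@]/#/prefix}" performs pattern substitution on each element,
# replacing the (empty) match at the start of the element with the prefix.
FQ_APIS=(sigs.k8s.io/mcs-api/pkg/apis/v1alpha1 sigs.k8s.io/mcs-api/pkg/apis/v1alpha2)
printf '%s\n' "${FQ_APIS[@]/#/--input=}"
# prints:
#   --input=sigs.k8s.io/mcs-api/pkg/apis/v1alpha1
#   --input=sigs.k8s.io/mcs-api/pkg/apis/v1alpha2
```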
Regarding my research into how we could handle the transition between v1alpha1 and v1alpha2: AFAICT the current approach would be breaking for consumers of the ServiceImport resource, as per:
This means that a controller querying v1alpha1 while the object is stored as v1alpha2 (and vice versa) would likely see bogus/empty fields. It's true that ServiceImport is entirely managed by the MCS-API implementation, so this may only be a very temporary situation before everything is migrated to v1alpha2. An example of such a controller reading ServiceImport would be one answering DNS requests (I will assume that's the main use case for the rest of this post), which means a controller reading v1alpha1 ServiceImports will see bogus objects once they are migrating/migrated to v1alpha2, and a controller reading v1alpha2 will see bogus objects that have not yet been migrated. So now for the options:
I believe that Submariner would be one of the main projects affected by this, as you have some production users "endorsed by the project", while many other projects supporting the MCS API carry warnings about production usage or are not yet released in a stable version, IIUC. So I would say you should have the most say in which way we go, @tpantelis @skitt. FYI, I am hoping to get to the bottom of this before the next Cilium version is released (the first version to include MCS-API support!). We have a feature freeze sometime in December and the next release should be in January/February (guessing that if we decide to skip a CRD transition and run out of time here, the work may land during the feature-freeze period before the stable release). So hopefully we can figure this out soon 🙏.
Refs #48 (comment)
The source of truth for these values is the exported Service, and these fields should generally be written by a controller, not a human. Manually updating these fields could cause unexpected behavior.
This would be a breaking change, necessitating a v1alpha2 API version.
We expect this change should have minimal impact on existing users of either v1alpha1 or forked CRDs, though, because they largely interact with ServiceImport through the `foo.clusterset.local` address, not the CRD directly.

GKE has forked the CRD to make this change, and fixing this upstream I hope could allow them to de-fork. Another implementation similarly forked the CRD to make this same change, and its engineers have asked about this in Kubernetes Slack while building an MCS implementation, and have indicated support for changing this upstream and adopting the MCS API CRDs directly if this change were to be made.

I'm hoping that fixing this directly in the upstream spec would be an alternative to #20, and that all implementations would be able to use the MCS API CRDs directly.
UPDATE: Notably, removing the possibility of adding user-editable fields under `spec` likely means that service networking beyond ClusterSet boundaries, where "sameness" cannot be assumed, may not be automatable by a multi-cluster controller generating a ServiceImport with an appropriate name and placing it in an appropriate namespace.

I'm proposing to retain the now-empty `.spec` stanza because I believe there is future additive scope (intentionally excluded from this proposal) for the ability to export a service beyond a ClusterSet to be imported/consumed in a different cluster which may not have the same "sameness" guarantees.

A usage pattern where a user would manually create and name a ServiceImport (instead of an automated controller managing the resource) would have been a way to handle cross-ClusterSet service networking, using `spec` fields to map to a ClusterSet-external exported service "known" to the cluster, and adding a `status.conditions` field to ServiceImport for a controller to report whether this attempted mapping was successful. However, this pattern has not seen sufficient demand to justify enabling it, and may raise concerns around overly broad permissions or complex handshake UX between clusters in unassociated ClusterSets.

With this change to the API it will likely become impractical to implement this pattern, but we largely view this as an acceptable tradeoff; the typical way to access remote services beyond the ClusterSet boundary should likely continue to be through a gateway pattern.
Other MCS API implementations from which to solicit feedback on this change:

- One implementation exposes user-editable `.spec.type` and `.spec.ports` fields. It would be helpful to get direct feedback on whether this is desirable for any specific functionality (such as only exporting selected ports?), or whether the UX would be simplified if a controller set these in status automatically based on the underlying exported Services.
- Log messages `"ServiceImport IPs need update"` and `"updated ServiceImport"` indicate that the Cloud Map controller is modifying these fields.

Please add other known implementations!