
Partners and prospective customers have asked about the feasibility of deploying EMCO in production. There are some areas where EMCO needs enhancements to get it closer to production readiness. Hopefully, the community can come together and contribute to an initiative that identifies the gaps, determines the enhancements needed to fill them, and delivers those enhancements across multiple EMCO releases.

To get EMCO closer to a production-ready state, two important areas that need enhancements are Observability and Resiliency.

Observability

Observability is the property of a system that allows an external observer to deduce its current state, and the history leading to that state, by observing its externally visible attributes. The main factors relating to observability are logging, tracing, metrics, events, and alerts. See the Observability page for more details.

For logging, we already have structured logs in the code base and fluentd in deployment. But:

  • We need EMCO to work with a logging stack such as ELK.
  • We need a good way to persist the logs.

These don't need any code changes. We can do a PoC deployment of EMCO with log persistence and a logging stack, and document the ingredients and the recipe. Perhaps the needed YAML files can be checked in as well.
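For reference, the reason no code changes are needed is that the existing structured logs are already JSON records on stdout, which fluentd can tail and forward to Elasticsearch for indexing in an ELK stack. A minimal sketch of what such output looks like, using logrus with illustrative field names (not necessarily the exact library or fields used in the EMCO code base):

```go
package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	// Emit JSON-formatted records on stdout; the fluentd DaemonSet tails the
	// container log files and can forward them to Elasticsearch, where Kibana
	// queries them. Field names below are illustrative only.
	log.SetFormatter(&log.JSONFormatter{})

	log.WithFields(log.Fields{
		"service": "orchestrator",
		"project": "proj1",
		"dig":     "collection-dig",
	}).Info("DIG instantiate request received")
}
```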

We also need to investigate an events framework. This is TBD.

Resiliency

Database persistence

Today, db persistence is not enabled by default. We need to validate EMCO with persistence enabled.

  • We have tested with an NFS-based PV in the past, and we still have the NFS-related YAMLs. If there is consensus on NFS as the default storage, we need to set up an NFS environment.
  • With persistence enabled, we can no longer rely on developer-oriented troubleshooting and workarounds based on re-installing EMCO to blow away the db. Developers should also test with persistence enabled.

Recovery from crashes/disruptions

Scenarios to validate

  • Restart each microservice while it is processing a request
  • In particular: restart the orchestrator while a DIG instantiate request is in flight
  • Restart all microservices together
  • Restart the node on which the EMCO pods are running (assuming a single-node EMCO deployment for now)

rsync can restart after a crash. Aarna, as part of the EMCO backup/restore presentation, has tested deleting the EMCO namespace (including the EMCO pods and the db) and restoring it.
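A rough sketch of how the first two scenarios could be automated is shown below: issue a DIG instantiate call and, while it is in flight, delete the orchestrator pod so that Kubernetes restarts it. The endpoint URL, namespace, and label selector are assumptions for illustration and must be adapted to the actual deployment.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical instantiate endpoint; substitute the real project, composite-app,
	// and DIG names as well as the orchestrator service address.
	instantiateURL := "http://emco.example.com:30415/v2/projects/p1/composite-apps/ca1/v1/" +
		"deployment-intent-groups/dig1/instantiate"

	// Fire the instantiate request in the background.
	done := make(chan error, 1)
	go func() {
		resp, err := http.Post(instantiateURL, "application/json", nil)
		if err == nil {
			resp.Body.Close()
		}
		done <- err
	}()

	// While the request is in flight, delete the orchestrator pod(s) so the
	// Deployment controller restarts them. Namespace and label are assumptions.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	time.Sleep(200 * time.Millisecond) // give the request time to reach the orchestrator
	err = clientset.CoreV1().Pods("emco").DeleteCollection(context.TODO(),
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "app=orchestrator"})
	if err != nil {
		log.Fatal(err)
	}

	// Afterwards, check the DIG status API and the mongo contents to verify that
	// the system is in a consistent state (request either completed or cleanly failed).
	log.Printf("instantiate returned: %v", <-done)
}
```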

Mongo db consistency

Some microservices make multiple db writes for a single API call. If such a microservice crashes in the middle of that API call, mongo is left with an inconsistent, partial update. We need to scrub the code for such scenarios and fix them.
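One possible direction for the fix, sketched below with the Go mongo driver, is to group the writes for one API call in a multi-document transaction so that either all of them commit or none do. Note that transactions require mongo to run as a replica set, and the database, collection, and key names here are illustrative, not the actual EMCO schema.

```go
package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://mongo:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	coll := client.Database("emco").Collection("resources") // illustrative names

	// All writes for one API call happen inside a single session/transaction,
	// so a crash in the middle leaves no partial state behind.
	session, err := client.StartSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.EndSession(ctx)

	_, err = session.WithTransaction(ctx, func(sc mongo.SessionContext) (interface{}, error) {
		if _, err := coll.InsertOne(sc, bson.M{"key": "project.p1.ca1", "spec": "..."}); err != nil {
			return nil, err
		}
		if _, err := coll.InsertOne(sc, bson.M{"key": "project.p1.ca1.dig1", "status": "Created"}); err != nil {
			return nil, err
		}
		return nil, nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```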

Graceful handling of cluster connectivity failure

Without the GitOps model, rsync should apply configurable retry/timeout policies to handle cluster connectivity loss. We have the /projects/.../{dig}/stop API, but that is a workaround: the user needs to invoke that API manually.

We need to validate rsync retries/timeout for cluster connectivity.
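As a starting point for that validation, the kind of configurable retry/backoff behaviour rsync could apply while a cluster is unreachable is sketched below. The policy fields, the applyWithRetry helper, and the stop check are assumptions for illustration, not the current rsync implementation.

```go
package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// retryPolicy is a hypothetical, user-configurable policy for re-applying
// resources to a cluster whose connectivity has been lost.
type retryPolicy struct {
	MaxRetries int           // give up (and mark the app context failed) after this many attempts
	Interval   time.Duration // initial delay between attempts
	MaxBackoff time.Duration // cap for the exponential backoff
}

// applyWithRetry keeps calling apply until it succeeds, the retries are
// exhausted, or the user has stopped the DIG (e.g. via the stop API).
func applyWithRetry(p retryPolicy, apply func() error, stopped func() bool) error {
	delay := p.Interval
	for attempt := 1; attempt <= p.MaxRetries; attempt++ {
		if stopped() {
			return errors.New("apply aborted: DIG stopped by user")
		}
		err := apply()
		if err == nil {
			return nil
		}
		log.Printf("apply attempt %d/%d failed: %v; retrying in %s",
			attempt, p.MaxRetries, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > p.MaxBackoff {
			delay = p.MaxBackoff
		}
	}
	return fmt.Errorf("cluster still unreachable after %d attempts", p.MaxRetries)
}

func main() {
	failures := 2 // simulate a cluster that recovers after two failed attempts
	err := applyWithRetry(
		retryPolicy{MaxRetries: 5, Interval: 500 * time.Millisecond, MaxBackoff: 5 * time.Second},
		func() error {
			if failures > 0 {
				failures--
				return errors.New("cluster unreachable")
			}
			return nil
		},
		func() bool { return false }, // user has not invoked the stop API
	)
	log.Printf("result: %v", err)
}
```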

Question: can we recommend the GitOps approach and leave things as is? If not, we need to fix this.

Storage Considerations

We need storage in the cluster where EMCO runs for:

  • mongo db
  • etcd
  • logs
  • metrics? (assuming we aggregate cluster metrics in the central cluster via federated Prometheus, Thanos, Cortex, ...)
  • ?