Krake - The open source tool for intelligent, distributed workload management
Why?
Computing jobs are real energy guzzlers, and AI workloads in particular consume significantly more computing power and energy than conventional jobs due to their size and complexity. This creates a tension: AI applications have the potential to help address our ecological and economic challenges, yet they leave a serious CO2 footprint. That conflict must not be ignored.
Solution
Krake [ˈkʀaːkə] is an orchestrator engine for containerised and virtualised workloads on distributed and heterogeneous cloud platforms. It creates a thin aggregation layer over the various platforms (such as OpenStack or Kubernetes) and makes them available to the cloud user via a single interface. Workloads are scheduled according to the user's requirements (e.g. hardware, latency, cost) and the characteristics of the platforms (e.g. energy efficiency, load). The scheduling algorithm can be optimised for latency, cost or energy consumption, for example.
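The metric-driven scheduling described above can be sketched roughly as a weighted scoring of candidate platforms. The names below (`Platform`, `score_platform`, `pick_platform`) and the normalised metric values are illustrative assumptions, not Krake's actual API:

```python
from dataclasses import dataclass

# Hypothetical sketch: each platform exposes metrics normalised to 0..1
# (higher is better); the user supplies weights reflecting their priorities.

@dataclass
class Platform:
    name: str
    metrics: dict  # e.g. {"energy_efficiency": 0.9, "cost": 0.4}

def score_platform(platform: Platform, weights: dict) -> float:
    """Weighted average of the platform's metrics; a missing metric counts as 0."""
    total_weight = sum(weights.values())
    if total_weight == 0:
        return 0.0
    return sum(w * platform.metrics.get(m, 0.0) for m, w in weights.items()) / total_weight

def pick_platform(platforms, weights):
    """Choose the platform with the highest weighted score."""
    return max(platforms, key=lambda p: score_platform(p, weights))

platforms = [
    Platform("os-cluster-1", {"energy_efficiency": 0.9, "cost": 0.4}),
    Platform("k8s-cluster-2", {"energy_efficiency": 0.5, "cost": 0.8}),
]
# Weighting energy efficiency over cost favours the greener cluster.
best = pick_platform(platforms, {"energy_efficiency": 0.7, "cost": 0.3})
print(best.name)  # os-cluster-1
```

With the weights flipped towards cost, the same sketch would select the cheaper cluster instead, which is the point of making the optimisation target configurable.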
Advantages
Based on individual metrics and labels, Krake can automatically and intelligently decide where a virtualised workload should run. Together with infrastructure-provisioning software, Krake can deploy, customise and scale Kubernetes clusters either manually or automatically. Krake's architectural components act as loosely coupled microservices in the background; each can be started, adapted or replaced with your own developments independently of the others. Krake is fully open source and thus benefits from the advantages that brings: a strong community, quality and security through peer review, interoperability, and long-term accessibility of the code.
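The label side of the placement decision can be pictured as a simple eligibility filter: a workload's label constraints rule out clusters that do not carry the required labels. The function and label names below are purely illustrative assumptions, not Krake's actual interface:

```python
# Hypothetical sketch of label-based placement: only clusters carrying
# every required label value are eligible to run the workload.

def matches(cluster_labels: dict, constraints: dict) -> bool:
    """True if the cluster carries every required label with the required value."""
    return all(cluster_labels.get(key) == value for key, value in constraints.items())

clusters = {
    "edge-a": {"location": "DE", "gpu": "true"},
    "core-b": {"location": "DE", "gpu": "false"},
}
# A workload that needs a GPU in a German data centre:
constraints = {"location": "DE", "gpu": "true"}

eligible = [name for name, labels in clusters.items() if matches(labels, constraints)]
print(eligible)  # ['edge-a']
```

In a scheduler built this way, such a filter would run first, and a metric-based scoring step would then rank the remaining eligible clusters.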
ALASCA members involved
Cloud&Heat Technologies
dNation
Would you like to actively contribute to Krake and bring your own ideas and skills into our open-source project?
Find out here how to become part of the Krake developer community and come aboard: