At its core, autoscaling is the ability to add workers to and remove workers from a running cluster without having to restart. No longer able to handle your workload with three workers? Add more! Scaled up but don't need the capacity anymore? Scale back down without restarting.
Wallaroo combines the ability to add and remove workers from a cluster with integrated state management. Application programmers use our APIs to define not just the logic, but also the state that makes their application go. Many solutions require that you use an external database to store your state. Not Wallaroo. Keep your application state in-memory for optimal performance.
Wallaroo's scaling is built on top of our scale-agnostic APIs. Programmers write to an API that doesn't mention the number of machines, where they are located, or the number of processes. All infrastructure concerns are handled by Wallaroo, the scale-aware platform.
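To make the idea concrete, here is a toy sketch in plain Python (not the actual Wallaroo API; the names `VoteCount` and `count_vote` are invented for illustration). The programmer supplies only two things: the state and the per-message logic. Nothing in either one mentions hosts, processes, or cluster size; routing a message to the right piece of state is the platform's job.

```python
from collections import defaultdict

class VoteCount:
    """Per-key application state, kept in memory by the platform."""
    def __init__(self):
        self.count = 0

def count_vote(message, state):
    """Scale-agnostic logic: no machines, addresses, or worker counts here."""
    state.count += message["votes"]
    return state.count

# A single-process driver standing in for the platform's routing layer.
states = defaultdict(VoteCount)
for msg in [{"key": "a", "votes": 2},
            {"key": "b", "votes": 1},
            {"key": "a", "votes": 3}]:
    count_vote(msg, states[msg["key"]])

print(states["a"].count)  # → 5
```

Because `count_vote` never references where its state lives, the same function works unchanged whether the platform runs it in one process or spreads the keys across a dozen workers.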
See it in action!
Need more details? Below is a short video created by our engineering team that has helped people understand what Wallaroo does. This video will give you:
- An overview of the problem we are solving with our scale-agnostic API
- A short intro to the Python API
- A demonstration of our autoscale functionality
- An idea of the power of scale-agnostic APIs in action
Wallaroo was built to provide programmers with a scale-agnostic API. New workers can be added to a running Wallaroo cluster. Existing workers can be removed from a running cluster. Wallaroo will adapt to both scenarios by redistributing work and continuing to process data without having to restart the cluster. We aren't quite there yet with full-featured, rock-solid autoscaling, but we are close.
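As a rough illustration of what "redistributing work" means (an assumed mechanism sketched for this post, not Wallaroo's actual implementation), imagine state partitions mapped onto whichever workers are currently in the cluster. When a worker joins, some partitions simply get a new owner and their state migrates; nothing restarts:

```python
import hashlib

def assign(partition, workers):
    """Deterministically map a partition key to one of the live workers."""
    h = int(hashlib.md5(partition.encode()).hexdigest(), 16)
    return workers[h % len(workers)]

partitions = [f"p{i}" for i in range(8)]
workers = ["w1", "w2", "w3"]
before = {p: assign(p, workers) for p in partitions}

workers.append("w4")  # grow the cluster: no restart, just reassignment
after = {p: assign(p, workers) for p in partitions}

moved = [p for p in partitions if before[p] != after[p]]
print(f"{len(moved)} of {len(partitions)} partitions migrated to rebalance")
```

The naive modulo hashing shown here moves more partitions than strictly necessary on each membership change; a production system would typically use something like consistent hashing to keep migrations minimal.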
Full support is planned for Q4 2017.
You can follow our progress on GitHub.
Wallaroo makes the infrastructure virtually disappear, giving you rapid deployment, very low operating costs, and elastic capacity with zero downtime for your applications in big data, stream processing, machine learning, and microservices.