As programmable devices proliferate in our homes, public spaces, and workplaces, we increasingly collaborate with programs and machines through local interactions. These edge applications are designed for intrinsically local use cases, requiring resilience, scalability, and high performance without relying on a central cloud or persistent connections.
Unlike traditional cloud applications, edge applications must contend with several constraints: intermittent or absent connectivity, limited CPU, memory, and storage, and data that is produced and consumed locally.
To address these constraints, edge applications require a programming model and runtime that embraces decentralization, location-transparent and mobile services, physical co-location of data and processing, temporal and spatial decoupling of services, and automatic peer-to-peer data replication. The core principle is enabling autonomous operation without dependence on a central infrastructure or persistent connectivity.
We are inevitably moving towards increased decentralization. For most companies, anchored in the cloud but needing to serve their users more efficiently, this means relying less on centralized infrastructure and moving towards hybrid cloud-to-edge systems. For more information, see the article I wrote about unifying the cloud and edge into a single cloud-to-edge continuum.
A decentralized architecture that distributes logic and data together offers several advantages when managing data in the cloud and at the edge, most importantly low latency and resilience to losing connectivity to the backend.
We have been working on building out this programming model and runtime for the last two to three years. A major part of this work is ensuring the physical co-location of end users, data, and compute to guarantee the lowest possible latency and the highest levels of resilience. If the user, data, and compute are always at the same physical location, the user can be served with the lowest possible latency, and since everything needed to serve the user is right there, the application can lose its connection to the backend cloud and peers and still serve the user. This requires a distributed, replicated data mesh: a data distribution and consensus fabric that moves the data to where it needs to be at every moment.
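The co-location idea can be sketched in a few lines. This is a minimal sketch with hypothetical types (not the real Akka API): each node in the mesh holds local replicas of entity state, so reads are served locally and keep working even when the link to the backend is lost.

```rust
// Hypothetical sketch: a mesh node holding local replicas of entity state.
use std::collections::HashMap;

struct Node {
    replicas: HashMap<String, String>, // entity id -> latest local state
    connected: bool,                   // link to the backend cloud/peers
}

impl Node {
    fn new() -> Node {
        Node { replicas: HashMap::new(), connected: true }
    }

    // In the real system the data fabric replicates state to the node;
    // here we just store it locally.
    fn replicate(&mut self, entity: &str, state: &str) {
        self.replicas.insert(entity.to_string(), state.to_string());
    }

    // Reads never leave the node: co-located data means local latency.
    fn read(&self, entity: &str) -> Option<&String> {
        self.replicas.get(entity)
    }
}

fn main() {
    let mut edge = Node::new();
    // The fabric has moved this entity's data to the edge node...
    edge.replicate("sensor-42", "temp=21.5");
    // ...so losing the backend connection does not stop local serving.
    edge.connected = false;
    assert_eq!(edge.read("sensor-42"), Some(&"temp=21.5".to_string()));
}
```

The point of the sketch is only the read path: because the replica lives on the serving node, availability does not depend on the `connected` flag.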
In the 24.05 release, we have pushed the envelope for edge development even further with three exciting new features.
Last year, we shipped Akka Edge, but we are not stopping there. A significant leap forward for Akka is the new capability to use Akka concepts outside the JVM, with a new library called Akka Edge Rust. Here, we have extended Akka Edge to empower cloud developers to run their Akka applications even closer to where they are used and where the user’s data resides. Akka Edge Rust provides a subset of Akka implemented in the Rust programming language. Rust was chosen for its focus on reliability and efficiency on resource-constrained devices where CPU, memory, and storage are at a premium.
This client library runs natively in under 4 MB of RAM (on Arm32, Arm64, x86, amd64, RISC-V, and MIPS32). It has rich features, such as an actor model, event sourcing, streaming projections over gRPC, local persistent event storage, WebAssembly (WASM) compatibility, and security through TLS and WireGuard. Using this Akka client, one can extend an application to devices while maintaining its programming model, semantics, and core feature set.
For example, in the diagram below, the Akka JVM service is responsible for registering sensors. The Akka Edge Rust service connects to the Akka JVM service and consumes registration events as they occur. The Akka Edge Rust service also “remembers” what it is up to: in the case of a restart, it re-connects and consumes any new registrations from where it left off. Communication between the edge and the cloud is made over gRPC. Observations for registered sensors can then be sent to the Akka Edge Rust service via UDP, a transport often used for sensor data in practice. The Akka Edge Rust service uses its established connection with the Akka JVM service to propagate these local observations.
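The “remember where it left off” behavior can be illustrated with a minimal sketch, using hypothetical names rather than the real Akka Edge Rust API: the edge consumer records the offset of the last registration event it processed, so a restart resumes from that offset instead of re-reading the whole stream.

```rust
// Hypothetical sketch of offset-tracked consumption at the edge.
use std::collections::HashSet;

struct Registration {
    offset: u64,
    sensor_id: &'static str,
}

struct EdgeConsumer {
    stored_offset: u64, // persisted locally in the real system
    registered: HashSet<&'static str>,
}

impl EdgeConsumer {
    fn new(stored_offset: u64) -> EdgeConsumer {
        EdgeConsumer { stored_offset, registered: HashSet::new() }
    }

    // Process only events newer than the stored offset.
    fn consume(&mut self, stream: &[Registration]) {
        for r in stream {
            if r.offset > self.stored_offset {
                self.registered.insert(r.sensor_id);
                self.stored_offset = r.offset;
            }
        }
    }
}

fn main() {
    let log = vec![
        Registration { offset: 1, sensor_id: "s-1" },
        Registration { offset: 2, sensor_id: "s-2" },
        Registration { offset: 3, sensor_id: "s-3" },
    ];

    let mut first = EdgeConsumer::new(0);
    first.consume(&log[..2]); // processes offsets 1 and 2, then "restarts"

    // After a restart, the consumer resumes from the stored offset,
    // so only offset 3 is new.
    let mut restarted = EdgeConsumer::new(first.stored_offset);
    restarted.consume(&log);
    assert_eq!(restarted.stored_offset, 3);
    assert!(restarted.registered.contains("s-3"));
    assert_eq!(restarted.registered.len(), 1);
}
```

In the real system the offset is kept in local persistent storage and the stream arrives over gRPC, but the resume-from-offset logic is the same shape.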
Learn more in the guide introducing Akka Edge Rust, which explains how to set up and develop a Rust-based service that works with an Akka JVM counterpart in the cloud.
An entity in Akka is a clustered, event-sourced actor: effectively a local, mobile, fully replicated, durable in-memory database with a built-in cache. Since the in-memory representation is the system of record, reads can always be served safely, directly from memory.
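The essence of such an entity can be sketched in a few lines; this is the event-sourcing idea in miniature, not Akka's actual API: the journal of events is the durable record, the in-memory state is rebuilt by replaying it, and reads are served straight from memory.

```rust
// Minimal event-sourcing sketch: state = fold of the event journal.
enum Event {
    Deposited(i64),
    Withdrawn(i64),
}

#[derive(Default)]
struct Account {
    balance: i64,
}

impl Account {
    // Applying an event is the only way state changes.
    fn apply(&mut self, e: &Event) {
        match e {
            Event::Deposited(a) => self.balance += a,
            Event::Withdrawn(a) => self.balance -= a,
        }
    }
}

fn main() {
    // Replaying the journal reconstructs the exact in-memory state.
    let journal = vec![Event::Deposited(100), Event::Withdrawn(30), Event::Deposited(5)];
    let mut account = Account::default();
    for e in &journal {
        account.apply(e);
    }
    assert_eq!(account.balance, 75); // read served directly from memory
}
```

Because the state is always a deterministic fold of the journal, a crashed or relocated entity can recover its exact state anywhere the journal is replicated.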
We shipped Active-Active Replicated Event Sourcing for region-to-region replication about a year ago. Building upon that, in 24.05, we shipped Active-Active Replicated Event Sourcing for Edge, extending its support to entities (clustered event-sourced actors) running out at the far edge. Active-Active means that entities can concurrently process writes/updates in more than one geographical location, such as multiple edge Points of Presence (PoPs), different cloud regions, hybrid cloud, and multi-cloud, while Akka guarantees that entities always converge to a consistent state, with read-your-writes semantics.
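Why concurrent writers can still converge is worth a small illustration. The following is a simplified sketch of one convergence technique (a per-replica counter, in the spirit of a CRDT), not Akka's implementation: because each replica's contribution is tracked separately and merged commutatively, every replica reaches the same state regardless of the order in which events arrive.

```rust
// Sketch of order-independent merging across active-active replicas.
use std::collections::HashMap;

#[derive(Clone)]
struct ReplicaEvent {
    replica_id: &'static str,
    increment: i64,
}

// Fold events into per-replica totals; addition is commutative,
// so delivery order does not matter.
fn apply_events(events: &[ReplicaEvent]) -> HashMap<&'static str, i64> {
    let mut state = HashMap::new();
    for e in events {
        *state.entry(e.replica_id).or_insert(0) += e.increment;
    }
    state
}

fn main() {
    let from_edge = vec![
        ReplicaEvent { replica_id: "edge-pop-1", increment: 2 },
        ReplicaEvent { replica_id: "edge-pop-1", increment: 3 },
    ];
    let from_cloud = vec![ReplicaEvent { replica_id: "cloud-region-1", increment: 4 }];

    // Two replicas receive the same events in different orders...
    let mut order_a = from_edge.clone();
    order_a.extend(from_cloud.clone());
    let mut order_b = from_cloud.clone();
    order_b.extend(from_edge.clone());

    // ...and still converge to the same state.
    assert_eq!(apply_events(&order_a), apply_events(&order_b));
    assert_eq!(apply_events(&order_a)["edge-pop-1"], 5);
}
```

Akka handles richer state than a counter, but the principle is the same: updates are exchanged as events and merged in a way that makes the final state independent of arrival order.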
In short, Replicated Event Sourcing gives you:
It works like magic, forming an auto-replicating data mesh for transactional data that is truly location-transparent and peer-to-peer (masterless with all nodes being equal). It makes it easy to build applications spanning multiple clouds, cloud regions, edge PoPs and gateways.
In addition to Event Sourced entities, it now also supports replicating state changes of Durable State entities in Akka Edge (PoPs and edge data centers) and Akka Distributed Cluster (multi-cloud and multi-region).
In our quest towards more performance, efficiency, and lower costs, we have made it much easier to build a GraalVM native image of an Akka application. GraalVM Native Image compiles Java or Scala code ahead of time into a native executable. A native-image executable provides lower resource usage than the JVM, smaller deployments, faster startup times, and immediate peak performance, making it ideal for edge deployments in resource-constrained environments that need rapid responsiveness under autoscaling, while also being very useful in the cloud.
Try these new features and let us know what you think. We would love to hear from you. The best place to start is the Akka Edge documentation, where you can understand the design and architecture, read about example use cases, get an overview of the features, and dive into one of the sample projects.