Hydra is a variant of DGD accelerated for multi-core systems. It is fully compatible with DGD, with the same capabilities (persistence, hotbooting, recompilation at runtime, JIT compiler, atomic functions) and the same event-driven runtime model. The goal for Hydra is to be able to run, on a single 128+ core machine, an application that otherwise would be distributed across a large network of servers. Example applications are social media or massively multiplayer online games.
Hydra works best on a bare-metal server; running virtualized can slow it down by up to 30%. Hydra detects the number of available physical cores and will use all of them. For best results, all cores should be full-speed cores. Hyperthreading, which would be counterproductive for Hydra, is not used.
We will take a closer look at how Hydra differs from DGD.
An event-driven, non-blocking programming language is transactional if each task can be rolled back in case of a failure.
LPC can be made transactional:
Let each task work on a copy of the objects it accesses, applying changes to the original objects only when the task commits.
Hold back network output until the task commits.
Any file operation aborts the task, unless it can be guaranteed at that point that the task will commit.
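Hydra implements these rules inside the LPC runtime, but the idea itself can be sketched in a few lines. The following is a minimal, single-threaded Python sketch (all names, such as `Obj` and `run_transactional`, are invented for illustration): the task mutates deep copies of the objects' state, network output is buffered, and only a task that runs to completion gets its copies committed and its output released.

```python
import copy

class Obj:
    """A shared object with mutable state."""
    def __init__(self, state):
        self.state = state

def run_transactional(task, objects):
    """Run task on copies of the given objects. Commit the copies and
    release the buffered output only if the task completes; on failure,
    the originals and the network remain untouched."""
    copies = {name: copy.deepcopy(obj.state) for name, obj in objects.items()}
    output = []                        # network output held back until commit
    try:
        task(copies, output.append)    # task mutates copies, buffers writes
    except Exception:
        return None                    # abort: roll back by discarding copies
    for name, obj in objects.items():  # commit: install the modified copies
        obj.state = copies[name]
    return output                      # safe to flush output now
```

For example, a task that withdraws from an account either commits both the balance change and the confirmation message, or neither:

```python
account = Obj({"balance": 100})

def withdraw(copies, send):
    copies["acct"]["balance"] -= 30
    send("withdrew 30")

out = run_transactional(withdraw, {"acct": account})
# account.state["balance"] is now 70 and out holds the buffered message
```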
Once LPC is transactional, it can be parallelized.
Let tasks run in parallel, working on a copy of the objects they access.
At the end of a task, check whether any of the copied objects have been modified by other tasks in the meantime.
If none were modified, all changes made by this task are committed to the original objects.
Otherwise, discard the changes and reschedule the task.
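The validate-commit-retry cycle above can be sketched with version counters. This is a conceptual, single-threaded Python illustration, not Hydra's actual mechanism; the names `VersionedObj` and `run_optimistic` are invented, and a real implementation would perform the validate-and-commit step atomically with respect to other tasks.

```python
import copy

class VersionedObj:
    """A shared object whose version counter is bumped on every commit."""
    def __init__(self, state):
        self.state = state
        self.version = 0

def run_optimistic(task, objects, max_retries=10):
    """Optimistic concurrency: snapshot versions, run the task on copies,
    then validate that no object changed underneath it. On conflict, the
    task's changes are discarded and it is rerun from scratch."""
    for _ in range(max_retries):
        snapshot = {name: (obj.version, copy.deepcopy(obj.state))
                    for name, obj in objects.items()}
        copies = {name: state for name, (_, state) in snapshot.items()}
        task(copies)                   # task works on private copies
        # validate: nobody else committed to these objects since the snapshot
        if all(objects[name].version == v for name, (v, _) in snapshot.items()):
            for name, obj in objects.items():   # commit
                obj.state = copies[name]
                obj.version += 1
            return True
        # conflict detected: loop discards the copies and retries the task
    return False
```

Note that a task never blocks on a lock: it always runs to completion, and conflicts only cost a rerun. That is the bet Optimistic Concurrency makes, which pays off when conflicts are rare.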
This is called Optimistic Concurrency. It originally applied to databases, but also works for transactional programming languages. With Optimistic Concurrency tasks can be parallelized automatically, even for LPC code which is not explicitly parallel.
Parallelization is automatic and the details are hidden from the LPC programmer. There is no discernible difference between a sequence of tasks in DGD and a sequence of commits in Hydra. However, in order for tasks to be parallelized effectively, a few simple rules of thumb should be observed:
Scope: the number of objects accessed by each task should be small, and each task should be short-lived.
At arm's length: schedule tasks for an object without accessing the object's state. Scheduling a task in this manner does not count as a modification of the object.
Distribution: tasks which are scheduled for the same object must run sequentially. Tasks can run in parallel when they are scheduled for different objects.
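The distribution rule can be pictured as one FIFO queue per object. The sketch below is again a hypothetical Python illustration (the `Scheduler` class is invented, not a Hydra API): tasks scheduled for the same object run in the order they were scheduled, while tasks for different objects are independent, which is what allows them to run in parallel. Parallelism is simulated here by round-robin interleaving.

```python
from collections import deque, defaultdict

class Scheduler:
    """One task queue per object: same-object tasks run sequentially in
    FIFO order; tasks for different objects may run in parallel
    (simulated here by interleaving one task per object per round)."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def schedule(self, obj_name, task):
        # Scheduling only touches the queue, never the object's state,
        # so it does not count as a modification ("at arm's length").
        self.queues[obj_name].append(task)

    def run(self):
        order = []                      # record which object's task ran
        while any(self.queues.values()):
            for name, queue in list(self.queues.items()):
                if queue:
                    queue.popleft()()   # run one task for this object
                    order.append(name)
        return order
```

Scheduling two tasks each for objects "a" and "b" yields the interleaving a, b, a, b: each object's tasks stay in order relative to each other, but the two objects' tasks overlap.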
Hydra achieves vertical scalability on a single multi-core host, in contrast with traditional horizontal scalability across multiple nodes. This offers advantages in hardware cost (modern datacenter-class servers are inexpensive) and in operating cost, but most importantly it simplifies development and maintenance.
Binaries for Hydra on various architectures are frequently made available here. These are fully functional, but are limited to at most 64K objects and 255 connected users. Unrestricted binaries are available under a commercial license.
You can contact me at firstname.lastname@example.org (whitelist registration required).
There is also a forum for discussing Hydra and DGD.