Why No Container Support?
While ARDI can be containerised, the benefits rarely justify the costs.
ARDI Already Uses Tiny Services
ARDI is already built around a philosophy of many small, purpose-built services working together. But it also manages those services - creating, removing and modifying them as required to suit your application.
Unfortunately, most container platforms (such as Docker) expect each container to run a single service, while even a small ARDI server requires many closely-related small services.
Such as…
* The Primary Web Service
* The Consolidator (live data service)
* Alarm Manager
* Live Drivers
* Historical Drivers
* Event Drivers
And in multi-site servers, there can be multiple instances of all of these. Some are very small services with a tiny memory footprint. Running them as distinct containers is quite inefficient - you end up with several 100MB+ containers rather than a handful of 2MB daemons/services.
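To illustrate the scale of the problem, a one-container-per-service layout for even a minimal single-site deployment might look something like the Compose sketch below. The image names are purely hypothetical - we don't publish these images - and a real deployment would need one driver container per data source.

```yaml
# Hypothetical sketch only - these images are not published by us.
# Even a minimal single-site ARDI server needs six or more containers,
# each carrying its own base-image overhead.
services:
  web:
    image: ardi/web             # the primary web service
  consolidator:
    image: ardi/consolidator    # live data service
  alarms:
    image: ardi/alarm-manager
  driver-live:
    image: ardi/driver-live     # one per live data source
  driver-history:
    image: ardi/driver-history
  driver-events:
    image: ardi/driver-events
```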
Moving to a container model also means that drivers need to be built into containers. While this is a straightforward process, it's something that will have to be done manually by system administrators, as we don't currently maintain Docker versions of our driver library.
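Wrapping a single driver typically looks like the hedged Dockerfile sketch below. The driver filename and base image are illustrative assumptions, not part of our driver library - and this is the step administrators would need to repeat for every driver they deploy.

```dockerfile
# Illustrative sketch - 'modbus_driver.py' and its requirements are
# hypothetical stand-ins for one driver from the ARDI driver library.
FROM python:3.11-slim
WORKDIR /opt/ardi-driver
COPY modbus_driver.py requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "modbus_driver.py"]
```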
Extensibility
ARDI is designed to be extendable with both scripts and its Modular Output System.
This often needs new Python or Node libraries to be installed to support functions such as running local AI models, interfacing with third-party systems or integrating with business architecture. Unfortunately, containers aren't well suited to this - installing a system-level library such as FFmpeg (to create video reports) into a running container doesn't persist across restarts, so every such change means rebuilding and redeploying the image.
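For example, on a conventional server adding video-report support is a single package install (e.g. `apt install ffmpeg`). In a containerised deployment, the same change means editing the image definition and rebuilding, roughly as sketched below (the base image name is a hypothetical placeholder):

```dockerfile
# Hypothetical sketch - 'ardi/server' is an illustrative image name.
# Adding one system library requires a rebuild and redeploy of the
# whole image, rather than a one-line install on the host.
FROM ardi/server:latest
RUN apt-get update \
    && apt-get install -y --no-install-recommends ffmpeg \
    && rm -rf /var/lib/apt/lists/*
```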
Cross-Platform Support
We also need to maintain compatibility with Windows. In the industrial space, Windows-based systems are far more common than containerised Linux systems. We also find that some software - such as OPC-Classic, Aveva Pi and some SCADA packages - only offer connectivity via Windows systems. A single-machine, service-centric model allows us to support both Linux and Windows deployments with the one code-base.
Licensing
It also makes licensing more complex, as you need to set up static IPs in your Docker subnet and license your server accordingly.
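Pinning a container to a static IP means defining a user-defined network with an explicit subnet, along the lines of the Compose sketch below (the image name and addresses are illustrative assumptions):

```yaml
# Illustrative sketch - 'ardi/server' and the addresses are examples.
# The container must keep a fixed address for the licence to remain valid.
services:
  ardi-server:
    image: ardi/server
    networks:
      ardinet:
        ipv4_address: 172.28.0.10   # the address the licence is issued against
networks:
  ardinet:
    ipam:
      config:
        - subnet: 172.28.0.0/16
```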
Overall
While it is possible to deploy a Docker version of ARDI, we feel that it compromises the flexibility of ARDI systems. As a result, we don't recommend it.
ARDI will run alongside containerised apps and work with containerised parts - for example, you can store ARDI's primary database in a Docker container, run Grafana or Prometheus from containers, etc. But ARDI is fundamentally designed to be flexible - and locking a system into a container removes quite a lot of the adaptability of an ARDI system.
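A hybrid layout like that might use a Compose file for the containerised companions while ARDI itself runs natively on the host. The sketch below assumes a PostgreSQL-compatible database purely for illustration - substitute whichever database engine your installation actually uses.

```yaml
# Containerised companions running next to a natively-installed ARDI.
# PostgreSQL is an illustrative assumption for the primary database.
services:
  db:
    image: postgres:16
    ports:
      - "5432:5432"            # exposed so the host-based ARDI can reach it
    environment:
      POSTGRES_PASSWORD: example
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
```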
NOTE: In scenarios where running in a container isn't optional, we can support it - but we strongly suggest not deploying this way due to the maintenance overhead and limitations it imposes.