Why No Container Support?
ARDI is already built around a philosophy of many small, purpose-built services working together. But it also manages those services - creating, removing and modifying them as required to suit your application.
Unfortunately, most container platforms (such as Docker) expect each container to run a single service, while even a small ARDI server requires many closely-related small services.
Such as…
* The Primary Web Service
* The Consolidator (live data service)
* The Alarm Manager
* Live Drivers
* Historical Drivers
* Event Drivers
And in multi-site servers, there can be multiple instances of all of these. Some are very small services with a tiny memory footprint. Running them as distinct containers is quite inefficient.
Moving to a container model would also mean users bringing these services up manually - creating Docker containers every time they add new drivers to their ARDI server, or whenever a new site comes online. There's then even more work when things change and drivers are removed.
It also makes licensing more complex, as you need to set up static IPs in your Docker subnet and license your server accordingly.
We also need to maintain compatibility with Windows. In the industrial space, Windows-based systems are far more common than containerised Linux systems, and some software - such as OPC Classic, AVEVA PI and certain SCADA packages - only offers connectivity via Windows. A single-machine, service-centric model allows us to support both Linux and Windows machines with the one platform.
Overall, while it's theoretically possible to build a Docker version of ARDI, the resulting user experience would be poor enough that we've chosen not to focus on containerised deployments at the current time.
However, ARDI will certainly run alongside containerised apps - for example, you can store ARDI's primary database in a Docker container, run Grafana or Prometheus from containers, and so on. But ARDI's core is made up of too many distinct parts to easily fit into a single image.
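As a rough sketch of that kind of hybrid deployment, you might run the companion tools from their standard public images while ARDI itself stays installed conventionally on the host. The image names and ports below are the upstream defaults for Grafana and Prometheus, not anything ARDI-specific:

```shell
# Hypothetical hybrid setup: ARDI installed directly on the host,
# with supporting tools in containers alongside it.

# Grafana on its default port 3000, using the official image
docker run -d --name grafana -p 3000:3000 grafana/grafana

# Prometheus on its default port 9090, using the official image
docker run -d --name prometheus -p 9090:9090 prom/prometheus

# The containers publish their ports to the host, so the host-based
# ARDI services and the containerised tools can reach each other
# via localhost:3000 and localhost:9090.
```

Because only the self-contained companion services are containerised, ARDI's own family of small, changing services stays under ARDI's management rather than requiring manual container administration.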
NOTE: In worst-case scenarios where running in a container is absolutely required, we have developed the infrastructure to do so - but we strongly suggest not deploying this way due to the maintenance headache.