Show HN: Distr – open-source distribution platform for on-prem deployments

github.com

117 points by louis_w_gk 23 days ago

Distr is designed to help software engineers distribute and manage their applications or agents in customer-controlled or shared-responsibility environments. You only need a Docker Compose file or Helm chart—everything else for on-prem is handled by the platform.

We’re an open source dev tool company. Over the past couple of months, we’ve spoken with dozens of software companies to understand their challenges with on-prem deployments. We analyzed the internal tools they’ve built and the best practices from existing solutions, combining them into a prebuilt, Open Source solution that works out of the box and integrates seamlessly.

Distr consists of two key components:

1. Hub
- Provides a centralized view of all deployments and controls connected agents.
- Comes with a simple GUI but also supports API and SDK access for seamless integration.
- Fully Open Source and self-hostable, or you can use our fully managed platform.

2. Lightweight Agents
- Pre-built agents for Helm (Kubernetes) and Docker Compose (VM) that run alongside your application.
- Handle lifecycle tasks like guided installation, updates, and rollbacks.
- Provide basic metrics (health status, application version) and logs.

If you already have a customer portal or self-service interface for on-prem deployments, you can seamlessly integrate all features into your existing portal or application using our API or SDK. Alternatively, you can use our pre-built, white-labeled customer portal.

Here’s what an integration into your existing customer portal could look like:

  import {DistrService} from "@glasskube/distr-sdk";
  
  const customerHasAutoUpdatesEnabled = false; // replace with your own logic
  const deploymentTargetId = 'da1d7130-bfa9-49a1-b567-c49728837df7';
  const service = new DistrService({
      apiKey: 'distr-8c24167aeb5fd4bb48b6d2140927df0f'
  });
  
  const result = await service.isOutdated(deploymentTargetId);
  if(result.deploymentTarget.deployment?.latestStatus?.type !== 'ok') {
      // let the user decide whether to allow updates from an unstable state, e.g. with:
      if(!confirm('The deployment is not in a stable state. Do you want to update anyway?')) {
          return;
      }
  }
  if(result.outdated) {
      if(customerHasAutoUpdatesEnabled) {
          await service.updateDeployment({deploymentTargetId});
          // notify customer about the update
      } else {
          const newerVersionsAvailable = result.newerVersions;
          // notify customer about the newer versions, e.g. via email
      }
  }


With the SDK/API, you can:
- Display the real-time deployed version and deployment status directly within the application, notifying customers when their deployed version is outdated.
- Allow customers to trigger updates from within your app using a simple API call.
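For example, here is a minimal sketch of the status-display side, reusing only the isOutdated call and the response fields shown in the example above (treat the exact response shape as an assumption and check the SDK docs):

  import {DistrService} from "@glasskube/distr-sdk";

  const service = new DistrService({apiKey: '<your-api-key>'});

  // Build a short status line for a customer-facing dashboard.
  // Only isOutdated() and the fields used in the example above are assumed here.
  async function deploymentStatusLine(deploymentTargetId: string): Promise<string> {
      const result = await service.isOutdated(deploymentTargetId);
      const status = result.deploymentTarget.deployment?.latestStatus?.type ?? 'unknown';
      return result.outdated
          ? `Status: ${status}, a newer version is available`
          : `Status: ${status}, up to date`;
  }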

If you’re distributing software and want to streamline updates or enhance monitoring, we’d love your feedback and are here to answer any questions.

Getting started is easy—just bring your Docker Compose file or Helm chart, and we’ll guide you through the rest.

Check out the fully managed version (https://app.distr.sh/register) and explore our documentation (https://distr.sh/docs/) to learn more.

rudasn 22 days ago

This looks super cool and useful! I really like the idea of an open-source Replicated.

I've been through the docs and couldn't find whether there's a way to deploy a git repo, or a tar file, that contains the docker compose file instead of copy-pasting. What about env variables?

Also, I couldn't find how you specify the endpoint(s) of an app. Does the agent just call `docker compose up`, or is there a way to specify the command(s) that should be executed during installation, upgrades, etc.?

The customer portal is a great idea. It's not clear, though: if the customer portal is enabled for a deployment and the customer can manage their deployment (e.g. update to a new version), would the ISV not be able to do that from the vendor portal? Is it all or nothing when it comes to client self-service?

Thanks, great stuff. I look forward to trying it out!

  • pmig 22 days ago

    Thanks!

    > I've been through the docs and couldn't find whether there's a way to deploy a git repo, or a tar file, that contains the docker compose file instead of copy-pasting.

    We are currently developing a GitHub Action utilizing our SDK to upload the latest releases (and provide paths to the docker-compose file).

    > What about env variables?

    These are high up in our backlog and we are looking for initial partners to specify them with. We thought about using end-to-end encryption for env variables.
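    For illustration only (this is not a Distr feature yet), end-to-end encryption here could mean encrypting each env value before it reaches the hub, so only the agent holding the key can decrypt it, roughly like this:

      import {createCipheriv, randomBytes} from "node:crypto";

      // Sketch of the idea: the hub only ever stores ciphertext; the agent
      // decrypts with a key that never leaves the deployment target.
      function encryptEnvValue(value: string, key: Buffer): string {
          const iv = randomBytes(12); // 96-bit nonce for GCM
          const cipher = createCipheriv("aes-256-gcm", key, iv);
          const ciphertext = Buffer.concat([cipher.update(value, "utf8"), cipher.final()]);
          // store iv + auth tag + ciphertext together
          return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
      }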

    What would be an example for an endpoint?

    > `docker compose up`

    Yes, currently this is all the agent does. We assume that most applications handle upgrades inside their applications and not on an infrastructure level. Do you have an exact upgrade case in mind?
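    (Conceptually, what the Docker agent does here boils down to something like the following; this is only an illustration of the idea, not the actual agent code.)

      import {execFile} from "node:child_process";
      import {promisify} from "node:util";

      const exec = promisify(execFile);

      // Apply whatever compose file the hub handed to the agent.
      // "-d" runs detached; "--remove-orphans" cleans up services dropped in newer versions.
      async function applyComposeFile(composeFilePath: string): Promise<void> {
          await exec("docker", ["compose", "-f", composeFilePath, "up", "-d", "--remove-orphans"]);
      }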

    Regarding the vendor and the customer portal, we want to enable both options. A deployment can either be managed by the vendor or by the customer. Some customers want to control when upgrades are performed and to set certain variables or Helm values themselves. In other cases it is fine if the ISV does all the management.

    This might also become configurable in the future.

    Thanks again, and let us know if you have ideas for how we can improve Distr. All feedback is welcome.

    • rudasn 22 days ago

      > a GitHub Action utilizing our SDK to upload latest releases

      Cool. So the tar would be uploaded to the hub and from there the user can manage deployments. Or maybe even create draft? deployments from the gh action and the user would only have to confirm/approve before shipping to clients.

      > Do you have an exact upgrade case in mind?

      Yeah, some apps could get quite complicated due to <reasons> and a single `docker compose up` for deployment won't cut it. For example, a user may need to do a backup before deployment or run some post-deployment steps like db migrations. These steps may depend on which app version you are updating from, so even though they can be handled from within the app, some logic/state is needed to determine when to execute them.

      The ability to specify pre-deploy, post-deploy and a custom deploy command (not just docker compose up) would be helpful.
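      Something along these lines, purely as a hypothetical sketch of what version-aware hooks could look like (none of this is existing Distr configuration):

        // Hypothetical hook definition: run extra commands around `docker compose up`,
        // gated on the version being upgraded from. Field names are made up.
        interface DeployHook {
            runIfUpgradingFromBelow: string; // semver threshold
            command: string[];
        }

        const preDeployHooks: DeployHook[] = [
            {runIfUpgradingFromBelow: "2.0.0", command: ["./scripts/backup.sh"]},
        ];

        const postDeployHooks: DeployHook[] = [
            {runIfUpgradingFromBelow: "2.3.0", command: ["./scripts/migrate-db.sh"]},
        ];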

      > A deployment can either be managed by the vendor or by the customer.

      But not both? Can't the customer invite the vendor back to allow them to perform such actions when necessary?

      The customer portal is a good idea; the UX needs a little polish, I think. Make it super easy to see if there's a new app version, add a button to install it, and perhaps real-time feedback/progress. Some customers would also appreciate the ability to export a list of app deployments with logs for auditing.

      • pmig 21 days ago

        > Cool. So the tar would be uploaded to the hub and from there the user can manage deployments. Or maybe even create draft? deployments from the gh action and the user would only have to confirm/approve before shipping to clients.

        We don't support uploading the actual images; these must be hosted in a separate container registry.

        > The ability to specify pre-deploy, post-deploy and a custom deploy command (not just docker compose up) would be helpful.

        I can see that, thanks for the suggestion!

        > But not both? Can't the customer invite the vendor back to allow them to perform such actions when necessary?

        You're probably right, all of these scenarios are possible; we should make it configurable from both sides: "allow the other party to manage the deployment".

        > Make it super easy to see if there's a new app version,

        Good point, "update notifications" are also on our backlog; this feature should definitely be considered.

    • nonameiguess 21 days ago

      > We assume that most applications handle upgrades inside their applications and not on an infrastructure level. Do you have an exact upgrade case in mind?

      Part of my job is handling application installs and upgrades for military customers and contractors to the military who don't have sufficient internal expertise to do it themselves, or ideally creating something like your platform, but unfortunately often bespoke to their specific needs.

      A few services that have become ubiquitous for orgs that have to self-host everything are things like Keycloak, Gitlab, the Harbor container registry or something like Artifactory or Nexus if they need to host package types other than just container images.

      These aren't my own applications. They're third party. So it'd be great if I could make them single-click, but the reality of self-hosted web services is they tend to provide far too many options for the end-user for that to be possible. For instance, they all require some kind of backing datastore and usually a cache. Most of the time that's Postgres and Redis, but whether you also self-host those or use a managed cloud service, and if you self-host, whether you try to do that also via Kubernetes using Operators and StatefulSets, or whether you run those on dedicated VMs, dictates differences in how you upgrade them in concert with the application using them.

      There are also stupid peculiarities you discover along the way. Keycloak has an issue when deployed as multi-instance where the Infinispan cache it runs on its own local volumes (which I guess is the standard Java Quarkus cache) may be invalidated if you try to replace instances in-place, so the standard Kubernetes rolling update may not work. Most of the time it will, but when it doesn't, it can bite you. They don't document this anywhere, but I found, buried in a GitHub issue comment from a Keycloak developer two years ago, a statement saying they recommend you scale to 0, then upgrade, then scale back to however many replicas you were running before.

      The stupidest example so far is a customer who wanted to self-host Overleaf, a web-based editor for collaborative LaTeX documents. Overleaf itself only provides a docker compose toolkit, but it's not as simple as running "docker compose up." You run an installer and upgrader script instead that injects a bunch of discovered values from the environment into templates and then runs docker compose up. Basically, they just recreated Helm but using docker compose. Well, this customer is running everything else in Kubernetes. At one point, they had a separate VM for Overleaf but wanted to consolidate, so I wrote them a Helm chart that comes as close as possible to mimicking what the docker compose toolkit does, but I'm not the developer of Overleaf and I can't possibly know all the peculiarities of their application. They use MongoDB and the AWS equivalent managed service last I remember was not available in us-gov-east-1 or possibly not available in either Gov Cloud region, so we needed an internal Mongo self-hosted. I'm not remotely qualified to do that but tried my best and it mostly works, except every time we upgrade and cycle the cluster nodes themselves and Mongo's data volume has to migrate from one VM to another, the db won't come back up on restart except with some extra steps. I scripted these, but it still results in yet one more command you need to run besides just "helm upgrade."

      Gitlab is its own beast. If you run it the way they recommend, you install Gitaly as an external service that runs on a VM, as they don't recommend having a Git server actually in Kubernetes because memory management in the Kubelet and Linux kernel don't work well with each other for whatever reason. We had no problem for years, until their users started using Gitlab for hosting test data using LFS and pushed enormous commits, which brought down the whole server sometimes until we migrated to an external Gitaly that runs on a VM.

      But that means upgrading Gitlab now requires upgrading Gitaly separately, totally outside of a container orchestration platform. Also, Gitlab's provided Helm chart doesn't allow you to set the full securityContext on pods for whatever reason. It will ignore the entire context and only set user and group. So when you run with the restricted PSA configuration, as every military customer is going to do, you can't do a real Helm install. You need to render the template, then patch all of the security contexts to be in compliance, then apply those. Ideally, a post-render would do that, which is what it's supposed to do, but I could never get it to work and instead end up having to run helm template and kustomize as separate steps.

      It's hellacious, but honestly I can't think of a single application ever that was as simple as it should have been to upgrade, which is why companies and government orgs end up in the ridiculous situation of having to hire an external consultant just to figure out how to upgrade stuff for them, because they would not have understood the failure modes and how to work around them.

      It'd be nice if the applications themselves could just provide an easy button like you seem to be trying to offer, but the reality is once they have even just an external datastore, and they allow self-hosting, they can't do that, because they need to support customers running that in the same cluster using sub-charts, in the same cluster but using separate charts, using a cloud platform's managed service, self-hosted but on its own VM or bare metal server, possibly some combination of both cloud and on-prem or multi-cloud. A single easy button can't handle all of those cases. Generally, this is probably why so many companies try to make you use their managed services only and don't want you to self-host, because supporting that very quickly becomes a problem of combinatorial explosion.

      • pmig 21 days ago

        Interestingly, this is the problem we initially tried to solve. We built a Kubernetes Operator (back in the day in Kotlin) that does the full lifecycle management of these kinds of apps, including management of CRs for database upgrades etc.

        This project still runs at customers in production, but ultimately, as you pointed out, individual configurations became too big of a problem, as it was not feasible to build new operator versions for every configuration deviation. It was a fun project though: https://github.com/glasskube/operator

hobofan 22 days ago

Very cool!

I'm a big fan of the trend of on-premise deployable software standardizing on Kubernetes/Helm as a deployment method. From my perspective as an external freelancer, it provides a nice hand-off point to the internal IT/devops team.

Looking forward to trying this out on future projects!

----

I'm wondering how this relates to the Glasskube package manager? Obviously you just launched, and maybe you are still in the discovery phase, but is the general idea that this is the enterprisey incarnation of the Glasskube ideas, and that it will over time gain similar features to the existing Glasskube?

  • pmig 22 days ago

    Good question: the Glasskube package manager is an alternative to Helm, whereas Distr is the management/distribution layer, i.e. the layer connecting the deployment target to the ISV. On the deployment target the software gets installed via docker-compose, Helm or, in the future, the Glasskube Package Manager. The features don't really overlap; they complement each other.

mrbluecoat 22 days ago

How does this compare to https://github.com/balena-io/open-balena ?

  • louis_w_gk 22 days ago

    I wasn't aware of OpenBalena before, but at first glance it looks similar in concept. However, it seems focused solely on IoT and only supports Docker, whereas Distr is also designed for Kubernetes distribution. Happy to take a deeper look and provide a more detailed comparison if you're interested!

lclc 21 days ago

What is the advantage over a private docker registry combined with a public git-repo containing the docker-compose?

  • pmig 21 days ago

    The main advantage is our agent component that gives you insights into the deployment status, even if you are not connected to the host.

    You can also perform updates without being connected to the host.

  • rudasn 21 days ago

    Yeah, and put ansible-pull in the mix and call it a day :)

    I think the main advantage these kinds of tools bring is the user-friendliness for both the developer and customer, in terms of UIs, security, and auditing.

    App deployments, new releases, and all they entail are never in reality a commit/button push away. There's a bunch of stuff that needs to happen before, during, and after, with each app and deployment target having its own unique set of peculiarities.

    That's the main problem these tools need to solve, but that's where the money is, I guess.

INTPenis 21 days ago

I'm an IT infrastructure engineer at a software company in Sweden, and I have no idea what this product does.

Today we simply ship container images for each release, and they're deployed using IaC in a k8s cluster. What are we doing wrong that you're trying to fix?

  • pmig 20 days ago

    As PhilippGille pointed out, if you have a permanent connection to your cluster there is no need to use Distr. Distr empowers you to ship and monitor your application if you only get one-time access to the deployment target. You can install the agent once, and afterwards the agent creates a reverse tunnel (HTTPS polling for new actions to perform) that lets you perform upgrades and monitor the deployment even if you are not directly connected to it.
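    A rough sketch of that polling idea (illustrative only; the endpoints and payload shapes are assumptions, not the actual Distr protocol):

      // Illustrative sketch of the "reverse tunnel": the agent polls the hub over
      // HTTPS, performs any pending actions, and reports back.
      async function pollLoop(hubUrl: string, agentToken: string): Promise<void> {
          const headers = {Authorization: `Bearer ${agentToken}`};
          while (true) {
              const res = await fetch(`${hubUrl}/api/agent/actions`, {headers});
              const actions: {id: string; type: string}[] = await res.json();
              for (const action of actions) {
                  // e.g. apply a new compose file or run `helm upgrade`, then report status
                  await fetch(`${hubUrl}/api/agent/actions/${action.id}/status`, {
                      method: "POST",
                      headers,
                      body: JSON.stringify({status: "ok"}),
                  });
              }
              await new Promise((resolve) => setTimeout(resolve, 5_000)); // poll interval
          }
      }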

    • INTPenis 20 days ago

      Thanks, great explanation.

      So this could be solved with headscale.

  • PhilippGille 21 days ago

    I think one feature is like a reverse tunnel, so that a freelancer/consultant can check/maintain their software that's running in a customer's data center.

zbhoy 22 days ago

Is this a competitor or alternative to something like replicated.com?

  • louis_w_gk 22 days ago

    Yes, Distr is an Open Source alternative to Replicated.

popalchemist 22 days ago

Wow. Very impressed, going to look into this.

  • louis_w_gk 22 days ago

    Thanks, let us know what you think!