psviderski a day ago

Hey, creator here. Thanks for sharing this!

Uncloud[0] is a container orchestrator without a control plane. Think multi-machine Docker Compose with automatic WireGuard mesh, service discovery, and HTTPS via Caddy. Each machine just keeps a p2p-synced copy of cluster state (using Fly.io's Corrosion), so there's no quorum to maintain.

I’m building Uncloud after years of managing Kubernetes in small envs and at a unicorn. I keep seeing teams reach for K8s when they really just need to run a bunch of containers across a few machines with decent networking, rollouts, and HTTPS. The operational overhead of k8s is brutal for what they actually need.

A few things that make it unique:

- uses the familiar Docker Compose spec, no new DSL to learn

- builds and pushes your Docker images directly to your machines without an external registry (via my other project unregistry [1])

- imperative CLI (like Docker) rather than declarative reconciliation. Easier mental model and debugging

- works across cloud VMs, bare metal, even a Raspberry Pi at home behind NAT (all connected together)

- minimal resource footprint (<150 MB RAM)

[0]: https://github.com/psviderski/uncloud

[1]: https://github.com/psviderski/unregistry

  • topspin a day ago

    "I keep seeing teams reach for K8s when they really just need to run a bunch of containers across a few machines"

    Since k8s is very effective at running a bunch of containers across a few machines, it would appear to be exactly the correct thing to reach for. At this point, running a small k8s operation, with k3s or similar, has become so easy that I can't find a rational reason to look elsewhere for container "orchestration".

    • jabr a day ago

      I can only speak for myself, but I considered a few options, including "simple k8s" like [Skate](https://skateco.github.io/), and ultimately decided to build on uncloud.

      It was as much personal "taste" as anything, and I would describe the choice as similar to preferring JSON over XML.

      For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.

      • 1dom a day ago

        > For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.

        I feel the same. I feel like it's a me problem. I was able to build and run massive systems at scale and never used kubernetes. Then, all of a sudden, around 2020, any time I wanted to build or run or do anything at scale, everywhere said I should just use kubernetes. And then when I wanted to do anything with docker in production, not even at scale, everywhere said I should just use kubernetes.

        Then there was a brief period around 2021 where everyone - even kubernetes fans - realised it was being used everywhere, even when it didn't need to be. "You don't need k8s" became a meme.

        And now, here we are, again, lots of people saying "just use k8s for everything".

        I've learned it enough to know how to use it and what I can do with it. I still prefer to use literally anything else apart from k8s when building, and the only time I've ever felt k8s has been really needed to solve a problem is when the business has said "we're using k8s, deal with it".

        It's like the Javascript or WordPress of the infrastructure engineering world - it became the lazy answer, IMO. Or the me problem angle: I'm just an aged engineer moaning at having to learn new solutions to old problems.

        • hhh 21 hours ago

          It’s a nice portable target, with very well defined interfaces. It’s easy to start with and pretty easy to manage if you don’t try to abuse it.

        • tick_tock_tick 18 hours ago

          I mean the real answer is that it got easy to deploy k8s, so the justification for not using it kinda vanished.

      • tw04 a day ago

        How many flawless, painless major version upgrades have you had with literally any flavor of k8s? Because in my experience, that’s always a science experiment that results in such pain people end up just sticking at their original deployed version while praying they don’t hit any critical bugs or security vulnerabilities.

        • mxey a day ago

          I’ve run Kubernetes since 2018 and I can count on one hand the times there were major issues with an upgrade. Have sensible change management and read the release notes for breaking changes. The amount of breaking changes has also gone way down in recent years.

          • freedomben 2 hours ago

            Same. I think maybe twice in that time frame we've had a breaking change, and those did warn us for several versions. Typically the only "fix" we need to apply is changing the API version on objects that have matured beyond beta.

        • jauntywundrkind a day ago

          I applaud you for having a specific complaint. 'You might not need it', 'it's complex' and 'for some reason it bothers me' are all these vibes-based whinges that are so abundant. But with nothing specific, nothing contestable.

    • nullpoint420 a day ago

      100%. I’m really not sure why K8S has become the complexity boogeyman. I’ve seen CDK apps or docker compose files that are way more difficult to understand than the equivalent K8S manifests.

      • this_user a day ago

        Docker Compose is simple: You have a Compose file that just needs Docker (or Podman).

        With k8s you write a bunch of manifests that are 70% repetitive boilerplate. But actually, there is something you need that cannot be achieved with pure manifests, so you reach for Kustomize. But Kustomize actually doesn't do what you want, so you need to convert the entire thing to Helm.

        You also still need to spin up your k8s cluster, which itself consists of half a dozen pods just so you have something where you can run your service. Oh, you wanted your service to be accessible from outside the cluster? Well, you need to install an ingress controller in your cluster. Oh BTW, the nginx ingress controller is now deprecated, so you have to choose from a handful of alternatives, all of which have certain advantages and disadvantages, and none of which are ideal for all situations. Have fun choosing.

        • stego-tech a day ago

          Literally got it in one, here. I’m not knocking Kubernetes, mind, and I don’t think anyone here is, not even the project author. Rather, we’re saying that the excess of K8s can sometimes get in the way of simpler deployments. Even streamlined Kubernetes (microk8s, k3s, etc) still ultimately bring all of Kubernetes to the table, and that invites complexity when the goal is simplicity.

          That’s not bad, but I want to spend more time trying new things or enjoying the results of my efforts than maintaining the underlying substrates. For that purpose, K8s is consistently too complicated for my own ends - and Uncloud looks to do exactly what I want.

        • quectophoton a day ago

          > Docker Compose is simple: You have a Compose file that just needs Docker (or Podman).

          And if you want to use more than one machine then you run `docker swarm init`, and you can keep using the Compose file you already have, almost unchanged.

          It's not a K8s replacement, but I'm guessing for some people it would be enough and less effort than a full migration to Kubernetes (e.g. hobby projects).
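
          For illustration, a minimal sketch of what that looks like (image and names are placeholders; the `deploy` section is part of the Compose spec and is what Swarm reads when you run `docker stack deploy`):

              # compose.yaml, deployable with `docker stack deploy -c compose.yaml myapp`
              services:
                web:
                  image: nginx:alpine   # placeholder image
                  ports:
                    - "8080:80"
                  deploy:
                    replicas: 3         # Swarm spreads these across the nodes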

        • horsawlarway a day ago

          This is some serious rose colored glasses happening here.

          If you have a service with a simple compose file, you can have a simple k8s manifest to do the same thing. Plenty of tools convert right between the two (incl kompose, which k8s literally hands you: https://kubernetes.io/docs/tasks/configure-pod-container/tra...)

          Frankly, you're messing up by including kustomize or helm at all in 80% of cases. Just write the (agreed on tedious boilerplate - the manifest format is not my cup of tea) yaml and be done with the problem.

          And no - you don't need an ingress. Just spin up a nodeport service, and you have the literal identical experience to exposing ports with compose - it's just a port on the machines running the cluster (any of them - magic!).
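
          For the record, a minimal sketch of that (assumes a workload already running with the label `app: web` on container port 8080; the nodePort value is just an example):

              apiVersion: v1
              kind: Service
              metadata:
                name: web
              spec:
                type: NodePort
                selector:
                  app: web            # matches your existing pods
                ports:
                  - port: 80          # in-cluster service port
                    targetPort: 8080  # container port
                    nodePort: 30080   # reachable on <any-node-ip>:30080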

          You don't need to touch an ingress until you actually want external traffic using a specific hostname (and optionally tls), which is... the same as compose. And frankly - at that point you probably SHOULD be thinking about the actual tooling you're using to expose that, in the same way you would if you ran it manually in compose. And sure - arguably you could move to gateways now, but in no way is the ingress api deprecated. They very clearly state...

          > "The Ingress API is generally available, and is subject to the stability guarantees for generally available APIs. The Kubernetes project has no plans to remove Ingress from Kubernetes."

          https://kubernetes.io/docs/concepts/services-networking/ingr...

          ---

          Plenty of valid complaints for K8s (yaml config boilerplate being a solid pick) but most of the rest of your comment is basically just FUD. The complexity scale for K8s CAN get a lot higher than docker. Some organizations convince themselves it should and make it very complex (debatably for sane reasons). For personal needs... Just run k3s (or minikube, or microk8s, or k3d, or etc...) and write some yaml. It's at exactly the same complexity as docker compose, with a slightly more verbose syntax.

          Honestly, it's not even as complex as configuring VMs in vsphere or citrix.

          • KronisLV a day ago

            > And no - you don't need an ingress. Just spin up a nodeport service, and you have the literal identical experience to exposing ports with compose - it's just a port on the machines running the cluster (any of them - magic!).

            https://kubernetes.io/docs/concepts/services-networking/serv...

            You might need to redefine the port range from the default 30000-32767. Actually, if you want to avoid the ingress abstraction and maybe want to run a regular web server container of your choice to act as one (maybe you just prefer a config file, maybe that's what your legacy software is built around, maybe you need/prefer Apache2, go figure), you'd probably want to be able to run it on 80 and 443. Or 3000 or 8080 for some other software, out of convenience and simplicity.

            Depending on what kind of K8s distro you use, it's thankfully not insanely hard to change, though: https://docs.k3s.io/cli/server#networking But again, that's kind of going against the grain.
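
            As a sketch for k3s specifically (assuming the standard kube-apiserver flag passed through via the k3s config file; double-check the linked docs before relying on this):

                # /etc/rancher/k3s/config.yaml: widen the NodePort range so 80/443 are allowed
                kube-apiserver-arg:
                  - "service-node-port-range=80-32767"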

      • everforward a day ago

        It's not the manifests so much as the mountain of infra underlying it. k8s is an amazing abstraction over dynamic infra resources, but if your infra is fairly static then you're introducing a lot of infra complexity for not a ton of gain.

        The network is complicated by the overlay network, so "normal" troubleshooting tools aren't super helpful. Storage is complicated by k8s wanting to fling pods around so you need networked storage (or to pin the pods, which removes almost all of k8s' value). Databases are annoying on k8s without networked storage, so you usually run them outside the cluster and now you have to manage bare metal and k8s resources.

        The manifests are largely fine, outside of some of the more abnormal resources like setting up the nginx ingress with certs.

      • esseph a day ago

        Managing hundreds or thousands of containers across hundreds or thousands of k8s nodes has a lot of operational challenges.

        Especially in-house on bare metal.

        • lnenad a day ago

          But that's not what anyone is arguing here, nor what (to me it seems at least) uncloud is about. It's about a simpler HA multi-node setup with a single-digit or low-double-digit number of containers.

          • esseph 14 hours ago

            > I’m really not sure why K8S has become the complexity boogeyman.

            That's what I was responding to. It's not the app management that becomes a pain, it's the cluster management, lifecycle, platform API deprecations, etc.

        • Glemkloksdjf a day ago

          Which is fine, because it absolutely matches the result.

          You would not be able to operate hundreds or thousands of nodes of any kind without operational complexity, and k8s helps you a lot here.

        • nullpoint420 a day ago

          Talos has made this super easy in my experience.

        • sceptic123 a day ago

          I don't think that argument matches with the "just need to run a bunch of containers across a few machines" case

    • psviderski a day ago

      That’s awesome if k3s works for you, nothing wrong with this. You’re simply not the target user then.

    • PunchyHamster a day ago

      k3s makes it easy to deploy, not to debug any problems with it. It's still essentially adding a few hundred thousand lines of code to your infrastructure, and if it's a small app you need to deploy, you're also wasting a bit of RAM.

      • topspin 20 hours ago

        "not to debug any problems with it"

        K3s is just a repackaged, simplified k8s distro. You get the same behavior and the same tools as you have any time you operate an on-premises k8s cluster, and these, in my experience, are somewhere between good and excellent. So I can't imagine what you have in mind here.

        "It's still essentially adding few hundred thousand lines of code into your infrastructure"

        Sure. And they're all there for a reason: it's what one needs to orchestrate containers via an API, as revealed by a vast horde of users and years of refinement.

    • tevon a day ago

      Perhaps it feels so easy given your familiarity with it.

      I have struggled to get things like this stood up and hit many footguns along the way.

    • matijsvzuijlen a day ago

      If you already know k8s, this is probably true. If you don't, it's hard to know what bits you need, and need to learn about, to get something simple set up.

      • epgui a day ago

        you could say that about anything…

        • morcus a day ago

          I don't understand the point? You can say that about anything, and that's the whole reason why it's good that alternatives exist.

          The clear target of this project is a k8s-like experience for people who are already familiar with Docker and docker compose but don't want to spend the energy to learn a whole new thing for low stakes deployments.

          • Glemkloksdjf a day ago

              Uncloud is so far away from k8s, it's not k8s-like.

            A normal person wouldn't think 'hey lets use k8s for the low stakes deployment over here'.

            • psviderski 17 hours ago

              >A normal person wouldn't think 'hey lets use k8s for the low stakes deployment over here'.

              I'm afraid I have to disappoint you

    • sureglymop 18 hours ago

      Except it isn't just "a way to run a bunch of containers across a few machines".

      It seems that way but in reality "resource" is a generic concept in k8s. K8s is a management/collaboration platform for "resources" and everything is a resource. You can define your own resource types too. And who knows, maybe in the future these won't be containers or even linux processes? Well it would still work given this model.

      But now, what if you really just want to run a bunch of containers across a few machines?

      My point is, it's overcomplicated and abstracts too heavily. Too smart even... I don't want my co workers to define our own resource types, we're not at a google scale company.

    • _joel a day ago

      Indeed, it seems a knee jerk response without justification. k3s is pretty damn minimal.

  • tex0 a day ago

    This is a cool tool, I like the idea. But the way `uc machine init` works under the hood is really scary. Lots of `curl | bash` run as root.

    While I would love to test this tool, this is not something I would run on any machine :/

    • psviderski a day ago

      Totally valid concern. That was a shortcut to iterate quickly in early development. It’s time to do it properly now. Appreciate the feedback. This is exactly the kind of thing I need to hear before more people try it.

    • redrove a day ago

      +1 on this

      I wanted to try it out but was put off by this[0]. It’s just straight up curl | bash as root from raw.githubusercontent.com.

      If this is the install process for a server (and not just for the CLI) I don’t want to think about security in general for the product.

      Sorry, I really wanted to like this, but pass.

      [0] https://github.com/psviderski/uncloud/blob/ebd4622592bcecedb...

    • jabr a day ago

      There is a `--no-install` flag on both `uc machine init` and `uc machine add` that skips that `curl | bash` install step.

      You need to prepare the machine some other way first then, but it's just installing docker and the uncloud service.

      I use the `--no-install` option with my own cluster, as I have my own pre-provisioning process that includes some additional setup beyond the docker/uncloud elements.

    • tontony a day ago

      Curious, what would be an ideal (secure) approach for you to install this (or similar) tool?

      • yabones a day ago

        The correct way would be to publish packages on a proper registry/repository and install them with a package manager. For example, create a 3rd party Debian repository, and import the config & signing key on install. It's more work, sure, but it's been the best practice for decades and I don't see that changing any time soon.

        • tontony 21 hours ago

          Sure, but it all boils down to trust at the end of the day. Why would you trust a third-party Debian repository (that e.g. has a different user namespace and no identity linking to GitHub) more than running something from evidently the same user from GitHub, in this specific case?

          I'm not disputing that a repository is nice because of versioning, signing, version yanking, etc., and I do agree that the process should be more transparent and verifiable for people who care about it.

      • rovr138 a day ago

        It's deploying a script, which then downloads uncloud using curl.

        The alternative is deploying the script along with the uncloud files it needs.

  • zbuttram a day ago

    Very cool! I think I'll have some opportunity soon to give it a shot, I have just the set of projects that have been needing a tool like this. One thing I think I'm missing after perusing the docs, however, is how one onboards other engineers to the cluster after it has been set up. And similarly, how does deployment from a CI/CD runner work? I don't see anything about how to connect to an existing cluster from a new machine, or at least not that I'm recognizing.

    • jabr a day ago

      There isn't a cli function for adding a connection (independently of adding a new machine/node) yet, but they are in a simple config file (`~/.config/uncloud/config.yaml`) that you can copy or easily create manually for now. It looks like this:

          current_context: default
          contexts:
            default:
              connections:
                - ssh: admin@192.168.0.10
                  ssh_key_file: ~/.ssh/uncloud
                - ssh: admin@192.168.0.11
                  ssh_key_file: ~/.ssh/uncloud
                - ssh: administrator@93.x.x.x
                  ssh_key_file: ~/.ssh/uncloud
                - ssh: sysadmin@65.x.x.x
                  ssh_key_file: ~/.ssh/uncloud
      
      And you really just need one entry for typical use. The subsequent entries are only used if the previous node(s) are down.

  • sam-cop-vimes a day ago

    I really like what is on offer here - thank you for building it. Re the private network it builds with WireGuard, how are services running within this private network supposed to access AWS services such as RDS securely? Tailscale has this: https://tailscale.com/kb/1141/aws-rds

    • psviderski 16 hours ago

      Thanks! If you're running the Uncloud cluster in AWS, service containers should be able to access RDS the same way the underlying EC2 instances can (assuming RDS is in the same VPC or reachable via VPC peering).

      The private container IPs will get NATed to the underlying EC2 IPs so requests to RDS will appear as coming from those instances. The appropriate Security Group(s) need to be configured as well. The limitation is that you can't segregate access at the service level, only at the EC2 instance level.

  • olegp a day ago

    How's this similar to and different from Kamal? https://kamal-deploy.org/

    • psviderski a day ago

      I took some inspiration from Kamal, e.g. the imperative model, but Kamal is more of a deployment tool.

      In addition to deployments, uncloud handles clustering - connects machines and containers together. Service containers can discover other services via internal DNS and communicate directly over the secure overlay network without opening any ports on the hosts.

      As far as I know kamal doesn’t provide an easy way for services to communicate across machines.

      Services can also be scaled to multiple replicas across machines.

      • olegp a day ago

        Thanks! I noticed afterwards that you mention Kamal in your readme, but you may want to add a comparison section where you compare your solution to others.

        Are you working on this full time and if so, how are you funding it? Are you looking to monetize this somehow?

        • psviderski a day ago

          Thank you for the suggestion!

          I’m working full time on this, yes. Funding from my savings at the moment and don’t have plans for any external funding or VC.

          For monetisation, I'm considering building a self-hosted and managed (SaaS) web UI for managing remote clusters and apps on them, with value-added PaaS-like features.

          • olegp a day ago

            That sounds interesting, maybe I could help on the business side of things somehow. I'll email you my calendar link.

      • cpursley a day ago

        This is neat, regarding clustering - can this work with distributed erlang/elixir?

        • psviderski a day ago

          I don't know the specific requirements for distributed Erlang/Elixir, but I believe the networking should support it. Containers get unique IPs on a WireGuard mesh with direct connectivity and DNS-based service discovery.

  • INTPenis a day ago

    We have similar backgrounds, and I totally agree with your k8s sentiment.

    But I wonder what this solves?

    Because I stopped abusing k8s and started using more container hosts with quadlets instead, using Ansible or Terraform depending on what the situation calls for.

    It works just fine imho. The CI/CD pipeline triggers a podman auto-update command, and just like that all containers are running the latest version.

    So what does uncloud add to this setup?

    • psviderski 17 hours ago

      Great setup! Where Uncloud helps is when you need containers across multiple machines to talk to each other.

      Your setup sounds like single-node or nodes that don't need to discover each other. If you ever need multi-node with service-to-service communication, that's where stitching together Ansible + Terraform + quadlets + some networking layer starts to get tedious. Uncloud tries to make that part simple out of the box.

      You also get the reverse proxy (Caddy) that automatically reconfigures depending on what containers are running on machines. You just deploy containers and it auto-discovers them. If a container crashes, the configuration is auto-updated to remove the faulty container from the list of upstreams.

      Plus a single CLI you run locally or on CI to manage everything, distribute images, stream logs. A lot of convenience that I'm putting together to make the user experience more enjoyable.

      But if you don't need that, keep doing what works.

      • INTPenis 7 hours ago

        It's starting to sound a lot like k8s to me. :D

        Technically I could allow my web proxy to discover my services today already, but I refuse to have Traefik (in my case) running as the same user as my services. I prefer to only let them talk over TCP/IP and configure them dynamically with Ansible instead.

        It always amazed me that people used that feature in Traefik or Caddy, because it essentially requires your web proxy to have container access to all your other services. It seems a bit intimate to me, but maybe I'm old school.

    • lifty 19 hours ago

      Uncloud seems much easier to manage than writing your own ansible or terraform.

  • mosselman a day ago

    You have a graph that shows a multi-provider setup for a domain. Where would routing to either machine happen? As in, which IP would you use on the DNS side?

    • psviderski a day ago

      For the public cluster with multiple ingress (caddy) nodes you'd need a load balancer in front of them to properly handle routing and outage of any of them. You'd use the IP of the load balancer on the DNS side.

      Note that a DNS A record with multiple IPs doesn't provide failover, only round robin. But you can use the Cloudflare DNS proxy feature as a poor man's LB. Just add 2+ proxied A records (orange cloud) pointing to different machines. If one goes down with a 52x error, Cloudflare automatically fails over to the healthy one.

      • snthpy 9 hours ago

        I looked into this yesterday for making Caddy HA on my Proxmox cluster and stumbled upon keepalived. It will provide you with a virtual IP and failover but not load balancing, so you'd still need to point it at something like HAProxy for that.

        Could be something interesting to integrate though.

    • calgoo a day ago

      Not OP, but you could do "simple" dns load balancing between both endpoints.

      • psviderski 16 hours ago

        As I mentioned in the sibling comment, please note that in this case you only get round-robin, not failover. If one of the addresses is down, the DNS record will continue returning it and users will hit a dead end.

        A proper load balancer or Cloudflare DNS proxy would handle this.

  • avan1 a day ago

    Thanks for both great tools. There's just one thing I didn't understand: the request flow. Imagine we have 10 servers, where one request goes to server 1 and another goes to server 7, for example. And since it's zero-downtime, how does it know that server 5 is updating, so that no requests go there until it's back up?

    • psviderski a day ago

      I think there are two different cases here. Not sure which one you’re talking about.

      1. External requests, e.g. from the internet via the reverse proxy (Caddy) running in the cluster.

      The rollout works on the container, not the server level. Each container registers itself in Caddy so it knows which containers to forward and distribute requests to.

      When doing a rollout, a new version of the container is started first and registers in Caddy, then the old one is removed. This is repeated for each service container. This way, at any time there are running containers that serve requests.

      It doesn't tell any server that requests shouldn't go there. It just updates the upstreams in the Caddy config to send requests to the containers that are up and healthy.

      2. Service to service requests within the cluster. In this case, a service DNS name is resolved to a list of IP addresses (running containers). And the client decides which one to send a request to or whether to distribute requests among them.

      When the service is updated, the client needs to resolve the name again to get the up-to-date list of IPs. Many http clients handle this automatically so using http://service-name as an endpoint typically just works. But zero downtime should still be handled by the client in this case.
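
      A rough sketch of case 2 in Compose terms (service names and images are illustrative; the point is that the client just targets the other service by name):

          services:
            api:
              image: ghcr.io/example/api:latest       # illustrative image
              environment:
                # resolved via the internal DNS to the IPs of the healthy billing containers
                BILLING_URL: http://billing:8080
            billing:
              image: ghcr.io/example/billing:latest   # illustrative image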

  • unixfox a day ago

    Awesome tool! Does it provide some basic features that you would get from running a control plane?

    Like automatically rescheduling a container on another server if a server is down? Deploying on the least-filled server first if you have set limits in your containers?

    • psviderski a day ago

      Thank you! That's actually the trade off.

      There is no automatic rescheduling in uncloud by design. At least for now. We will see how far we can get without it.

      If you want your service to tolerate a host going down, you should deploy multiple replicas for that service on multiple machines in advance. 'uc scale' command can be used to run more replicas for an already deployed service.

      Longer term, I'm thinking we can have a concept of primary/standby replicas for services that can only have one running replica, e.g. databases. Something similar to how Fly.io does this: https://fly.io/docs/apps/app-availability/#standby-machines-...

      Deploying on the least-filled machine first is doable but not supported right now. By default, it picks the first machine randomly and tries to distribute replicas evenly among all available machines. You can also manually specify what target machine(s) each service should run on in your Compose file.
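
      As a rough sketch in Compose terms (assuming the standard `deploy.replicas` field is honored here; 'uc scale' achieves the same imperatively after the initial deploy):

          services:
            web:
              image: ghcr.io/example/web:latest   # illustrative image
              deploy:
                replicas: 3   # pre-provisioned redundancy across machines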

      I want to avoid recreating the complexity with placement constraints, (anti-)affinity, etc. that makes K8s hard to reason about. There is a huge class of apps that need more or less static infra, manual placement, and a certain level of redundancy. That's what I'm targeting with Uncloud.

  • snthpy 9 hours ago

    Wow, this sounds very cool.

    I share the same concern as the top comments on security, but I'm going to check it out in more detail.

    I wonder if you integrated some decentralized identity layer with DIDs, if this could be turned into some distributed compute platform?

    Also, what is your thinking on high availability and failover?

  • 11mariom a day ago

    > - uses the familiar Docker Compose spec, no new DSL to learn

    But this goes with the assumption that one already knows the Docker Compose spec. For the exact same reason I'm in love with `podman kube play` - to just use k8s manifests to quickly test-run on a local machine, and not bother with some "legacy" compose.

    (I never liked Docker Inc. so I never learned THEIR tooling, it's not needed to build/run containers)

    • TingPing a day ago

      podman-compose works fine. It’s a very simple format.

  • utopiah a day ago

    Neat, as you include quite a few tools for services to be reachable together (not necessarily to the outside), do you also have tooling to make those services more interoperable?

    • jabr a day ago

      Do you have an example of what you mean? I'm not entirely clear on your question.

  • oulipo2 a day ago

    So it's a kind of better Docker Swarm? It's interesting, but honestly I'd rather have something declarative, so I can use it with Pulumi. Would it be complicated to add a declarative engine on top of the tool, which discovers what services are already up, does a diff with the new declaration, and handles changes?

    • psviderski a day ago

      This is exactly how it works now. The Compose file is the declarative specification of your services you want to run.

      When you run 'uc deploy' command:

      - it reads the spec from your compose.yaml

      - inspects the current state of the services in the cluster

      - computes the diff and deployment plan to reconcile it

      - executes the plan after the confirmation

      Please see the docs and demo: https://uncloud.run/docs/guides/deployments/deploy-app

      The main difference with Docker Swarm is that the reconciliation process is run on your local/CI machine as part of the 'uc deploy' CLI command execution, not on the control plane nodes in the cluster.

      And it's not running in a loop automatically. If the command fails, you get instant feedback with errors you can address, and you can rerun the command.

      It should be pretty straightforward to wrap the CLI logic in a Terraform or Pulumi provider. The design principles are very similar and it's written in Go.

      • snthpy 9 hours ago

        That's really interesting and cool. In that case calling it imperative rather than declarative is underselling it imho. I haven't worked that much with Terraform but in my usage from the cli, that is how it works too and I consider that declarative.

        I get that putting the declarative spec in the control plane and having the service autoreconcile continuously is another layer but this is great as a start.

        In fact, could you not just cron the CLI deployment command on the nodes and get an effective poor man's declarative layer to guard against node failures, if you're ok with a 1 min or 1 sec recovery objective?

        • jabr an hour ago

          > In fact could you not just cron the cli deployment command on the nodes and get an effective poor man's declarative layer

          In the project discord, a user recently experimented with a custom setup that sounds very similar to what you describe.

          In fact, a big part of uncloud’s appeal to me is that it also provides powerful building blocks for more complex, custom systems like this, not just the streamlined workflow for simpler, standard cases.

  • woile a day ago

    does it support ipv6?

    • psviderski a day ago

      There is an open issue that confirms enabling ipv6 for containers works: https://github.com/psviderski/uncloud/issues/126 But this hasn’t been enabled by default.

      What specifically do you mean by ipv6 support?

      • woile a day ago

        I'm no expert, so I'm not sure if I'll explain it correctly. But I've been using docker swarm on a server, with Traefik as a reverse proxy, and it just doesn't seem to work with ipv6, even though I've tried a lot (an issue that might be related: https://github.com/moby/moby/issues/24379)

      • miyuru a day ago

        > What specifically do you mean by ipv6 support?

        This question does not make sense. This is equivalent to asking "What specifically do you mean by ipv4 support"

        These days both protocols must be supported, and if there is a blocker it should be clearly mentioned.

        • justincormack a day ago

          How do you want to allocate ipv6 addresses to containers? Turns out there are lots of answers. Some people even want to do ipv6 NAT.

          • lifty a day ago

            A really cool way to do it is how Yggdrasil project does it (https://yggdrasil-network.github.io/implementation.html#how-...). They basically use public keys as identities and they deterministically create an IPv6 address from the public key. This is beautiful and works for private networks, as well as for their global overlay IPv6 network.

            What do you think about the general approach in Uncloud? It almost feels like a cousin of Swarm. Would love to get your take on it.

          • GoblinSlayer a day ago

            Like docker? --fixed-cidr-v6=2001:db8:1::/64

  • doctorpangloss a day ago

    haha, uncloud does have a control plane: the mind of the person running "uc" CLI commands

    > I’m building Uncloud after years of managing Kubernetes

    did you manage Kubernetes, or did you make the fateful mistake of managing microk8s?

  • knowitnone3 a day ago

    but if they already know how to use k8s, then they should use it. Now they have to know k8s AND know this tool?

  • Glemkloksdjf a day ago

    So you build an insecure version of nomad/kubernetes and co?

    If you do anything professional, you better choose proven software like kubernetes or managed kubernetes or whatever else all the hyperscalers provide.

    And the complexity you are solving now or have to solve, k8s solved. IaC for example, cloud provider support for provisioning an LB out of the box, cert-manager, all the helm charts for observability, logging, an ecosystem to fall back to (operators), ArgoCD <3, storage provisioning, proper high availability, kind for e2e testing in CI/CD, etc.

    I'm also always lost as to why people think k8s is so hard to operate. Just take a managed k8s. There are so many options out there and they are all compatible with the whole k8s ecosystem.

    Look, if you don't get Kubernetes, its use cases, advantages etc., fine, absolutely fine, but your solution is not an alternative to k8s. It's another container orchestrator like Nomad and k8s and co. with its own advantages and disadvantages.

    • bluepuma77 a day ago

      It's not a k8s replacement. It's for the small dev team with no k8s experience. For people that might not use Docker Swarm because they see it's a pretty dead project. For people who think "everyone uses k8s", so we should, too.

      I need to run on-prem, so managed k8s is not an option. Experts tell me I should have 2 FTE to run k8s, which I don't have. k8s has so many components, how should I debug that in case of issues without k8s experience? k8s APIs change continuously, how should I manage that without k8s experience?

      It's not a k8s replacement. But I do see a sweet spot for such a solution. We still run Docker Swarm on 5 servers, no hyperscalers, no API changes expected ;-)

      • snthpy 9 hours ago

        I still run docker swarm on 3 servers. Haven't needed to update it much over the past 5 years.

      • psviderski 16 hours ago

        How has your Swarm experience been so far? It's so disappointing that Docker seems to be slowly but steadily abandoning it. There are only a couple dozen mostly maintenance commits in the swarmkit repo for all of 2025 :sigh:

    • mgaunard a day ago

      Those are all sub-par cloud technologies which perform very badly and do not scale at all.

      Some people would rather build their own solutions to do these things, with fine-grained control and the ability to handle workloads more complex than a shopping cart website.

      • RyanHamilton 14 hours ago

        I've tried to refrain from commenting but your comment pushed me over the edge. I either want to dismiss your comment as ignorant that Amazon is just a shopping cart, or ignorant that you even need cloud technologies until you have 1000s of customers. But I must concede there's a chance you fall in that middle area and I'm wrong. It's < 5 percent. But yeah, sure... we have a scale problem and you're right, you've identified the nonsense cloud technologies that won't fix it. I'm glad you chimed in to convince us to build our own for 5000 customers.

        • mgaunard 7 hours ago

          Even kubectl slows down to a crawl with a thousand deployments on the same cluster.

          The protocols are bad, as is the tech supporting them.

JohnMakin a day ago

Having spent most of my career in kubernetes (usually managed by cloud), I always wonder when I see things like this, what is the use case or benefit of not having a control plane?

To me, the control plane is the primary feature of kubernetes and one I would not want to go without.

I know this describes operational overhead as a reason, but how it relates to the control plane is not clear to me. even managing a few hundred nodes and maybe 10,000 containers, relatively small - I update once a year and the managed cluster updates machine images and versions automatically. Are people trying to self host kubernetes for production cases, and that’s where this pain comes from?

Sorry if it is a rude question.

  • psviderski a day ago

    Not rude at all. The benefit is a much simpler model where you simply connect machines in a network where every machine is equal. You can add more, remove some. No need to worry about an HA 3-node centralised “cluster brain”. There isn’t one.

    It’s a similar experience when a cloud provider manages the control plane for you. But you have to worry about the availability when you host everything yourself. Losing etcd quorum results in an unusable cluster.

    Many people want to avoid this, especially when running at a smaller scale like a handful of machines.

    The cluster network can even partition, and each partition continues to operate, allowing you to deploy/update apps individually.

    That’s essentially what we all did in a pre-k8s era with chef and ansible but without the boilerplate and reinventing the wheel, and using the learnings from k8s and friends.

    • JohnMakin a day ago

      If you are a small operation trying to self-host k3s or k8s or any number of out-of-the-box installations that are probably at least as complex as Docker Compose swarms, any non-trivial production case presents similar problems in monitoring and availability as the ones you'd get with off-the-shelf cloud-provider managed services, except the managed solutions come without the pain in the ass. Except here you don't have a control plane.

      I have managed custom server clusters in a self hosted situation. the problems are hard, but if you’re small, why would you reach for such a solution in the first place? you’d be better off paying for a managed service. What situation forces so many people to reach to self hosted kubernetes?

      • snthpy 8 hours ago

        Yes, not everyone is allowed to use cloud services. There's also the cost. I haven't spent a cent on infrastructure in 5 years (other than my time). Using cloud services comes with extra costs and meetings to justify those costs. Plus, not everyone is in the USA or 1st world country where those costs are negligible. Necessity is the mother of invention.

      • rahkiin a day ago

        Price. Data sovereignty. Legal. All are valid reasons to self-host

    • _joel a day ago

      k3s uses sqlite, so not etcd.

      • davidgl a day ago

        It can use sqlite (single master), or for a cluster it can use pg or mysql, but etcd by default

        • _joel a day ago

          No, it's not. Read the docs[1] - sqlite is the default.

          "Lightweight datastore based on sqlite3 as the default storage backend. etcd3, MySQL, and Postgres are also available."

          [1]https://docs.k3s.io/

          • cobolcomesback a day ago

            This thread is about using multi-machine clusters, and sqlite cannot be used for multi-machine clusters in k3s. etcd is the default when starting k3s in cluster mode [1].

            [1] https://docs.k3s.io/datastore

            • _joel a day ago

              No, this thread is about multiple containers across machines. What you describe is multi-master for the server. You can run multiple agents across several nodes, therefore clustering the container workload across multiple container-hosting servers. Multi-master is something different.

              • cobolcomesback a day ago

                The very first paragraph of the first comment you replied to is about multi-master HA. The second sentence in that comment is about “every machine is equal”. k3s with sqlite is awesome, but it cannot do that.

        • _joel a day ago

          apologies, I misread this and gave a terse reply.

  • kelnos a day ago

    > a few hundred nodes and maybe 10,000 containers, relatively small

    That feels not small to me. For something I'm working on I'll probably have two nodes and around 10 containers. If it works out and I get some growth, maybe that will go up to, say, 5-7 nodes and 30 or so containers? I dunno. I'd like some orchestration there, but k8s feels way too heavy even for my "grown" case.

    I feel like there are potentially a lot of small businesses at this sort of scale?

  • baq a day ago

    > Are people trying to self host kubernetes

    Of course they are…? That’s half the point of k8s - if you want to self host, you can, but it’s just like backups: if you never try it, you should assume you can’t do it when you need to

  • snthpy 9 hours ago

    For an SME with nonetheless critical workloads, 10000 containers is not small. To me that's massive in fact. I run less than 10 but I need those to be HA. uncloud sounds great for my use case.

  • davedx a day ago

    > a few hundred nodes and maybe 10,000 containers, relatively small

    And that's just your CI jobs, right? ;)

  • motoboi a day ago

    Kubernetes is not only an orchestrator but a scheduler.

    It is a way to run arbitrary processes on a bunch of servers.

    But what if your processes are known beforehand? Then you don't need a scheduler, nor an orchestrator.

    If it's just your web app with two containers and nothing more?

  • esseph a day ago

    Try it on bare metal where you're managing the distributed storage and the hardware and the network and the upgrades too :)

    • JohnMakin a day ago

      Why would you want to do that though?

      On cloud, in my experience, you are mostly paying for compute with managed kubernetes instances. The overhead and price is almost never kubernetes itself, but the compute and storage you are provisioning, which, thanks to the control plane, you have complete control over. What am I missing?

      I wouldn't dare, with a small shop, try to self-host a production kubernetes solution unless I was under duress. But I just don't see what the control plane has to do with it. It's the feature that makes kubernetes worth it.

      • LelouBil a day ago

        I am in the process of redoing all of my self-hosting (cloud storage, sso, media server, and a lot more), which previously was a bunch of docker compose files deployed by Ansible. This quickly became unmanageable.

        Now I almost finished the setting up part using a single-node (for now) Kubernetes cluster running with Talos Linux, and all of the manifest files managed with Cue lang (seriously, I would have abandoned it if I had not discovered Cue to generate and type check all of the yaml).

        I think Kubernetes is the right solution for the complexity of what I'm running, but even though it was a hassle to manage the storage, the backups, the auth, the networking and so on, I much prefer having all of this hosted at my house.

        But I agree with the control plane part, just pointing out my use case for self-hosting k8s

      • esseph a day ago

        May need low latency, may have several large data warehouses different workloads need to connect to, etc.

        Regulated industries, transportation and logistics companies, critical industries, etc.

      • esseph 14 hours ago

        I was responding to:

        > even managing a few hundred nodes and maybe 10,000 containers, relatively small - I update once a year and the managed cluster updates machine images and versions automatically. Are people trying to self host kubernetes for production cases, and that’s where this pain comes from?

        Much of the pain of kubernetes is not the day to day care and feeding of the applications most of the time, it's managing the cluster itself, upgrades, hardware, etc.

        Many regulated industries cannot run certain workloads in "the cloud", and that's where the pain of running kubernetes (at least at first) comes from.

    • lillecarl a day ago

      Tinkerbell / MetalKube, ClusterAPI, Rook, Cilium?

      A control plane makes controlling machines easier, that's the point of a control plane.

      • esseph a day ago

        I'm not debating the control plane, I'm responding to the question about "are people running it themselves in production?” (non-managed on-prem infra)

        It can be challenging. Lots and lots of knobs.

  • weitendorf a day ago

    I'm working on a similar project (here's the v0 of its state management and the libraries its "local control plane" will use to implement a mesh https://github.com/accretional/collector) and worked on the data plane for Google Cloud Run/Functions:

    IMO kubernetes is great if your job is to fiddle with Kubernetes. But damn, the overhead is insane. There is this broad swathe of middle-sized tech companies and non-tech Internet application providers (eg ecommerce, governments, logistics, etc.) that spend a lot of their employees' time operating Kubernetes clusters, and a lot of money on the compute for those clusters, which they probably overprovision and also overpay for through some kind of managed Kubernetes/hyperscaler platform + a bunch of SaaS for things like metrics and logging, container security products, alerting. A lot of these guys are spending 10-40% of their budget on compute, payroll, and SaaS to host CRUD applications that could probably run on a small number of servers without a "platform" team behind it, just a couple of developers who know what they're doing.

    Unless they're paying $$$ each of these deployments is running their own control plane and dealing with all the operational and cognitive overhead that entails. Most of those are running in a small number of datacenters alongside a bunch of other people running/managing/operating kubernetes clusters of their own. It's insanely wasteful because if there were a proper multitenant service mesh implementation (what I'm working on) that was easy to use, everybody could share the same control plane ~per datacenter and literally just consume the Kubernetes APIs they actually need, the ones that let them run and orchestrate/provision their application, and forget about all the fucking configuration of their cluster. BTW, that is how Borg works, which Kubernetes was hastily cobbled-together to mimic in order to capitalize on Containers Being So Hot Right Now.

    The vast majority of these Kubernetes users just want to run their applications, their customers don't know or care that Kubernetes is in the picture at all, and the people writing the checks would LOVE to not be spending so much and money on the same platform engineering problems as every other midsize company on the Internet.

    > what is the use case or benefit of not having a control plane?

    All that is to say, it's not having to pay for a bunch of control plane nodes and SaaS and a Kubernetes guy/platform team. At small and medium scales, it's running a bunch of container instances as long as possible without embarking on a 6-24mo, $100k-$10m+ expedition to Do Kubernetes. It's not having to secure some fricking VPC with a million internal components and plugins/SaaS, it's not letting some cloud provider own your soul, and not locking you in to something so expensive you have to hire an entire internal team of Kubernetes-guys to set it up.

    All the value in the software industry comes from the actual applications people are paying for. So the better you can let people do that without infrastructure getting in the way, the better. Making developers deal with this bullshit (or deciding to have 10-30% of your developers deal with it fulltime) is what gets in the way: https://kubernetes.io/docs/concepts/overview/components/

    • JohnMakin 21 hours ago

      The experience you are describing has overwhelmingly not been my own, nor anyone in my space I know.

      I can only speak most recently for EKS, but the cost is spent almost entirely on compute. I’m a one man shop managing 10,000 containers. I basically only spend on the compute itself, which is not all that much, and certainly far, far less than hiring a sys admin. Self hosted anything would be a huge PITA for me and likely end up costing more.

      Yes, you can avoid kubernetes and being a “slave” to cloud providers, but I personally believe you’re making infrastructure tradeoffs in a bad way, and likely spending as much in the long run anyway.

      maybe my disconnect here is that I mostly deal with full production scale applications, not hobby projects I am hosting on my own network (nothing wrong with that, and I would agree k8s is overkill for something like that).

      Eventually though, at scale, I strongly believe you will need or want a control plane of some type for your container fleets, and that typically ends up looking or acting like k8s.

strzibny 5 hours ago

I think this is pretty cool. Congrats on the project, might play with it later.

If you want something even simpler, something that doesn't run on your servers at all, you can look at Kamal: https://kamal-deploy.org

What I like about Kamal is that it's backed by a company that actually fully moved out of K8s and cloud, so every release is battle-tested first.

raphinou a day ago

I'm a docker swarm user, and this is the first alternative that looks interesting to me!

Some questions I have based on my swarm usage:

- do you plan to support secrets?

- with swarm and traefik, I can define url rewrite rules as container labels. Is something equivalent available?

- if I deploy 2 compose 'stacks', do all containers have access to all other containers, even in the other stack?

  • psviderski 15 hours ago

    >with swarm and traefik, I can define url rewrite rules as container labels. Is something equivalent available?

    Yep, you define the mapping between the domain name and the internal container port as `x-ports: app.example.com:8000/https` in the compose file. Or you can specify a custom Caddy config for the service as `x-caddy: Caddyfile`, which allows you to customise it however you like. See https://uncloud.run/docs/concepts/ingress/publishing-service...
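
    A minimal sketch following that `x-ports` form (service name, domain and image are placeholders; see the linked ingress docs for the exact shape):

        services:
          app:
            image: ghcr.io/example/app:latest   # placeholder image
            # publish internal port 8000 at https://app.example.com via the built-in Caddy
            x-ports: app.example.com:8000/https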

    >if I deploy 2 compose 'stacks', do all containers have access to all other containers, even in the other stack?

    Yes, there is no network isolation between containers from different services/stacks at the moment. Here is an open discussion on stack/namespace/environment/project concepts and isolation: https://github.com/psviderski/uncloud/discussions/94.

    What's your use case and how would you want this to behave?

    • raphinou 8 hours ago

      My personal preference is to have the different stacks isolated by default (+ the possibility of intra-stack isolation using networks).

      I'm deploying Swarm and traefik as described here: https://dockerswarm.rocks/traefik/#create-the-docker-compose...

      I like that I can put the containers to be exposed on the traefik-public network, and keep others like databases unreachable from Traefik. This organisation of networks is very useful, allowing me to make containers reachable across stacks, but also to keep some containers in a stack reachable only from other containers on the same network in that same stack.

  • tontony 19 hours ago

    Secrets -- yes, it's being tracked here: https://github.com/psviderski/uncloud/issues/75 Compose configs are already supported and can be used to inject secrets as well, but there's no encryption at rest in that case, so it might not be ideal for everyone.
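
    A minimal sketch of the Compose configs approach mentioned above (paths and names are illustrative; as noted, the file contents are not encrypted at rest):

        services:
          app:
            image: ghcr.io/example/app:latest    # illustrative image
            configs:
              - source: app_api_key
                target: /run/app/api_key         # path inside the container
        configs:
          app_api_key:
            file: ./secrets/api_key.txt          # plain file on the deploying machine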

    Regarding questions 2 and 3, the short answers are "not at the moment" and "yes, for now", here's a relevant discussion that touches on both points: https://github.com/psviderski/uncloud/discussions/94

    Speaking of Swarm and your experience with it: in your opinion, is there anything that Swarm lacks or makes difficult, that tools like Uncloud could conceptually "fix"?

    • raphinou 7 hours ago

      Swarm is not far from my dream deploy solution, but here are some points that might be better, some of them being already better in uncloud I think:

      - energy in the community is low, it's hard to find an active discussion channel of swarm users

      - swarm does not support the complete compose file format. This is really annoying

      - sometimes, deploys fail for unclear reasons (e.g. a network was not found, but why, as it's defined in the compose file?) and work on the next try. This has never led to problems, but it doesn't feel right

      - working with authenticated/custom registries is somewhat cumbersome

      - having to work with registries to have the same image deployed on all nodes is sometimes annoying. It would be cool to have images spread across nodes.

      - there's no contact between devs and users. I've just discovered uncloud and I've had more contact with its devs here than in years of using swarm!

      - the firewalling is not always clear/clean

      - logs accessibility (service vs container) and container identification: when a container fails to start, it's sometimes harder than needed to debug (esp. when it is because the image is not available)

      • tontony 6 hours ago

        Thanks for the detailed overview!

nrki 5 hours ago

Very neat tool. Thank you for your efforts!

I am wondering how state replication works on the backend. The design mentions using CRDTs and a gossip protocol, but I'm not sure what is actually implemented, as it is a tad vague. I haven't dug into the code so forgive me if it is obvious or explained elsewhere.

wg0 a day ago

This looks really neat and great! Amazing job!

BTW just looking at other variations on the theme:

- https://dokploy.com/

- https://coolify.io/

- https://demo.kubero.dev/

Feel free to add more.

  • dewey 19 hours ago

    There's a good list here: https://dbohdan.com/self-hosted-paas

    I'm always looking for new alternatives there. I recently tried Coolify but it didn't feel very polished and was mostly clunky. I'm still happy with Dokku at this point but would love a better UI for managing databases etc.

    • wg0 19 hours ago

      Okay, could you please share your thoughts as a user:

      - What databases do you want to work with?

      - What functionality do you want from such a UI?

      - What database sizes are we talking about here?

      Asking because I am tinkering with a similar idea.

chuckadams a day ago

I'm pretty happy with k3s, but I'm also happy to see some development happening in the space between docker compose and full-blown kubernetes. The wireguard integration in particular intrigues me.

stevefan1999 a day ago

If not K8S, why not Nomad (https://github.com/hashicorp/nomad)?

  • tontony a day ago

    Nomad still has a tangible learning curve, which (in my very biased opinion) is almost non-existent with Uncloud assuming the user has already heard about Docker and Compose.

  • weitendorf a day ago

    Their license does not allow you to modify it and then offer it as a service to others: https://github.com/hashicorp/nomad/blob/main/LICENSE

    You can't really do anything with it except work for Hashicorp for free, or create a fork that nobody is allowed to use unless they self-host it.

    • stevefan1999 15 hours ago

      well you can always fork the version before the license change

  • m1keil a day ago

    Nomad is great, but you will still end up with a control plane.

    • sgt a day ago

      Isn't Nomad pretty much dead now?

      • m1keil a day ago

        They had quite a few releases in the last year, so it's not dead, that's for sure, but it's unclear how many new customers they are able to sign up. And with IBM in charge, it's also unclear at what point they will lose interest.

vanillax 18 hours ago

Kubernetes is not hard. Taking the time to read the manual seems to be hard for most people.

indigodaddy a day ago

Very cool, so sort of like Dokku but simpler/easier to use?

Looks like the docs assume the management of a single cluster. What if you want to manage multiple/distinct clusters from the same uc client/management env?

  • tontony a day ago

    > What if you want to manage multiple/distinct clusters

    Uncloud supports having multiple contexts (think: clusters) in the same configuration file, or you can use separate config files (via the --uncloud-config flag).

    https://uncloud.run/docs/cli-reference/uc_ctx

fuckinpuppers a day ago

Does it support a way to bundle things close to each other, for example, not having a database container hosted in a different datacenter than the web app?

  • jabr a day ago

    The `compose.yaml` spec for services lets you specify which machines to deploy to, so you could target the database and web app to the same machine (or subset of machines).

    There is also an internal DNS for service discovery, and it supports a `nearest.` prefix that preferentially uses instances of a service running on the same machine. For example, I run a globally replicated NATS service and connect to it from other services via the `nearest.nats.internal` address, which resolves to the machine-local NATS node.
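
    Roughly like this (treat the `x-machines` placement key, machine name, images, and port as illustrative assumptions; only the `nearest.` prefix and `nearest.nats.internal` address are from the above):

        services:
          db:
            image: postgres:16             # placeholder image
            x-machines: [machine-1]        # assumed placement key: pin to a specific machine
          web:
            image: example/web:latest      # placeholder image
            x-machines: [machine-1]        # co-locate with the database
            environment:
              # `nearest.` prefers the NATS instance running on web's own machine
              NATS_URL: nats://nearest.nats.internal:4222   # 4222 is NATS's default client port
          nats:
            image: nats:latest             # replicated service, one instance per machine (placement omitted)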

HisNameIsTor a day ago

Very nice!

This would have been my choice had it existed three months ago. Now it feels like I learned kubernetes in vain xD

  • psviderski 15 hours ago

    Haha the K8s knowledge will definitely pay off. But Uncloud did exist three months ago. Clearly my marketing needs work :D

    On the bright side, you can always use both

sergioisidoro a day ago

This is extremely interesting to me. I've been using Docker Swarm, but there is this growing feeling of staleness. Dokku feels a bit too light, K8s absolutely too heavy. This proposition hits my sweet spot, especially the part where I get to keep my existing Docker Compose declarations.

nake89 a day ago

How does this compare to k3s?

  • tontony a day ago

    Uncloud is not a Kubernetes distribution and doesn't use K8s primitives (although there are of course some similarities). It's closer to Compose/Swarm in how you declare and manage your services. Which has pros and cons depending on what you need and what your (or your team's) experience with Kubernetes is.

scottydelta a day ago

As a happy user of coolify, what’s the difference between these two?

Even Coolify lets you add as many machines as you want and then manage Docker containers on all machines from one Coolify installation.

  • psviderski 15 hours ago

    Uncloud is a bit lower-level, CLI-only (for now), with no central server. If some nodes go offline, the rest keep working and stay manageable.

    It also has WireGuard overlay networking built in, so containers across machines get direct connectivity without having to map ports to the host. For example, you can securely access a database running on another machine. This also lets you horizontally scale your services to multiple replicas on different machines and distribute traffic between them with minimal configuration.
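
    A rough sketch of that pattern, assuming the standard Compose `deploy.replicas` field and service-name resolution over the overlay (images are placeholders):

        services:
          web:
            image: example/web:latest   # placeholder image
            deploy:
              replicas: 3               # replicas spread across machines, traffic distributed between them
          db:
            image: postgres:16          # placeholder; reachable from `web` over the WireGuard mesh, no host port mapping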

    The current state of Uncloud is the primitives and foundation that could be used to build a higher-level, PaaS-like solution such as Coolify.

  • lillecarl a day ago

    Vendor lock-in, Kubernetes is future proof.

    • rc_mob 14 hours ago

      No, we're looking for a comparison of Coolify vs Uncloud.

throwaway77385 a day ago

How does this compare to something like Dokku?

  • Cwizard a day ago

    Is Dokku multi-node?

    • throwaway77385 a day ago

      It supports Docker Swarm, but I've never used it like that. As I may need multi-node in the future, I was asking to see if it would make it 'easy' to orchestrate multiple containers. The simplicity of Dokku is hard to beat, however.

      edit: Well, it would appear that the very maintainer of Dokku himself replied to the parent comment. My information is clearly outdated and I'd only look at this comment[0] to get the proper info.

      [0] https://news.ycombinator.com/item?id=46144275#46145919

    • josegonzalez a day ago

      Dokku is multi-node. It supports docker-local (single node) and k3s (multi-node) as schedulers, with most features implemented as expected when deploying to k3s.

apexalpha a day ago

Wow this looks really cool. Congratulations!

sigmonsays a day ago

You really need to disclose the status of this application. It's ridiculous to pipe curl | bash in the actual code.

Gonna burn bridges with this lack of transparency. I love the intent, but the implementation is so bad that I probably won't look back.

m1keil a day ago

Looks lovely. I'll definitely give it a try when the time comes.

oulipo2 a day ago

I'm using Dokploy, would that be very similar, or quite different?

  • psviderski a day ago

    Uncloud is lower-level. Dokploy seems to be positioning itself as a PaaS with a web UI, using Docker Swarm under the hood for multi-node container management. Uncloud operates at the same layer as Swarm but with a simpler operating model that's friendlier for troubleshooting, WireGuard mesh networking built in, and the ability to connect nodes from different clouds or locations.

    No UI yet (planned) so if that's critical, Dokploy is likely a better choice for now.

    However, some unique features like building and pushing images directly to your nodes without an external registry give Uncloud a PaaS-like feel, just CLI-first. Really depends on what you're hosting and what you're optimising for.

    See short deploy demo: https://uncloud.run/docs/guides/deployments/deploy-app

raw_anon_1111 a day ago

I know just enough about Kubernetes to not sound like an idiot when I’m in the room and mostly I deploy Docker containers to various managed services on AWS - Lambda, ECS, etc.

But, as a lead for implementations, I just couldn't in good conscience permit something that is not an industry standard and not supported by my cloud provider. First, from a self-interested standpoint, it looks a lot better on their resumes to say "I did $x using K8s".

From an onboarding standpoint, just telling a new employee "we use K8s, here you go" means there's nothing new to learn.

If you are part of the industry, just suck it up and learn Kubernetes. Your future self won't regret it, coming from someone who in fact has not learned K8s.

This is a challenge any new framework is going to have.

  • codegeek a day ago

    With your logic, no new innovations would ever be tried. This is a Show HN for a product someone built. No one is asking every corporation/company to replace K8s with this. It is a different and simpler take on the complexity of deploying containers without a control plane. I personally welcome these types of posts. A lot of great products started out with someone just trying to solve their own problem.

    • raw_anon_1111 a day ago

      In another reply, I commended him for doing it if the reasoning was "because he felt like the world needed it" or to scratch an itch. But if he wants to make this a business, that's a different beast with different challenges that he needs to address.

      I can’t find it now, but I could swear I saw somewhere he said he’s working on this full time and living off of savings. If I’m wrong he will correct me I am sure. If that’s the case, I assume he wants to make this a business.

      He has to think about those objections. Would he be better off first creating a known Kubernetes environment and then building an easier wrapper around it, so someone could still manage the K8s cluster themselves if they don't want a dependency only on his product? I don't have those answers.

  • psviderski a day ago

    Fair points on the career and onboarding angle. It’s hard to argue against "everyone knows it". But with that mentality, we'd never challenge anything. COBOL was the industry standard once. So were bare metal servers or fat VMs without containers. Someone had to say "this is more painful than it needs to be and I want to try something different because I can".

    I know how to use k8s but I really don't enjoy it. It feels so distasteful to me that it triggered me to make an attempt at designing a nicer experience, because why not. I remember how much fun I had trying Docker when it first came out. That inspires me to at least try. It doesn't seem like the k8s community is even trying unfortunately.

    • raw_anon_1111 a day ago

      The enterprise equivalent of COBOL today is Java and to a much lesser extent C#. Those were both championed by large corporations - Sun and Microsoft.

      The move to VMs at first, and then to the cloud, was also marketed by existing companies with huge budgets, where the people making decisions had the "no one ever got fired for choosing $LargeWellknownCompany in the upper right corner of Gartner's Magic Quadrant" mindset.

      I love Docker. I think everyone going to EKS before they need to is dumb. There are dozens of services out there that let you give it a Docker container and just run it.

      And I think that spending energy avoiding “cloud lock-in” is dumb. Choose your infrastructure and build. Migrations are going to be a pain at any decent scale anyway and you are optimizing for the wrong thing if you are worried about lock in.

      As an individual especially in today’s market, it’s foolish (not referring to you - any developer or adjacent) not to always be thinking of what keeps you the most employable if the rug gets pulled from under you.

      As a decision maker who is held accountable for the architecture when things go wrong, or when the next person has to come along to maintain it, people are going to look at me like I am crazy if I choose a non-industry-standard solution just because I was too lazy to choose the industry standard.

      Again I don’t mean that you are being “lazy”. That’s how people think.

      But if I were hiring someone (and I'm often interviewing people for cloudy/DevOps-type roles), why would I hire someone with experience in a Docker orchestration framework I've never heard of over someone who knows K8s?

      And the final question you should ask yourself is why are you really doing this?

      Is it to scratch an itch out of passion, something that you feel the world should have? If so, in all sincerity, I wish you luck on your endeavor. You might get lucky like Bun just did. I had effusive praise for them for doing something out of passion instead of as VC bait.

      Are you doing it for financial gain? If so, you have to come up with a strategy to overcome resistance from people like I've outlined.