I don't use Debian for servers nor personal computers anymore, but the fact that they themselves host a page explaining potential privacy issues with Debian makes me trust them a lot more, and feel safer recommending it to others when it fits.
Not really, sometimes it forces me to apply updates on shutdown/restart, even though I don't want to do it. None of the registry hacks seems to be able to disable this behavior. I've heard some people talking about a special distribution/version of Windows where you can disable this, but don't really feel like re-installing the entire OS just so when I boot into/away from Windows I don't get forced to wait for the slow update twice (one now, another in the future when I boot Windows next time).
All because Ableton cannot be bothered to support Linux :/ I understand that though, just sucks...
I'm on the market for a decent laptop. Don't want to side-line the thread, but is Arch supported decently on, say, Dell or any "enterprise grade" laptops?
More color: I was happy running Arch on a 2012 vintage Dell Latitude (Intel, integrated graphics) for several years. I'm currently quite happy running Arch on a Lenovo Thinkpad T14s (gen2, AMD, integrated graphics).
I haven’t tried much, but as long as you avoid nvidia or fancy laptops with weird components, you will be good. My recommendation is to go for business line, as they have more standardized peripherals. Better if there’s some linux support guarantee.
This policy is missing from nixpkgs, although there is a similar policy for the build process for technical reasons.
So I can add spotify or signal-desktop to NixOS via nixpkgs, and they won’t succeed at updating themselves. But they might try, which would be a violation of Debian’s guidelines.
It’s a tough line — I like modern, commercial software that depends on some service architecture. And I can be sure it will be sort of broken in 10-15 years because the company went bust or changed the terms of using their service. So I appreciate the principles upheld by less easily excited people who care about the long term health of the package system.
In the process of trying to update, Spotify on NixOS will likely display some big error message about how it's unable to install updates, which results in a pretty bad user experience when everything is actually working as intended. It seems fair to patch software to remove such error messages.
To be fair, we (Nixpkgs maintainers) do remove or disable features that phone home sometimes even though it's not policy. That said, it would be nice if it was policy. Definitely was discussed before (most recently after the devbox thing I guess.)
Why can't I get GNOME stop calling home? (on a Debian installation) Each time I fire up my Debian VM with GNOME here on my OSX host system Little Snitch pops up because some weird connection to a GNOME web endpoint. One major pet peeve of mine.
I was extremely disappointed to recently learn that visidata(1) phones home, and that this functionality has not been disabled in the Debian package, despite many people requesting its removal:
Infuriating. The developer is just making excuses and refusing to address the users' actual concern. And why are they phoning home in the first place? What is this critical use case that requires this intrusion?
"This daily count of users is what keeps us working on the project, because otherwise we have feel like we are coding into a void."
So, they wrote code to phone home (by default) and then digging in and defending it... just for their feelings? You've got to be kidding me!
No they don't. The formulation in TFA is a bit too generic - Debian will usually not remove any code that "calls home". There are perfectly valid reasons for software to "phone home", and yes, that includes telemetry. In fact, Debian has its own "telemetry" system:
Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
> Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data
Telemetry contains personal data by definition. It just varies how sensitive & how it's used. Also it's been shown repeatedly that 'anonymized' is shaky ground.
In that popcon example, I'd expect some Debian-run server to collect a minimum of data, aggregate, and Debian maintainers using it to decide where to focus effort w/ respect to integrating packages, keeping up with security updates, etc. Usually ok.
For commercial software, I'd expect telemetry to slurp whatever is legally allowed / stays under users' radar (take your pick ;), vendor keeping datapoints tied to unique IDs, and sell data on "groups of interest" to the highest bidder. Not ok.
Personal preference: eg. a crash report: "report" or "skip" (default = skip), with a checkbox for "don't ask again". That way it's no effort to provide vendor with helpful info, and just as easy to have it get out of users' way.
It's annoying the degree to which vendors keep ignoring the above (even for paying customers), given how simple it is.
Yeah, isn't that a shame? Wouldn't it be nice if instead of catastrophizing that telemetry data is always only ever there to spy on us, that we might assume that there are actually trustworthy projects out there? Especially for FOSS projects, which can usually not afford extensive in-house user testing, telemetry provides extremely valuable data to see how their software is used and where it can be improved, especially in the UX department, where many FOSS is severely lacking. This thread here is a perfect example of this kind of black/white thinking that telemetry must be ripped out of software no matter what, usually based on some fundamental viewpoint that anonymity is impossible anyway, so why bother even trying. This is not helping. I usually turn on telemetry for FOSS that offers it, because I hope they will use this to actually improve it.
Why it has to include PII by definition? I'd say DNF Counting (https://github.com/fedora-infra/mirrors-countme) should be considered "telemetry", yet it doesn't seem to collect any personal data, at least by what I understand telemetry and personal data to mean.
I'm guessing that you'd either have to be able to argue that DNF Counting isn't telemetry, or that it contains PII, but I don't see how you could do either.
Yes, so the vendor must not store it. Something along those lines is usually said in the privacy policy. If you don't trust the vendor to do that, then do not opt-in to sending data, or even better, do not use the vendor's software at all.
Sometimes, we have to or we simply want to run software from developers we don't know or entirely trust. This just means that the software developer needs to be treated as an attacker in your threat model and mitigate accordingly.
I would argue that users can't inherently trust the average developer anymore. Ideas about telemetry, phoning home, conducting A/B tests and other experiments on users, and fundamentally, making the software do what the developer wants instead of what the user wants, have been thoroughly baked in to many, many developers over the last 20 or so years. This is why actually taking privacy seriously has become a selling point: It stands out because most developers don't.
I can't argue that you are wrong, but I can argue that, for myself, if I don't trust a developer to not screw me over with telemetry, I cannot trust the developer to not screw me over with their code. I can't think of a scenario where this trust isn't binary, either I can trust them (with telemetry AND code execution), or I can't trust them with either.
Could you describe what scenario I am missing?
Many corporate privacy policies per their customer contracts agree with this. Even a single packet regardless of contents is sending the IP address and that is considered by many companies to be PII. Not my opinion, it's in thousands of contracts. Many companies want to know every third party involved in tracking their employees. Deviating from this is a compliance violation and can lead to an audit failure and monetary credits. These policies are strictly followed on servers and less so on workstations but I suspect with time that will change.
I can only repeat myself from above: it's about what data you store and analyze. By your definition, all internet traffic would fall under PII regulations because it contains IP addresses, which would be ludicrous, because at least in the EU, there are very strict regulations how this data must be handled.
If you have a nginx log and store IP addresses, then yes: that contains PII. So the solution is: don't store the IP addresses, and the problem is solved. Same goes for telemetry data: write a privacy policy saying you won't store any metadata regarding the transmission, and say what data you will transmit (even better: show exactly what you will transmit). Telemetry can be done in a secure, anonymous way. I wonder how people who dispute this even get any work done at all. By your definitions regarding PII, I don't see how you could transmit any data at all.
By your definitions regarding PII, I don't see how you could transmit any data at all.
On the server side you would not. Your application would just do the work it was intended to do and would not dial out for anything. All resources would be hosted within the data-center.
On the workstation it is up to the corporate policy and if there is a known data-leak it would be blocked by the VPN/Firewalls and also on the corporate managed workstations by IT by setting application policies. Provided that telemetry is not coded in a way to be a blocking dependency this should not be a problem.
Oh and this is not my definition. This is the definition within literally thousands of B2B contracts in the financial sector. Things are still loosely enforced on workstations meaning that it is up to IT departments to lock things down. Some companies take this very seriously and some do not care.
> Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
This changed somewhat recently. Telemetry is enabled by default (I think as of Golang 1.23?)
Attempts to contact external telemetry servers under default configuration is the issue. That not all of the needlessly locally aggregated data would actually be transmitted is separate.
“Will remove” means that it’s one of the typical/accepted reasons why patches are applied by Debian maintainers, as in meaning 4 here [0], not that there is a guarantee of all telemetry being removed.
Between snap and having completely different network implementations between "desktop" and "server" versions really made me fall back down the learning curve of nix.
Especially since I was novice at best before the systemd thing, and my Ubuntu dive involved trying to navigate all 3 of these pretty drastic changes at once (oh yea and throw containers on top of that).
I went into it with the expectation that it was going to piss me off, and boy did it easily exceeded that threshold.
God, I wish someone would do this to discord already. I'm so sick of updating it through my package manager every other day only for discord to then download its own updates anyway.
Yes, I've disabled the update check. No, it doesn't solve the problem.
Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
And there's the inclusion of non-Free software in the base install, which is completely against the Debian Social Contract.
The Debian Project drastically changed when they decided to allow Ubuntu to dictate their release schedule.
What used to be a distro by sysadmins for sysadmins, and which prized stability over timeliness has been overtaken by Ubuntu and the Freedesktop.Org people. I've been running Debian since version 3, and I used to go _years_ between crashes. These days, the only way to avoid that is to 1) rip out all the Freedesktop.Org code (pulseaudio, udisks2, etc.), and 2) stick with Debian 9 or lower.
Firefox only updates on its own if installed outside of the package manager. This applies to Debian and its forks. If I click on Help -> About it says, "Updates disabled by your organization". I personally would like to see distributions suggest installing Betterfox [1] or Arkenfox [2] to tighten up Firefox a bit.
> Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
No, it's not. Stable ships ESR which has its update mechanism is disabled. Same for Testing/Unstable. It follows standard releases, but autoupdate is disabled.
Even Official Firefox Package for Debian from Mozilla has its auto-updates disabled and you get updates from the repository.
Only auto-updating version is the .tar.gz version which you extract to your home folder.
This is plain FUD.
Moreover:
Debian doesn't ship pulseaudio anymore. It's pipewire since forever. Many people didn't notice this, it was that smooth. Ubuntu's changes are not allowed to permeate without proper rigor (I follow debian-devel), and it's still released when it's ready. Ubuntu follows Debian Unstable, and Unstable suite is a rolling release, and they can snapshot it and start working on it whenever they want.
I'm using Debian since version 3 too, and I still reboot or tend my system only at kernel changes. It's way snappier w.r.t. Ubuntu with the same configuration for the same tasks, and is the Debian we all know and like (maybe sans systemd. I'll not open that can of worms).
Long time Debian fan, current Devuan user. I'm sure it still has it's problems, but it feels nice and stable, especially on older hardware that is struggling with the times. (Thinkpad R61i w/core2duo T8100 swapped in and middleton bios)
>Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
It seems likely that you personally chose to install a flatpak or tar.gz version probably because you are running an older no longer supported version of Debian.
>These days, the only way to avoid that (crashes) is...
Running older unsupported versions with known never to be fixed security holes isn't good advice nor is ripping out the plumbing. Its almost never a great idea to start ripping out the floorboards to get at the pipes.
Pipewire seems pretty stable and if you really desire something more minimal it's better to start with something minimal than stripping something down.
As someone who maintained a (PHP) library that Debian distributed, it fucking sucked that they made source modifications. There were a number of times where they broke the library in subtle ways, and there was little to no indication to users of the library that they were running a forked version. I also never had any contact from them about the supposed "bugs" they were patching.
> it fucking sucked that they made source modifications
As a maintainer, I can certainly understand how it feels like that, I'd probably wouldn't feel great about it either. As a user, I'm curious what kind of modifications they felt were needed, what exactly did they change in your library?
The library I was maintaining (SimplePie) was an RSS feed parser which supported every flavour of the RSS/Atom specs. Because of the history of those particular formats, there were a huge number of compatibility hacks necessary to parse real-world data, and cases where the "spec" (actually just a vague page on a website) was inaccurate compared to actual usage.
This was a while ago (10+ years), but my recollection is that someone presumably had reported that parts of the library didn't conform to the spec, and Debian patched those. This broke parsing actual feeds, and caused weeks of debugging issues that couldn't be replicated. Had they reported upstream in the first instance, I could have investigated, but there was no opportunity to do so.
That sounds very careless. Not only does this break an obviously deliberate feature, it also violates the robustness principle. Whether one likes it or not, it’s a guiding principle for the web. Most importantly this „fix“ was bad for its users.
Good intentions, but unfortunately bad outcome.
There was a somewhat recent discussion on here on how OS projects on GitHub are pestered by reports as well. Some athors commented that it even took away their motivation to publish code.
It’s always the same mechanism isn’t it. The „why we can’t have nice things“ issue. Making everything at least slightly worse, because there are people who exploit a system or trust based relationship.
Ah, that sounds like a terrible solution from Debian's side, and very unexpected. Sure, patching security/privacy issues kind of makes sense (from a user's POV), but change functionality like that? Makes less sense, especially if they didn't even try to get the changes upstream.
First I want to say that I love Debian. They have a great distro that is simple and quite frankly a joy to use, and manage to keep it all going on basically nothing but volunteer effort.
However, I do believe that in certain areas, they give too much freedom to package maintainers. The bar for being a package maintainer in Debian is relatively low, but once a package _has_ a maintainer--and barring any actual Debian policy violations--that person seems to have the final say in all decisions related to the package. Sometimes those decisions end up being controversial.
Your case is one example. Package maintainers ideally _should_ work with upstream authors, but are not required to because a LOT of upstream authors either cannot be reached, or actively refuse to be bothered by any downstream user. (The source tarball is linked on their home page and that's where their support ends!) I don't know what the solution is here, but there are probably improvements that could and should be made that don't require all upstream authors to subscribe to Debian development mailing lists.
> The bar for being a package maintainer in Debian is relatively low
It's typically an unglamorous, demanding, unpaid, volunteer position a few rungs above volunteering at a soup kitchen or food bank. It's unsurprising that the bar is low.
It's also trivial for upstream maintainers to set up their own competing Debian package repos that entirely ignore Debian rules - Microsoft has one for VS Code.
Agreed. The intent is good, but there are recurring problems that arise because of insufficient communication between the distro package maintainers and upstream, insufficient domain experience, or both. I think the solution is to set stronger expectations about what kinds of customizations maintainers should be making (in general, fewer than they make today), how to communicate with upstream, and more code review.
Users trust Debian (and in turn its maintainers) more than the upstream providers to keep the entire OS stable. Upstream, by definition, are likely to be OS-agnostic, only care about their package and perhaps their preferred dependencies.
Debian has earned that trust, and it's software update rules are battle-tested and well-understood.
The counterpoint would be the Debian-specific loss of private key entropy [1] back in 2008. While this is now a very ancient bug, the obvious follow-up question would be: how does Debian prevent or mitigate such incidents today? Was there any later (non-security, of course) incident of similar nature?
The upstream GnuPG project (and the standards faction they belong to) specifically opposes the use of keys without user IDs as it is a potential security issue. It is also specifically disallowed by the RFC4880 OpenPGP standard. By working through the Debian process, the proponents of such keys are bypassing the position of the upstream project and the standard.
> There is a lot of political stuff in there related to standards.
To be fair, in Debian's case politics come with the territory. Debian is a vision of what an OS should be like. With policies, standards & guidelines aimed at that, treating the OS as a whole.
That goes well beyond "gather packages, glue together & upload".
Same goes for other distributions I suppose (some more, some less).
Do you have any statistics that show that Debian patches introduce more CVE worthy bugs than the software already contains? OpenSSL doesn't really have a pristine history.
Let's not forget that the patch had been posted on the OpenSSL mailing list and had received a go ahead comment before that.
Having said that, if you're asking if there's a penetration test team that reviews all the patches. No there isn't. Like there isn't any such thing on 99.999999999% of all software that exists.
The patch was posted on the wrong OpenSSL mailing list, and frankly that particular Debian bug was worse than anything else we've seen even from OpenSSL.
Last I knew Debian didn't do dedicated security review of patches to security-critical software, which is normal practice for other distributions.
It was plausibly the worst computer security bug in human history, but by the same token, it's hard to see it as indicating a systemic problem with either Debian or OpenSSL. When we're dealing with a once-in-history event like that, where it happens is pretty random. It's the problem of inference from a small sample.
I think it's important to learn from incidents. It's clear there were design issues on both projects' sides that allowed that bug to happen, and in fact several of them were fixed in subsequent years (though not quickly, and not until major corporate sponsors got concerned about OpenSSL's maintenance).
On the other hand it exposed that OpenSSL was depending on Undefined Behavior always working predictably. Something as simple as a GCC update could have had the same effect across far more systems than just Debian, with no patch to OpenSSL itself.
> On the other hand it exposed that OpenSSL was depending on Undefined Behavior always working predictably. Something as simple as a GCC update could have had the same effect across far more systems than just Debian, with no patch to OpenSSL itself.
No it wasn't. It was reading (and xoring into the randomness that would become the key being generated) uninitialised char values from an array whose address was taken, that results in unspecified values not undefined behaviour.
I see you're correct, I misremembered. That isn't really much better, since there's no requirement that unspecified values ever actually change. Compiler developers are free to always return `0x00` when reading any unspecified `char` value, which wouldn't provide any entropy. XORing it in guaranteed that it couldn't subtract entropy, but if there were no other entropy sources they failed to return an error. OpenSSL being able to generate 0 entropy and not return an error in its RNG was still an important bug to fix.
The crazy thing is that after this incident they restored the uninitialized usage and retained it there for the next half decade. It wasn't as mild as being a risk of future compilers destroying the universe: it made valgrind much less useful on essentially all users of OpenSSL, exactly what you want for security critical software.
(meanwhile, long before this incident fedora just compiled openssl with -DPURIFY which disabled the bad behavior in a safe and correct way).
That was the kind of answer I wanted to hear, thanks. (Of course I don't think Debian should be blamed for incidents.) Does Debian send other patches as well? For example, I didn't know that Debian also often creates a man page by its own.
Debian definitely aims to contribute upstream, but that doesn't always happen, due to upstream CLAs, because most Debian packagers are volunteers, many Debian contributors are busy, many upstreams are inactive and other reasons.
Ah, I meant more about policies and guidelines. I'm not well-versed in Debian processes so I can for example imagine that only some patches get sent to the upstream only at the maintainers' discretion. It seems that Debian at least has a policy to maintain patches separate from the upstream source though.
Debian uses Quilt system for per-package patch maintenance. While packaging a software you get the original source (i.e. orig.tar.gz), and add patches on top of it with Quilt, and build it that way.
Then you run the tests, and if they pass, you package and upload it.
This allows a patch(set) can be sent to the upstream as a package saying "we did this, and if you want to include them, this apply cleanly to version x.y.z, any feedback is welcome".
In theory you want all patches sent to upstream, but if they're for some specific debian reason then you can not send them.
Patches are maintained separately because debian doesn't normally repack the .tar.gz (or whatever) that the projects publish, as to not invalidate signatures and let people check that the file is in fact the same. An exception is done when the project publishes a file that contains files that cannot legally be redistributed.
https://research.swtch.com/openssl provides more context: openssl was asked about the change, and seemingly approved it (whether everyone understood what was being approved is a different question). It's not clear why openssl never adopted the patch (was everyone else just lucky?), but I wonder what the reaction would have been if the patch had been applied (or the lines hidden away by a build switch).
instead of taking a closer look / trying to understand what exactly went on there / causes the problem, the maintainer simply commented out / disabled those accesses...
mistakes happen, but the debian-community handled this problem very well - as in my impression they always do and did.
idk ... i prefere the open and community-driven approach from debian anytime over distributions which are associated to companies.
last but not least, the have a social contract.
long story short: at least for me this was an argument for the debian gnu/linux distribution, not against :))
But why patch it in debian, and not file an upstream bug?
It’s doubly important to upstream issues for security libraries: There are numerous examples of bad actors intentionally sabotaging crypto implementations. They always make it look like an honest mistake.
For all we know, prior or future debian maintainers of that package are working for some three letter agency. Such changes should be against debian policy.
All of these reasons are good, but they're not comprehensive. Unless someone can tell me what category Debian's alterations to xscreensaver fall under, maybe. As far as I can tell, that was just done for aesthetic reasons and packagers disagreeing with upstream.
91_remove_version_upgrade_warnings.patch is the one for asthetic reasons.
Debian keeps ancient versions that have many fixed bugs. Upstream maintainer has to deal with fallout of bug reports of obsolete version. To mitigate his workload, he added obsolete version warning. Debian removed it.
I'll admit that I haven't inspected the patch, but how could that warning possibly work without checking version information somewhere on the internet? That was listed in OP.
IIRC it just hardcodes the release date and complains if it is more than 2 or 3 years later.
It’s somewhat reasonable. I agree Debian should patch out phone-home and autoupdate (aka developer RCE). They should have left the xscreensaver local-only warning in, though. It is not a privacy or system integrity issue.
jwz however is also off the rails with entitlement.
I don't think that approach is reasonable. When you are effectively making a fork, don't freeload on existing project name and burden him with problems you cause.
> jwz however is also off the rails with entitlement.
Always remember to not link to his site from HN because you'll get a testicle NSFW image when you click on a link to his site from HN. dang used to have rel=noreferrer on outgoing links, but that led to even more drama with other people...
Some people in the FOSS scene just love to stir drama, and jwz is far from the only one. Another person with such issues IMHO is the systemd crowd, although in this case ... IMHO it's excusable to a degree, as they're trying to solve real problems that make life difficult for everyone.
The testicle speaks for itself [1]. He has held a serious political grudge against VC way over a decade back [2], the earliest mention of the JWZ testicles appearing on HN that I could find is over 9 years old [3].
I respect the hell out of Debian and am grateful for everything they do for the larger ecosystem, but this is why I use Arch. It's so much easier just to refer to the official documentation for the software and know it will be correct. Also, I've never really encountered a situation where interop between software is broken by just sticking to vanilla upstream. Seems like modifying upstream is just a ton of work with so many potential breakages and downsides it's not really worth it.
You seem to be implying that Debian makes large significant changes to upstream software for the sake of integration with the rest of the OS and that Arch makes none at all. Neither of these is true.
Also if that means the program won't run at all? Or a bug that has a patch to fix it doesn't get fixed? Or a device that could be supported is instead not supported?
I've made patches to a bunch of stuff to improve kde on mobile/tablets. After short or very long time they do get merged, but meanwhile people (like me) who own a tablet can actually use the software.
The best part is when they swap FFmpeg or other libraries, make things compile somehow, don't test the results, and then ship completely broken software.
Debian isn't a single person. A lot of patches are backport fixes for CVEs.
Then there's stuff like: "this project only compiles with an obsolete version of gcc" so the alternatives are dropping it or fixing it. Closely related are bugs that only manifest on certain architectures, because the developers only use amd64 and never compile or run it on anything else, so they make incorrect assumptions.
Then there's python that drops standard library modules every release, breaking stuff. So they get packaged separately outside of python.
There's also cherry picking of bugfixes from projects that haven't released yet.
Is there any reason you think debian developers are genetically more prone to making mistakes than anyone else? Considering that debian has an intensive test setup that most projects don't even have.
What gives you the idea I think Debian are any more prone to mistakes than anyone else? It’s one of the two distros I use at home. I admire the devs a great deal.
I mean, it would depend on what the patch is? If you're adding a missing manpage, I'm not sure what can go wrong? Is changing the build options (e.g. enabling or disabling features) a patch, or an expected change (and if such a config option is bad, what blame should be put on upstream for providing it)? What about default config files (which could both make the software more secure or less, such as what cyphers to use with TLS or SSH)?
The point about manual pages has always seemed to me to be one of the points where the system fails us. There are a fair number of manual pages that the world at large would benefit from having in the original softwares, that are instead stuck buried in a patches subdirectory in a Debian git repository, and have been for years.
This is not to say that Debian is the sole example of this. The FreeBSD/NetBSD packages/ports systems have their share of globally useful stuff that is squirrelled away as a local patch. The point is not that Debian is a problem, but that it too systematizes the idea that (specifically) manual pages for external stuff go primarily into an operating system's own source control, instead of that being the last resort.
Usually the Debian manual page author or package maintainer will send that upstream. Same goes for patches. Sometimes upstream doesn't want manual pages, or wants it in a different format, and the Debian person doesn't have time to rewrite it.
There's a belief that this is usual. But having watched the process for a couple of decades, it seems to me that that is just a belief, and actual practice doesn't work that way. A lot of times this stuff just gets stuck and never sent along.
I also think that the idea that original authors must not accept manual pages is a way of explaining how the belief does not match reality, without accepting that it is the belief itself that is wrong. Certainly, the number of times that things work out like the net-tools example elsethread, where clearly the original authors do want manual pages, because they eventually wrote some, and end up duplicating Debian's (and FreeBSD's/NetBSD's) efforts, is evidence that contradicts the belief that there's some widespread no-manual-pages culture amongst developers.
It's also easy for people to have the opinion the those who do the unpaid work of packaging software should do even more work for free.
I have sent about 50 or so patches upstream for the 300 packages I maintain and while it reduces the amount of work long-term it's also surprisingly amount of work.
Typically the Debian patches are licensed under the same license as the original project. So there is nothing stopping anyone who feels that more patches should be sent upstream to send them.
I didn't ask for you to second-guess my software. I didn't ask you to ship modified (potentially broken and/or substantially different in opinionated ways) versions of my software under the same name.
If you're going to do that, then you should actually let people know. Otherwise don't do it. It's not about "but the license allows it", it's about what the right thing to do is.
Debian has given me the most grief of any Linux distro by far. Actually, Debian is the only system I can recall giving me grief. Debian pushes a lot of work to the broader ecosystem to people who never asked for it.
I didn't choose to be associated with Debian, but I have no choice in the matter. You did choose to be associated with the packages you maintain.
So don't give me any of that "but my unpaid time!". Either do the job properly or don't do it at all. Both are fine; no maintainer asked you to package anything. They're just asking you to not indirectly push work on them by shipping random (potentially broken and/or highly opinionated) patches they're never even told about.
> If you're going to do that, then you should actually let people know. Otherwise don't do it. It's not about "but the license allows it", it's about what the right thing to do is.
Okay, I am hereby letting you know: Every single distro patches software. All of them. Debian, Arch, Fedora, Gentoo, NixOS, Alpine, Void, big, small, commercial, hobbyist. All of them.
That's simply not true. Some distros may patch a few build issues, or maybe the rare breaking bug, but nothing like what Debian does. To claim anything else is Trumpian levels of falsehood.
And often it's not an unhelpful upstream, just an upstream that sees little use for man pages in their releases, and doesn't want to spend time maintaining documentation in parallel to what their README.md or --help provides (with which the man page must be kept in sync).
I spent years packaging software (mostly Gnome 2.x) for NetBSD. I almost-always tried to upstream the local patches that were needed either for build fixes or improvements (like flexibility to adapt to non-Linux file system hierarchies or using different APIs).
It was exhausting though, and an uphill battle. Most patches were ignored for months or years, with common “is this still necessary?” or “please update the patch; it doesn’t apply anymore” responses. And it was generally a lot of effort. So patches staying in their distros is… “normal”.
Another issue is that these manpages can become outdated (and/or are downright wrong).
Overall I feel it's one of those Debian policies stuck in 1995. There are other reasonable ways to get documentation these days, and while manpages can be useful for some types of programs, they're less useful for others.
Not the best name for the article. My first guess was version changes, or software being added/removed from repo. Turns out this is about source code modification.
As a native (British) English speaker, I was also unclear until reading the article.
Personally, I believe s/change/modify would make more sense, but that's just my opinion.
That aside, I'm a big fan of Debian, it has always "felt" quieter as a distro to me compared to others, which is something I care greatly about; and it's great to see that removing of calling home is a core principle.
All the more reason to have a more catchy/understandable title, because I believe the information in those short and sweet bullet points are quite impactful.
Patching out privacy issues isn't in Debian Policy, its just part of the culture of Debian, but there are still unfixed/unfound issues too, it is best to run opensnitch to mitigate some of those problems.
> it is best to run opensnitch to mitigate some of those problems
Opensnitch is a nice recommendation for someone concerned about protecting their workstation(s); for me, I'm more concerned about the tens of VMs and containers running hundreds of pieces of software that are always-on in my Homelab, a privacy conscious OS is a good foundation, and there are many more layers that I won't go into unsolicited.
Homelabs are usually running software not from a distro too, so potentially more privacy issues there too. Firewalling outgoing networking, along with a filtering SOCKS proxy like privoxy might be a good start.
Me too. I was hoping for an explanation of why the software I have got used to and works very well and isn't broken keeps being removed from Debian in the next version because it is "unmaintained".
They usually send everything upstream, and everything is public in their source control. Some maintainers look at repology.org to find package stuff from other distros.
They even have a portal that publishes this information specifically, with statistics, and many notes as to why a specific change has been made: https://udd.debian.org/patches
> Do distro maintainers share patches, man pages, call home metrics and other data with other distros’ maintainers (and them back)?
Yes, at a minimum the patches are in the Debian source packages. Moreover, maintainers are highly encouraged to send patches upstream, both for the social good and to ease the distribution's maintenance burden. An automated tool to look for such patches is the "patches not yet forwarded upstream" field on https://udd.debian.org/patches.cgi
Yeah no thanks, just look at the abomination like pure-ftpd, apache, nginx, etc. I don't need some weird opinion configuration framework to go with the software I use.
Tbh I’d rather have MariaDB. It’s wire-compatible, but has way more features, like a RETURNING clause. Why MySQL has never had that is a mystery (just kidding, it’s because Oracle).
I second that. Not only are there not infrequent cases of package maintainers breaking software, it's effectively nothing but the "app store" model, having an activist distributor insert themselves between the user and software.
It's why I'm really glad flatpaks/snaps/appimages and containerization are where they are at now, because it's greatly dis-intermediated software distribution.
Since this is the FOSS world, you are of course free to eschew distributions. But:
> it's effectively nothing but the "app store" model, having an activist distributor insert themselves between the user and software.
is just factually wrong. Distributions like Debian try to make a coherent operating system from tens of thousands of pieces of independently developed software. It's fine not to like that. It's fine to instead want to use and manage those tens of thousands of pieces of independent software yourself. But a distribution is neither an "app store", nor does it "insert itself" between the user and the software. The latter is impossible in the FOSS world. Many users choose to place distros between them and software. You can choose otherwise.
I'm not trying to argue which distribution model is best, or whether one should avoid distributions altogether. That's messy, complicated, and full of personal variables for each individual.
I'm just trying to correct the notion that somehow a distro is an "app store" that "inserts itself" between the software and its users. A distribution is an attempt to make lots of disparate pieces of software "work together", at varying degrees. Varying degrees of modification may or may not factor into that. On one extreme is perhaps just a collection of entirely disjoint software, without anything attaching those pieces of software together. On the other extreme is perhaps something like the BSDs. Arch and Debian sit somewhere in between, at either side.
Thoughtful people can certainly disagree about what the correct degree of "work together" or modification is.
>But a distribution is neither an "app store", nor does it "insert itself" between the user and the software.
Just scroll up to the second comment in the thread right now by the user rmccue. Given that Debian doesn't give the user any indication of the fact that it even has modified an upstream piece of software it's obviously perfectly possible for them to insert themselves without you even knowing it. And in that case, according to the developer, even introduced subtle bugs.
So you can run buggy software as a consequence of some maintainer thinking they know more than a developer, and not even know it because you have no practical info about that process. This is of course not a "choice" in any meaningful sense of the term.
Nobody ever actually wants to use a buggy php library maintained by debian over a functioning one maintained a by developer, they very likely just never even were aware that that is what they were served.
This is one of the reasons I switched to RHEL 10+ years ago.
I actually prefer the RHEL policy of leaving packages the way upstream packaged them, it means upstream docs are more accurate, I don't have to learn how my OS moves things around.
One example that sticks out in memory is postgres, RHEL made no attempt to link its binaries into PATH, I can do that myself with ansible.
Another annoying example that sticks out in Debian was how they create admin accounts in mysql, or how apt replaces one server software with another just because they both use the same port.
I want more control over what happens, Debian takes away control and attempts to think for me which is not appreciated in server context.
It swings both ways too, right now Fedora is annoying me with its nano-default-editor package. Meaning I have to first uninstall this meta package and then install vim, or it'll be a package conflict. Don't try and think for me what editor I want to use.
I don't think that's true for Red Hat, but it is true for Slackware.
If you want packages that works just like the upstream documentation, run Slackware.
Debian does add some really nice features in many of their packages, like a easy way to configure multiple uWSGI application using a file per application in a .d directory. It's a feature of uWSGI, but Debian has just package it up really nicely.
Pretty much everyone has had nano as default for ages, at least that's how it seems to me from having had to figure out which package has script support and installing vim myself after OS install for a long time.
And RedHat does a lot of fiddling in their distributions, you probably want something like Arch, which is more hands-off in that regard. Personally, I prefer Debian, it's the granite rock of Linux distributions.
Most good stuffed Distros do this. For example SUSE recently banned a package because of "calling home" e.g. did side-leading. https://security.opensuse.org/2025/05/07/deepin-desktop-remo...
Debian indeed does this. In release FF has disabled telemetry: https://wiki.debian.org/Firefox
Unfortunately that is not entirely true.
For example, when closing firefox on OpenSUSE Leap 15.6, "pingsender" is launched to collect telemetry:
https://imgur.com/a/k3Nnbbj
It has been there for years. It is also on other distros.
Did someone report/open a issue about it? Maybe it's as simple as them not being aware of it.
I wouldn't think the Firefox license allows them to do that. I thought only binaries built by Mozilla could use the FF brand.
Indeed, Mozilla only recently allowed Debian to use the brand for their modified version: https://en.wikipedia.org/wiki/Debian%E2%80%93Mozilla_tradema...
Recently? It was almost a decade ago: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=815006
That’s the reason for a while we had IceWeasel
This is unfortunately not part of Debian Policy yet, and there are still lots of privacy issues of different severities in Debian.
https://wiki.debian.org/PrivacyIssues
I don't use Debian for servers nor personal computers anymore, but the fact that they themselves host a page explaining potential privacy issues with Debian makes me trust them a lot more, and feel safer recommending it to others when it fits.
Thats just a wiki page, written by myself and a bunch of other Debian members/contributors. Don't read too much into it :)
What are you using instead now? Nixos?
Yeah, NixOS for all servers (homelab + dedicated remote ones) and Arch on desktop.
Arch is a minefield on this regard tbh
To be even more honest, it is what you make of it ¯\_(ツ)_/¯
Windows is also what you make it with enough registry hacks, I'm not recommending it to anyone though.
Well, but windows comes with spyware by default and tries to activly keep it that way. A registry hack might stop working anytime.
Windows is activly hostile to anything privacy related.
Arch comes with the default of do it yourself. Lots of footguns, but not hostile OS behavior. Great difference to me.
Not really, sometimes it forces me to apply updates on shutdown/restart, even though I don't want to do it. None of the registry hacks seems to be able to disable this behavior. I've heard some people talking about a special distribution/version of Windows where you can disable this, but don't really feel like re-installing the entire OS just so when I boot into/away from Windows I don't get forced to wait for the slow update twice (one now, another in the future when I boot Windows next time).
All because Ableton cannot be bothered to support Linux :/ I understand that though, just sucks...
Arch has been bliss for me. I'm heavy on Flatpaks and primarily use Arch as a base operating system with very minimal config changes.
I'm on the market for a decent laptop. Don't want to side-line the thread, but is Arch supported decently on, say, Dell or any "enterprise grade" laptops?
Short answer to a pretty broad question: Yes
More color: I was happy running Arch on a 2012 vintage Dell Latitude (Intel, integrated graphics) for several years. I'm currently quite happy running Arch on a Lenovo Thinkpad T14s (gen2, AMD, integrated graphics).
Arch wiki does have many pages about arch-on-a-particular-model to help once you get a short list of models you're interested in, like this: https://wiki.archlinux.org/title/Lenovo_ThinkPad_T14s_(AMD)_...
I haven’t tried much, but as long as you avoid nvidia or fancy laptops with weird components, you will be good. My recommendation is to go for business line, as they have more standardized peripherals. Better if there’s some linux support guarantee.
This policy is missing from nixpkgs, although there is a similar policy for the build process for technical reasons.
So I can add spotify or signal-desktop to NixOS via nixpkgs, and they won’t succeed at updating themselves. But they might try, which would be a violation of Debian’s guidelines.
It’s a tough line — I like modern, commercial software that depends on some service architecture. And I can be sure it will be sort of broken in 10-15 years because the company went bust or changed the terms of using their service. So I appreciate the principles upheld by less easily excited people who care about the long term health of the package system.
In the process of trying to update, Spotify on NixOS will likely display some big error message about how it's unable to install updates, which results in a pretty bad user experience when everything is actually working as intended. It seems fair to patch software to remove such error messages.
To be fair, we (Nixpkgs maintainers) do remove or disable features that phone home sometimes even though it's not policy. That said, it would be nice if it was policy. Definitely was discussed before (most recently after the devbox thing I guess.)
I'm glad that opensnitch is available in Debian trixie too, to mitigate the issues that Debian has not found yet.
Why can't I get GNOME stop calling home? (on a Debian installation) Each time I fire up my Debian VM with GNOME here on my OSX host system Little Snitch pops up because some weird connection to a GNOME web endpoint. One major pet peeve of mine.
You can ask the gnome team if they'd accept a patch. They might be of the idea that patching stuff out is bad.
Please send patches.
I was extremely disappointed to recently learn that visidata(1) phones home, and that this functionality has not been disabled in the Debian package, despite many people requesting its removal:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1001647
https://github.com/saulpw/visidata/discussions/940
The maintainer’s responses in that thread are really frustrating. They just keep describing the bug as though the package’s behavior is acceptable.
I wonder what debian’s process is for dealing with such maintainers.
I hope they make “no phone home” actual policy soon.
Infuriating. The developer is just making excuses and refusing to address the users' actual concern. And why are they phoning home in the first place? What is this critical use case that requires this intrusion?
So, they wrote code to phone home (by default) and then digging in and defending it... just for their feelings? You've got to be kidding me!This is one of my favorite things about Debian.
It's not guaranteed that they manage to catch all the software that does this though :D
Any such leftover behavior is going to be a reportable and fixable bug then.
I'm not sure it's explicitly in the policy or if any team can decide what to do…
It isn't in policy yet no.
https://wiki.debian.org/PrivacyIssues
It's not guaranteed that policies enforce every possible case though.
So they have their own Go fork?
Just one possible example, among many others that have telemetry code into them.
No they don't. The formulation in TFA is a bit too generic - Debian will usually not remove any code that "calls home". There are perfectly valid reasons for software to "phone home", and yes, that includes telemetry. In fact, Debian has its own "telemetry" system:
https://popcon.debian.org/
Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
> Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data
Telemetry contains personal data by definition. It just varies how sensitive & how it's used. Also it's been shown repeatedly that 'anonymized' is shaky ground.
In that popcon example, I'd expect some Debian-run server to collect a minimum of data, aggregate, and Debian maintainers using it to decide where to focus effort w/ respect to integrating packages, keeping up with security updates, etc. Usually ok.
For commercial software, I'd expect telemetry to slurp whatever is legally allowed / stays under users' radar (take your pick ;), vendor keeping datapoints tied to unique IDs, and sell data on "groups of interest" to the highest bidder. Not ok.
Personal preference: eg. a crash report: "report" or "skip" (default = skip), with a checkbox for "don't ask again". That way it's no effort to provide vendor with helpful info, and just as easy to have it get out of users' way.
It's annoying the degree to which vendors keep ignoring the above (even for paying customers), given how simple it is.
The ongoing problem with popcon is that it's known not to be accurate, but since it's the data that's available, people make decisions based on it.
popcon is least likely to be turned on by:
- organizations with any kind of sensible privacy policy (which includes almost everyone running more than a handful of machines)
- individuals concerned about privacy
popcon is most likely to be turned on by Debian developers, and people new to Debian who have just installed it for the first time.
Yeah, isn't that a shame? Wouldn't it be nice if instead of catastrophizing that telemetry data is always only ever there to spy on us, that we might assume that there are actually trustworthy projects out there? Especially for FOSS projects, which can usually not afford extensive in-house user testing, telemetry provides extremely valuable data to see how their software is used and where it can be improved, especially in the UX department, where many FOSS is severely lacking. This thread here is a perfect example of this kind of black/white thinking that telemetry must be ripped out of software no matter what, usually based on some fundamental viewpoint that anonymity is impossible anyway, so why bother even trying. This is not helping. I usually turn on telemetry for FOSS that offers it, because I hope they will use this to actually improve it.
Turning it on and being unable to turn it off aren't the same.
> Telemetry contains personal data by definition
Why it has to include PII by definition? I'd say DNF Counting (https://github.com/fedora-infra/mirrors-countme) should be considered "telemetry", yet it doesn't seem to collect any personal data, at least by what I understand telemetry and personal data to mean.
I'm guessing that you'd either have to be able to argue that DNF Counting isn't telemetry, or that it contains PII, but I don't see how you could do either.
IPs are PII. You hit the server, and your anonymity is breached.
Yes, so the vendor must not store it. Something along those lines is usually said in the privacy policy. If you don't trust the vendor to do that, then do not opt-in to sending data, or even better, do not use the vendor's software at all.
Sometimes, we have to or we simply want to run software from developers we don't know or entirely trust. This just means that the software developer needs to be treated as an attacker in your threat model and mitigate accordingly.
I would argue that users can't inherently trust the average developer anymore. Ideas about telemetry, phoning home, conducting A/B tests and other experiments on users, and fundamentally, making the software do what the developer wants instead of what the user wants, have been thoroughly baked in to many, many developers over the last 20 or so years. This is why actually taking privacy seriously has become a selling point: It stands out because most developers don't.
I can't argue that you are wrong, but I can argue that, for myself, if I don't trust a developer to not screw me over with telemetry, I cannot trust the developer to not screw me over with their code. I can't think of a scenario where this trust isn't binary, either I can trust them (with telemetry AND code execution), or I can't trust them with either. Could you describe what scenario I am missing?
> Telemetry contains personal data by definition.
No. Please look up the definition of "telemetry" and "personal data". The latter always refers to an identifiable person.
Virtually all anonymization schemes are reversible, so “identifiable” isn’t carrying any weight in your definition.
“Person” isn’t either, unless the software knows for sure it’s not being uses by a person.
By your definition, all data is PII.
Many corporate privacy policies per their customer contracts agree with this. Even a single packet regardless of contents is sending the IP address and that is considered by many companies to be PII. Not my opinion, it's in thousands of contracts. Many companies want to know every third party involved in tracking their employees. Deviating from this is a compliance violation and can lead to an audit failure and monetary credits. These policies are strictly followed on servers and less so on workstations but I suspect with time that will change.
I can only repeat myself from above: it's about what data you store and analyze. By your definition, all internet traffic would fall under PII regulations because it contains IP addresses, which would be ludicrous, because at least in the EU, there are very strict regulations how this data must be handled.
If you have a nginx log and store IP addresses, then yes: that contains PII. So the solution is: don't store the IP addresses, and the problem is solved. Same goes for telemetry data: write a privacy policy saying you won't store any metadata regarding the transmission, and say what data you will transmit (even better: show exactly what you will transmit). Telemetry can be done in a secure, anonymous way. I wonder how people who dispute this even get any work done at all. By your definitions regarding PII, I don't see how you could transmit any data at all.
By your definitions regarding PII, I don't see how you could transmit any data at all.
On the server side you would not. Your application would just do the work it was intended to do and would not dial out for anything. All resources would be hosted within the data-center.
On the workstation it is up to the corporate policy and if there is a known data-leak it would be blocked by the VPN/Firewalls and also on the corporate managed workstations by IT by setting application policies. Provided that telemetry is not coded in a way to be a blocking dependency this should not be a problem.
Oh and this is not my definition. This is the definition within literally thousands of B2B contracts in the financial sector. Things are still loosely enforced on workstations meaning that it is up to IT departments to lock things down. Some companies take this very seriously and some do not care.
> Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
This changed somewhat recently. Telemetry is enabled by default (I think as of Golang 1.23?)
I am only aware since I relatively recently ran into something similar to this on a fresh VM without internet egress: https://github.com/golang/go/issues/68976
https://github.com/golang/go/issues/68946
If golang doesn't fully address this I guess Debian really should at least change the default (of they haven't already).
It creates telemetry data, but actually transmitting it is opt-in.
Attempts to contact external telemetry servers under default configuration is the issue. That not all of the needlessly locally aggregated data would actually be transmitted is separate.
“Will remove” means that it’s one of the typical/accepted reasons why patches are applied by Debian maintainers, as in meaning 4 here [0], not that there is a guarantee of all telemetry being removed.
[0] https://www.merriam-webster.com/dictionary/will
One of the many reasons I switched from Ubuntu to Debian 2 years ago. Another reason was snap.
Between snap and having completely different network implementations between "desktop" and "server" versions really made me fall back down the learning curve of nix.
Especially since I was novice at best before the systemd thing, and my Ubuntu dive involved trying to navigate all 3 of these pretty drastic changes at once (oh yea and throw containers on top of that).
I went into it with the expectation that it was going to piss me off, and boy did it easily exceeded that threshold.
Yup. Snap is emblematic of all the complexity Canonical bakes into Ubuntu.
That and the whole systemd stack. Canonical employees had enough votes to force upstream it into debian.
I switched to devuan. It’s great, but it sucks that the community split over something so needlessly destructive.
God, I wish someone would do this to discord already. I'm so sick of updating it through my package manager every other day only for discord to then download its own updates anyway.
Yes, I've disabled the update check. No, it doesn't solve the problem.
This is no longer true.
Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
And there's the inclusion of non-Free software in the base install, which is completely against the Debian Social Contract.
The Debian Project drastically changed when they decided to allow Ubuntu to dictate their release schedule.
What used to be a distro by sysadmins for sysadmins, and which prized stability over timeliness has been overtaken by Ubuntu and the Freedesktop.Org people. I've been running Debian since version 3, and I used to go _years_ between crashes. These days, the only way to avoid that is to 1) rip out all the Freedesktop.Org code (pulseaudio, udisks2, etc.), and 2) stick with Debian 9 or lower.
Firefox only updates on its own if installed outside of the package manager. This applies to Debian and its forks. If I click on Help -> About it says, "Updates disabled by your organization". I personally would like to see distributions suggest installing Betterfox [1] or Arkenfox [2] to tighten up Firefox a bit.
[1] - https://github.com/yokoffing/Betterfox
[2] - https://github.com/arkenfox/user.js
> Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
No, it's not. Stable ships ESR which has its update mechanism is disabled. Same for Testing/Unstable. It follows standard releases, but autoupdate is disabled.
Even Official Firefox Package for Debian from Mozilla has its auto-updates disabled and you get updates from the repository.
Only auto-updating version is the .tar.gz version which you extract to your home folder.
This is plain FUD.
Moreover:
Debian doesn't ship pulseaudio anymore. It's pipewire since forever. Many people didn't notice this, it was that smooth. Ubuntu's changes are not allowed to permeate without proper rigor (I follow debian-devel), and it's still released when it's ready. Ubuntu follows Debian Unstable, and Unstable suite is a rolling release, and they can snapshot it and start working on it whenever they want.
I'm using Debian since version 3 too, and I still reboot or tend my system only at kernel changes. It's way snappier w.r.t. Ubuntu with the same configuration for the same tasks, and is the Debian we all know and like (maybe sans systemd. I'll not open that can of worms).
Long time Debian fan, current Devuan user. I'm sure it still has it's problems, but it feels nice and stable, especially on older hardware that is struggling with the times. (Thinkpad R61i w/core2duo T8100 swapped in and middleton bios)
>Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
It seems likely that you personally chose to install a flatpak or tar.gz version probably because you are running an older no longer supported version of Debian.
>These days, the only way to avoid that (crashes) is...
Running older unsupported versions with known never to be fixed security holes isn't good advice nor is ripping out the plumbing. Its almost never a great idea to start ripping out the floorboards to get at the pipes.
Pipewire seems pretty stable and if you really desire something more minimal it's better to start with something minimal than stripping something down.
Void is nice on this front for instance.
As someone who maintained a (PHP) library that Debian distributed, it fucking sucked that they made source modifications. There were a number of times where they broke the library in subtle ways, and there was little to no indication to users of the library that they were running a forked version. I also never had any contact from them about the supposed "bugs" they were patching.
> it fucking sucked that they made source modifications
As a maintainer, I can certainly understand how it feels like that, I'd probably wouldn't feel great about it either. As a user, I'm curious what kind of modifications they felt were needed, what exactly did they change in your library?
The library I was maintaining (SimplePie) was an RSS feed parser which supported every flavour of the RSS/Atom specs. Because of the history of those particular formats, there were a huge number of compatibility hacks necessary to parse real-world data, and cases where the "spec" (actually just a vague page on a website) was inaccurate compared to actual usage.
This was a while ago (10+ years), but my recollection is that someone presumably had reported that parts of the library didn't conform to the spec, and Debian patched those. This broke parsing actual feeds, and caused weeks of debugging issues that couldn't be replicated. Had they reported upstream in the first instance, I could have investigated, but there was no opportunity to do so.
That sounds very careless. Not only does this break an obviously deliberate feature, it also violates the robustness principle. Whether one likes it or not, it’s a guiding principle for the web. Most importantly this „fix“ was bad for its users.
Good intentions, but unfortunately bad outcome.
There was a somewhat recent discussion on here on how OS projects on GitHub are pestered by reports as well. Some athors commented that it even took away their motivation to publish code.
It’s always the same mechanism isn’t it. The „why we can’t have nice things“ issue. Making everything at least slightly worse, because there are people who exploit a system or trust based relationship.
Ah, that sounds like a terrible solution from Debian's side, and very unexpected. Sure, patching security/privacy issues kind of makes sense (from a user's POV), but change functionality like that? Makes less sense, especially if they didn't even try to get the changes upstream.
First I want to say that I love Debian. They have a great distro that is simple and quite frankly a joy to use, and manage to keep it all going on basically nothing but volunteer effort.
However, I do believe that in certain areas, they give too much freedom to package maintainers. The bar for being a package maintainer in Debian is relatively low, but once a package _has_ a maintainer--and barring any actual Debian policy violations--that person seems to have the final say in all decisions related to the package. Sometimes those decisions end up being controversial.
Your case is one example. Package maintainers ideally _should_ work with upstream authors, but are not required to because a LOT of upstream authors either cannot be reached, or actively refuse to be bothered by any downstream user. (The source tarball is linked on their home page and that's where their support ends!) I don't know what the solution is here, but there are probably improvements that could and should be made that don't require all upstream authors to subscribe to Debian development mailing lists.
> The bar for being a package maintainer in Debian is relatively low
It's typically an unglamorous, demanding, unpaid, volunteer position a few rungs above volunteering at a soup kitchen or food bank. It's unsurprising that the bar is low.
It's also trivial for upstream maintainers to set up their own competing Debian package repos that entirely ignore Debian rules - Microsoft has one for VS Code.
Agreed. The intent is good, but there are recurring problems that arise because of insufficient communication between the distro package maintainers and upstream, insufficient domain experience, or both. I think the solution is to set stronger expectations about what kinds of customizations maintainers should be making (in general, fewer than they make today), how to communicate with upstream, and more code review.
Users trust Debian (and in turn its maintainers) more than the upstream providers to keep the entire OS stable. Upstream, by definition, are likely to be OS-agnostic, only care about their package and perhaps their preferred dependencies.
Debian has earned that trust, and it's software update rules are battle-tested and well-understood.
The counterpoint would be the Debian-specific loss of private key entropy [1] back in 2008. While this is now a very ancient bug, the obvious follow-up question would be: how does Debian prevent or mitigate such incidents today? Was there any later (non-security, of course) incident of similar nature?
[1] https://en.wikipedia.org/wiki/OpenSSL#Predictable_private_ke...
Debian does a lot of patching that is not strictly required for distribution reasons. Here are the GnuPG patches for example:
* https://udd.debian.org/patches.cgi?src=gnupg2&version=2.4.7-...
There is a lot of political stuff in there related to standards. For a specific example see:
* https://sources.debian.org/src/gnupg2/2.4.7-19/debian/patche...
The upstream GnuPG project (and the standards faction they belong to) specifically opposes the use of keys without user IDs as it is a potential security issue. It is also specifically disallowed by the RFC4880 OpenPGP standard. By working through the Debian process, the proponents of such keys are bypassing the position of the upstream project and the standard.
> There is a lot of political stuff in there related to standards.
To be fair, in Debian's case politics come with the territory. Debian is a vision of what an OS should be like. With policies, standards & guidelines aimed at that, treating the OS as a whole.
That goes well beyond "gather packages, glue together & upload".
Same goes for other distributions I suppose (some more, some less).
Do you have any statistics that show that Debian patches introduce more CVE worthy bugs than the software already contains? OpenSSL doesn't really have a pristine history.
Let's not forget that the patch had been posted on the OpenSSL mailing list and had received a go ahead comment before that.
Having said that, if you're asking if there's a penetration test team that reviews all the patches. No there isn't. Like there isn't any such thing on 99.999999999% of all software that exists.
The patch was posted on the wrong OpenSSL mailing list, and frankly that particular Debian bug was worse than anything else we've seen even from OpenSSL.
Last I knew Debian didn't do dedicated security review of patches to security-critical software, which is normal practice for other distributions.
It was plausibly the worst computer security bug in human history, but by the same token, it's hard to see it as indicating a systemic problem with either Debian or OpenSSL. When we're dealing with a once-in-history event like that, where it happens is pretty random. It's the problem of inference from a small sample.
I think it's important to learn from incidents. It's clear there were design issues on both projects' sides that allowed that bug to happen, and in fact several of them were fixed in subsequent years (though not quickly, and not until major corporate sponsors got concerned about OpenSSL's maintenance).
I agree, but it's not clear that there aren't worse design issues in the alternatives.
On the other hand it exposed that OpenSSL was depending on Undefined Behavior always working predictably. Something as simple as a GCC update could have had the same effect across far more systems than just Debian, with no patch to OpenSSL itself.
> On the other hand it exposed that OpenSSL was depending on Undefined Behavior always working predictably. Something as simple as a GCC update could have had the same effect across far more systems than just Debian, with no patch to OpenSSL itself.
No it wasn't. It was reading (and xoring into the randomness that would become the key being generated) uninitialised char values from an array whose address was taken, that results in unspecified values not undefined behaviour.
I see you're correct, I misremembered. That isn't really much better, since there's no requirement that unspecified values ever actually change. Compiler developers are free to always return `0x00` when reading any unspecified `char` value, which wouldn't provide any entropy. XORing it in guaranteed that it couldn't subtract entropy, but if there were no other entropy sources they failed to return an error. OpenSSL being able to generate 0 entropy and not return an error in its RNG was still an important bug to fix.
Current kernels zero pages. The code was buggy to begin with.
The crazy thing is that after this incident they restored the uninitialized usage and retained it there for the next half decade. It wasn't as mild as being a risk of future compilers destroying the universe: it made valgrind much less useful on essentially all users of OpenSSL, exactly what you want for security critical software.
(meanwhile, long before this incident fedora just compiled openssl with -DPURIFY which disabled the bad behavior in a safe and correct way).
That was the kind of answer I wanted to hear, thanks. (Of course I don't think Debian should be blamed for incidents.) Does Debian send other patches as well? For example, I didn't know that Debian also often creates a man page by its own.
Debian definitely aims to contribute upstream, but that doesn't always happen, due to upstream CLAs, because most Debian packagers are volunteers, many Debian contributors are busy, many upstreams are inactive and other reasons.
If you go to https://tracker.debian.org/ for any package, it lists patches that need to be sent upstream.
Ah, I meant more about policies and guidelines. I'm not well-versed in Debian processes so I can for example imagine that only some patches get sent to the upstream only at the maintainers' discretion. It seems that Debian at least has a policy to maintain patches separate from the upstream source though.
Debian uses Quilt system for per-package patch maintenance. While packaging a software you get the original source (i.e. orig.tar.gz), and add patches on top of it with Quilt, and build it that way.
Then you run the tests, and if they pass, you package and upload it.
This allows a patch(set) can be sent to the upstream as a package saying "we did this, and if you want to include them, this apply cleanly to version x.y.z, any feedback is welcome".
In theory you want all patches sent to upstream, but if they're for some specific debian reason then you can not send them.
Patches are maintained separately because debian doesn't normally repack the .tar.gz (or whatever) that the projects publish, as to not invalidate signatures and let people check that the file is in fact the same. An exception is done when the project publishes a file that contains files that cannot legally be redistributed.
https://research.swtch.com/openssl provides more context: openssl was asked about the change, and seemingly approved it (whether everyone understood what was being approved is a different question). It's not clear why openssl never adopted the patch (was everyone else just lucky?), but I wonder what the reaction would have been if the patch had been applied (or the lines hidden away by a build switch).
> It's not clear why openssl never adopted the patch
OpenSSL already had an option to safely disable the bad behavior, -DPURIFY.
hello,
as always: imho (!)
i remember this incident - if my memory doesn't trick me:
it was openssl which accessed memory it didn't allocated to collect randomness / entropy for key-generation.
and valgrind complained about a possible memory-leak - its a profiling-tool with the focus on detecting memory-mgmt problems.
* https://valgrind.org/
instead of taking a closer look / trying to understand what exactly went on there / causes the problem, the maintainer simply commented out / disabled those accesses...
mistakes happen, but the debian-community handled this problem very well - as in my impression they always do and did.
idk ... i prefere the open and community-driven approach from debian anytime over distributions which are associated to companies.
last but not least, the have a social contract.
long story short: at least for me this was an argument for the debian gnu/linux distribution, not against :))
just my 0.02€
But why patch it in debian, and not file an upstream bug?
It’s doubly important to upstream issues for security libraries: There are numerous examples of bad actors intentionally sabotaging crypto implementations. They always make it look like an honest mistake.
For all we know, prior or future debian maintainers of that package are working for some three letter agency. Such changes should be against debian policy.
If it involves OpenSSL, I will give the benefit of the doubt to everyone else first over OpenSSL.
Why? Heartbleed.
As some other comments say, the patch was posted to OpenSSL and someone said it was fine. They later said it wasn't a proper review.
What would that have to do with phoning home?
All of these reasons are good, but they're not comprehensive. Unless someone can tell me what category Debian's alterations to xscreensaver fall under, maybe. As far as I can tell, that was just done for aesthetic reasons and packagers disagreeing with upstream.
The patches and their explanations are listed here:
https://udd.debian.org/patches.cgi?src=xscreensaver&version=...
Edit: can't find any that are for aesthetic reasons.
91_remove_version_upgrade_warnings.patch is the one for asthetic reasons.
Debian keeps ancient versions that have many fixed bugs. Upstream maintainer has to deal with fallout of bug reports of obsolete version. To mitigate his workload, he added obsolete version warning. Debian removed it.
I'll admit that I haven't inspected the patch, but how could that warning possibly work without checking version information somewhere on the internet? That was listed in OP.
IIRC it just hardcodes the release date and complains if it is more than 2 or 3 years later.
It’s somewhat reasonable. I agree Debian should patch out phone-home and autoupdate (aka developer RCE). They should have left the xscreensaver local-only warning in, though. It is not a privacy or system integrity issue.
jwz however is also off the rails with entitlement.
They’re both wrong.
People unfamiliar with code base can easily screw it, here is SimplePie example:
https://news.ycombinator.com/item?id=44061563
I don't think that approach is reasonable. When you are effectively making a fork, don't freeload on existing project name and burden him with problems you cause.
> jwz however is also off the rails with entitlement.
Always remember to not link to his site from HN because you'll get a testicle NSFW image when you click on a link to his site from HN. dang used to have rel=noreferrer on outgoing links, but that led to even more drama with other people...
Some people in the FOSS scene just love to stir drama, and jwz is far from the only one. Another person with such issues IMHO is the systemd crowd, although in this case ... IMHO it's excusable to a degree, as they're trying to solve real problems that make life difficult for everyone.
> Always remember to not link to his site from HN because you'll get a testicle NSFW image
What's his reason for targeting HN users this way?
The testicle speaks for itself [1]. He has held a serious political grudge against VC way over a decade back [2], the earliest mention of the JWZ testicles appearing on HN that I could find is over 9 years old [3].
[1] NSFW https://imgur.com/32R3qLv
[2] (Redirects to NSFW, so open in incognito or you'll get the testicles) https://www.jwz.org/blog/2011/11/watch-a-vc-use-my-name-to-s...
[3] https://news.ycombinator.com/item?id=10804953
I respect the hell out of Debian and am grateful for everything they do for the larger ecosystem, but this is why I use Arch. It's so much easier just to refer to the official documentation for the software and know it will be correct. Also, I've never really encountered a situation where interop between software is broken by just sticking to vanilla upstream. Seems like modifying upstream is just a ton of work with so many potential breakages and downsides it's not really worth it.
You seem to be implying that Debian makes large significant changes to upstream software for the sake of integration with the rest of the OS and that Arch makes none at all. Neither of these is true.
I understand it's a spectrum, but I lean toward minimal possible changes to upstream.
Also if that means the program won't run at all? Or a bug that has a patch to fix it doesn't get fixed? Or a device that could be supported is instead not supported?
I've made patches to a bunch of stuff to improve kde on mobile/tablets. After short or very long time they do get merged, but meanwhile people (like me) who own a tablet can actually use the software.
Why wait several months or even years?
The best part is when they swap FFmpeg or other libraries, make things compile somehow, don't test the results, and then ship completely broken software.
You run another distro that does things better?
Fedora? Arch Linux? I have massive respect for the Debian maintainers, but I've had way fewer problems on those Distros.
> Debian avoids including anything in the main part of its package archive it can’t legally distribute...
Related: netadata to be removed form Debian https://github.com/coreinfrastructure/best-practices-badge/i...
But what does Debian see as the risks of patching the software they distribute and how do they mitigate them?
Debian isn't a single person. A lot of patches are backport fixes for CVEs.
Then there's stuff like: "this project only compiles with an obsolete version of gcc" so the alternatives are dropping it or fixing it. Closely related are bugs that only manifest on certain architectures, because the developers only use amd64 and never compile or run it on anything else, so they make incorrect assumptions.
Then there's python that drops standard library modules every release, breaking stuff. So they get packaged separately outside of python.
There's also cherry picking of bugfixes from projects that haven't released yet.
Is there any reason you think debian developers are genetically more prone to making mistakes than anyone else? Considering that debian has an intensive test setup that most projects don't even have.
I have phrased it as the author has phrased it.
What gives you the idea I think Debian are any more prone to mistakes than anyone else? It’s one of the two distros I use at home. I admire the devs a great deal.
I mean, it would depend on what the patch is? If you're adding a missing manpage, I'm not sure what can go wrong? Is changing the build options (e.g. enabling or disabling features) a patch, or an expected change (and if such a config option is bad, what blame should be put on upstream for providing it)? What about default config files (which could both make the software more secure or less, such as what cyphers to use with TLS or SSH)?
The point about manual pages has always seemed to me to be one of the points where the system fails us. There are a fair number of manual pages that the world at large would benefit from having in the original softwares, that are instead stuck buried in a patches subdirectory in a Debian git repository, and have been for years.
This is not to say that Debian is the sole example of this. The FreeBSD/NetBSD packages/ports systems have their share of globally useful stuff that is squirrelled away as a local patch. The point is not that Debian is a problem, but that it too systematizes the idea that (specifically) manual pages for external stuff go primarily into an operating system's own source control, instead of that being the last resort.
Usually the Debian manual page author or package maintainer will send that upstream. Same goes for patches. Sometimes upstream doesn't want manual pages, or wants it in a different format, and the Debian person doesn't have time to rewrite it.
There's a belief that this is usual. But having watched the process for a couple of decades, it seems to me that that is just a belief, and actual practice doesn't work that way. A lot of times this stuff just gets stuck and never sent along.
I also think that the idea that original authors must not accept manual pages is a way of explaining how the belief does not match reality, without accepting that it is the belief itself that is wrong. Certainly, the number of times that things work out like the net-tools example elsethread, where clearly the original authors do want manual pages, because they eventually wrote some, and end up duplicating Debian's (and FreeBSD's/NetBSD's) efforts, is evidence that contradicts the belief that there's some widespread no-manual-pages culture amongst developers.
It's also easy for people to have the opinion the those who do the unpaid work of packaging software should do even more work for free.
I have sent about 50 or so patches upstream for the 300 packages I maintain and while it reduces the amount of work long-term it's also surprisingly amount of work.
Typically the Debian patches are licensed under the same license as the original project. So there is nothing stopping anyone who feels that more patches should be sent upstream to send them.
Typically the Debian maintai
I didn't ask for you to second-guess my software. I didn't ask you to ship modified (potentially broken and/or substantially different in opinionated ways) versions of my software under the same name.
If you're going to do that, then you should actually let people know. Otherwise don't do it. It's not about "but the license allows it", it's about what the right thing to do is.
Debian has given me the most grief of any Linux distro by far. Actually, Debian is the only system I can recall giving me grief. Debian pushes a lot of work to the broader ecosystem to people who never asked for it.
I didn't choose to be associated with Debian, but I have no choice in the matter. You did choose to be associated with the packages you maintain.
So don't give me any of that "but my unpaid time!". Either do the job properly or don't do it at all. Both are fine; no maintainer asked you to package anything. They're just asking you to not indirectly push work on them by shipping random (potentially broken and/or highly opinionated) patches they're never even told about.
> If you're going to do that, then you should actually let people know. Otherwise don't do it. It's not about "but the license allows it", it's about what the right thing to do is.
Okay, I am hereby letting you know: Every single distro patches software. All of them. Debian, Arch, Fedora, Gentoo, NixOS, Alpine, Void, big, small, commercial, hobbyist. All of them.
That's simply not true. Some distros may patch a few build issues, or maybe the rare breaking bug, but nothing like what Debian does. To claim anything else is Trumpian levels of falsehood.
This.
And often it's not an unhelpful upstream, just an upstream that sees little use for man pages in their releases, and doesn't want to spend time maintaining documentation in parallel to what their README.md or --help provides (with which the man page must be kept in sync).
I spent years packaging software (mostly Gnome 2.x) for NetBSD. I almost-always tried to upstream the local patches that were needed either for build fixes or improvements (like flexibility to adapt to non-Linux file system hierarchies or using different APIs).
It was exhausting though, and an uphill battle. Most patches were ignored for months or years, with common “is this still necessary?” or “please update the patch; it doesn’t apply anymore” responses. And it was generally a lot of effort. So patches staying in their distros is… “normal”.
Another issue is that these manpages can become outdated (and/or are downright wrong).
Overall I feel it's one of those Debian policies stuck in 1995. There are other reasonable ways to get documentation these days, and while manpages can be useful for some types of programs, they're less useful for others.
That only happens if the project lacks a manual page or if it's really bad.
"only happens" is a lot more often that you think. In my experience, "only" is quite frequent.
A randomly picked case in point:
Debian has had a local manual page for the original software's undocumented (in the old Sourceforge version) iptunnel(8) command for 7 years:
https://salsa.debian.org/debian/net-tools/-/blob/debian/sid/...
Independently, the original came up with its own, quite different, manual page 3 years later:
https://github.com/ecki/net-tools/blob/master/man/en_US/iptu...
Then Debian imported that!
https://salsa.debian.org/debian/net-tools/-/blob/debian/sid/...
This sort of thing isn't a rare occurrence.
Not the best name for the article. My first guess was version changes, or software being added/removed from repo. Turns out this is about source code modification.
As a native (British) English speaker, I was also unclear until reading the article.
Personally, I believe s/change/modify would make more sense, but that's just my opinion.
That aside, I'm a big fan of Debian, it has always "felt" quieter as a distro to me compared to others, which is something I care greatly about; and it's great to see that removing of calling home is a core principle.
All the more reason to have a more catchy/understandable title, because I believe the information in those short and sweet bullet points are quite impactful.
Patching out privacy issues isn't in Debian Policy, its just part of the culture of Debian, but there are still unfixed/unfound issues too, it is best to run opensnitch to mitigate some of those problems.
https://wiki.debian.org/PrivacyIssues
Thanks for the link, that'll come in very useful.
> it is best to run opensnitch to mitigate some of those problems
Opensnitch is a nice recommendation for someone concerned about protecting their workstation(s); for me, I'm more concerned about the tens of VMs and containers running hundreds of pieces of software that are always-on in my Homelab, a privacy conscious OS is a good foundation, and there are many more layers that I won't go into unsolicited.
Homelabs are usually running software not from a distro too, so potentially more privacy issues there too. Firewalling outgoing networking, along with a filtering SOCKS proxy like privoxy might be a good start.
I understood what it meant immediately, but i think only because i already knew that Debian are infamous for doing this.
Me too. I was hoping for an explanation of why the software I have got used to and works very well and isn't broken keeps being removed from Debian in the next version because it is "unmaintained".
OpenBSD too, but for security and proper POSIX functions vs Linuxisms, such as wordexp.
Do distro maintainers share patches, man pages, call home metrics and other data with other distros’ maintainers (and them back)?
Further, do they publish any change information publicly?
There should be a source package for every binary package, and patches are usually in a subdirectory of the package.
They usually send everything upstream, and everything is public in their source control. Some maintainers look at repology.org to find package stuff from other distros.
> ... do they publish any change information publicly?
This is utter FUD, of course they do, it is an open source distribution. Everything can be found from packages.debian.org
They even have a portal that publishes this information specifically, with statistics, and many notes as to why a specific change has been made: https://udd.debian.org/patches
> Do distro maintainers share patches, man pages, call home metrics and other data with other distros’ maintainers (and them back)?
Yes, at a minimum the patches are in the Debian source packages. Moreover, maintainers are highly encouraged to send patches upstream, both for the social good and to ease the distribution's maintenance burden. An automated tool to look for such patches is the "patches not yet forwarded upstream" field on https://udd.debian.org/patches.cgi
Yeah no thanks, just look at the abomination like pure-ftpd, apache, nginx, etc. I don't need some weird opinion configuration framework to go with the software I use.
MySQL? Nah you get mariadb
Tbh I’d rather have MariaDB. It’s wire-compatible, but has way more features, like a RETURNING clause. Why MySQL has never had that is a mystery (just kidding, it’s because Oracle).
I second that. Not only are there not infrequent cases of package maintainers breaking software, it's effectively nothing but the "app store" model, having an activist distributor insert themselves between the user and software.
It's why I'm really glad flatpaks/snaps/appimages and containerization are where they are at now, because it's greatly dis-intermediated software distribution.
Since this is the FOSS world, you are of course free to eschew distributions. But:
> it's effectively nothing but the "app store" model, having an activist distributor insert themselves between the user and software.
is just factually wrong. Distributions like Debian try to make a coherent operating system from tens of thousands of pieces of independently developed software. It's fine not to like that. It's fine to instead want to use and manage those tens of thousands of pieces of independent software yourself. But a distribution is neither an "app store", nor does it "insert itself" between the user and the software. The latter is impossible in the FOSS world. Many users choose to place distros between them and software. You can choose otherwise.
I'm using Arch and, AFAIK, it tries to use upstream code as much as possible. That's much better model IMO.
I'm not trying to argue which distribution model is best, or whether one should avoid distributions altogether. That's messy, complicated, and full of personal variables for each individual.
I'm just trying to correct the notion that somehow a distro is an "app store" that "inserts itself" between the software and its users. A distribution is an attempt to make lots of disparate pieces of software "work together", at varying degrees. Varying degrees of modification may or may not factor into that. On one extreme is perhaps just a collection of entirely disjoint software, without anything attaching those pieces of software together. On the other extreme is perhaps something like the BSDs. Arch and Debian sit somewhere in between, at either side.
Thoughtful people can certainly disagree about what the correct degree of "work together" or modification is.
Why do you assume Debian packagers don’t do the same?
Because it's well known that debian packagers alter software they package with unnecessary patches.
Of course it's well known, but is it true?
It's a better model until you fix a bug, but upstream is unresponsive.
Don't fix bugs, leave it to developers.
Do you also leave trash on the ground when you come across it in public? Try to leave things better than you found them.
> Don't fix bugs, leave it to developers
Said the developer.
Meanwhile the user is stuck with a broken software.
>But a distribution is neither an "app store", nor does it "insert itself" between the user and the software.
Just scroll up to the second comment in the thread right now by the user rmccue. Given that Debian doesn't give the user any indication of the fact that it even has modified an upstream piece of software it's obviously perfectly possible for them to insert themselves without you even knowing it. And in that case, according to the developer, even introduced subtle bugs.
So you can run buggy software as a consequence of some maintainer thinking they know more than a developer, and not even know it because you have no practical info about that process. This is of course not a "choice" in any meaningful sense of the term.
Nobody ever actually wants to use a buggy php library maintained by debian over a functioning one maintained a by developer, they very likely just never even were aware that that is what they were served.
This is one of the reasons I switched to RHEL 10+ years ago.
I actually prefer the RHEL policy of leaving packages the way upstream packaged them, it means upstream docs are more accurate, I don't have to learn how my OS moves things around.
One example that sticks out in memory is postgres, RHEL made no attempt to link its binaries into PATH, I can do that myself with ansible.
Another annoying example that sticks out in Debian was how they create admin accounts in mysql, or how apt replaces one server software with another just because they both use the same port.
I want more control over what happens, Debian takes away control and attempts to think for me which is not appreciated in server context.
It swings both ways too, right now Fedora is annoying me with its nano-default-editor package. Meaning I have to first uninstall this meta package and then install vim, or it'll be a package conflict. Don't try and think for me what editor I want to use.
> I actually prefer the RHEL policy of leaving packages the way upstream packaged them
I don't think RHEL is the right choice if this is your criteria. Arch is probably what you are looking for
"I actually prefer the RHEL policy of leaving packages the way upstream packaged them"
Are you kidding now? Red Hat was always notorious of patching their packages heavily, just look download an SRPM and have a look.
I don't think that's true for Red Hat, but it is true for Slackware.
If you want packages that works just like the upstream documentation, run Slackware.
Debian does add some really nice features in many of their packages, like a easy way to configure multiple uWSGI application using a file per application in a .d directory. It's a feature of uWSGI, but Debian has just package it up really nicely.
> I actually prefer the RHEL policy of leaving packages the way upstream packaged them
Unless something has changed in the last 10 years that has passed since I last used anything RHEL-based, there are definitely no such policy.
Pretty much everyone has had nano as default for ages, at least that's how it seems to me from having had to figure out which package has script support and installing vim myself after OS install for a long time.
And RedHat does a lot of fiddling in their distributions, you probably want something like Arch, which is more hands-off in that regard. Personally, I prefer Debian, it's the granite rock of Linux distributions.