
I was considering replying to this comment on the “please update xz package” bug report earlier, saying that the discussion is not irrelevant and that it’s the maintainer’s responsibility on new upgrades to check for new legal issues and “other hidden gems”.

I didn’t, because I didn’t want to bother coming in as an annoyed, self-righteous “user”.

Now it turns out all three of the involved ones were “string + number @ freemailer” #JiaT75 sockpuppets, so it’s probably okay I didn’t bother.

Not that I blame Sebastian — it was very well hidden, and even my usual diffing between old and new version would not have found it.

What I do take away from this is to also check the diff between the VCS repo at the time of the release and the release tarball. Perhaps also between branch and tag if, like Apache Tomcat, they introduce extra commits there.
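A minimal sketch of that tag-vs-tarball check. All names here (the `demo` project, `v1.0`, the smuggled `build-to-host.m4` file) are made up to keep the example self-contained and reproducible; against a real upstream you would point `git archive` at the actual repo and diff against the published tarball:

```shell
set -e
work=$(mktemp -d); cd "$work"

# Toy "upstream" repository with a tagged release (names are made up)
git init -q repo
( cd repo \
  && echo 'AC_INIT([demo],[1.0])' > configure.ac \
  && git add . \
  && git -c user.email=x@example.org -c user.name=x commit -qm 'release 1.0' \
  && git tag v1.0 )

# Toy "release tarball" that smuggles in a file absent from the tag
mkdir demo-1.0
cp repo/configure.ac demo-1.0/
echo 'AM_EVIL' > demo-1.0/build-to-host.m4
tar -czf demo-1.0.tar.gz demo-1.0

# The actual check: extract the tag and the tarball, then diff the trees
mkdir from-git from-tar
git -C repo archive --prefix=demo-1.0/ v1.0 | tar -x -C from-git
tar -xzf demo-1.0.tar.gz -C from-tar
diff -ru from-git from-tar || true   # non-empty output = tarball differs from tag
```

The `diff` at the end flags the extra file, which is exactly the class of discrepancy the xz tarballs had.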

bugs.debian.org#1067708 - xz-utils: New upstream version available - Debian Bug report logs

@mirabilos What I do at work (mostly because I don't want to end up with test code/test artefacts in production binaries): I build each component twice in my build pipeline. All tests are run in this first pass, but I discard the output. Then I do a fresh checkout, delete all test code, and compile everything again, using that build output for packaging. Would that have helped in the current scenario? As far as I understand, the malicious payload was disguised as test data.
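A hedged sketch of that two-pass pipeline, with toy text files standing in for a real checkout and `cat` standing in for the compiler (every name below is made up):

```shell
set -e
work=$(mktemp -d); cd "$work"

# Toy "upstream" checkout: production code plus a tests/ directory
mkdir -p src/tests
echo 'production code' > src/main.c.txt
echo 'test fixture'    > src/tests/fixture.txt

# Pass 1: build WITH tests, run them, then discard everything
cp -r src pass1
cat pass1/main.c.txt pass1/tests/fixture.txt > pass1-build.out  # "build"
grep -q 'test fixture' pass1-build.out                          # "run tests"
rm -rf pass1 pass1-build.out

# Pass 2: fresh checkout, delete all test code, rebuild for packaging
cp -r src pass2
rm -rf pass2/tests
cat pass2/main.c.txt > package-build.out   # only this output gets packaged
```

The point of the second pass is that nothing under `tests/` can end up in the packaged artifact, because it no longer exists at build time.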

@cloud_manul no, because another part of the attack incorporated parts of the test data into the binary

@mirabilos But that cannot happen if every file that is part of the test suite (!= the "production code") is deleted before the start of the build...?

@cloud_manul that’s @dalias’s approach of not shipping the tests in the same repo, IIUC

Which I’m not yet convinced of (to be fair, I only read it earlier this evening). mksh’s testsuite is expected to be run after building because it has a long history of finding compiler, toolchain, libc, OS, etc. bugs. Though that’s only a Perl driver and a plaintext file of inputs and expected outputs/errors/etc.

@mirabilos @cloud_manul One big win of making tests independent: they have their own history so you can run new versions of the tests against old versions of the code and see where a regression was introduced, etc.

@dalias @mirabilos @cloud_manul Yeah. That said, mirabilos's concerns may well be valid; at least I don't think I've seen a distro where the musl test suite is run.
I could check, but having patched compatibility issues in the split test suite of vis (which I package), I doubt many people run it either.

@lanodan @dalias @cloud_manul @mirabilos I think it's more likely that if you do that in isolation, people might just end up not running it.

But it is worth considering, definitely. We just need to make sure we don't introduce a different problem (e.g. automatically trying to fetch it...).

(Aside: I've seen lots of CMake projects do this with FetchContent, and I learned the other day that FetchContent does NOT verify TLS certs by default: cmake.org/cmake/help/latest/va)
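For illustration, one way to opt in explicitly — the dependency name, URL, and hash below are all made up, and this is only a sketch of the pattern, not a recommendation to keep using FetchContent for third-party code:

```cmake
include(FetchContent)

# Per the CMake 3.29 docs linked above, TLS verification is off by default
# for downloads, so turn it on explicitly before any FetchContent call:
set(CMAKE_TLS_VERIFY TRUE)

FetchContent_Declare(
  somedep                                         # hypothetical dependency
  URL      https://example.org/somedep-1.0.tar.gz # made-up URL
  URL_HASH SHA256=0000000000000000000000000000000000000000000000000000000000000000  # placeholder; pin the real hash
)
FetchContent_MakeAvailable(somedep)
```

Pinning `URL_HASH` is arguably the stronger protection here: even with TLS verified, a compromised server could still serve a different tarball, but it can't match a pinned hash.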


@thesamesam @lanodan @dalias @mirabilos Hmm, I'm not familiar with too many build engines in the OSS area, but the one I frequent (the openSUSE Build Service) disables any network connection during the build itself. If a package needs something fetched from the Internet before the build starts, it must declare it, and the downloaded artefacts become part of the build archive/documentation.

@Conan_Kudo @cloud_manul @lanodan @dalias @mirabilos Yeah, nor do we. But it only takes someone running it once locally, possibly even while debugging something from a packaging environment

@thesamesam @Conan_Kudo @lanodan @dalias @mirabilos Hmmm, so your concern is that a DevOps engineer's workstation might get infected by malware from a compromised upstream repository? That is a valid security concern, but I think the xz scenario is a different problem: a good build farm will isolate the build environment, i.e. completely wipe the checkout/workspace after the build is done.

@cloud_manul @Conan_Kudo @lanodan @dalias @mirabilos We're discussing what stems from making your tests separate, though.

@cloud_manul @thesamesam @dalias @mirabilos It's fairly usual among distros, but sadly there are still cases where you end up enabling network access, and it's not always nicely flexible (git-based recipes in Gentoo, for example, aren't sandboxed from the network…).