Not discrediting Open Source Software, but nothing is 100% safe.
Did you fabricate that CPU? Did you write that compiler? You gotta trust someone at some point. You can either trust someone because you give them money and it’s theoretically not in their interest to screw you (lol) or because they make an effort to be transparent and others (maybe you, maybe not) can verify their claims about what the software is.
It usually boils down to this: something can be strictly better without being perfect.
The ability to audit the code is usually strictly better than closed source. Though I’m sure an argument could be made about exposing the code base to bad actors, I generally think it’s a worthwhile trade-off.
deleted by creator
“Trust has no place in computing” is a concept that we are still quite distant from, in practical terms.
But yeah, definitely don’t hand your personal information over to a corporation, even if they’re offering to take a lot of your money, too!
Luckily there are people who do know, and we verify things for our own security and for the community as part of keeping Open Source projects healthy.
Open source software is safe because somebody knows how to audit it.
And to a large extent, there is automatic software that can audit things like dependencies. This software is also largely open source because hey, nobody’s perfect. But this only works when your source is available.
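For a sense of what those auditing tools do under the hood, here’s a rough Python sketch that asks the public OSV database (osv.dev) whether one pinned dependency has known advisories. The package name and version below are placeholders I picked for illustration; real tools like pip-audit walk your whole lockfile and do this far more carefully.

```python
import json
import urllib.request

# Ask the OSV vulnerability database (https://osv.dev) about one pinned package.
def known_advisories(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    query = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # An empty response body means no advisories are known for that exact version.
    return [vuln["id"] for vuln in result.get("vulns", [])]

# Placeholder package/version, just to show the call shape.
print(known_advisories("requests", "2.25.0") or "no known advisories")
```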
It’s safe because there’s always a loud nerd who will make sure everyone knows if it sucks. They will make it their life mission.
Also because those people who can audit it don’t have a financial incentive to hide any flaws they find
Though one of the major issues is that people get comfortable with that idea and assume that for every open source project there is some good Samaritan out there auditing it.
Closed-source software is inherently predatory.
It doesn’t matter if you can read the code or not, the only options that respect your freedom are open source.
I really like the idea of open source software and use it as much as possible.
But another “problem” is that you don’t know if the compiled program you use is actually based on the open source code or if the developer merged it with some shady code no one knows about. Sure, you can compile by yourself. But who does that 😉?
You can check it using the checksum. But who does that?
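To be fair, “checking the checksum” is only a few lines. A minimal Python sketch, where the filename and the expected digest are made-up placeholders for whatever the project actually publishes:

```python
import hashlib

# The SHA-256 the project publishes for the release (placeholder value).
EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def sha256sum(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large downloads don't have to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256sum("some-app-1.2.3.tar.gz")  # placeholder filename
print("checksum OK" if actual == EXPECTED else f"MISMATCH: {actual}")
```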
In all seriousness I am running NixOS right now using flakes. The package manager compiles everything unless a trusted source already has it compiled, in which case the package manager checks the checksum to ensure you still get the same result and downloads that instead. It also aims to be fully reproducible and with flakes it automatically pins all dependency versions so next time you build your configurations, you get the same result. It is all really cool, but I still don’t understand everything and I’m still learning it.
Based NixOS user
I love NixOS, but I really wish it had some form of containerization by default for all packages, like Flatpak, and that I didn’t have to monkey with the config to install a package or change a setting. Other than that it is literally the perfect distro; every bit of my OS config can be duplicated from a single git repo.
Great points. I kinda feel the same about containerization. I have been wanting to change the OS on my home server, and while NixOS is great for that, I have decided to do things differently and use openSUSE MicroOS. My plan was actually Fedora CoreOS, but after that Red Hat drama I decided to run with SUSE instead. It is an immutable distro with atomic upgrades that is designed to be a container host. It uses Ignition as the configuration for setting up things like users, services, networking, etc.

My plan is then to just use containers like I was doing before on Fedora Server, and for the other things to use Nix to build container images. Instead of using a Dockerfile, you’d use Nix flakes to create really minimal images. Instead of starting with a full distro like Alpine, Nix starts from scratch and copies over only the dependencies specified in your flake, so the image contains the absolute minimum needed to run. I think it’d be a fun side project while learning more about Ignition, Linux containers and Nix flakes.
As for your point on config, I think it’s just part of the trade-offs of NixOS. You either have a system that can be modified easily at any time through the shell, or you have a system that you modify centrally and that is fully reproducible. You can already install packages with nix-env on the command line without changing your config, but that also won’t be reproducible. Maybe a GUI app for managing your config and packages could be helpful, although I’m pretty sure that’s low priority for NixOS right now.
But another “problem” is that you don’t know if the compiled program you use is actually based on the open source code or if the developer merged it with some shady code no one knows about.
Actually, there is a Debian project working on exactly that problem, called reproducible builds
deleted by creator
safe**R**, not safe. Seriously, how is this a hard concept?
No, but someone knows how and does. If there’s something bad, there’ll be a big stink.
IDK why, but this had me imagining someone adding malicious code to a project, but then also being highly proactive with commenting his additions for future developers.
“Here we steal the user’s identity and sell it on the black market for a tidy sum. Using these arguments…”
The point is not that you can audit it yourself, it’s that SOMEBODY can audit it and then tell everybody about it. Only a single person needs to find an exploit and tell the community about it for that exploit to get closed.
You shouldn’t automatically trust open source code just because it’s open source. There have been cases where something on GitHub contains actual malicious code, but those are typically not very well known or don’t have many eyes on them. But in general, open source code has the potential to be more trustworthy, especially if it’s very popular and has a lot of eyes on it.
It’s one reason I haven’t rushed to try out every Lemmy app that has come out yet.
Also, compile the source code yourself if you think the author is pulling a fast one on you.
Is there not a way to check if the source and release aren’t the same? Would be cool if GitHub / GitLab / etc… produced a version automatically, or there was some instant way to check.
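Not instant, but if the build is reproducible you can rebuild the tagged source yourself and compare your artifact to the published one byte for byte. A toy Python sketch, with both paths as placeholders; a match only means something if the build is reproducible (pinned toolchain, no embedded timestamps), which is exactly what efforts like Debian’s reproducible builds project work toward:

```python
import filecmp

# Placeholders: an artifact you built locally from the tagged source,
# and the binary the project published for that same tag.
local_build = "build/my-app-1.2.3.bin"
official_release = "downloads/my-app-1.2.3.bin"

# shallow=False forces an actual byte-for-byte comparison of the contents.
if filecmp.cmp(local_build, official_release, shallow=False):
    print("published release matches my local build")
else:
    print("artifacts differ; investigate (or the build just isn't reproducible)")
```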
A lot of bad takes in here.
Here are a few things that apparently need to be stated:
- Any code that is distributed can be audited, closed or open source.
- It is easier to audit open source code because, well, you have the source code.
- Closed source software can still be audited using reverse engineering techniques such as static analysis (reading the disassembly) or dynamic analysis (using a debugger to walk through the assembly at runtime) or both (a quick sketch follows this list).
- Examples of vulnerabilities published by independent researchers demonstrate two things: people are auditing open source software for security issues, and people are in fact auditing closed source software for security issues.
- Vulnerabilities published by independent researchers don’t demonstrate any of the wild claims many of you think they do.
- No software of a reasonable size is 100% secure. Closed or open doesn’t matter.
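To make the static-analysis bullet concrete, here’s a tiny sketch using the Capstone disassembler’s Python bindings. The byte string is just a hand-picked x86-64 function prologue standing in for code sections you would normally extract from the target binary; real audits feed in the whole binary and work upward from output like this.

```python
# pip install capstone
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Hand-picked bytes for illustration: push rbp; mov rbp, rsp; sub rsp, 0x10
code = b"\x55\x48\x89\xe5\x48\x83\xec\x10"

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(code, 0x1000):  # 0x1000 is an arbitrary load address
    print(f"{insn.address:#x}: {insn.mnemonic} {insn.op_str}")
```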
Closed source software can still be audited using reverse engineering techniques such as static analysis (reading the disassembly) or dynamic analysis (using a debugger to walk through the assembly at runtime) or both.
How are you going to do that if it’s software-as-a-service?
See the first bullet point. I was referring to any code that is distributed.
Yeah, there’s no way to really audit code running on a remote server with the exception of fuzzing. Hell, even FOSS can’t be properly audited on a remote server because you kind of have to trust that they’re running the version of the source code they say they are.
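For the fuzzing part, a black-box pass really is about the only lever you have from the outside. A toy Python sketch; the URL is a placeholder, only point this at services you own or are allowed to test, and real fuzzers mutate inputs far more cleverly than random junk:

```python
import random
import string
import urllib.error
import urllib.request

URL = "http://localhost:8080/api/echo"  # placeholder: a service you control

def random_payload(max_len: int = 512) -> bytes:
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length)).encode()

for _ in range(100):
    payload = random_payload()
    try:
        urllib.request.urlopen(urllib.request.Request(URL, data=payload), timeout=5)
    except urllib.error.HTTPError as e:
        # 4xx on junk input is expected; 5xx hints at an unhandled code path.
        if e.code >= 500:
            print(f"server error {e.code} on a payload of {len(payload)} bytes")
    except urllib.error.URLError:
        print("connection failed; did the service fall over?")
        break
```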
Ohhh, code that is distributed. The implication of that word flew over my head lmao, thanks for the clarification.
deleted by creator
Second bullet point, it’s much easier to audit when you have the source code. Just wanted to point out it’s not impossible to audit closed source software. It’s just more time consuming, and fewer people have the skills to do so.
Also, just because you can see the source code does not mean it has been audited, and just because you cannot see the source code does not mean it has not been audited. A company has a lot more money to spend on hiring people and external teams to audit their code (without needing to reverse engineer it). More so than some single developer does for their OSS project, even if most of the internet relies on it (see openssl).
And just because a company has the money to spend on audits doesn’t mean they did, and even when they did, doesn’t mean they acted on the results. Moreover, just because code was audited doesn’t mean all of the security issues were identified.
Yup, all reasons why open versus closed source doesn’t determine how secure the software might be. Both open and closed source code can be developed in a more or less secure fashion. Just because something could be done does not mean it has been done.
Nah, I wouldn’t say that. Especially if you consider privacy a component of security. The fact that a piece of software can more easily be independently reviewed, either by you or by the open source community at large, is something I value.
Good security is a component of privacy. But you can have good security with no privacy; that is the whole idea of a surveillance state (which IMO is a horrifying concept). Both are worth having, but my previous responses were only about the security aspect of OSS. There are many other good arguments to have about the benefits of OSS, but increased security is not a valid one.
You guise look at the code?
Of course. I don’t understand any of it, but it can’t hurt to check for a `stealData` function.
That you formatted that appropriately means you still know more about code than the vast majority of people.
deleted by creator
While I generally agree, the project needs to be big enough that somebody looks through the code. I would argue Microsoft Word is safer than some small, abandoned open source software from some Russian developer.
Ehmm, if nobody uses it, it kinda doesn’t matter if it’s safe. And for this example: I bet more people have had a look at the code of LibreOffice than MS Office. And I don’t think it sends telemetry home in its default settings.
deleted by creator
That’s true, but I’m not a programmer, and on a GitHub project with 3 stars I can’t count on someone else doing it. (Of course this argument doesn’t apply to big projects like LibreOffice.) With Microsoft I can at least trust that they will be in trouble, or at least get bad press, when doing something malicious.
deleted by creator
- Yes, I do it occasionally
- You don’t need to. If it’s open source, it’s open to billions of people. It only takes one person finding a problem and reporting it to the world.
- There are many more benefits to open source:
  a. It future-proofs the program (a lot of old software can’t run on current setups without modifications). Open source makes sure you can compile a program with more recent tooling and dependencies rather than rely on existing binaries with ancient tooling or dependencies.
  b. It removes reliance on the developer for packaging. A developer may only produce binaries for Linux, but I can take the code and compile it for macOS or Windows, or for a completely different architecture like ARM.
  c. I can contribute features to the program if they weren’t the developer’s priority. I can even fork it if the developer doesn’t want to merge them into their branch.
Regarding point 2: I get what you’re saying, but I instantly thought of Heartbleed. OpenSSL is arguably one of the most widely used pieces of open source software in the world, yet it was primarily maintained by a single guy, and it took two years for someone to notice the flaw.
So believing something is “safe” just because it is open source and “open to billions of people” can be problematic.
Uhh… so? The NSA was sitting on the EternalBlue vulnerability in Windows for over 5 years.
Don’t understand what that has to do with the discussion so far. How is this relevant here?
No more or less relevant than Heartbleed. Yes, vulns exist in open source software, sometimes for a while. Being open source can lead to those vulns getting discovered and fixed quicker than with closed source.
And how does this negate my initial point that you shouldn’t trust in the security of something just because it is open source? I think you misunderstood what I was saying.
deleted by creator
Alright then, have a nice day!
We trust open source apps because nobody would add malicious code to their app and then release the source code to the public. It doesn’t matter whether someone actually looks into it or not; having the guts to publish the source code at all places a lot of trust in the developer. If the developer were shady, they would rather hide the source code, or try to, and make it harder for people to find out.