How do we know there aren’t a bunch more undetected backdoors?

I have been thinking about self-hosting my personal photos on my Linux server. After the recent backdoor was detected, I’m more hesitant to do so, especially because I’m no security expert and don’t have the time or knowledge to audit my server. All I’ve done so far is disable password logins and change the SSH port. I’m wondering if there are more backdoors, and whether I could respond in time if new ones appear. I’d appreciate your thoughts on this from the perspective of an ordinary user.
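
(For anyone wanting to replicate those two steps: they come down to a few lines in /etc/ssh/sshd_config. The port number below is just an example, not a recommendation.)

```
# /etc/ssh/sshd_config (relevant lines only)
# example alternate port; any unused high port works
Port 2222
# keys only - no password or keyboard-interactive logins
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password
```

Check the syntax with sudo sshd -t, then reload with sudo systemctl reload ssh (the service may be called sshd on some distros).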

thingsiplay,

We can’t know. If we knew about them, they wouldn’t be undetected backdoors. It’s not possible to know something you don’t know, so the question is a paradox in itself. :D The real question is not whether there are backdoors, but whether the most critical software is affected. At least that’s what I ask myself.

Do your backups, man, don’t install too much stuff, don’t trust everyone, use multiple mail accounts, unique passwords and two-factor authentication. We can only try to minimize the damage when something horrible happens. Maybe support the projects you like, so that more people can help and more eyes are on the code. Governments and corporations with money could do that as well, if they cared.

qprimed,

if you are self hosting and enjoy over-engineering systems… VLANs, ACLs between subnets and IDS/IPS should be part of your thinking. separate things into zones of vulnerability / least privilege and maintain that separation with an iron fist. this is a great rabbit hole to fall down if you have the time. however, given a skilled adversary with enough time and money, any network can be infiltrated eventually. the idea is to minimize the exposure when it happens.

if the above is not part of your daily thinking, then don’t worry about it too much. use a production OS like Debian stable, don’t expose ports to the public internet, and only allow systems that should initiate communication with the internet to actually do so (preferably only on their well-known protocol ports, if possible).
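
(a default-deny firewall is the usual way to express that last point. a rough nftables sketch, where the interface names, addresses and ports are placeholders rather than a drop-in config:)

```
# /etc/nftables.conf - default-deny sketch, placeholder addresses/ports
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iifname "lo" accept
    # allow SSH only from the trusted LAN subnet
    ip saddr 192.168.1.0/24 tcp dport 22 accept
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
  }
}
```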

avidamoeba,
@avidamoeba@lemmy.ca avatar

We don’t. That’s why we use multiple layers of security: for example, keeping all services accessible only via VPN, and using a major OS that a lot of production workloads depend on, such as Debian, Ubuntu LTS or any of the RHEL copycats. This is a huge plus of the free tier of Ubuntu Pro, BTW: commercial-level security support for $0. Using any of these OSes means the time between a vulnerability being discovered, patched and deployed is as short as possible. Of course, you have to have automatic security updates turned on (unattended-upgrades, in Debian-speak).
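
(On Debian/Ubuntu that’s one package plus a two-line config. A minimal sketch; the package ships sane defaults, so this is mostly just switching it on:)

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Install it first with sudo apt install unattended-upgrades; the file above is also what sudo dpkg-reconfigure unattended-upgrades writes for you.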

sgtlion,

You can’t trust any of it to be totally secure; that’s effectively impossible. But this is true of all software. At least open source is being audited and scrutinised all the time (as this episode demonstrated).

All you can do is follow best practices.

MonkeMischief,

I’m not a security specialist either. I learn new things every day, but this is why my NextCloud is accessible through Tailscale only and I have zero ports exposed to the outside world.

The only real convenience I lose is being able to say “check out this thing on my personal server” with a link to someone outside my network, but that’s easily worked around.

redcalcium,

Next: how do we know Tailscale’s network hasn’t been backdoored?

ReversalHatchery,

Headscale. And then you don’t even have to trust any outside auth provider not to log in as you.

MonkeMischief,

I figure there’s a certain amount of trust you have to have in strangers for a LOT of things we use every day.

I try to be selective with where I put that trust, especially when I can’t just homebrew an advanced custom solution, but I figure Tailscale is much better than hosting it on my LAN with an open port to the big scary web and hoping a bot doesn’t find a gap and ransomware it all lol.

3-2-1 backups and a certain bit of trust.

Because heck, even CPUs have been found with exploitable microcode. (Spectre and Meltdown?) At some point you just gotta balance “best rational protection” with not going insane, right?

Headscale mentioned here is pretty neat too, but I feel like spinning up Dockers on Proxmox and Tailscale is as much moving parts as I’m willing to manage alongside everything else in life. :)
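
(On the 3-2-1 point above: the local-copy leg can be as small as a dated, verified tar archive. A minimal runnable sketch, with throwaway temp paths standing in for a real photo directory and backup drive:)

```shell
#!/bin/sh
# Sketch of the "one local copy" leg of a 3-2-1 backup:
# a dated tar archive that is verified before being trusted.
set -eu

SRC=$(mktemp -d)     # stand-in for e.g. ~/photos
DEST=$(mktemp -d)    # stand-in for the backup drive
echo "demo photo" > "$SRC/img001.txt"

STAMP=$(date +%F)
tar -czf "$DEST/photos-$STAMP.tar.gz" -C "$SRC" .

# always verify the archive is readable before trusting it
tar -tzf "$DEST/photos-$STAMP.tar.gz" > /dev/null
echo "backup ok: $DEST/photos-$STAMP.tar.gz"
```

The other two legs (second medium, off-site copy) are then just this archive synced elsewhere.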

mfat,

I think you can use Tailscale Funnels for that.

shortwavesurfer,

I would say you can’t, but if you are using open source software, then somebody can, and eventually somebody will find them and they will be patched. With closed source software, you will never know whether it has a backdoor or not. This whole episode shows both the problem with open source (lack of funding for security audits) and the beauty of open source (eventually the backdoor will be detected and removed).

nelsnelson,

Security is not a wall. It is a maze.

neo,
@neo@lemmy.comfysnug.space avatar

Reading the source code for everything running on your machine and then never updating is the only way to be absolutely 100% sure.

possiblylinux127,

Even with that you will miss something

rotopenguin,
@rotopenguin@infosec.pub avatar

This is a sliver of one patch. There is a deliberate bug here that sabotages a build-time check, disabling a security feature that would get in the attack’s way. Can you find it?

https://infosec.pub/pictrs/image/f55ead66-fbfd-445a-8d88-c10d0d9b5309.png

rotopenguin,
@rotopenguin@infosec.pub avatar

Hint: it is one singular character. Everything else is fine.

genuineparts,
@genuineparts@infosec.pub avatar

Solution: it’s the dot on line 9.

AnnaFrankfurter,

Maybe: the dot after include?

gerdesj,

I do IT security for a living. It is quite complicated but not unrealistic for you to DIY.

Do a risk assessment first: how important is your data, to you and to a hostile someone else? One output from the risk assessment might be fixing up backups first. Think about which data might be attractive to someone else and what you do not want to lose. Your photos are probably irreplaceable, and your password spreadsheet should probably be a KeePass database. This is personal stuff; work out what is important.

After you’ve thought about what is important, then you start to look at technologies.

Decide how you need to access your data when off-site. I’ll give you a clue: VPN, always, until you feel proficient enough to expose your services directly on the internet. IPsec or OpenVPN or whatevs.

After sorting all that out, why not look into monitoring?
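
(As one concrete VPN option, WireGuard rather than the two named above; the keys and addresses below are placeholders, not working values. The server side is only a few lines:)

```
# /etc/wireguard/wg0.conf on the server (placeholder keys/addresses)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# your phone/laptop
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```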

possiblylinux127,

Fun fact: you can use Let’s Encrypt certs in an internal environment. All you need is a domain.

delirious_owl,
@delirious_owl@discuss.online avatar

Just be aware that it’s an information leak (all your internal DNS names will be public).

amju_wolf,
@amju_wolf@pawb.social avatar

…which shouldn’t be an issue in any way. For extra obscurity (and convenience) you can use wildcard certs, too.

delirious_owl,
@delirious_owl@discuss.online avatar

Are wildcard certs supported by LE yet?

amju_wolf,
@amju_wolf@pawb.social avatar

Have been for a long time. You just have to use DNS validation. But you should do that anyway (and it’s easy) if you want to manage “internal” domains.

delirious_owl,
@delirious_owl@discuss.online avatar

Oh, yeah, idk. Giving a system API access to modify DNS is too risky. Or is there some provider you’d recommend with a granular API that only gives the keys permission to modify TXT and .well-known records (e.g. so it can’t change SPF TXT records or, of course, any A records)?

amju_wolf,
@amju_wolf@pawb.social avatar

What you can (and absolutely should) do is DNS delegation. On your main domain you delegate the _acme-challenge. subdomains with NS records to the DNS server that will do cert generation (and cert generation only). You probably want to run Bind there, since it has decent, fast remote access for changing records and existing tooling supports it. You can still split it with separate keys into different zones (I would suggest one key per certificate, and splitting certificates by where/how they will be used).

You don’t even need to allow remote access beyond the DNS responses if you don’t want to, and that server doesn’t have anything to do with anything else in your infrastructure.
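
(A sketch of that delegation in zone-file terms; all names and the IP are examples, and the exact Bind setup will vary:)

```
; In the zone for example.com, served by your main/registrar DNS:
; delegate ONLY the ACME challenge name to the cert-issuing server
_acme-challenge.home  IN NS  acme-ns.example.com.
acme-ns               IN A   203.0.113.10

; On acme-ns (Bind), a tiny zone that an ACME client's RFC 2136
; plugin can update, with a TSIG key scoped to this zone only:
; zone "_acme-challenge.home.example.com" {
;     type master;
;     file "acme.zone";
;     update-policy { grant certbot-key zonesub TXT; };
; };
```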

delirious_owl,
@delirious_owl@discuss.online avatar

Ok, so no API at all. It’s the internal DNS server itself that runs certbot and makes the changes locally?

amju_wolf,
@amju_wolf@pawb.social avatar

Yes, that’s one option. Then you only have to distribute the certificates and keys.

Or you allow remote access to that DNS server (Bind has a secure protocol for this), do the challenge requests and cert generation on some other machine. Depends on what is more convenient for you (the latter is better if you have lots of machines/certs).

Worst case, if someone compromises that DNS server, they can only generate certificates; they can’t change your actual valuable records, because those are not delegated there.

possiblylinux127,

Not if you set up an internal DNS.

delirious_owl,
@delirious_owl@discuss.online avatar

How would that prevent this? To avoid cert errors, you must give the DNS name to Let’s Encrypt, and Let’s Encrypt will add it to their public CT log.

possiblylinux127,

Sorry, I thought you were referring to IP leakage. Apologies.

gerdesj,

I do use it quite a lot. The pfSense package for ACME can run scripts, which might use scp. Modern Windows boxes can run OpenSSH daemons and obviously, all Unix boxes can too. They all have systems like Task Scheduler or cron to pick up the certs and deploy them.
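
(The pick-up side can be as small as a cron entry plus scp; the paths, host and key below are made up for illustration:)

```
# crontab on a box that consumes the certs (example paths/hosts)
# fetch the renewed cert nightly, then reload the web server
15 3 * * * scp -i /root/.ssh/certsync acme@pfsense:/conf/acme/example.com.* /etc/ssl/private/ && systemctl reload nginx
```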

rotopenguin, (edited )
@rotopenguin@infosec.pub avatar

How do you know there isn’t a logic bug that spills server secrets through an uninitialized buffer? How do you know there isn’t an enterprise login token signing key that accidentally works for any account in-or-out of that enterprise (hard mode: logging costs more than your org makes all year)? How do you know that your processor doesn’t leak information across security contexts? How do you know that your NAS appliance doesn’t have a master login?

This was a really, really close one that was averted by two things. A total fucking nerd looked way too hard into a trivial performance problem, and saw something a bit hinky. And, just as importantly, the systemd devs had no idea that anything was going on, but somebody got an itchy feeling about the size of systemd’s dependencies and decided to clean it up. This completely blew up the attacker’s timetable. Jia Tan had to ship too fast, with code that wasn’t quite bulletproof (5.6.0 is what was detected, 5.6.1 would have gotten away with it).

https://infosec.pub/pictrs/image/4f3d0ee2-0e47-4454-9684-3afbd424f46a.png

rotopenguin,
@rotopenguin@infosec.pub avatar

In the coming weeks, you will know whether this attacker recycled any techniques in other attacks. People have furiously ripped this attack apart and are on the hunt for anything else like it out there. If Jia has other naughty projects out there and didn’t make them 100% from scratch, everything is going to get burned.

rotopenguin, (edited )
@rotopenguin@infosec.pub avatar

I think the best assurance is - even spies have to obey certain realities about what they do. Developing this backdoor costs money and manpower (but we don’t care about the money, we can just print more lol). If you’re a spy, you want to know somebody else’s secrets. But what you really want, what makes those secrets really valuable, is if the other guy thinks that their secret is still a secret. You can use this tool too much, and at some point it’s going to “break”. It’s going to get caught in the act, or somebody is going to connect enough dots to realize that their software is acting wrong, or some other spying-operational failure. Unlike any other piece of software, this espionage software wears out. If you keep on using it until it “breaks”, you don’t just lose the ability to steal future secrets. Anybody that you already stole secrets from gets to find out that “their secrets are no longer secret”, too.

Anyways, I think that the “I know, and you don’t know that I know” aspect of espionage is one of those things that makes spooks, even when they have a God Exploit, be very cautious about where they use it. So, this isn’t the sort of thing that you’re likely to see.

What you will see is the “commercial” world of cyberattacks, which is just an endless deluge of cryptolockers until the end of time.

bitman,
@bitman@techhub.social avatar

@mfat It's the old problem with bugs. To know that a piece of software has no bugs, you would have to be able to count them; and if you could count them, you could locate them and fix them. But you can't, so there's no way to know there are no more undetected backdoors.

Of course, being open source helps a lot, but there's no silver bullet.

hperrin,

If backdoors exist, they’re probably enough to get your data no matter where it’s stored, so self hosting should be fine. Just keep it up to date and set up regular automatic backups.

delirious_owl,
@delirious_owl@discuss.online avatar

Check the source or pay someone to do it.

If you’re using closed source software, it’s best to assume it has backdoors, and there’s no way to check.

ouch,

Even if there are nation-state-level backdoors, your personal server is not a valuable enough target to risk exposing them. Just use common sense and unattended-upgrades, and don’t worry too much about it.

MazonnaCara89,
@MazonnaCara89@lemmy.ml avatar

Ah shit, we’re back to the “Ken Thompson compiler hack” again.

cypherpunks,
@cypherpunks@lemmy.ml avatar

for those unfamiliar: Reflections on trusting trust by Ken Thompson
