What I Learned Using Linux This Week (04)

Edit: I’ve made an account here on lemmy.ml as I routinely can’t comment or post from my account on lemmy.world.

Bit of a week! As usual, I had a lot of fun tinkering. Here are my takeaways from this past week(ish).

I finally learned how to set up a cron job with elevated privileges

This is something I’ve had on my “I should really get this figured out” list for about two years now, but instead I’ve been inconsistently typing my rsync commands by hand (since I’ve also been too lazy to set up aliases for these commands).

I spent a couple of days rebuilding my server from the OS up (for reasons I’ll explain momentarily), and since I’m now on a fresh OS with all my containers and services up and running, I figured it was finally time to figure out this cron job thing.

The approach I took was to write a simple bash script for my backup. The script is four lines: three are sudo rsync … commands, and the last is a curl -d … command.

The rsync commands are to incrementally back up my server data, cache, and docker volumes.

The curl command triggers a notification through my ntfy instance (link is to ntfy, not to my instance) to let me know the backups have completed successfully.
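For illustration, the whole thing amounts to something like this (the paths and the ntfy topic below are placeholders, not my actual ones, and the exact rsync flags may differ):

#!/usr/bin/env bash
# incrementally back up server data, cache, and docker volumes
sudo rsync -a /srv/data/ /mnt/backup/data/
sudo rsync -a /srv/cache/ /mnt/backup/cache/
sudo rsync -a /var/lib/docker/volumes/ /mnt/backup/docker-volumes/
# ping my ntfy instance so I know the backups finished
curl -d "Server backup completed" https://ntfy.example.com/backups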

In order for that to run properly, I also had to learn…

How to update sudoers privileges

After reading about crontab and privileges, I know I could have just edited /etc/crontab and run my script as root, but what would be the fun in that when I could also learn about changing privileges through sudoers! So I learned how to modify sudo privileges by creating a new file in sudoers.d with the command:

sudo visudo -f /etc/sudoers.d/name-of-my-sudoers-file

And why that and not just editing /etc/sudoers directly with nano or vim or emacs? That was my first question when I saw that command and thought, “Oh, shit, I’m going to have to brush up on my vi/m.”

Turns out, if you just rando edit sudoers (or add a file to /etc/sudoers.d/) with any old editor, you can fuck up the syntax, and if you fuck up the syntax, you can fuck up your ability to use sudo, and then you can’t do anything requiring sudo on your machine without going through a tremendous headache to fix it.

However, if you use sudo visudo …, you get syntax verification to prevent you from breaking sudo.

And, on Ubuntu Server, visudo uses nano by default, which meant I didn’t have to worry about vim just yet (vim is on my roadmap of things to learn).

(Also, you can change the default editor visudo uses, but I don’t remember the command because I won’t be changing it until I get a grip on vim and can make a decision about which editor I want to use.)

With all that being said, I created a file in /etc/sudoers.d and added a line to allow my backup script to run with elevated privileges without requiring a password with this syntax:

username ALL=(root) NOPASSWD: /path/to/my/script
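With that in place, the cron entry itself just calls the script through sudo. Roughly something like this in the user crontab (the schedule and path are placeholders):

# crontab -e (as the regular user): run the backup every night at 02:00
0 2 * * * sudo /path/to/my/script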

Good documentation/notes will save you like good backups will save you

This isn’t something that’s new to me, or to Linux (Arch wiki ftw), but it’s something that 100% made rebuilding my server from the OS up a pretty worry-free breeze.

So why did I rebuild my server again a little over a month after rebuilding it from the OS up? Turns out when you accidentally kick a stool while carrying a heavy box and that stool knocks the fuck out of your server, your OS can get fucked up.

This happened a few weeks ago, and boy was I panicked when I first kicked that stool into the server. After putting down the boxes I turned on the monitor and the screen was freaking out. It looked like a scrambled Max Headroom. I held the power button to force a shutdown, and after rebooting the server everything came back up and I thought, “Holy shit, I dodged a bullet!”

(Bonus lesson, I learned to not leave the stool in front of the server rack!)

But, all was not well. My server data and cache are on ZFS pools, and every time I tried to bulk add some of the shows or movies I’d prepared to the data pool, I would get this procsys kernel panic error. I had repeatedly been checking my zpool status, and everything was good there. So I was furiously searching, trying to figure out what the error meant, and I kept finding folks with the same or similar errors who talked about checking logs, but whenever I checked the logs I couldn’t find anything to indicate what was actually going on.
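For anyone poking at something similar, the checks I mean were along these lines (generic commands, not my exact history):

zpool status -v          # pool health; this kept coming back clean for me
journalctl -k -p err     # kernel messages at error severity and worse
dmesg --level=err,warn   # recent kernel warnings and errors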

Finally, after a few more days of searching, I ran across a comment on a thread that said this particular error (I neglected to save the error, and I wish I had) was usually a hardware-related issue, like a loose connector, and I thought, “Holy shit, that makes perfect sense after knocking the shit out of my server!”

So I shut it down, opened it up, and sure enough there was a loose cable on the motherboard. I reseated it, checked the rest, rebooted, and over the course of the next week, it seemed all was well.

But I kept running into weird behavior. No actual error messages, no more kernel panics, and data wrote to the zpool just fine; it was little things not working as expected. Commands that typically ran very speedily (like ls) were lagging, opening a file in nano took multiple seconds instead of being near instant, stuff like that.

I decided to go for a nuke and pave approach, rebuilding from the OS up again, which is where the documentation comes in. Since I started messing about with self hosting 2-3 years ago, I’ve kept meticulous notes on everything I have done and learned so that if I had to re-do it, I could open up Joplin, search for whatever I needed, and proceed. This has saved my ass multiple times over the years as I tinker, break shit, and fix it using my notes.

So yeah, in addition to having a good backup system, you should also keep good documentation for yourself.

edit: removed extra 4 from post title

Quazatron,

Nice work.

I used to get teased by the veterans for using nano instead of vi. Nowadays, I’m the one doing the teasing. Even if you don’t like it, learn the basic stuff; it’ll save you someday.

What I learned this week:

ping _gateway

is faster than looking up the gateway’s IP address and pinging it.
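For comparison, the manual version is roughly this (the gateway address is made up):

ip route show default    # prints something like: default via 192.168.1.1 dev eth0 ...
ping 192.168.1.1         # then ping whatever address it printed

As far as I know, the _gateway name is synthesized by systemd-resolved, so it only resolves on systems using that resolver.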

I also learned how to deploy stuff on AWS using OpenTofu, but the _gateway trick is neater.

harsh3466,

Thank you! I haven’t touched AWS, though I do have a VPS that I also tinker with from time to time. Right now I’m looking at WireGuard tunnels and trying to figure out how to set them up and use them. It’s not absolutely necessary, as I’ve got a reverse proxy (NGINX Proxy Manager) on my server, but it’s something I’m interested in.

mehdi_benadel,

I’m commenting here because I read from a small, different instance, and it seems I can comment too? Can’t you do that from lemmy.world to lemmy.ml? 🤔

harsh3466,

I created an account on lemmy.world a while ago and had been using that as my primary Lemmy account, but lately I can’t post, comment, save posts, or up/downvote from that lemmy.world account.

As a result I decided to try creating an account on lemmy.ml (since I’m most active here on !linux) and see if I would be able to post/comment/etc…

So far, everything works from lemmy.ml, while my lemmy.world account still can’t interact with posts.

sudneo,

I want to add a small bit of info that might be useful in the future. Your script doesn’t really need to be run with root privileges. Your backup script likely needs access to parts of the filesystem which are only readable by root, but that’s all it needs. Root privileges are essentially a combination of capabilities (see man capabilities) attached to processes. In your case, what you want is CAP_DAC_READ_SEARCH, which allows read access to every file. You can, for example, add this capability to rsync (or more likely to Borg, restic, or rustic, which are backup tools I recommend you look at! They do encryption, deduplication, etc.) and then use that binary as a low-privileged user while still having that slice of root privileges. Obviously there is a risk in this too, but it can be compensated for in other ways (for example, running the backup job in a sandbox, which is probably out of scope for now).

While in this particular case it might not be super relevant (backups are often executed as root or as a backup user with read access), it might be useful in the future to know that full root privileges are very rarely needed, and you can run tools with only the specific capability required to perform that privileged action. Check out the setcap and getcap commands.
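A sketch of what that might look like, using a separate copy of rsync so the capability isn’t granted to the system-wide binary (the paths and the group name are just examples):

sudo cp /usr/bin/rsync /usr/local/bin/rsync-backup
sudo setcap cap_dac_read_search+ep /usr/local/bin/rsync-backup
getcap /usr/local/bin/rsync-backup     # verify the capability is set
# optionally limit who can run the privileged copy (assumes a 'backup' group exists)
sudo chown root:backup /usr/local/bin/rsync-backup
sudo chmod 750 /usr/local/bin/rsync-backup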

harsh3466,

Oooh! That is very useful information! Thank you for this! I’m saving your comment so I can look further into this and learn more about it!!

andrew_bidlaw,

I’m new to Linux and I appreciate your posts. I probably won’t need everything from them, but they’re interesting to read and learn from.

harsh3466,

Thanks! Glad you’re enjoying them. I used to be a lurker on that other site, but never posted anything because it always seemed so intimidating. When the great migration happened, I came over here and have found it much more welcoming.

dan,

I’d recommend looking at Borgbackup for your backups. Like rsync, it only sends the changes since the last backup, but it also lets you keep multiple backups efficiently since all the data is deduplicated, and all the data is encrypted. I take a daily backup and retain the last two weeks of daily backups, eight weeks of weekly backups, and an indefinite number of monthly backups.

Borg also has an “append-only” mode where you can restrict particular SSH keys so that they can’t delete the backups, which provides protection in case your client system gets hacked. With rsync, someone who gains unauthorized access to the client system can delete your backups too (just rsync an empty directory and use the --delete option).
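Roughly, a daily job could look something like this (the repo location is a placeholder, the monthly count is finite here just for the example, and it assumes the repo was already created with borg init --encryption=repokey):

# create a new deduplicated, encrypted archive named after the host and timestamp
borg create --stats ssh://backup-host/./borg-repo::'{hostname}-{now}' /srv/data
# thin out old archives according to the retention policy
borg prune --keep-daily 14 --keep-weekly 8 --keep-monthly 12 ssh://backup-host/./borg-repo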

anamethatisnt,

This is important, just ask Tietoevry:
More than three weeks after the cyber attack against Tietoevry, it is clear: Quantities of backup copies were destroyed and will not be able to be recreated. This despite the fact that the agreement states that Tietoevry is responsible for backups, according to several customers.

harsh3466,

I will do that. Right now I’ve got a decently solid local backup and a so-so offsite backup (right now my offsite is a hard drive I keep in my locker at work and bring home once a week to update).

I’ve heard about Borgbackup, but haven’t looked at it since rsync has seemed sufficient for my needs. I’ve made myself a note to look into it.

cbarrick,

(Also, you can change the default editor visudo uses, but I don’t remember the command because I won’t be changing it until I get a grip on vim and can make a decision about which editor I want to use.)

It just uses your preferred editor, which you set with the EDITOR environment variable. In fact, any program that opens an editor should use this to determine the user’s preference.

I set mine to VS Code:


export EDITOR="code -nw"

Examples of programs that use this variable include visudo, crontab -e, and git commit.
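To make the preference stick across sessions, you’d typically put the export in your shell’s startup file, and you can also override it for a single command:

# in ~/.bashrc (assuming bash)
export EDITOR=nano
# one-off override for a single command
EDITOR=vim crontab -e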

harsh3466,

Thank you! I just learned about setting git’s default editor a couple weeks ago when I started learning git. I didn’t realize you could set a global default. When I think about it, it makes perfect sense, but I don’t know what I don’t know, and now I know!

macattack,

This feels like a blog, in a good way. It’s an interesting perspective to hear a Linux user work their way through issues, instead of the usual seasoned-vet viewpoint. Let me know if you have a blog and I’ll throw it on my RSS feed.

harsh3466,

Thanks! It’s not a blog. About a month ago I just started sharing these posts here on lemmy, and they’ve kind of evolved over time. I considered starting up a blog, but I’m content to be part of this community and share them here.

Granixo,

Great post, but where are posts 1-3?

harsh3466,

Thanks! They’re on my lemmy.world account. I just created a new account on lemmy.ml, as I consistently can’t post, comment, etc… from lemmy.world. Here are the links to 1-3: 01 02 03

eager_eagle,

I’d also change the owner of /path/to/my/script to root, closing off an easy path to privilege escalation.

harsh3466,

Is the safer route to run these scripts as root in crontab and change ownership of the script’s path? Right now the scripts are in /username/bin for the account I use to admin the server.

eager_eagle,

I’d just sudo chown root:root /path/to/my/script and sudo chmod 744 /path/to/my/script

harsh3466,

Thank you!

N0x0n,

Owww, that’s a nice hint! If I understand it correctly, my script in /home/server/myscript is at risk of privilege escalation because it’s owned by the user.

What’s the best practice to store scripts that have sudo commands? Because changing the owner of the home path to root doesn’t make sense.

eager_eagle,

because it’s owned by the user

…and executed by root, meaning anyone with access to this user can gain root access by modifying the script that root will run for them.

changing the owner of the home path to root

just chown the script itself.

N0x0n,

Thanks :)

eager_eagle,

sudo crontab -e ?

edit: I just read the “I know I could have just edited /etc/crontab and run my script as root”
