I feel like I'm taking crazy pills

I installed a few different distros, landed on Cinnamon Mint. I’m not a tech dummy, but I feel I’m in over my head.

I installed Docker in the terminal (two things I’m not familiar with) but I can’t find it anywhere. Googled some stuff, tried to run stuff, and… I dunno.

I’m TRYING to learn docker so I can set up audiobookshelf and Sonarr with Sabnzbd.

Once it’s installed in the terminal, how the hell do I find docker so I can start playing with it?

Is there a Linux for people who are deeply entrenched in how Windows works? I’m not above googling command lines that I can copy and paste but I’ve spent HOURS trying to figure this out and have gotten nowhere…

Thanks! Sorry if this is the wrong place for this

EDIT : holy moly. I posted this and went to bed. Didn’t quite realize the hornet’s nest I was going to kick. THANK YOU to everyone who has and is about to comment. It tells you how much traction I usually get because I usually answer every response on lemmy and the former. For this one I don’t think I’ll be able to do it.

I’ve got a few little ones so time to sit and work on this is tough (thus 5h last night after they were in bed) but I’m going to start picking at all your suggestions (and anyone else who contributes as well)

Thank you so much everyone! I think windows has taught me to be very visually reliant and yelling into the abyss that is the terminal is a whole different beast - but I’m willing to give it a go!

youngGoku,

I remember being so lost in the dark when starting docker. There are two main approaches to launching docker containers: one is with CLI arguments and one is from a docker-compose.yml file.

I highly recommend the latter.

Try going to ChatGPT and asking it to write a docker compose file for whatever service you’re trying to stand up.
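To make that concrete, here is a minimal sketch of the same container launched both ways, using the audiobookshelf image that comes up later in this thread (13378:80 is audiobookshelf's usual port mapping; the config path is just a placeholder to adjust):

# 1) CLI arguments (imperative):
docker run -d --name audiobookshelf -p 13378:80 -v "$PWD/config:/config" ghcr.io/advplyr/audiobookshelf:latest

# 2) docker-compose.yml (declarative) — save this, then run `docker compose up -d` in the same folder:
services:
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf:latest
    ports:
      - 13378:80
    volumes:
      - ./config:/config
    restart: unless-stopped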

lemmyvore,

There’s no point in asking ChatGPT for a generic compose, most docker images will recommend a compose that’s specifically written for them.

nexussapphire,

It is a much better way to run volatile server apps that are changing at breakneck speed.

Shareni,

If you’re not planning to actually learn Docker, use an LLM AI to help you out. I just tried the following prompt in Gemini “generate docker-compose.yml that runs audiobookshelf and Sonarr with Sabnzbd” and it generated something that looks reasonable. Then you can follow it up with prompts like “how do I auto start it on linux?” and it will generate the systemd unit, and also tell you what commands to run.
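For reference, the auto-start part usually boils down to a small systemd unit like the sketch below (hand-written here, not LLM output; the /opt/mediastack path is an assumption for wherever your docker-compose.yml lives). An even simpler route is restart: unless-stopped in the compose file plus sudo systemctl enable docker.

[Unit]
Description=Media stack via docker compose
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# assumption: the folder that holds your docker-compose.yml
WorkingDirectory=/opt/mediastack
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now mediastack, assuming you saved it as /etc/systemd/system/mediastack.service.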

rutrum,
@rutrum@lm.paradisus.day

To be fair, you’re taking on a lot of new things at once. You can spin up docker containers on Windows too, all while using a UI. I think it’s great you’re exposing yourself to self-hosting, Linux, the command line interface, and containerization all at once, but don’t beat yourself up for it taking longer than expected. A lot of it takes time. I encourage you to keep trying and playing. Good luck!

Pantherina, (edited )

There is docker desktop on Linux too.


sudo apt install docker flatpak -y
# add flathub if not already there
flatpak install docker

Edit: please use Podman. And if you think about Virtualbox, please use Virt-manager instead. Both are Red Hat products and they are pretty awesome. Podman is more secure and works well for your job; it is letter-for-letter compatible with docker. You can use podman-compose (if you need it), but that requires running a daemon, which is also possible.

You can use Podman with many container sources natively, while docker only allows dockerhub. Says enough.
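To illustrate the CLI compatibility, a tiny sketch (the image name is fully qualified so Podman doesn’t have to guess a registry):

# many docker commands work unchanged if you point "docker" at podman
alias docker=podman
docker run --rm docker.io/library/hello-world   # actually runs podman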

possiblylinux127,

Not recommended: for one, it is proprietary, and two, it’s more confusing to have tons of buttons than it is to write a docker compose.

Pantherina,

I mean I would recommend them to use Podman. Docker on Linux Mint was a mess last time I used it.

possiblylinux127,

Why?

It seems like podman would be way harder as you need to configure systemd and manage containers yourself.

With docker compose you apply it and docker creates the containers you need.

Pantherina,

I don’t know if you still need an external repo for docker; podman is in the system repo.

When using containers it works the same. Yes, the systemd stuff may be manual; that’s what Podman Desktop is probably for.

It’s more secure, more free, and if you’re learning it fresh anyway, why not use the better tool?

possiblylinux127,

Podman is not really a replacement for docker. It is its own separate thing and it has trade offs with docker.

The reason I use podman on my local machine and for Jellyfin is that it is darn fast. It makes docker look like an emulator by comparison. With that being said, the issue with podman is mostly permission-related. However, it also has some instability in cases where a container malfunctions. This often happens when you try to stop and start a container at the same time.

Once that happens the runtime effectively locks up as the system is in a state that it doesn’t know how to handle.

Some of the benefits of docker include its ability to recover from just about anything. If you need a container to always be available docker can do that. It also can do on the fly patching and self healing.

Docker compose is very nice to have for larger software with multiple containers. I can write a docker compose that builds and deploys my nodejs applications with a database back end and it will just work without any issues. Deploy it and you are good.
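As a rough sketch of that kind of multi-container compose (the image tag, port, and credentials here are made-up placeholders, and build: . assumes a Dockerfile in the same folder):

services:
  app:
    build: .                      # builds the Node.js app from a local Dockerfile
    ports:
      - 3000:3000
    environment:
      - DATABASE_URL=postgres://app:secret@db:5432/app
    depends_on:
      - db
    restart: unless-stopped
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=app
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    restart: unless-stopped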

Pantherina,

Thanks for the info, I have little personal experience especially with compose.

How is podman compose after setting it up?

possiblylinux127,

Podman compose is very much lacking and breaks easily (don’t use it)

LainTrain, (edited )

sudo docker will do the trick. Docker does some networking shit, so it needs admin privileges

Don’t give up, don’t listen to goober 🤓 itt telling you to read manpages that shit is worthless.

bionicjoey,

Better yet, add yourself to the docker group. You shouldn’t have to sudo it
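Roughly (standard commands; the group change only takes effect after you log out and back in):

sudo usermod -aG docker $USER    # add your user to the docker group
# log out and back in (or reboot), then verify without sudo:
docker run hello-world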

ugh,

I’m also pretty new to Linux, but I’ve finally gotten a bit of a grasp on it. I started learning Linux to set up a home server, so I also jumped straight into Docker. You have gotten some thorough replies, but I thought I’d share my chaotic journey with it that has ended in a decent ratio of success vs confusion. Note: I have used Ubuntu from the start.

Don’t use docker desktop. It’s garbage. Also, don’t use the Snap image.

$ sudo apt install docker.io

$ sudo apt install docker-compose

Those are both cli “programs”. They aren’t apps like you have on Windows. It seems VERY intimidating to talk into the void of the terminal, but you’ll build confidence. Docker commands work like any other commands, all in the same place.

Now install Portainer CE. The instructions are very simple to follow. You can reach Portainer through your browser at the localhost address it gives you, which you type directly into the URL bar. I think it’s localhost:9000.

Portainer will give you an easy visual way to manage Docker. You can perform many tasks through Portainer instead of using the command line. Honestly, I’m pretty sure you could do everything on Portainer and not even touch the terminal. I don’t suggest that because you will have to have at least a basic understanding of how Linux and Docker work. You will be confused, and you will feel crazy. Eventually, you’ll get more comfortable living in that psychosis.

On to Docker Compose!! This is my preferred way to run containers. I have a designated folder in /opt that I use for my compose files. This way, I know exactly how I set up my programs. My memory is awful and I tweak things so often that I’ll completely forget how I have even gotten to this point or where ANY of my files are. It’s pretty easy to find docker compose files online that you can copy and paste and it instantly works!
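For example, a compose file for the Sonarr + SABnzbd pair the OP mentioned might look roughly like this (a sketch based on the linuxserver.io images; the host paths, PUID/PGID and timezone are placeholders to adjust for your system):

services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./sonarr-config:/config
      - /path/to/tv:/tv                    # placeholder: where your shows live
      - /path/to/downloads:/downloads
    ports:
      - 8989:8989                          # Sonarr's default web UI port
    restart: unless-stopped
  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd:latest
    container_name: sabnzbd
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./sabnzbd-config:/config
      - /path/to/downloads:/downloads
    ports:
      - 8080:8080                          # SABnzbd's default web UI port
    restart: unless-stopped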

To make it simple, after I have saved my docker-compose.yaml file in the designated folder, I right click on the empty area and choose “open in terminal”.

$ sudo docker-compose up -d

The -d runs the containers detached (in the background), so they keep running even if you exit out of the terminal. At this point, your container will also show up in Portainer!

I think that covers the basics. My biggest tip is to keep a notepad handy to write down commands that you have to search for. Your bookmarks will fill up very quickly otherwise. Expect to get stuck sometimes. Expect to spend hours trying to troubleshoot an issue, then have it suddenly work with no idea what you actually did to fix it. Accept the win and never touch it again.

I have done fresh installs many times. Some because I’ve played with 10 different programs that I decided against and want the leftover files gone, some because I wanted to try different mixes of distros, and once because I legitimately broke the OS.

Keep your important stuff on an external drive to avoid any loss and don’t be afraid to mess around with it!

Btw, I’m a huge KDE plasma fan. It’s lighter than GNOME, but very user friendly. I’ve settled on Kubuntu as my distro of choice.

flubba86,

Well said. I’ve been using Linux for 15 years and using Docker for 6 years. I couldn’t have communicated as well as you did. You have a knack for teaching.

lemmyvore,

Don’t use docker-compose anymore, it’s been obsolete for a while now and won’t be getting new features.

It’s best to add the docker official repo and install docker and docker-compose-plugin from there.

The -plugin version acts as a docker subcommand (docker compose) and will be updated alongside docker going forward.
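On Mint/Ubuntu that boils down to roughly the following (a condensed sketch of Docker’s documented apt-repo setup; check docs.docker.com for the authoritative, current steps):

sudo apt-get update && sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Mint is Ubuntu-based, so use the Ubuntu base codename from /etc/os-release
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$UBUNTU_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin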

ugh,

Thank you! I’ll look into that

h3ndrik, (edited )

Try a more managed and out-of-the-box solution first, then work your way down to the commandline. I’d recommend one of the NAS solutions like openmediavault (if they still do docker) or cockpit-project.org

or Docker for Desktop or podman.io

(maybe lxc containers with proxmox or unraid)

onlinepersona,

Docker is one of the container technologies

Containers vs Images

This is a very simplified explanation, which hopefully clears things up for you. As with all simplifications, it isn’t entirely correct.

Containers put processes, files, and networking into a space where they are secluded from the rest. Your main OS is called the host and the container is called the guest. You can selectively share resources with the guest. To use an analogy: if your house were the computer running Linux, and you took a room, put tools and resources for those tools into it, put workers into it, got them to start working and locked the door, they’d be contained in the room, unable to break out. If you want to give the workers access to resources, you add either a window, a corridor, or even a door, depending on how much access you want to give them.

Containers are created from an image. Think of it as the tools, resources, and configuration required every time you create a room in your house for workers to do a job. The woodworkers will need different tools and resources than say metalworkers.

Most images are stored on DockerHub. So when you do docker pull linuxserver/sonarr you download the image. When you do docker run linuxserver/sonarr you create a container from an image.

Installation

You’re on Cinnamon Mint, which is a Linux distribution derived from Ubuntu (which is itself derived from Debian). You have to follow the installation instructions. Everything is there. If something doesn’t work, it’s most likely because you skipped a step. The most important ones are the post-installation steps:

  • Adding your user to the docker group
  • Logging out and back in (or simply restarting)

Those are the most commonly missed steps. I’ve fallen for this trap too!
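A quick way to check that both steps took effect (assuming docker itself installed fine):

groups                    # "docker" should appear in the list after you log back in
docker run hello-world    # pulls a tiny test image and prints a greeting if everything works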

Local help

To use Linux, you need to learn ways to help yourself or find help. On Linux, most well-written programs print help text. Simply running a command without any arguments most often outputs a help text --> running docker does so. If it doesn’t, the --help flag often does --> docker --help. The shorthand is -h --> docker -h.

Some commands have subcommands, e.g. docker run, docker image, docker ps, … . Those subcommands also take flags, of which -h and --help are available.

The help output is often not extensive, and programs often have a manual. To access it, the command is man --> man find will output the manual for the find command. Docker doesn’t have a local manual but an online one.

For clarification when running a command there are different ways to interpret the text after the command:

Flags/Options

These are named parameters to the command. Some do not take input, like -h and --help; those are called flags. Some do take input, like --file /etc/passwd, and are often called options.

Arguments

These are unnamed parameters and each command interprets them differently. echo “hello world” --> echo is the command and “hello world” is the argument. Some commands can take multiple arguments

Running containers

Imperatively

As described above, docker run linuxserver/sonarr runs an image in a container. However, it runs in the foreground (as opposed to the background, in what is most often called a “daemon”). Starting in the foreground is most likely not how you want to run things, as that means if you close your terminal, you end the process too. To run something in the background, you use docker run --detach linuxserver/sonarr.

You can pass options like -v or --volume to make a file or folder from your host system available in the guest, e.g. -v /path/on/host:/tmp/path/in/guest. Or -p / --publish to forward a host port to a guest port, e.g. -p 8080:80. That means if you access port 8080 on your host, the traffic will be forwarded to port 80 in the guest.
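Putting those flags together, a hedged example using the Sonarr image mentioned above (the host path is a placeholder; 8989 is Sonarr’s usual web UI port):

docker run --detach \
  --name sonarr \
  -p 8989:8989 \
  -v /path/on/host/sonarr-config:/config \
  linuxserver/sonarr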

These are imperatives as in you command the computer to do a specific action. Run that docker image, stop that docker container, restart these containers, start a container with this port forward and that volume with this user …

Declaratively

If you don’t want to keep typing the same commands, you can declare everything about your containers up front. Their volumes, ports, environment variables, which image is used, which network card/interface they have access to, which other network they share with other containers, and so on.

This is done with docker-compose or docker compose for newer docker versions (not all operating systems have the new docker version).
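For instance, the docker run example above could be declared once in a docker-compose.yml like the sketch below and started with docker compose up -d (same placeholders as before):

services:
  sonarr:
    image: linuxserver/sonarr
    ports:
      - 8989:8989
    volumes:
      - /path/on/host/sonarr-config:/config
    restart: unless-stopped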

This is already a long text, so if you want to know more, the best resource is the docker compose manual and the compose file reference.


Hopefully this helped with the basics and understanding what you’re doing. There are probably great video resources out there that explain it more didactically than I do with steps you can follow along.

Good luck!

CC BY-NC-SA 4.0

cypherpunks,
@cypherpunks@lemmy.ml

Your main OS is called the host and the container is called the guest

The word “guest” is generally used for virtual machines, not containers.

yianiris,
@yianiris@kafeneio.social

Can containers boot on their own? Then they are hosts, if not they are guests.
Unless there is some kind of mutual 50/50 cohabitation of userspace with two different pid1s
pid 1 left pid 1 right

@cypherpunks @onlinepersona

cypherpunks, (edited )
@cypherpunks@lemmy.ml

Can containers boot on their own? Then they are hosts, if not they are guests.

It depends what you mean by “boot”. Linux containers are by definition not running their own kernel, so Linux is never booting. They typically (though not always) have their own namespace for process IDs (among other things) and in some cases process ID 1 inside the container is actually another systemd (or another init system).

However, more often PID 1 is actually just the application being run in the container. In either case, people do sometimes refer to starting a container as “booting” it; I think this makes the most sense when PID 1 in the container is systemd as the word “boot” has more relevance in that scenario. However, even in that case, nobody (or at least almost nobody I’ve ever seen) calls containers “guests”.

As to calling containers “hosts”, I’d say it depends on if the container is in its own network namespace. For example, if you run podman run --rm -it --network host debian:bookworm bash you will have a container that is in the same network namespace as your host system, and it will thus have the same hostname. But if you omit --network host from that command then it will be in its own network namespace, with a different IP address, behind NAT, and it will have a randomly generated hostname. I think it makes sense to refer to the latter kind of container as a separate host in some contexts.

StrawberryPigtails,

Linux is a slightly different way of thinking. There are any number of ways that you can solve any problem you have. In Windows there are usually only one or two that work. This is largely a result of the hacker mentality from which Linux and Unix came. “If you don’t like how it works, rewrite it your way” and “Read the F***ing Manual” were frequent refrains when I started playing with Linux.

Mint is a fine distro which is based off of Ubuntu, if I remember correctly. Most documentation that applies to Ubuntu will also apply to you.

Not sure what exactly you installed, but I’m guessing that you did something along the lines of sudo apt-get install docker.

If you did that without doing anything ahead of time, what you probably got was a slightly out of date version of docker only from Mint’s repositories. Follow the instructions here to uninstall whatever you installed and install docker from docker’s own repositories.

The Docker Desktop that you may be used to from Windows is available for linux, however it is not part of the default install usually. You might look at this documentation.

I don’t use it, as I prefer ctop combined with docker-compose.

Towards that end, here is my docker-compose.yaml for my instance of Audiobookshelf. I have it connected to my Tailscale tailnet, but if you comment out the tailscale service stuff and uncomment the port section in the audiobookshelf service, you can run it directly. Assuming you’re not making any changes:

Create a directory somewhere,

mkdir ~/docker

mkdir ~/docker/audiobookshelf

This creates a directory in your home directory called docker and then a directory within that one called audiobookshelf. Now we want to enter that directory.

cd ~/docker/audiobookshelf

Then create your docker compose file

touch docker-compose.yaml

You can edit this file with whatever text editor you like, but I prefer micro which you may not have installed.

micro docker-compose.yaml

and then paste the contents into the file and change whatever settings you need to for your system. At a minimum you will need to change the volumes section so that the podcast and audiobook paths point to the correct location on your system. It follows the format <system path>:<container path>.

Once you’ve made all the needed changes, save and exit the editor and start the instance by typing

sudo docker compose up -d

Now, add the service directly to your tailnet by opening a shell in the tailscale container

sudo docker exec -it audiobookshelf-tailscale /bin/sh

and then typing

tailscale up

Copy the link it gives you into your browser to authenticate the instance. Assuming that neither you nor I made any typos, you should now be able to access audiobookshelf via the books hostname on your tailnet. If you chose to comment out all the tailscale stuff, you would find it at localhost:13378

docker-compose.yaml


version: "3.7"
services:
  tailscale:
    container_name: audiobookshelf-tailscale
    hostname: books                             # This will become the tailscale device name
    image: ghcr.io/tailscale/tailscale:latest
    volumes:
      - "./tailscale_var_lib:/var/lib"          # State data will be stored in this directory
      - "/dev/net/tun:/dev/net/tun"             # Required for tailscale to work
    cap_add:                                    # Required for tailscale to work
      - net_admin
      - sys_module
    command: tailscaled
    restart: unless-stopped
  audiobookshelf:
    container_name: audiobookshelf
    image: ghcr.io/advplyr/audiobookshelf:latest
    restart: unless-stopped
#    ports:                                     # Not needed due to tailscale
#      - 13378:80
    volumes:
      - '/mnt/nas/old_media_server/media/books/Audio Books:/audiobooks'   # This line has quotes because there is a space that needed to be escaped.
      - /mnt/nas/old_media_server/media/podcasts:/podcasts                # See, no quotes needed here, better to have them though.
      - /opt/audiobookshelf/config:/config                                # I store my docker services in the /opt directory. You may want to change these to './config' and './metadata' while you're playing around.
      - /opt/audiobookshelf/metadata:/metadata
    network_mode: service:tailscale             # This line tells the audiobookshelf container to send all traffic to the tailscale container

I’ve left my docker-compose file as-is so you can see how it works in my setup.

Kecessa,

👆

And that is why Linux mass adoption is never coming.

velox_vulnus,

Docker is not your average GUI text editor or video player. It is supposed to be a CLI-first container tool, similar to Podman, Incus, etc. The GUI applet is something you can add for your convenience.

A container is somewhere between running on bare metal and running in a virtual machine, in the sense that it is an ephemeral, isolated system running on the same kernel with minimal overhead.

Docker for Windows runs the whole Linux kernel in a VM. Basically, now you’re running a container inside a VM. That’s a lot of overhead, if you understand what that means. And btw, the desktop app for Windows is available on Linux. It’s just that you don’t really need it.

StrawberryPigtails,

It’s not as difficult as the length of my comment implies, and doing it in the terminal simplifies the explanation quite a bit.

The average user though might never need to use the terminal. Most of what they want can be done in the browser.

As for Linux mass adoption, that happened years ago. Just nobody noticed. Android, Chromebook, and the Steam Deck are all Linux-based, and MacOS (BSD-derived) is a close relative. And Microsoft has even made it possible to run Linux command line programs in Windows, with some caveats, using WSL. And that’s not counting the majority of servers, networking gear and spacecraft running Linux or Unix.

Kecessa,

“They’re all close relatives”

on which the experience has been tuned to make them as user friendly as possible to the point where they have nothing in common with desktop Linux from an average user perspective.

StrawberryPigtails,

And blackbox has nothing in common with KDE? /s

I’m off for bed. Night.

Nibodhika,

Getting this setup on Windows would be even harder because it would involve installing docker manually or setting up WSL and following these steps. What OP is trying to do is a complex thing that most people don’t need, that would be the same as saying Windows is hard because setting up a VM with hardware passthrough is difficult on Windows, completely missing the point that that is a complex thing to do and that it’s complex on any other OS as well.

Kecessa,

Yeah but the difference is that even for simple things, Linux instructions look like what was posted by the person I replied to.

Nibodhika,

Being a person who replies to lots of new users’ questions, I strongly disagree. 99% of the questions come from a Windows mindset, so it requires some deconstruction of the way the person is thinking. Have you noticed how very few Mac users ask beginner questions on Linux forums?

There’s a big difference between something being difficult and someone being used to doing things differently: driving on the left or the right is just as difficult, but if you’ve driven one way all your life, switching can be hard. Just like that, a lot of Linux concepts are different from what people are used to if they come from a Windows background, but the same is true the other way around. As someone who’s been using Linux for decades, I find Windows weird and convoluted, but I know that this is just my perception, and that someone who’s using it daily is used to it.

Edit: if you’re going to reply to this, mind providing an example of something you think is easy on Windows but hard on Linux?

Para_lyzed,

Just to be clear, I agree with you practically 100%, and you can see my response to this person in the same thread as well, but I’m going to play devil’s advocate here. I’ll give you a few examples of things that are easier on Windows (and most also are easier on MacOS) than they are on Linux (or at least some distros depending on which you pick):

  • Using proprietary multimedia codecs (Fedora)
  • Installing Nvidia drivers that have the capability of auto-updating (any distro that doesn’t have a GUI for driver downloads)
  • Installation (most people simply use the pre-installed OS and never reinstall or install anything new)
  • Game compatibility (Linux gaming is great, but there are still major titles not supported)
  • Accessing firmware settings and profiles for laptops while booted (like Armoury Crate for Asus laptops (yes, I know about rog-control-center and asusctl, but those don’t work for all devices, and are harder to set up))

There are probably plenty more, and there are things that are easier on Linux. But again, I’m just playing devil’s advocate here. Each of those examples are less intuitive to complete on Linux (or at least some distros) than they are in Windows. As someone who has been using Linux for a decade, I don’t think that they are all hard, but many are also less intuitive in Linux than MacOS, just to address your first point. When you have to start adding PPAs/repos to get specific things, I’d argue that’s objectively less intuitive than the alternatives in other operating systems, and not merely a different way of thinking. In many cases though, for most things, there are intuitive solutions that exist in Linux. There are plenty of cases where someone overcomplicates something they want to do in Linux by using a Windows mindset, so I still agree with you there. I just think it’s a little more nuanced than you seemed to imply.

Nibodhika,

I had written a more thorough response, but the app crashed and I lost it. Sorry if this one sounds a bit harsh; I do mostly agree with you, I just think that the examples you’ve chosen are bad because they’re either distro specific (so not a Linux problem but a problem with that distro), or not Linux problems (i.e. there’s nothing Linux can do about it because the problem doesn’t lie with Linux but elsewhere).

Using proprietary multimedia codecs (Fedora)

Distro specific. It should be just like installing anything else, and it is for some distros, certainly for the ones I’ve been using.

Installing Nvidia drivers that have the capability of auto-updating (any distro that doesn’t have a GUI for driver downloads)

Distro specific, I’ve had NVIDIA drivers auto-updating for the past 15 years or so, long before Windows had that same capability. And it updates with my regular system update, no need to use any special GUI for it.

Installation (most people simply use the pre-installed OS and never reinstall or install anything new)

Not a Linux problem. Also, while I can see the argument that it’s easier to use what’s already installed, that tells you nothing about how easy one thing is in comparison to the other. If computers came with the most convolutedly complex and unusable crap of an OS, full of bloatware and spyware pre-installed, people would still use it. Not to mention that the Linux installation process was much easier than Windows for the longest time (until Windows finally implemented automatic driver installation).

Game compatibility (Linux gaming is great, but there are still major titles not supported)

Not a Linux problem. Although this is something to bear in mind while choosing your OS, it’s the companies that make games that are at fault here; there’s nothing Linux can do to remedy this situation, so it’s unfair to judge it for it. That’s like saying Windows is harder to use because running docker containers in it is impossible without some virtualisation: while this is something to consider when deciding what OS you will use to self-host, it’s not per se a reason why Windows is more difficult to use.

Accessing firmware settings and profiles for laptops while booted (like Armoury Crate for Asus laptops (yes, I know about rog-control-center and asusctl, but those don’t work for all devices, and are harder to set up))

Same as above.

Like I said, I agree with lots of what you said, and some of those are thing to keep in mind when choosing an OS, but those are not good arguments as for which OS is simpler than the other. The Linux way to do most of them is using the package manager, and that’s much simpler than searching the internet for the correct download.

yianiris,
@yianiris@kafeneio.social

The greatest contribution of Nvidia to FOSS has been to keep many such thinking people hostage to proprietary solutions and out of our visibility.

You know, those that refuse to learn anything new, refuse to read documents, and believe that controlling input/output through the terminal is inferior to GUI blindness.

@Nibodhika @Para_lyzed

Nibodhika,

Yes, NVIDIA is crap, which is why my next GPU will not be NVIDIA. However you need to remember AMD used to be crappier, and the last time I bought a GPU I still didn’t trust AMD.

Also not sure what your answer has to do with the ongoing discussion.

yianiris,
@yianiris@kafeneio.social

Auto downloading and installing software is pretty much a violation of ethics in the unix ecosystem, pretty much anything that begins with Auto should be rejected.

But the general public wants the convenience and luxury of having things done by others without being bothered. Many distros competing with each other for lazy newcomers (Ubuntu, Mint, Debian, Manjaro, ...) provide all those non-unix-like utilities.

Lately it is getting worse; all sorts of telemetry are branded as good

@Nibodhika

Nibodhika,

I assume you’re talking about the “auto-update” drivers. That’s a pretty standard Linux thing: everything “auto” updates when you tell your system to update, and that’s one of the huge advantages of package managers. I’m not sure which Linux you have used, but the vast majority of them do have a package manager that updates everything (including drivers).

yianiris,
@yianiris@kafeneio.social

I have never used such a system, I don't know of a single one, and I wouldn't use such a system.

@Nibodhika

Nibodhika,

Would you mind telling us which obscure Linux distro you use that doesn’t have a package manager? And how do you update your system?

yianiris,
@yianiris@kafeneio.social

I have used apt, apt-get, apk, pacman, and xbps, and I have never encountered an auto-update.

Even dumb GUIs like Synaptic or pamac don't auto-update

@Nibodhika

Nibodhika,

All of those upgrade the drivers when you upgrade your system just like I mentioned.

yianiris,
@yianiris@kafeneio.social

Then what you consider automatic is a very unique perception of how things work.
In a car automatic transmission means it shifts on its own.
In a non automatic either you shift or it doesn't happen.

On most pkg managers YOU elect when to upgrade, the output is a list of "upgradable" pkgs, then you are asked whether to proceed or not. Nothing automatic about this.

Auto update would mean software has been updated on its own without you authorizing it.

@Nibodhika

Nibodhika,

No it’s not; every sane person considers “automatic” to mean little or no human interaction, but some human interaction to trigger the flow is still allowed. Next you’ll tell me that an automatic weapon fires of its own will, or that an automatic garage door decides when to open. A single command that updates all of your system seems pretty automated to me. If not, try doing your next update manually by downloading every single package from its source, compiling it if needed, and copying it into the correct folders; do that for every one of the hundreds of packages that get updates and then tell me that a single command is not automating a lot of that away for you.

It doesn’t even work the way you’re describing in Windows; you get prompted whether you want to update there.

Para_lyzed,

It seems you misunderstand what the other commenter meant. By “auto-update”, they mean that the package is fetched and updated when you request your package manager to perform an update/upgrade (meaning that the user specifically requested the packages be updated, not that it happened on its own). This comes from my use of the term “auto-updating” in reference to Nvidia drivers on Windows, which will automatically check for updates on boot, in comparison to the closest equivalent with Linux distros in which the drivers would be updated by the package manager (but still do not require the user to manually install a new version separately, as would be the case if trying to use Nvidia’s official runfile installer). I grouped the Linux drivers from a package manager into the “auto-update” category, which I realize in hindsight is a bit confusing given the nature of updating through a package manager.

Para_lyzed, (edited )

I do agree with you that these problems are not the fault of Linux, but I never meant to imply that they were. The average PC user has absolutely zero care for where the fault is, the only thing that matters to them as an end user is their experience while using the operating system. Users who actually care about the quality and ethics of the software they use are likely to already be using Linux anyway, but that is very much not the norm. The layperson is perfectly happy to never care or understand a single thing about their operating system. I will be answering your response to each of my points, as well as rebuttals for this:

The Linux way to do most of them is using the package manager, and that’s much simpler than searching the internet for the correct download.

in the following:

Distro specific. It should be just like installing anything else, and it is for some distros, certainly for the ones I’ve been using.

They are pre-installed in Windows. In fact, most people won’t even understand why their media isn’t playing, and won’t even know that they need to install something, or how to install it. Some distros have them pre-installed, but there are plenty that do not. The point here is that it is inherently less intuitive and more difficult in Linux than in Windows.

This doesn’t require installing anything in Windows. This is purely easier in Windows for many distributions, and equal at best for those who have them installed by default. Thus using the package manager is not easier or more intuitive in this sense, especially since the packages have strange names (so you’d have to look up how to do it as a new user).

Distro specific, I’ve had NVIDIA drivers auto-updating for the past 15 years or so, long before Windows had that same capability. And it updates with my regular system update, no need to use any special GUI for it.

Nvidia’s driver software comes pre-installed in a lot of pre-built systems nowadays. It has automatic update checking so it will prompt you on boot to ask if you want to update. Even if it didn’t come pre-installed (which is also the case with most Linux distros), Windows users don’t have to look up a tutorial on how to download and install the drivers. In Linux, the package names and installation methods vary so greatly between distros, that I still have to look it up every time I set up a new distro, even with a decade of Linux experience. In either case, the user will need to use the Internet to search for a page (either the Nvidia driver site, or a tutorial for how to do it on their distro). And no, I’m not talking about Nouveau here, it still has lots of issues and delivers much worse performance than the proprietary driver. Sure, using an AMD card is easier, but the current market share suggests most people will be coming over with Nvidia hardware.

When all the first results are the Nvidia website with official driver downloads, and don’t require the user to use the terminal (and make sure the tutorial works for their distro), Windows is easier there. You just download an executable and run it. No need to add non-free repositories to your package manager, no need to use the terminal, just a search, 4 clicks, and you’re done. Yes, it’s a very “Windows way to do things”, but it’s also objectively easier than it is in a variety of Linux distros. A select few distros have a GUI way to manage this, which I’d rate as slightly easier than the manual Windows way, but still more difficult than the “this is already installed on my system” way that’s the case for many pre-builts and laptops.

Not a Linux problem. Also, while I can see the argument that it’s easier to use what’s already installed, that tells you nothing about how easy one thing is in comparison to the other. If computers came with the most convolutedly complex and unusable crap of an OS, full of bloatware and spyware pre-installed, people would still use it. Not to mention that the Linux installation process was much easier than Windows for the longest time (until Windows finally implemented automatic driver installation).

You seem to have answered this for me. People will use what is pre-installed on their system because it is easier for them to do so. Again, not the fault of Linux, but it adds a layer of difficulty to those who want to switch. The layperson doesn’t know what an ISO image is, or how to make a liveUSB out of one.

This has nothing to do with using a package manager or the “Linux way to do things”.

Not a Linux problem. Although this is something to bear in mind while choosing your OS, it’s the companies that make games that are at fault here; there’s nothing Linux can do to remedy this situation, so it’s unfair to judge it for it. That’s like saying Windows is harder to use because running docker containers in it is impossible without some virtualisation: while this is something to consider when deciding what OS you will use to self-host, it’s not per se a reason why Windows is more difficult to use.

Most end users will not care whose fault it is. The fact of the matter is that it will dissuade a large portion of gamers away from Linux, as Riot games don’t run at all. It’s much more difficult to convince someone that they should switch to another operating system when the games they play or programs they use (like Adobe software) won’t work. Sure, in many cases there are alternatives, but that’s a massive layer of difficulty, especially if you’re expecting people to learn new, alternative software with equally steep or steeper learning curves than the Adobe suite, or give up games they’ve been playing for years.

Again, nothing to do with a package manager or the “Linux way to do things”.

Same as above.

Again, the end user doesn’t care whose fault it is. If they can’t access the features their laptop or PC came with (like the ability to use their discrete GPU), then that’s going to be a hard sell. And even if they can by installing something like rog-control-center, that is still another layer of difficulty.

If there is a solution available for a specific computer, it is inherently more difficult on Linux. The computer will come pre-installed with the correct software (no download necessary), and even if you were to reinstall, all you have to do is download a single executable and run it. On Linux, however, you have to research and figure out what kind of software would even do this (asusctl or rog-control-center, for instance), then you have to check the model number of your laptop or motherboard for compatibility because only a select few will be compatible, then you have to add a PPA/repo to your package manager (if the solution even has that available; some will require you to build from source and/or update manually every update), and only then can you install the package. Far more steps, far less intuitive, and far more difficult for an average user.

I gave you examples of things that are more difficult in Linux than Windows. None of these things have to do with a difference in perspective on how to install software, or an investment in the “Windows way” to do things. I’ve been using Linux for around a decade, and I’ve had recent experience with each of these things in Windows while helping other people. They are simply easier in Windows. I want to again make it clear that I never said any of these were the fault of Linux, but you can’t merely overlook them simply because Linux isn’t at fault. New users would still want/have to do these things, and doing them can be difficult or impossible depending on compatibility. There are plenty of arguments for Linux, but the argument that it is simpler or easier in any overarching sense is not one of them. There are very specific instances where things are easier in Linux, or the experience of a user is simpler in Linux, but those few cases do not encompass the entirety of Linux. You have said yourself that you have not used Windows recently, and that seems very apparent to me. I dislike Windows, but Linux has not gotten anywhere near a point where one of my recommendations for switching to Linux are that it is easier or simpler.

I agree that the package manager is a much better solution than the Windows way of doing things, but it has nothing to do with most of the points I made.

Adanisi,
@Adanisi@lemmy.zip

No they don’t lol

Para_lyzed,

This is a discussion about Docker, which is a complex terminal-based containerization system. This is not a program that is typically used by the average user. Docker’s complexity does not imply that Linux requires this kind of set up to use as a normal desktop. This is usually server software. Docker is also available on Windows and MacOS, and is partnered with Microsoft (you know, the company that makes Windows? The desktop OS with the highest market share?). Are you going to complain about how Windows will never reach mass adoption because users are able to install complex tools that require a steep learning curve to use? You can install Docker on Windows and use the same commands and configs, so do you believe that Windows suffers this same problem?

Before you point out the start of that comment with the “Linux mentality” stuff, while some of that is certainly true, you can now do everything an average user needs to do in an intuitive GUI, just like Windows (better in many cases, actually). Half the listed commands (making directories and files) can be done in the file manager just like Windows, normal apps can be managed in app stores, and the rest of it is docker specific, which is (again), server-oriented software. I’m not a fan of their mentality about how things work in Linux, because it’s very much an old mentality that doesn’t account for the immense amount of change that has happened in the past decade to make Linux more accessible.

I don’t understand why people come to the Linux communities to complain that Linux is “too hard” or “too complex” to be usable. If you don’t have an actual interest in Linux, find another community. If you want a simple experience, use a simple distro that’s meant to be easy to use, and use software that is easy to use.

foobaz,

😅

Adanisi,
@Adanisi@lemmy.zip

Because Docker, a complex program most users will never use, has a long install process?

If I posted the long setup instructions for it on Windows, would you tell me Windows mass adoption is never coming?

Kecessa,

Because instructions like these are just standard procedure for Linux.

Adanisi,
@Adanisi@lemmy.zip

Except, no they’re not. Not anymore.

Nibodhika,

Ok, so I don’t know the specifics, this might not be entirely accurate, but this is a general step-by-step guide for Debian based distros like Mint.

Install docker

The first thing you need to do is install docker, this can be done via whatever GUI you use for a package manager or via the terminal using sudo apt install docker (I’m not sure docker is the name of the package, I’m just guessing, you can do an apt search docker to see what’s available)

Add yourself to the docker group

This is likely not needed on Mint, but just in case, your user should be in the docker group, i.e. run sudo gpasswd -a $USER docker (or substitute your actual username). I’m almost sure Mint does this by default.

Enable the docker systemd service

This also might not be needed, again I’m almost sure Mint does this for you when you install docker, but just in case the command is sudo systemctl enable docker

Reboot

Because there have been changes to your user groups you need to relogin, easier to reboot.

Use docker

Now you have a system with docker. You can test this by running the following command: docker run hello-world. If you see a bunch of text that contains “Hello from Docker”, docker is working.

Set up a docker-compose file

Create a folder, and in that folder create a text file called docker-compose.yaml. This file will tell docker what you want to run. For example, to have Nextcloud (which is an awesome self-hosted drive alternative; I’m not going to teach you the specific services you want, you can figure those out by looking at their page on linuxserver.io or something) you can look here hub.docker.com/r/linuxserver/nextcloud for how to write your docker-compose file. For example, you could write:


services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config:/config
      - ./data:/data
    ports:
      - 8080:80
      - 443:443
    restart: unless-stopped

Then open a terminal in that folder and run docker compose up -d. After that is done, open a browser, go to http://localhost:8080 and begin using Nextcloud.

scratchandgame,

Is there a Linux for people who are deeply entrenched in how Windows works

How Windows works is different I think?

I’m not above googling command lines that I can copy and paste but I’ve spent HOURS trying to figure this out and have gotten no where…

You don’t need to.

I heard you are using a debian-based distro, can you read the man pages for apt?

Then use apt to find docker, and get it.

Once it’s installed in the terminal, how the hell do I find docker so I can start playing with it?

It is not installed in the terminal. It is installed on the system, ON DISK!

docker should be installed in /usr/bin. It is on PATH. Type docker and see what happens. If not, try searching in /usr/bin (on BSDs third-party software is separated from the base system, so docker would be installed in /usr/local/bin).

And the docker service should be started; if not, use the fucking systemctl to start it. The service name should be docker, if I recall correctly.

N0x0n, (edited )

Been there, now I have over 12 containers running 24/7 on an old spare laptop with everything exposed via traefik (reverse proxy), a self-signed CA, local DNS… what a ride ^^'.

The best advice, and that’s what helped me get going, is to watch/follow some YouTube videos about docker and how to expose your first container locally, so you get the general gist of how it works.

2 years ago, NetworkChuck introduced me to docker containers. Not saying he’s the best youtuber to get you into docker and learning and stuff, but it’s a GOOD starting point :).

There are also Christian Lempa and TechWorld with Nana, who will also give you some good pointers with docker and docker compose.

Good luck !

Qantumentangled,

There’s not a fantastic GUI for managing docker. There are a few like dockge (my favorite) or Portainer.

I recommend spending some time learning docker run with exposed ports, bind volumes (map local folders from your drive to folders inside the container so you can access your files, configs, content, etc. Also so you don’t lose it when you delete the container and pull a newer version).

Once you’ve done that, check out the spec page for docker-compose.yaml. This is what you’ll eventually want to use to run your apps. It’s a single file that describes all the configuration and details required for multiple docker containers to run in the same environment. ie: postgres version 4.2 with a volume and 1 exposed port, nginx latest version with 2 volumes, 4 mapped ports, a hostname, restart unless-stopped, and running as user 1000:1000, etc.
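Something like the setup just described might look like the following sketch (image tags, ports, and paths are illustrative; the user: line is shown commented out because not every image runs happily as a non-root user):

services:
  postgres:
    image: postgres:16                      # "postgres" from the example above; the tag is an assumption
    environment:
      - POSTGRES_PASSWORD=change-me
    volumes:
      - ./pgdata:/var/lib/postgresql/data   # one volume
    ports:
      - 5432:5432                           # one exposed port
    restart: unless-stopped
  nginx:
    image: nginx:latest
    hostname: web
    # user: 1000:1000                       # optionally pin the user the container runs as
    volumes:                                # two volumes
      - ./site:/usr/share/nginx/html:ro
      - ./certs:/etc/nginx/certs:ro
    ports:                                  # four mapped ports
      - 80:80
      - 8080:80
      - 443:443
      - 8443:443
    restart: unless-stopped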

I’ve been using docker for home and LIGHT business applications for 8 years now, and docker-compose.yaml is really all you need until you start wanting high availability and cloud orchestration.

Some quick tips though.

  • Search some-FOSS-app-name docker-compose and read through a dozen or so templates. Check the spec page to see what most of the terms mean. It’s the best way to learn how to structure your own compose files later.
  • Use other people’s compose.yaml files as templates to start from. Expect to change a few things for your own setup.
  • NEVER use restart: always. Never. Change it to restart: unless-stopped. Nothing is more annoying than stopping an app and having it keep doom spiraling. Especially at boot.
  • Take a minute to set the docker daemon or service to run at boot. It takes 1 google and 30 seconds, but it’ll save you when you drunkenly decide to update your host OS right before bed.
  • Use mapped folders for everything. If you map /srv/dumb-app/data:/data then anything that container saves to the /data folder is accessible to you on your host machine (with whatever user:group is running inside the container, so check that). If you use the docker volumes like EVERYONE seems to like doing, it’s a pain to ever get that data back out if you want to use it outside of docker.

tkk13909,

Man, good luck. Is there no other way you can accomplish that without Docker? I’ve been using Linux for years and I still don’t know how to set up a docker container lol

datavoid, (edited )

Docker is not needed for this, it just helps keep things clean.

Edit - can the next person who downvotes this please explain why I’m wrong? I have run all these services without docker with no issue.

NateSwift,

You’re right. The comments here have been really weird and kinda missed the whole point of OP’s post.

Nibodhika,
  1. Docker is not needed, I’ve had lots of self hosted things for years before using docker.
  2. Docker is not that hard, you just need to learn it like anything else, once upon a time going to a webpage was an unknown thing to all of us, yet now it’s a daily thing.
themadcodger,
@themadcodger@kbin.social

I would check out tutorials or YouTube videos. Try: https://drfrankenstein.co.uk/sabnzbd-in-container-manager-on-a-synology-nas/

spaghettiwestern,

Docker can be really confusing, but IMO being able to add and remove software without having changes made throughout your system is well worth the effort.
