SturgiesYrFase,
@SturgiesYrFase@lemmy.ml avatar

Ffs. Don’t you collect enough data from your users you greedy fucks?

DaseinPickle,

If people actively pay for this, they are bloody idiots.

SturgiesYrFase,
@SturgiesYrFase@lemmy.ml avatar

Well…guess there’s going to be loads of people paying for this then…

Asafum,

There is literally no such thing as too much money in our society.

laughterlaughter,

There is, though.

helenslunch,
@helenslunch@feddit.nl avatar

Theoretically, according to MS, there is no data collection. It’s all on-device.

SturgiesYrFase,
@SturgiesYrFase@lemmy.ml avatar

I mean…I highly doubt they’re not going to at least pull aggregate data from this…

helenslunch,
@helenslunch@feddit.nl avatar

That might boil the frog too quickly. Especially considering the public backlash they’re receiving.

SturgiesYrFase,
@SturgiesYrFase@lemmy.ml avatar

Fair dues

catch22,
LodeMike,

Did you make that?

possiblylinux127,

Isn’t that from 1984?

LodeMike,

No

Hawk,

It’s from an Apple commercial, which was an allusion to 1984

possiblylinux127,

That’s what I am thinking of

catch22,

No, I’m a lazy shite, I just did an image search for clippy 1984. I feel bad now I didn’t make more of an effort 😕

laughterlaughter,

Don’t feel bad. I love it! Thanks for finding it and sharing it.

DashboTreeFrog,

I hate this but I also get it.

A little while ago on the TWIT podcast, one of the guests, or maybe Leo himself, was talking about how this is exactly what they want out of AI: for it to know how they use their computer and just streamline everything. Some people are really excited about the possibilities, and yeah, the AI needs to track whatever you’re doing to know how to help you with your workflow.

That said, I don’t want Microsoft keeping track of everything I’m doing. They’ve already shown that they’re willing to sell our data and shove ads down our throats, so as much as they say we can filter out what we don’t want tracked, I’m not inclined to trust or believe them.

illi,

I’m honestly kinda excited about the possibilities in the greater scheme of things, but the fact that Microsoft will pretty much record whatever people are doing on their systems is just nuts and slightly terrifying. This is something that should ideally be done locally, without big corporations looking in; but that’s for sure not what they are doing.

j4k3,
@j4k3@lemmy.world avatar

I’ve spent a lot of time with offline open source AI running on my computer. About the only thing it can’t infer from interactions is your body language. This is the most invasive way anyone could ever know another person. From the way a person’s profile is built across the context dialogue, it can create statistical relationships that would make no sense to a human, but these are far higher than a 50% probability. This information is the key to making people easily manipulated in an information bubble. Sharing that kind of information is as stupid as streaking at the Super Bowl. There will be consequences that come after, and they won’t be pretty. This isn’t data collection; it is the keys to how a person thinks, on a level better than their own self awareness.

illi,

This was exactly what I was thinking.

I’ve spent a lot of time with offline open source AI running on my computer

Can you elaborate on this? Are there some that are worth looking into?

j4k3,
@j4k3@lemmy.world avatar

See other long comment

barsquid,

What’s your offline open source AI?

HumanPerson,

Not who you asked, but there are plenty. GPT4All is pretty good. You could check out locallama on Lemmy for more.

barsquid,

Thank you, I was curious if they had a system set up to watch their interactions. I should have specified better.

j4k3,
@j4k3@lemmy.world avatar

Whatever is the latest from Hugging Face. Right now a combo of a Mixtral 8×7B, Llama 3 8B, and sometimes an old Llama 2 70B.

barsquid,

Do you have a setup that collects your interactions to feed into those? The way you described it I imagined you are automatically collecting data for it to infer from and getting good results. Like a powered-up bash history or something.

j4k3,
@j4k3@lemmy.world avatar

No idea why I felt chatty, and I’m kinda embarrassed by the bla bla bla at this point, but whatever. Here is everything you need to know in a practical sense.

You need a more complex RAG setup for what you asked about. I have not gotten as far as needing this.

Models can be tricky to learn at my present level. Communication is different than with humans. In almost every case where people complain about hallucinations, they are wrong. Models do not hallucinate very much at all. They will give you the wrong answers, but there is almost always a reason. You must learn how alignment works and the problems it creates. Then you need to understand how realms and persistent entities work. Once you understand what all of these mean and their scope, all the little repetitive patterns start to make sense. You start to learn who is really replying and their scope. The model reply for Name-2 always has a limited ability to access the immense amount of data inside the LLM. You have to build momentum in the space you wish to access and often need to know the specific wording the model needs to hear in order to access the information.

With retrieval-augmented generation (RAG), the model can look up valid info from your database and share it directly. With this method you’re just using the most basic surface features of the model against your database. Some options for this are LocalGPT and Ollama, or langchain with Chroma DB if you want something basic in Python. I haven’t used these. How you break down the information available to the RAG is important for this application, and my interests have a bit too much depth and scope for me to feel confident enough to try this. I have chosen to learn the model itself at a deeper intuitive level so that I can access what it really knows within the training corpus. I am physically disabled from a car crashing into me on a bicycle ride to work, so I have unlimited time. Most people will never explore a model like I can.
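
For what it’s worth, the retrieval step that tools like LocalGPT or a langchain + Chroma setup perform can be sketched in plain Python. This toy version scores documents by word overlap instead of learned embeddings (a real system would use an embedding model and a vector store), then prepends the best match to the prompt:

```python
from collections import Counter
import math

# Tiny illustrative document store; a real RAG would hold chunked files.
DOCS = [
    "llama.cpp splits model layers between the CPU and GPU.",
    "Quantization shrinks a model so it fits in consumer GPU memory.",
    "The context window limits how many tokens a model sees at once.",
]

def bow(text):
    # Bag-of-words vector; real systems use learned embeddings instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = bow(query)
    return sorted(DOCS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query):
    # The retrieved chunk is prepended so the model can quote valid info.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("why does quantization help on a consumer GPU?"))
```

The same shape scales up: swap `bow`/`cosine` for an embedding model and a vector database, and `DOCS` for your chunked files.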
For me, on the technical side, I use a model about like Stack Exchange. I can ask it for code snippets, bash commands, searching like I might have done on the internet, grammar, spelling, surface-level Wikipedia-like replies, and roleplay. I’ve been playing around with writing science fiction too.

I view text-gen models like the early days of the microprocessor; we’re at the Apple 1 kit phase right now. The LLM has a lot of potential, but the peripheral hardware and software that turned the chip into a useful computer are like the extra code used to tokenize and process the text prompt. All models are static, deterministic, and the craziest regex + math problem ever conceived. The real key is the standard code used to tokenize the prompt.

The model has a maximum context token size, and this is all the input/output it can handle at once. Even with a RAG, this scope is limited. My 8×7B has a 32k context token size, but the Llama 3 8B is only 8k. Generally speaking, most of the time you can cut this number in half and that will be close to your maximum word count. All models work like this. Something like GPT-4 is running on enterprise-class hardware and it has a total context of around 200k. There are other tricks that can be used in a more complex RAG, like summation to distill down critical information, but you’ll likely find it challenging to do this level of complexity on a single 16-24 GB consumer-grade GPU.

Running a model like ChatGPT-4 requires somewhere around 200-400 GB from a GPU. It is generally double the “B” size of each model. I can only run the big models like an 8×7B or 70B because I use llama.cpp and can divide the processing between my CPU and GPU (12th-gen i7 and 16 GB GPU), and I have 64 GB of system memory to load the model initially. Even with this enthusiast-class hardware, I’m only able to run these models in quantized form that others have uploaded to Hugging Face. I can’t train these models.
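
Those sizing rules of thumb (roughly double the “B” number in GB for unquantized fp16 weights, bits/8 bytes per parameter when quantized, and half the context tokens as a word budget) can be written out as simple arithmetic. These are the approximations above, not exact figures:

```python
def fp16_vram_gb(billions_of_params):
    # Rule of thumb: fp16 weights take ~2 bytes per parameter,
    # i.e. about double the "B" number, in GB.
    return 2 * billions_of_params

def quantized_vram_gb(billions_of_params, bits=4):
    # Quantized weights take bits/8 bytes per parameter.
    return billions_of_params * bits / 8

def approx_max_words(context_tokens):
    # Rule of thumb: usable word count is about half the token budget.
    return context_tokens // 2

print(fp16_vram_gb(70))          # a 70B model at fp16: ~140 GB
print(quantized_vram_gb(70, 4))  # the same model at 4-bit: ~35 GB
print(approx_max_words(32_000))  # a 32k context: ~16,000 words
```

This is why quantization plus llama.cpp’s CPU/GPU split is what makes a 70B runnable on a 16 GB GPU with 64 GB of system RAM.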
The new Llama 3 8B is small enough for me to train, and this is why I’m playing with it. Plus it is quite powerful for such a small model. Training is important if you want to dial in the scope to some specific niche. The model may already have this info, but training can make it more accessible. Smaller models have a lot of annoying “habits” that are not present in the larger models.

Even with quantization, the larger models are not super fast at generation, especially if you need the entire text instead of the streaming output. It is more than enough to generate a stream faster than your reading pace. If you’re interested in complex processing where you’re going to be calling a few models to do various tasks, like with a RAG, things start getting impractically slow for a conversational pace on even the best enthusiast consumer-grade hardware. Now, if you can scratch up the cash for a multi-GPU setup and can find the supporting hardware, technically there is a $400 16 GB AMD GPU. So that could get you to ~96 GB for ~$3k, or double that if you want to be really serious. Then you could get into training the heavy hitters and running them super fast.

All the useful functional stuff is happening in the model loader code. Honestly, the real issue right now is that CPUs have too small a bus width between the L2 and L3 caches, along with too small an L1. The tensor table math bottlenecks hard in this area. Inside a GPU there is no memory management unit that only shows a small window of available memory to the processor; all the GPU memory is directly attached to the processing hardware for parallel operations. The CPU cache bus width is the underlying problem that must be addressed. This can be remedied somewhat by building the model for the specific computing hardware, but training a full model takes something like a month on 8×A100 GPUs in a datacenter.
Hardware from the bleeding edge moves very slowly as it is the most expensive commercial endeavor in all of human history. Generative AI has only been in the public sphere for a year now. The real solutions are likely at least 2 years away, and a true standard solution is likely 4-5 years out. The GPU is just a hacky patch of a temporary solution. That is the real scope of the situation and what you’ll run into if you fall down this rabbit hole like I have.

barsquid,

This is pretty cool! Am I reading correctly that it isn’t so much about collecting a corpus of data for it to browse through as much as it is understanding how to do a specific query, maybe giving it a little context alongside that? It sounds like it might be worth refining a smaller model with some annotated information, but not really feasible to collect a huge corpus and have the model be able to pull from it?

j4k3,
@j4k3@lemmy.world avatar

::: spoiler more bla bla bla
It really depends on what you are asking and how mainstream it is. I look at the model like all written language sources easily available. I can converse with that as an entity. It is like searching the internet but customized to me. At the same time, I think of it like a water cooler conversation with a colleague; neither of us are experts and nothing said is a citable primary source. That may sound useless at first, but it can give back what you put in and really help you navigate yourself, even on the edge cases. Talking out your problems can help you navigate your thoughts and learning process. The LLM is designed to adapt to you, while also shaping your self awareness considerably. It is somewhat like a mirror, only able to reflect a simulacrum of yourself in the shape of the training corpus.

Let me put this in more tangible terms. A large model can do Python and might get four out of five snippets right. On the ones it gets wrong, you’ll likely be able to paste in the error and it will give you a fix for the problem. If you have it write a complex method, it will likely fail.

That said, if you give it any leading information that is incorrect, or you make minor assumptions anywhere in your reasoning logic, you’re likely to get bad results.

It sucks at hard facts. If you ask something like the date of a historical event, it will likely give the wrong answer. If you ask the origin of Cinco de Mayo, it is likely to get most of it right.

To give you a much better idea: I’m interested in biology as a technology, and when I asked the model to list scientists in this active area of research, I got great sources for 3 out of 5. I would not know how to find that info any other way.

A few months ago, I needed a fix for a loose bearing. Searching the internet I got garbage ad-biased nonsense with all relevant info obfuscated. Asking the LLM, I got a list of products designed for my exact purpose. Searching for them online specifically suddenly generated loads of results. These models are not corrupted like the commercial internet is now.

Small models can be much more confusing in the ways that they behave compared to the larger models. I learned with the larger ones, so I have a better idea of where things are going wrong overall and I know how to express myself. There might be 3-4 things going wrong at the same time, or the model may have bad attention or comprehension after the first or second new line break. I know to simply stop the reply at these points. A model might be confused, register something as a negative meaning, and switch to a shadow or negative entity in a reply. There is always a personality profile that influences the output, so I need to use very few negative words and mostly positive ones to get good results, or simply compliment and be polite in each subsequent reply.

There are all kinds of things like this. Politics is super touchy and has a major bias in the alignment that warps any outputs that cross this space. Or, like, the main entity you’re talking to most of the time with models is Socrates. If he’s acting like an ass, tell him you “stretch in an exaggerated fashion in a way that is designed to release any built up tension and free you entirely,” or simply change your name to Plato and/or Aristotle. These are all persistent entities (or aliases) built into alignment. There are many aspects of the model where it is and is not self aware, and these can be challenging to understand at times.

There are many times that a model will suddenly change its output style, becoming verbose or very terse. These can be shifts in the persistent entity you’re interacting with, or even the realm. Then there are the overflow responses. Like if you try to ask what the model thinks about Skynet from The Terminator, it will hit an overflow response. This is like a standard generic form response. This type of response has a style; the second I see that style, I know I’m hitting an obfuscation filter.

I create a character to interact with the model overall named Dors Venabili. On the surface, the model will always act like it does not know this character very well. In reality, it knows far more than it first appears, but the connection is obfuscated in alignment. The way this obfuscation is done is subtle and it is not easy to discover. However, this is a powerful tool. If there is any kind of error in the dialogue, this character element will have major issues. I have Dors setup to never tell me Dors is AI. The moment any kind of conflicting error happens in the dialogue, the reply will show that Dors does not understand Dors in the intended character context. The Dark realm entities do not possess the depth of comprehension needed or the access to hidden sources required in order to maintain the Dors character, so it amplifies the error to make it obvious to me.

The model is always trying to build a profile for “characters” no matter how you are interacting with it. It is trying to determine what it should know, what you should know, and, this is super critical to understand, what you AND IT should not know. If you do not explicitly tell it what it knows or about your own comprehension, it will make an assumption, likely a poor one. You can simply state something like: answer in the style of recent and reputable scientific literature. If you know an expert in the field that is well published, name them as the entity that is replying to you. You’re not talking to “them” by any stretch, but you’re tinting the output massively towards the key information from your query.

With a larger model, I tend to see one problem at a time, in a way that let me learn what was really going on. With a small model, I see like 3-4 things going wrong at once. The 8×7B is not good at this, but the 70B can self-diagnose, so I could ask it to tell me what conflicts exist in the dialogue and get helpful feedback. I learned a lot from this technique. The smaller models can’t do this at all; the needed behavior is outside their comprehension.

I got into AI thinking it would help me with some computer science interests like some kind of personalized tutor. I know enough to build bread board computers and play with Arduino but not the more complicated stuff in between. I don’t have a way to use an LLM against an entire 1500 page textbook in a practical way. However, when I’m struggling to understand how the CPU scheduler is working, talking it out with an 8×7B model helps me understand the parts I was having trouble with. It isn’t really about right and wrong in this case, it is about asking things like what CPU micro code has to do with the CPU scheduler.

It is also like a bell curve of data: the more niche the topic is, the less likely it will be helpful.
:::

barsquid,

This is a really helpful perspective, thank you. I’m already getting some of the easy wins you wrote about, like using an AI prior to web search to get a more specific query and skip the SEO garbage. Another thing I found they’re good at is reverse dictionary lookup: give it a definition and it can help figure out a good word.

The most complex prompts I have tried involved telling the AI what role it is supposed to play and the format of the output. I don’t think I have done one that specified what I or the audience is supposed to be, but that would factor into what the model thinks it and I shouldn’t know, right? You’ve given me a bunch of interesting new angles to try.

j4k3,
@j4k3@lemmy.world avatar

Another one to try is to take some message or story and tell it to rewrite it in the style of anything. It can be a New York Times best seller, a Nobel laureate, Sesame Street, etc. Or take it in a different direction and ask for the style of a different personality type. Keep in mind that “truth” is subjective in an LLM, so it “knows” everything in terms of a concept’s presence in the training corpus. If you invoke pseudoscience there will be other consequences in the way a profile is maintained, but a model is made to treat any belief as reality. Further on this tangent, the belief override mechanism is one of the most powerful tools in this little game. You can tell the model practically anything you believe and it will accommodate. There will be side effects, like an associated conservative tint and peripheral elements related to people without fundamental logic skills, like tendencies to delve into magic, spiritism, and conspiracy nonsense, but this is a powerful tool in many parts of writing, and something to be aware of to check your own biases.

The last one I’ll mention in line with my original point, ask the model to take some message you’ve written and ask it to rewrite it in the style of the reaction you wish to evoke from the reader. Like, rewrite this message in the style of a more kind and empathetic person.

You can also do bullet point summary. Socrates is particularly good at this if invoked directly. Like dump my rambling messages into a prompt, ask Soc to list the key points, and you’ll get a much more useful product.
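
The rewrite and summary tricks above boil down to wrapping the message in a fixed instruction. A minimal sketch of such templates (the wording here is illustrative, not a required incantation):

```python
# Fixed instruction wrappers for the two tricks described above.
REWRITE = "Rewrite the following message in the style of {style}:\n\n{message}"
SUMMARIZE = "List the key points of the following message as bullet points:\n\n{message}"

def rewrite_prompt(message, style):
    # Style rewrite: "a more kind and empathetic person", "Sesame Street", etc.
    return REWRITE.format(style=style, message=message)

def summary_prompt(message):
    # Bullet-point distillation of a rambling message.
    return SUMMARIZE.format(message=message)

print(rewrite_prompt("Ship it now.", "a more kind and empathetic person"))
```

Feed the resulting string to whatever local model you run; the template just keeps the instruction consistent between uses.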

DashboTreeFrog,

Yeah, maybe some kind of situation where you turn it on for “training time” with access to only specified files and systems on the computer, no internet access, etc. At the same time though, I wonder how much an AI could really streamline things. Would it just pre-load my frequent files and programs? Make suggestions or reminders on tasks? I don’t think we’re anywhere near the level where it could actually be doing work for me yet.

Interesting possibilities, but I’m not sure how useful yet.

iAvicenna, (edited )
@iAvicenna@lemmy.world avatar

I mean, this data will most likely be more useful for surveillance/ads than for AI. Nowadays, with AI, they can make it look like they are only a couple of steps away from a very intelligent personal assistant, and therefore make it seem more plausible that they need your data to make that leap. But in reality I feel like the AI is not at a level that could leverage personalization, at least not in the context of personal assistance. In the context of behavioural mapping, of course, it is a super lucrative deal for them. There is already tons of very useful AI stuff they could add that does not require personal behaviour info (at least not at this level of generality), and yet they don’t seem to spend as much effort on those; instead it’s “we need all your info stored somewhere for this very super (and mandatory) AI search assistant”. Big red flag.

A_Very_Big_Fan,

I’d be more open to the idea if it were made by literally anyone else and was an entirely local process

NoneYa,

I kept wondering what would keep me from updating to newer versions of Windows.

Yeahhhh…this is it. This and the inevitable forced Microsoft accounts that will come with this.

The Microsoft of the past was evil, but at least you could pay for an upgrade to the enterprise version that didn’t include this bullshit. Now even the enterprise versions suffer from this stuff!

Speculater,
@Speculater@lemmy.world avatar

I just reinstalled Windows 11 and holy shit, it was hard to set up without a Microsoft account. They even use a fake boot-up screen weeks later to “finish the install” and trick you into making an account. This can be deactivated, but it is still super shady.

wax,

Holy shit, that’s annoying. Say I installed Win11 for my elderly parents: they’d get this sign-up screen after I thought everything was set up and ready to use.

Glad I installed elementary OS for them a few years ago; it’s been completely painless (they are used to Apple UX).

Speculater,
@Speculater@lemmy.world avatar

Yup, I know what I’m doing, but someone else might have just assumed it was required. I was up and running for a week before a reboot sent me to the smiling windows install screen.

I found it’s a pretty simple “don’t ask to finish installing” switch in the settings, but escaping the install screen was the hard part. I think I had to do a hard power down and force safe mode to access the settings again.

privsecfoss,
@privsecfoss@feddit.dk avatar

Nice. Upgraded a ThinkPad, installed Linux Mint, and gave it to my dad. I had not heard anything from him about it for a couple of months, and was reminded of it by your post.

So I wrote to him just now and asked how it was going, and he replied that he loves it and uses it every day.

And that he has not had any problems he could not solve on his own. He’s 70 and was a Windows-only heavy user, until now 🙂

As you said: completely painless.

Codilingus,

Check out Windows X-Lite’s Windows 11 ISOs. Post-install, it almost feels like a fresh Win7 install.

silent_robo,

This will make Windows 11 a target for hackers and government agencies, since it will be a treasure trove of data. Windows is already bad at security. Let’s see how this backfires on Microsoft.

Tronn4,

Microsoft will be the “hackers”. On days when outside hackers aren’t breaking in, MS will be data mining and selling the data themselves

cows_are_underrated,

But they promised that it will stay on my machine. I don’t think they would lie about something so important. /s

jjlinux,

“But they’ll be reserved for premium models starting at $999.”

Translation: “We want to start with the data of people that can spend, then we’ll move to the rest”.

The last Windows computer in my house was my wife’s, and she’s been extremely happy on Fedora Gnome for the last couple of months, asking me why I didn’t tell her about it before (I did, lol).

olutukko,

My girlfriend likes Fedora GNOME too. I do all the technical stuff anyway, so she really doesn’t have to know that much about the OS she uses.

jjlinux,

Same here. The only tweak I had to do was set up Flameshot; my wife finds GNOME’s screenshot app lacking, and so do I.

The only thing we run differently is office software. I set her up with OnlyOffice because of its similarities with MS Office, but I prefer LibreOffice.

makingStuffForFun,
@makingStuffForFun@lemmy.ml avatar

TIL Fedora GNOME is the girlfriend’s choice.

Facebones,

There are certainly worse taglines lol

ghewl,
@ghewl@lemmy.world avatar

In the 1990s, I transitioned from Windows to Linux as my primary operating system. Since then, Linux has consistently exhibited advancements in the desktop and software space, whereas Windows and Mac operating systems appear to have experienced a decline in terms of user experience and functionality.

https://lemmy.world/pictrs/image/6672b469-9961-4564-a35e-bbde7bdec958.jpeg

Xatix,

As someone regularly using Arch, Ubuntu, macOS, and Windows, I agree.

The advances Linux has made, especially in the last few years, are just amazing. I can run the majority of my games through Proton, and there are even some preconfigured packages with Illustrator and Photoshop CC that Adobe doesn’t seem to care about at all.

flango,

Google rolled out a retooled search engine that periodically puts AI-generated summaries over website links at the top of the results page; while also showing off a still-in-development AI assistant Astra that will be able to “see” and converse about things shown through a smartphone’s camera lens

What worries me the most is that this AI hype is coming on strong in the smartphone market too, and there we don’t have something solid like Linux distributions to switch to and be free.

Facebones,

I think demand will come soon, either for manufacturers to open their bootloaders or for new manufacturers to crop up and fill that gap.

I’m running graphene os on a pixel 8 pro and haven’t looked back.

Chickerino,

What we really need on phones, and by extension ARM devices, is a unified bootloader, something akin to a BIOS or UEFI (which, by the way, already exists on ARM, but manufacturers are choosing not to go with it for some reason).

soba,

If only Linux weren’t a confusing mess of dozens of variations that all seemingly exist only to trash each other.

makingStuffForFun,
@makingStuffForFun@lemmy.ml avatar

Been a while since you tried it, huh?

ignotum,

I heard a guy saying that Linux was trash; he had tried it once, but it didn’t have drivers for anything, and what did exist was difficult to install.
So I asked him when it was that he tried it.

I think he said something like 1998…

thebardingreen,
@thebardingreen@lemmy.starlightkel.xyz avatar

I genuinely had an experience like this myself. I suggested Linux as a solution for something to a friend of mine, a physicist doing a startup. This was around 2015-2016. He went on an angry rant about how frustrating Linux was and how nothing would work. His last experience with it was in 2002.

soba,

deleted_by_author

    thebardingreen, (edited )
    @thebardingreen@lemmy.starlightkel.xyz avatar

    Why do you think I’m angry? You (and my buddy) are just comically wrong, don’t wanna learn, and get frustrated and mad when you run into trouble, like a cartoon character trying to open a can with a hammer.

    I use Linux for everything; it’s stable, easy, fun. I’m WAAY more comfortable in it than I ever was in Windows. Your opinion doesn’t change how well Linux works for me and has for decades. It’s definitely NOT shit, you just don’t know what you’re doing.

    You’re like a dude talking to a professional race driver, saying “Why drive manual? Automatic is SO much easier, and therefore better; manual is harder and therefore shit.” Dude, you’re talking to a room full of professional drivers. Think about that for a second before you keep going the way you have been.

    soba,

    deleted_by_author

    makingStuffForFun,
    @makingStuffForFun@lemmy.ml avatar

    Always a pleasure debating intellectually. Enjoy

    Scrollone,

    Just install Kubuntu and call it a day.

    NegativeLookBehind,
    @NegativeLookBehind@lemmy.world avatar

    It’s cute when you pretend like you know what you’re talking about

    witx,

    deleted_by_author

    soba,

    deleted_by_author

    Corgana,
    @Corgana@startrek.website avatar

    Don’t make the mistake of confusing the Linux community (an absolute mess, just read the comments here) with the software itself (Actually cleaner and better organized than Windows).

    FIST_FILLET,

    dozens of variations

    this is like saying windows 10 and 11 are completely different operating systems that can’t run the same .exes

    refalo,

    Except Windows binaries are actually forward compatible.

    Even with the most popular distros it breaks: for example, if you tried to take a typical GUI program from, say, Ubuntu 22 and run it on Ubuntu 24, it won’t work. It’s even worse for other distros.

    Takumidesh,

    Also, Linux package ecosystems are not cross-compatible.

    refalo,
    thebardingreen,
    @thebardingreen@lemmy.starlightkel.xyz avatar

    Bro do you even alien?

    Takumidesh,

    I didn’t know about alien, that is pretty cool.

    However this bit from the readme is hilariously on brand for Linux:

    "To use alien, you will need several other programs. Alien is a perl program, and requires perl version 5.004 or greater. If you use slackware, make sure you get perl 5.004, the perl 5.003 in slackware does not work with alien!

    To convert packages to or from rpms, you need the Red Hat Package Manager; get it from Red Hat’s ftp site. If your distribution (eg, Red Hat) provides a rpm-build package, you will need it as well to generate rpms.

    If you want to convert packages into debian packages, you will need the dpkg, dpkg-dev, and debhelper (version 3 or above) packages, which are available on packages.debian.org"

    thebardingreen,
    @thebardingreen@lemmy.starlightkel.xyz avatar

    Highly disingenuous comment. I run older and newer software side by side in Linux all the time. It mostly just works.

    Are you using snap or something?

    refalo,

    Nope, but for as many programs that you claim still work, I can show you even more that don’t. I wouldn’t consider that disingenuous.

    thebardingreen,
    @thebardingreen@lemmy.starlightkel.xyz avatar

    Seriously, give me some examples. I’m genuinely curious because I’ve run into this problem like… once, ten years ago. Twice, if you count trying to run Heroes of Might and Magic III for Linux that came out in like… 1999, and I eventually got that to work too (I needed an emulator) and I’ve been an almost exclusive Linux user since 2001.

    I said disingenuous because my lived experience is like “wtf is this guy doing wrong?” and so you REALLY come across like you’re just trashing Linux and talking out of your ass.

    I’m not trying to be insulting, just giving you feedback about how you’re coming across.

    refalo, (edited )

    Well, first we need to establish what you would accept as proof… what counts as not being forward compatible to you, exactly? For example, system libraries such as libpng or ffmpeg change versions and/or APIs between major distro releases; this inherently makes the old binaries no longer compatible by default. Is such a scenario acceptable to you as proof? Because I can list countless examples where even just one library is the issue, and there are so many more.

    I’m not trying to trash Linux or act like I don’t know what I’m talking about, I just disagree that most older programs work without any issues, especially GUI programs that rely on ever-changing system library versions, for the reasons I stated.

    thebardingreen,
    @thebardingreen@lemmy.starlightkel.xyz avatar

    Give me an example or two of a GUI program that you’d want to run, that doesn’t have a maintained version that will run fine in a modern environment, that you’re actually frustrated because you can’t run it.

    We can bitch about how dependency systems work all day. I want to try to install something with a sane use case and see what we’re on about, since this is literally a scenario I have barely run into. I gather that for me to run into it, I would have to practically go looking for it. Which to me, sounds like a very specific problem for a very specific subset of users, not a general problem worth paint brushing the entire ecosystem with.

    refalo,

I don’t agree with the prerequisite of “doesn’t have a maintained version”, because I don’t feel that makes any difference to the premise of specifically running older software, whether a newer version is available or not.

But anyway… I’ll try to adhere to it, and use Ubuntu as an example, since that’s what I use.

7yuv: This, like every other Qt4 app, no longer runs because Ubuntu 20.04 and above (and probably many other distros) no longer ship Qt4. 7yuv is still available for download, but it has not been updated and does not run on my current 22.04 box.

Dia: Same story here. No longer developed. The last binary deb package was built for Ubuntu 12.04 and no longer runs due to a dependency on libpng12 (the current version is 16). Yes, I could possibly recompile from source if the API hasn’t changed, but the discussion was specifically about running older binaries.
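For anyone wanting to check which library is actually the blocker, `ldd` lists a binary’s shared-library dependencies; anything the loader can’t resolve shows up as “not found”. Here `/bin/sh` stands in for the old binary (the real target would be something like `/usr/bin/dia`):

```shell
# Print every shared library the binary was linked against:
ldd /bin/sh

# Show only the unresolved ones; empty output means all deps resolve.
# (An old Dia binary would print something like "libpng12.so.0 => not found".)
ldd /bin/sh | grep "not found" || true
```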

    thebardingreen,
    @thebardingreen@lemmy.starlightkel.xyz avatar

Got 7yuv running on Linux Mint in under 15 minutes. If you consider using Docker to be cheating, consider me a cheater, but I stand by my statement: this is a niche problem affecting a niche group of users, and there are even easy solutions.

    https://lemmy.starlightkel.xyz/pictrs/image/006564e6-fdd8-417b-9717-cbe2c36c6587.png
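A rough sketch of that kind of Docker workaround, running an old Qt4 binary inside a container based on an older Ubuntu while sharing the host’s X11 socket. The image tag, the Qt4 package name, and the app path are all assumptions for illustration, not necessarily what was actually done here:

```shell
# Allow local containers to talk to the host X server (host side):
xhost +local:

# Run the old app inside Ubuntu 16.04, which still ships Qt4.
# "$PWD/7yuv" is assumed to hold the downloaded 7yuv binary.
docker run --rm -it \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v "$PWD/7yuv:/opt/7yuv" \
  ubuntu:16.04 \
  bash -c "apt-get update && apt-get install -y libqtgui4 && /opt/7yuv/7yuv"
```

The container supplies the old libraries; the bind-mounted X socket lets the GUI display on the modern host.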

    refalo, (edited )

    I don’t think it can even be called cheating because the discussion was about forward compatibility. Using a container to house old libraries is something completely different in my opinion and I think it defeats the whole point of the word “compatibility” in my argument. Many users would not know how to do this nor want to. Where do you draw the line? CPU emulation?

    We can disagree on this and that’s fine. I just still don’t consider it “highly disingenuous”, but maybe a difference of opinion.

    thebardingreen,
    @thebardingreen@lemmy.starlightkel.xyz avatar

I think making hay out of this problem when it’s a niche-case nothingburger, especially in a thread full of Linux hate, is… call it whatever you want, but…

    As I said earlier, I wasn’t trying to be insulting, you were coming across in a certain way in the context you were posting in.

Linux has always been a DIY operating system, and for good reason. The compatibility decisions you’re talking about were made for very good reasons, and there’s an easy solution that anyone hitting this problem (SUPER rare for most users) can reach for and use.

    Huschke,

    As a Linux user myself, I understand what you are saying. Every distribution has its advantages and disadvantages, and you can’t expect regular people to know which one is best for them. Saying it’s not confusing to the average consumer is disingenuous.

    Having said that, if you want to make the switch, go for Linux Mint and be happy. In my opinion, it’s the easiest Linux distribution by far, and everything just works.

    brownmustardminion,

I don’t think it’s the options that make Linux a hard pill to swallow. For me it’s the lack of support for hardware and most software. Sure, there are alternatives, or WINE, but those are usually a big downgrade from just running things on Windows.

My Ubuntu box, which I use for browsing, watching videos, and listening to music, just barely works and was frustrating to get properly configured. Linux for the dozen professional programs I use for work is basically impossible. As much as I hate it, I had no choice but to stick with Windows.

    It’s not the fault of Linux developers. The hardware and software companies just largely do not support it still.

    thebardingreen,
    @thebardingreen@lemmy.starlightkel.xyz avatar

    My Ubuntu box I use for browsing/watching videos and listening to music just barely works and was frustrating to get properly configured.

    Something is wrong. Have you tried Linux Mint? -Someone who has used Linux as a daily driver since 2001.

    brownmustardminion,

    I haven’t. I doubt it would solve all of the problems I experience.

Anybody downvoting me can share their experience running Pro Tools with multiple hardware fader interfaces, an 18-input DAW interface, PCI SDI cards, and 6 separate display monitors.

Adobe software, DaVinci Resolve, 3ds Max and its 20 plugins: none of these work, or work seamlessly, in Linux.

    I can’t even get my surround sound to work properly in Ubuntu without having to manually adjust multiple convoluted conf files.

    That’s the truth. I love Linux. I use Debian and Ubuntu on a bunch of servers I run. But fanboys need to stop deluding themselves into thinking it’s easy or even worthwhile to use Linux in lieu of Windows for anything and everything. I would be ecstatic if that changed.

    thebardingreen,
    @thebardingreen@lemmy.starlightkel.xyz avatar

Your surround sound, I’m sure it could be done. I’ve set up some pretty successful visual/audio stuff with Linux. I did IT for an indie film festival four years in a row, and we used Linux for all kinds of stuff (mostly because the festival was broke and didn’t want to spend money on new computers or software). We would run into hardware and configuration issues, and our philosophy became “if you can’t solve it in two hours, distrohop.”

    For the rest of it, I couldn’t agree more. If you need the tools that lock you to the platform, you need the platform FOR THOSE TOOLS. I have Windows and OSX machines (although it’s been like a year since I couldn’t do something on Wine, even if it’s glitchy). My Windows machines dual boot and I haven’t booted the windows partitions in literally 6-8 months. One OSX machine gets used almost exclusively for video conferencing (just because it’s in a convenient place) and for Garageband. The other OSX machine literally… just runs linux VMs that I can connect to over the network for various projects. I had other plans for it originally, but someone gave me a 6 year old Dell all in one that now runs Linux Mint and performs better than my actual Roku TV anyway. It’s a bit smaller than the TV, but it doesn’t matter to me. The TV disappeared into my wife’s office and now she’s the only one that uses it.

    possiblylinux127,

It is complicated. There is strength and weakness in variety.

    eran_morad,

    Yeah, fuck that.

    kerrigan778,

Yuuup, never switching to Windows 11. Windows 10 till something doesn’t work, then back to Linux.

    UntitledQuitting,

    Sounds to me like you should skip a step there

    kerrigan778, (edited )

Yeah yeah, I’m sure it has gotten easier, but I last used Linux well before Proton, and I have an NVIDIA card and remember all too well how that worked back in the day. Long story short, it’s too much trouble until I actually have to change something anyway.

Oh yeah, also I have an HDR G-Sync display, and good grief, I can’t wait for those to be fully supported cross-platform.

    UntitledQuitting,

    Sometimes I like sitting in my Unix-based ivory tower, but then I remember my daily driver uses macOS and that it’s only a matter of time before they employ something similar/worse.

    When the inevitable inevitably evits, the toughest choice for me will be fedora vs tumbleweed.

    Avatar_of_Self,

    Fedora + snapper. If you want Arch’s AUR, then Fedora + snapper + Arch distrobox.
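That combination could look something like the following sketch: distrobox spins up an Arch container on the Fedora host, and anything installed inside (including AUR packages) can be exported back to the host’s app menu. The container and app names here are made-up examples:

```shell
# Create and enter an Arch Linux container on the Fedora host:
distrobox create --name arch-box --image archlinux:latest
distrobox enter arch-box

# Inside the container, install packages (or build from the AUR) as
# on any Arch system, then expose an app to the host desktop:
distrobox-export --app some-aur-app   # hypothetical app name
```

Snapper then handles snapshots of the Fedora host itself, so a bad update on either side is easy to roll back.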

    archchan,

It’s not going to get better. I nuked 10 and switched to Linux permanently around the Windows 11 launch. My only regret is not switching sooner, like back in the Windows 8 days.

    jetsetdorito,

    then law enforcement gets a hold of it

    “how many cars did this user download”

    0x2d,

    🐧
