
Andromxda, to linux in Debian spices up APT package manager with a dash of color
@Andromxda@lemmy.dbzer0.com avatar

Nala is a much nicer frontend for apt, which includes additional features like parallel downloads
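The parallel-download idea can be sketched with a thread pool. This is a toy illustration only, not Nala's actual code; the package names and the `fetch` stub are invented:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical package list; a real frontend would read these from a mirror.
PACKAGES = ["nala_0.15.4_all.deb", "fzf_0.38.0_amd64.deb", "htop_3.2.2_amd64.deb"]

def fetch(name: str) -> str:
    """Stand-in for an HTTP download; a real implementation would
    stream bytes from a mirror here."""
    return name

def download_all(packages, workers=3):
    # Archives are fetched concurrently instead of one by one,
    # which is the speedup nala offers over plain apt.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, packages))

print(download_all(PACKAGES))
```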

umbrella,
@umbrella@lemmy.ml avatar

i thought apt already did this if you were downloading from different mirrors?

LeFantome,

I have heard of Nala before but have never actually taken the time to install it. Based on your comment, I just checked it out on one of the Debian 12 systems I run. Turns out it was right in the repos.

Wow. So good. I cannot believe it took me this long. Jealous of it on the Arch installs now.

I installed it on Ubuntu 22.04 as well but it was not there when I searched. I had to add the jammy-backports repo first.

Thank you for the push.

mp3, (edited ) to technology in Apple slams Android as a 'massive tracking device' in internal slides revealed in Google antitrust battle
@mp3@lemmy.ca avatar

Yet Apple gladly takes billions from Google so that they remain the default search engine.

erwan,

Yes it’s all business.

Partnership team finds the biggest bidder for the default search.

Marketing teams find the best argument against their biggest competitor.

At no point is anyone pondering whether Google is “good” or “bad”, because companies typically don’t care.

cmnybo, to technology in Privacy advocate challenges YouTube's ad blocking detection scripts under EU law

If that is successful, I would expect YouTube to switch to simply checking whether the ads were actually served to the user. That wouldn’t require checking for an adblocker on the user’s computer. Of course, if they did that, the adblocker would just download the ad content and not display it.

thejml,

Either that, or merging the ad into the video stream itself. This would make it un-skippable, and also unblockable without stream processing (there are commercial-skip options for ffmpeg and similar tools, so not completely impossible, but it’s much more work and more likely to mark real content as a commercial as well).

Car,

Thankfully it seems that encoding ads into the video stream is still too expensive for them to implement.

I’m assuming that asking CDNs to combine individualized ads with content and push the unique streams to hosts does not scale well.

blindsight, (edited )

Since they target ads demographically and ads change frequently, that would be a mess… The encoding, storage, and tracking would be a Big problem.

If they go this route, it would only make sense if they build a new video codec that allows for linearly splitting content at key frames so they can concatenate the ads with the video in a single file at runtime.

But then couldn’t ad detectors just start playback at the key frames?

Even if it works, it would still be a Big Deal, since re-encoding all of YouTube would be Hard. I guess they could just use the codec for all newly added material. Playback might suck on older devices, too; idk if they use h264 (which has dedicated hardware decoders)?

jmcs,

If they go back to contextual ads instead of making the NSA look like reasonable people, they could pre-insert them like some podcasts do.

noodlejetski,

and for that, there’s SponsorBlock sponsor.ajay.app

lemmyvore,

It’s not that expensive. You can mix or overlay stuff over a video stream fairly cheaply. Sure, it will be a hit overall for their bottom line but they’ll do it if they have to.

They can also turn on DRM for all videos on the platform. Currently it’s only used for paid videos and it’s very hard to bypass.

GissaMittJobb,

I don’t think that inlining ads into the stream would be expensive, because of how adaptive streaming formats work. There are probably other reasons why they haven’t chosen this option yet.

Car,

This seems simple for one stream, but scale that up to however many unique streams YouTube is serving at any given second. 10k?

Google doesn’t own all of the hardware involved in this video-serving process. They push videos to their local CDNs, which then push the videos to end users. If we’re composing streams on the fly with advertisements, we need to push the ads to the CDNs serving the content. They may already be colocated, but they may not be. We need to factor in additional processing, which costs time and money.

I can see this becoming an extremely ugly problem when you’re working with a decentralized service model like YouTube. Nothing is ever easy when they don’t own everything.

GissaMittJobb,

So what you would do is generate the manifest files (HLS/DASH/what have you) on the fly to include the ad segments. Since adaptive streaming is based on manifests that stitch together segments of video, which together make up the underlying content at different bitrates, you can essentially just push a few segments of advertising in between the segments representing the underlying content. This isn’t particularly hard to do, and you’d still get the full benefit of the CDN for the segments, so there’s really no issue.
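As a rough illustration of that manifest trick, here is a sketch (segment names and durations are invented) that builds an HLS-style playlist with ad segments spliced between content segments, marked off by discontinuity tags:

```python
def build_playlist(content, ads, ad_after=2):
    """Return an HLS-style media playlist with the ad segments spliced in
    after every `ad_after` content segments. All names are hypothetical."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:6"]
    for i, seg in enumerate(content, start=1):
        lines += ["#EXTINF:6.0,", seg]
        if i % ad_after == 0 and ads:
            # A discontinuity tag tells the player that timestamps and
            # encoding parameters may change at this point.
            lines.append("#EXT-X-DISCONTINUITY")
            for ad in ads:
                lines += ["#EXTINF:6.0,", ad]
            lines.append("#EXT-X-DISCONTINUITY")
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

playlist = build_playlist(
    ["video_000.ts", "video_001.ts", "video_002.ts", "video_003.ts"],
    ["ad_000.ts"],
)
print(playlist)
```

Because the player just walks the manifest, the content segments themselves stay byte-identical and cacheable; only the tiny playlist is personalized.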

thingsiplay,
@thingsiplay@kbin.social avatar

or merging the ad into the video stream itself. This would make it un-skippable

That's not true. Besides the fact that people can skip any video content manually anyway, I already use a Firefox addon called "SponsorBlock for YouTube - Skip sponsorships", which is configurable and works on other sites as well. The skip points are community-maintained, but with the help of AI it should be easy to detect ads automatically. The point is, there are already tools that help with skipping content encoded into the video stream.
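The client-side skip logic such an addon needs is simple: given labelled segments, jump the playhead past any segment it lands in. A minimal sketch (the segment times are made up, not real SponsorBlock data):

```python
def next_position(t, segments):
    """If playback time t falls inside a labelled segment, return the
    segment's end (skipping it); otherwise return t unchanged.
    Segments are (start, end) pairs in seconds, assumed non-overlapping."""
    for start, end in sorted(segments):
        if start <= t < end:
            return end
    return t

# Hypothetical community-submitted sponsor segments for one video.
SEGMENTS = [(30.0, 45.5), (120.0, 150.0)]
print(next_position(31.0, SEGMENTS))   # lands inside the first segment
print(next_position(10.0, SEGMENTS))   # outside any segment, unchanged
```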

lemmyvore,

There’s nothing to skip if they overlay small ads while the content is playing.

On the bright side such small ads may be less annoying than full screen ads.

WeLoveCastingSpellz,

Probs a SponsorBlock-like thing could work

thingsiplay, to technology in Apple limits third-party browser engine work to EU devices

Same for sideloading apps. If the rest of the world and its governments don’t care, then Apple won’t care in the rest of the world either.

LeFantome, to linux in Forgetting the history of Unix is coding us into a corner [The Register]

What an odd article. First, the author goes to great lengths to assert that “Linux IS UNIX” with, at best, pretty circumstantial evidence. Then, perhaps to hide the fact that his point has not been proved, he goes through the history of UNIX, I guess to reinforce that Linux is just a small piece of the UNIX universe? Then he chastises people working on Linux for not staying true to the UNIX philosophy and original design principles.

Questions like “are you sure this is a UNIX tool?” do not land with the weight he hopes, as the answer is almost certainly: No. This is not a “UNIX” tool. It is not trying to be. Linux is not UNIX.

The article seems to be mostly a complaint that Linux is not staying true enough to UNIX. The author does not really establish why that is a problem though.

There is an implication, I guess, that the point of POSIX and then the UNIX certification was to bring compatibility to a universe of diverging and incompatible Unices. While I agree that fragmentation works against commercial success, this is not a very strong point. Not only was the UNIX universe (with its coherent design philosophy and open specifications) completely dominated by Windows in the market, but it was also completely displaced by Linux (without the UNIX certification).

Big companies found in Linux a platform that they could collaborate on. In practice, Linux is less fragmented and more ubiquitous than UNIX ever was before Linux. Critically, Linux has been able to evolve beyond the UNIX certification.

Linux does follow standards. There is POSIX of course. There is the LSB. There is freedesktop.org. There are others. There is also only one kernel.

Linux remains too fragmented on the desktop to displace Windows. To address that, a common set of Linux standards is emerging, including Wayland, PipeWire, and Flatpak.

Wayland is an evolution of the Linux desktop. It is a standard. There is a specification. There is a lot of collaboration around its evolution.

As for “other” systems, I would argue that compatibility with Linux will be more useful to them than compatibility with “UNIX”. I would expect other systems to adopt Wayland in time. It is already supported on systems like Haiku. FreeBSD is working on it as well.

ethd,

This is my real problem with this (and with broadly pointing the finger at the “Unix philosophy” whenever a project like systemd or Wayland exists, ignoring that the large, complex, multifaceted, and monolithic Linux kernel itself flies in the face of that philosophy). Linux may have originally been built to be Unix-like, but it has become its own thing that shares a few similarities with Unix.

djsaskdja, to technology in Apple exec defends 8GB $1,599 MacBook Pro, claims it's like 16GB on a PC

Tell that to Google Chrome

sweetpotato, to privacy in Majority of Americans now use ad blockers
@sweetpotato@lemmy.ml avatar

Good, your attention is a commodity, don’t let advertisers steal it. We’ve been assaulted with enough ads in public spaces already.

taladar,

It is literally wasting the most valuable thing we have, our lifetime.

floofloof, to technology in Former infosec COO pleads guilty to attacking hospitals to drum up business

Pleading guilty to one count of intentional damage to a protected computer, Singla faces a maximum prison term of 10 years, though he may not ever see the inside of a cell.

The court was recommended to instead sentence Singla to 57 months of house detention due to his suffering an “extraordinary” rare and incurable form of cancer. Any delay to his surgery, should the cancer recur, may render his condition inoperable, according to the plea agreement.

The decision to recommend the alternative to incarceration was also influenced by a “dangerous” vascular condition, from which Singla also suffers.

Ironic that the guy found guilty of disrupting other people’s access to healthcare may avoid prison because he needs ready access to healthcare.

EnderMB, to privacy in Microsoft CEO of AI: Online content is 'freeware' for models • The Register

I’m fine with that, but let’s put some rules against this.

  • Any AI models should be able to determine the source of their data to a defined level of accuracy.
  • There should be a well-defined way to block data from being used by AI. If one of these ways (e.g. robots.txt) has been breached, the model has to be rebuilt without the data, and reparations made to the content owners.
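
The robots.txt part of that rule is already mechanically checkable; Python's standard `urllib.robotparser` can evaluate whether a crawler is allowed to fetch a page. The bot name and rules below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks one AI crawler from the whole site.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /
User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A scraper honoring this file would have to skip the site entirely.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```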
ayaya,
@ayaya@lemdro.id avatar

What you’re asking for is literally impossible.

A neural network is basically nothing more than a set of weights. If one word makes a weight go up by 0.0001 and then another word makes it go down by 0.0001, and you do that billions of times for billions of weights, how do you determine what in the data created those weights? Every single thing that’s in the training data had some kind of effect on everything else.

It’s like combining billions of buckets of water together in a pool and then taking out 1 cup from that and trying to figure out which buckets contributed to that cup. It doesn’t make any sense.

EnderMB,

Respectfully, I worked for Alexa AI on compositional ML, and we were largely able to do exactly this with customer utterances, so to say it is impossible is simply not true. Many companies have to have some degree of ability to remove troublesome data, and while tracing data inside a model is rather difficult (historically it would be done during the building of datasets or measured at evaluation time) it’s definitely something that most big tech companies will do.

ayaya,
@ayaya@lemdro.id avatar

Sorry, I misinterpreted what you meant. You said “any AI models” so I thought you were talking about the model itself should somehow know where the data came from. Obviously the companies training the models can catalog their data sources.

But besides that, if you work on AI you should know better than anyone that removing training data is counter to the goal of fixing overfitting. You need more data to make the model more generalized. All you’d be doing is making it more likely to reproduce existing material because it has less to work off of. That’s worse for everyone.

socphoenix,

It’s not impossible lol. All a company would need to do is keep track of where they got their content. If I use a script to download as much of the internet as possible and end up with a bunch of copyrighted content, I could still get in trouble; hell, there was even a guy arrested for downloading JSTOR without authorization. Stop letting these guys get away with crimes just because you like the end product.

AstralPath,

Sounds like homeopathy lol

jarfil, to technology in Trump 'tried to sell Truth Social to Musk' as SPAC deal stalled

Truth Social, essentially a Mastodon clone

Wait, what?.. 😳

The Trump Truth Social network removes the most freedom-friendly features of the Fediverse

Phew.

narc0tic_bird, to technology in Apple exec defends 8GB $1,599 MacBook Pro, claims it's like 16GB on a PC

Do they store 32-bit integers as 16-bit internally or how does macOS magically only use half the RAM? Hint: it doesn’t.

Even if macOS were more lightweight than Windows (which might well be true, with all the bs processes running in Windows 11 especially), third-party multiplatform apps will use similar amounts of memory no matter the platform they run on. Even for simple use cases, 8 GB is on the limit (though it’ll likely still be fine), as Electron apps tend to eat RAM for breakfast. Love it or hate it, Apple, people often (need to) use these memory-hogging apps like Teams or even Spotify; they are not native Swift apps.

I love my M1 Max MacBook Pro, but fuck right off with that bullshit, it’s straight up lying.

Kazumara,

Pied Piper middle out compression for your RAM

But seriously, it’s especially ridiculous since he said it in an interview with a machine learning guy. Exactly the type of guy who needs a lot of RAM for his own processes, working on his own data, using his own programs, where the OS has no control over precision, access patterns, or the data-streaming architecture.

echodot,

Apple executives haven’t actually been computer guys for years now. They’re all sales and have no idea how computers work. They’re constantly saying stupid things that make very little sense, but no one ever calls them on it because Apple.

abhibeckert, (edited )

Do they store 32-bit integers as 16-bit internally or how does macOS magically only use half the RAM? Hint: it doesn’t.

As a Mac programmer I can give you a real answer… there are three major differences. But before I go into those: almost all integers in native Mac apps are 64-bit. 128-bit is probably more common than 32.

First of all, Mac software generally doesn’t use garbage collection. It uses “Automatic Reference Counting” (ARC), which is far more efficient. Back when computers had kilobytes of RAM, reference counting was the standard, with programmers painstakingly writing code to clear things from memory the moment they weren’t needed anymore. The automatic version is the same, except the compiler writes the code for you… and it tends to do an even better job than a human, since it doesn’t get sloppy.

Garbage collection, the norm in modern Windows and Linux code, frankly sucks. Code that, for example, reads a bunch of files on disk might keep all of those files in RAM for ten seconds even if it only needs one of them in RAM at a time. That can burn 20GB of memory and push all of your other apps out into swap. Yuck.
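The difference being described can be sketched in miniature: with reference counting, an object is reclaimed the instant its last reference is released, rather than whenever a collector next runs. This toy counter is just the idea, not Apple's ARC:

```python
class RefCounted:
    """Toy reference-counted resource: freed the moment the count hits zero."""
    freed = []

    def __init__(self, name):
        self.name = name
        self.count = 1  # the creating reference

    def retain(self):
        self.count += 1

    def release(self):
        self.count -= 1
        if self.count == 0:
            # Eager reclamation: no waiting for a garbage collector pass.
            RefCounted.freed.append(self.name)

buf = RefCounted("file-buffer")
buf.retain()           # a second owner appears
buf.release()          # first owner done; one reference remains, nothing freed
assert RefCounted.freed == []
buf.release()          # last owner done; reclaimed immediately
print(RefCounted.freed)
```

In ARC the compiler inserts the retain/release calls automatically, so the eager-free behavior comes without the manual bookkeeping.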

Second, swap, while it’s used less (due to reference counting), still isn’t a “last resort” on Macs. Rather it’s a best practice to use swap deliberately for memory that you know doesn’t need to be super fast. A toolbar icon for example… you map the file into swap and then allow the kernel to decide if it should be copied into RAM or not. Chances are the toolbar doesn’t change for minutes at a time or it might not even be visible on the screen at all - so even if you have several gigabytes of RAM available there’s a good chance the kernel will kick that icon out of RAM.

And before you say “toolbar icons are tiny” - they’re not really. The tiny favicon for beehaw is 49kb as a compressed png… but to draw it quickly you might store it uncompressed in RAM. It’s 192px square and 32 bit color so 192 x 192 x 32 = 1.1MB of RAM for just one favicon. Multiply that by enough browser tabs and… Ouch. Which is why Mac software would commonly have the favicon as a png on disk, map the file into swap, and decompress the png every time it needs to be drawn (the window manager will keep a cache of the window in GPU memory anyway, so it won’t be redrawn often).
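The file-mapping trick described above looks roughly the same in any language with mmap. Here is a minimal Python sketch (the icon file is a stand-in) where the kernel, not the app, decides whether the bytes stay resident in RAM:

```python
import mmap
import os
import tempfile

# Write a stand-in "icon" file; a real app would ship a .png on disk.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x89PNG fake icon bytes" * 64)
os.close(fd)

with open(path, "rb") as f:
    # Map the file instead of read()-ing it: pages are loaded lazily, and
    # the kernel may evict them under memory pressure, which is the
    # behavior described for rarely-drawn toolbar icons.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as icon:
        header = icon[:4]
print(header)

os.remove(path)
```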

Third, modern Macs have really fast flash storage for swap. So fast it’s hard to actually measure; we’re talking single-digit microseconds, which means you can read several thousand files off disk in the time it takes the LCD to refresh. If an app needs to read a hundred images from swap in order to draw the screen, the user is not going to notice. It will be just as fast as if those images were in RAM.

Sure, we all run a few apps that are poorly written - e.g. Microsoft Teams - but that doesn’t matter if all your other software is efficient. Teams uses, what, 2GB? There will be plenty left for everything else.

Of course, some people need more than 8GB. But Apple does sell laptops with up to 128GB of RAM for those users.

rasensprenger, (edited )

Almost all programs use both 32-bit and 64-bit integers, and sometimes even smaller ones where possible. Being memory-efficient is critical for performance, as L1 caches are still very small.

Garbage collection is a feature of programming languages, not of an OS. Almost all native Linux software is written in systems programming languages like C, Rust, or C++, none of which has a garbage collector.

Swap is used the same way on both Linux and Windows, but kicking toolbar items out of RAM is not actually a thing. The toolbar needs to be drawn to the screen every frame, so it (or a pixel buffer for the entire toolbar) will be kicking around in VRAM at the very least. A transfer from disk to VRAM can take hundreds of milliseconds, which would limit you to something like 5 fps; no one re-transfers images like that every frame.

Also, your icon is 1.1 Mbit, not 1.1 MB.

I have a Gentoo install that uses 50MB of RAM for everything, including its GUI. A web browser will still eat up gigabytes of RAM; the OS has literally no say in this.
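The icon arithmetic in this exchange is easy to check: 192 x 192 pixels at 32 bits per pixel is about 1.1 megabits, which is roughly 144 KiB, not 1.1 MB.

```python
# 192x192 icon at 32 bits (4 bytes) per pixel
bits = 192 * 192 * 32
bytes_ = bits // 8

print(bits)           # 1179648 bits, i.e. ~1.1 Mbit
print(bytes_)         # 147456 bytes
print(bytes_ / 1024)  # 144.0 KiB
```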

narc0tic_bird,

My 32/16 bit integer example was just that: an example where one was half the size as the other. Take 128/64 or whatever, doesn’t matter as it doesn’t work like that (which was my point).

Software written in non-GC based languages runs on other operating systems as well.

I used MS Teams as an example, but it’s hardly an exception when it comes to Electron/WebView/CEF apps. You have Spotify running, maybe a password manager (even 1Password uses Electron for its GUI nowadays), and don’t forget about all the web apps you have open in the browser, like maybe GMail and some Google Docs spreadsheet.

And sure, Macs have fast flash memory, but so do PC notebooks in this price range. A 990 Pro also doesn’t set you back $400 per terabyte, but more like … $80, if even that. A fifth. Not sure where you got the idea that they’re so fast it’s hard to measure.

There are tests out there that clearly show why 8 GB are a complete joke on a $1600 machine.

So no, I still don’t buy it. I use a desktop Windows/Linux machine and a MacBook Pro (M1 Max) and the same workflows tend to use very similar amounts of memory (what a surprise /s).

darkfiremp3, to technology in Apple exec defends 8GB $1,599 MacBook Pro, claims it's like 16GB on a PC

It makes it not feel like a premium device

30p87,

Because it’s not

Stormyfemme,

Honestly, I was considering getting one because I could use a nice laptop to do stuff on, but 8GB is inexcusably bad, so yeah, pass.

Rekhyt, to technology in Australia to build Top Secret cloud in AWS for defence users

The register providing contrast to the AWS infrastructure build out:

The Register is aware of government agencies building on-prem private clouds – sometimes on open source platforms – so they can scour code to soothe their security worries.

That’s just a local data center, guys. Like how everything was done before “the cloud” became a buzzword.

progandy,

There is some difference, as I see it, in the management layer: a cloud infrastructure offers more dynamic resource allocation than traditional data center usage.

refalo,

AWS literally advertises an isolated “GovCloud” service.

Travelator,

Maybe it’s “AI” powered. I’m sure they could sell that concept.

riodoro1, to linux in Xubuntu 24.04: A minimal install that really means it

Nearly snap free.

That’s the wrong answer.

Ganbat,

Tried Ubuntu a few years back. Snap was a big part of why I dropped it. Started using Pop!_OS last year, and while it’s still not my main driver (mostly because of gaming issues), I split my time between it and Windows pretty evenly.

limelight79,

I have Kubuntu installed on my desktop and have been using it for years. I had disabled the snap Firefox and used a deb version, but the other day I discovered that Kubuntu had reinstalled the snap Firefox.

I’ve been planning to switch to Debian on my desktop, but I just haven’t gotten around to it yet. This little incident is reminding me why I want to in the first place.

barbara, to privacy in EU tells Meta it can't paywall privacy

Does that make the subscription model illegal, and does Facebook have to pay the money back?

slazer2au,

I am sure FB will give those who paid 1/11 of the amount back as credit on the Facebook marketplace.

Just like every other online retailer. Oh you paid $40 for something that we now have to refund you? Here is store credit for $9.76.

delirious_owl,
@delirious_owl@discuss.online avatar

That’s oddly specific
