I… don't really get why they think this is better. Google search was good… Other companies can copy AI technology anyway. AI is really just predicting words and wasn't designed for search, but their old algorithm was.
Google hasn't understood the internet for a long time. They created an excellent search algorithm by treating the internet as a single information system that warranted analysis and indexing for convenient traversal.
These days that's not… something they're interested in anymore. The goal is to collect user data for targeted advertising and resale. Their core product is still the search bar, sure, but that's just a hook to reel you in. They'll attach to it whatever buzzword it takes to keep it in the zeitgeist. "AI" is hot right now, so that's the buzzword.
I don't get the impression technical competency is something Google values anymore…
My theory is that Google wants to move towards vector symbolic representations for pages in search rather than page caching. It would make index storage and retrieval orders of magnitude cheaper for them if they can design a scheme that works well.
Lemmy (like its predecessors) is temporally arranged content. Think of it like having a discussion in a pub. Imagine bringing up a topic and someone said: but we discussed this 5 days ago, so we cannot discuss it now. Your obvious response would be: but I wasn't here five days ago. It's okay to repeat a conversation.
If you want more of a hierarchical structure, use Wikipedia article discussions. Then each conversation only occurs once (ish). Not encouraging repeated conversation here will lead to slow content death, like on StackOverflow.
It also involves context; the post I replied to said it was not new. I simply noted that it occurred on slower days. My point being, you should check the dates of the source material for context. I made no judgment of its validity; you projected that. I agree with you: it's fine to visit the past.
This sort of tech is already disrupting the corpos by exposing copyright as a sham that the rich use to continue to enrich themselves. Hell, it took 95 years for a goddamn black-and-white cartoon about a mouse driving a steamboat to enter the public domain, because Disney and other rich corps kept pushing the copyright laws in their favor.
This is why open-source and freely-available research is an important component. This is an important time to fight for, not against, the tech, and make sure it stays in the public's hands. If we keep fighting against AI, and the inevitable, then we're going to end up on the losing side of some draconian law that dictates that only the rich can use the technology, because of some bullshit requirement to pretend to appease copyright holders, one that takes hundreds of thousands or millions of dollars to satisfy.
I mean, that may well be true too, but it's specifically telling French government employees to do so. It's not that unusual for even regular businesses to constrain how their employees communicate, limiting them to tools their IT people have vetted.
Opt-in analytics! Servers running Synapse can choose to send a bit of analytics information, like number of users, but it's opt-in, so the real number is potentially even higher.
You would think you'd already have problems if someone's managed to compromise one or more of your containers without you knowing, whether or not they can get to the host.
They could be serving users malware or silently sucking up all the sensitive data the container sees.
What, if anything, do people do about antivirus in containers?
> You would think you'd already have problems if someone's managed to compromise one or more of your containers without you knowing, whether or not they can get to the host
True, but the security idea behind a containerised environment is that your problems aren't immediately made worse by the fact that your database server is on the same machine as your web application, since they'd be in separate but networked containers.
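As a sketch of that separation (the image names and service layout here are assumptions, not something from this thread), a minimal docker-compose file keeps the two in separate containers that can only talk to each other over a shared network:

```yaml
# Minimal sketch: web app and database as separate, networked containers.
# Compromising "web" does not hand over "db"'s filesystem or process space.
services:
  web:
    image: nginx:alpine          # stand-in for your web application
    networks: [appnet]
  db:
    image: postgres:16-alpine    # database isolated in its own container
    environment:
      POSTGRES_PASSWORD: example # placeholder secret, not a recommendation
    networks: [appnet]
networks:
  appnet: {}
```

The web container can reach the database over `appnet`, but an attacker who pops the web app still has to cross a network boundary rather than just reading files off the same filesystem.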
> What, if anything, do people do about antivirus in containers?
The real threat to containers isn't AV-detectable malware, but Remote Code Execution (RCE) exploits.
Containers are best used as single purpose installations. With that configuration, it isn't easy to get non-standard executables - including malware - onto a container.
Most RCE exploits also don't involve the dropping of malware files onto the file system. There are some that do, but that issue is better handled in other ways.
Why? Well, AVs only do something about binaries they know or think to be malware. A well-crafted, customised Cobalt Strike beacon (aka malicious remote control software) will blow through any resistance an AV has to offer.
So what do we do? Remember what I said about containers being best used as single-purpose installations? Because of that, you know exactly what executables should be running, making it trivial to set up executable whitelisting. That means any executable not on the list will not run.
But even that isn't completely bulletproof. It won't do much against web shells, in which case your best detection mechanism is to look for applications calling /bin/bash or /bin/sh that shouldn't be.
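As a sketch of that last idea (the service name is an assumption, not from this thread): a crude check that walks running shell processes and flags any whose parent is the web-facing service, which is the classic signature of a web shell.

```shell
#!/bin/sh
# Crude web-shell spotter (a sketch, not a hardened tool): list running
# sh/bash processes and flag any whose parent process matches the named
# service. "nginx" below is an assumption; substitute your container's
# main process name.
find_web_shells() {
    parent="$1"
    for pid in $(pgrep -x sh) $(pgrep -x bash); do
        # Look up the parent PID of each shell via /proc.
        ppid=$(awk '/^PPid:/ {print $2}' "/proc/$pid/status" 2>/dev/null)
        [ -n "$ppid" ] || continue
        pcomm=$(cat "/proc/$ppid/comm" 2>/dev/null)
        # A web server should never be the parent of an interactive shell.
        if [ "$pcomm" = "$parent" ]; then
            echo "suspicious: shell pid=$pid spawned by $pcomm (pid $ppid)"
        fi
    done
}

find_web_shells nginx
```

In practice you'd run something like this from the host or a monitoring sidecar on a schedule, or use auditd/eBPF tooling for the same signal; the point is that in a single-purpose container, "web server spawns shell" is almost always an indicator of compromise.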
If it upgrades some stuff, you were vulnerable, but you no longer are. If nothing upgrades, then you were already all good.
If you're doing that regularly, then your core system will generally be patched, fixing almost all exploits in your core system, including this one. If not, you're vulnerable to this exploit and likely a whole bunch more.
Edit: That's the simplest answer, but if you're curious you can do a double-check for this particular vulnerability with `apt changelog libc6` - generally speaking you won't see recent changes, but if a package has been recently updated you'll see a recent fix. So e.g. for this, I see the top change in the changelog is the fix from a couple weeks back:
```
glibc (2.36-9+deb12u4) bookworm-security; urgency=medium

  * debian/patches/any/local-CVE-2023-6246.patch: Fix a heap buffer overflow
    in __vsyslog_internal (CVE-2023-6246).
  * debian/patches/any/local-CVE-2023-6779.patch: Fix an off-by-one heap
    buffer overflow in __vsyslog_internal (CVE-2023-6779).
  * debian/patches/any/local-CVE-2023-6780.patch: Fix an integer overflow in
    __vsyslog_internal (CVE-2023-6780).
  * debian/patches/any/local-qsort-memory-corruption.patch: Fix a memory
    corruption in qsort() when using nontransitive comparison functions.

 -- Aurelien Jarno <aurel32@debian.org>  Tue, 23 Jan 2024 21:57:06 +0100
```
If you are running apt then you are running Debian or Ubuntu, which the article clearly states are vulnerable. But anyway, I was asking how I can figure it out by myself.
Nearly all Linux systems will be vulnerable to this if they're not patched with the fix. Patched systems will not be vulnerable. That's true for Debian and Ubuntu, as it is for any Linux system. The commands I gave determine whether or not you're patched on a Debian or Ubuntu system.
What distro are you running? I can give you commands like that for any Linux system to determine whether or not you're patched.
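For example (a hedged sketch, and note that distros backport fixes, so the version number alone doesn't settle the question), a distro-agnostic first step is just to ask glibc what version it is, then look that version up in your distro's security advisory:

```shell
# Print the installed glibc version. Distros backport fixes, so a "low"
# version here does not prove you're vulnerable; it tells you which
# package changelog or security advisory to go read.
glibc_version=$(ldd --version | head -n 1 | grep -oE '[0-9]+\.[0-9]+' | head -n 1)
echo "glibc version: $glibc_version"
```

On rpm-based systems the follow-up would be something like `rpm -q --changelog glibc | head`, the moral equivalent of the `apt changelog libc6` check above.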
(Edit: Also, as a general rule -- don't type stuff as root just because I or some other random person on the internet tells you to; check the man page or docs to make sure it's going to do something that you want it to do first.)