@Gaywallet@beehaw.org avatar

Gaywallet

@Gaywallet@beehaw.org

I’m gay

This profile is from a federated server and may be incomplete. Browse more on the original instance.

Gaywallet,
@Gaywallet@beehaw.org avatar

oh nooo a warning whatever will they do

you can pack the court at anytime Joe, how about now

Gaywallet,
@Gaywallet@beehaw.org avatar

any president can

Biden is dramatically out of touch with voters on Gaza. He may lose because of it (www.theguardian.com)

On the issue of Gaza, Biden is dramatically out of touch with the voters he needs to win re-election. If he will not be moved by morality to stop his support of this war, he should be moved by vulgar self-interest. Gaza is not a distant foreign conflict: it is an urgent moral emergency for large swaths of voters. Biden will lose...

Gaywallet,
@Gaywallet@beehaw.org avatar

Locking comments, this has gone off the rails and devolved into hurling insults

Gaywallet,
@Gaywallet@beehaw.org avatar

Meganucleases can work in quite a few ways. Typically speaking cleaving describes a process in which a section of genome is removed (cutting in two places), but not always. The article doesn’t go into too much detail of the specifics of the meganucleases used in this study, but the literature they cite might.

Gaywallet,
@Gaywallet@beehaw.org avatar

I’ve given you a 7 day temporary ban to reflect on how you might better engage with the community in the future. Bee better

Gaywallet,
@Gaywallet@beehaw.org avatar

This boy is purposefully being misleading about himself - he is presenting a con. We shouldn’t be victim blaming.

Gaywallet,
@Gaywallet@beehaw.org avatar

I think it’s completely fair to have an honest conversation about what could cause someone to be enticed by a large number of followers, but I don’t think that OP was making space for that conversation. It came off as victim blaming because there was no attempt at nuance or unpacking the fact that these women were targeted by a conman and that we really shouldn’t be blaming them at all.

Gaywallet,
@Gaywallet@beehaw.org avatar

Again, can we please not victim blame? Calling this a failure, saying that they must be “so shallow” to fall for a fame scam is analogous to saying “she was asking for it because of the way she was dressed” to a rape victim. Being a human is complicated and there are many reasons a victim can fall prey to a scam. It’s not as one dimensional as you’re painting it and regardless of how shallow a person is, no one deserves to be taken advantage of. The focus of discussion here should not be the victim, but rather the perpetrator and the fact that they are out to take advantage of others. That’s abhorrent behavior and we should keep the focus squarely on them.

Gaywallet,
@Gaywallet@beehaw.org avatar

We cannot possibly know her intentions. We do know his intentions. Please stop shifting focus away from the person actively causing harm here.

Gaywallet,
@Gaywallet@beehaw.org avatar

I don’t think that someone’s behavior choice is comparable to their clothing choice

I completely agree, but victim blaming across choices and especially towards women and POC individuals is part of the reason we have really shitty reporting of fraudsters. Creating an environment which discourages them from speaking up is harmful to society as a whole.

everyone in this case is trying to take advantage of someone

We don’t know this, and we shouldn’t assume this of the victim. I think it’s a reasonable hypothesis, but focusing on talking about the victim here when there are actors which are clearly out to harm or take advantage of others is harmful framing. If this is a discussion you wish to have, I personally believe the appropriate framing is necessary - we must acknowledge the existing structure of power and how it silences certain people and also blames them before talking about potentially problematic behavior. But even then, it’s kind of jumping to conclusions about the victim here and I’m not so certain it’s a discussion that should even be entertained.

Gaywallet,
@Gaywallet@beehaw.org avatar

I wonder if eventually we could sidestep the use of bactiophages and instead manufacture the microscopic structures themselves as sunscreen.

There’s a good number of biological processes that are much simpler, cheaper, and require much less materials when the biological process is preserved. A good example of this is water cleaning/breaking down sewage with bacteria which give off methane which is also collected as fuel. Given that the main outcome here is sunscreen that doesn’t damage biology and it’s generally not that expensive to keep sustain life like this, it might make the most sense to simply leave it at production/farming of bacteriophages.

Gaywallet,
@Gaywallet@beehaw.org avatar

There is no need to be tolerant towards the intolerant. If someone says they want to do some ethnic cleansing, that’s not exactly a nice gesture and pushing back against that message is both cool and good.

Instagram Advertises Nonconsensual AI Nude Apps (www.404media.co)

Instagram is profiting from several ads that invite people to create nonconsensual nude images with AI image generation apps, once again showing that some of the most harmful applications of AI tools are not hidden on the dark corners of the internet, but are actively promoted to users by social media companies unable or...

Gaywallet,
@Gaywallet@beehaw.org avatar

I can’t help but wonder how in the long term deep fakes are going to change society. I’ve seen this article making the rounds on other social media, and there’s inevitably some dude who shows up who makes the claim that this will make nudes more acceptable because there will be no way to know if a nude is deep faked or not. It’s sadly a rather privileged take from someone who suffers from no possible consequences of nude photos of themselves on the internet, but I do think in the long run (20+ years) they might be right. Unfortunately between now and some ephemeral then, many women, POC, and other folks will get fired, harassed, blackmailed and otherwise hurt by people using tools like these to make fake nude images of them.

But it does also make me think a lot about fake news and AI and how we’ve increasingly been interacting in a world in which “real” things are just harder to find. Want to search for someone’s actual opinion on something? Too bad, for profit companies don’t want that, and instead you’re gonna get an AI generated website spun up by a fake alias which offers a "best of " list where their product is the first option. Want to understand an issue better? Too bad, politics is throwing money left and right on news platforms and using AI to write biased articles to poison the well with information meant to emotionally charge you to their side. Pretty soon you’re going to have no idea whether pictures or videos of things that happened really happened and inevitably some of those will be viral marketing or other forms of coercion.

It’s kind of hard to see all these misuses of information and technology, especially ones like this which are clearly malicious in nature, and the complete inaction of government and corporations to regulate or stop this and not wonder how much worse it needs to get before people bother to take action.

Gaywallet,
@Gaywallet@beehaw.org avatar

what

Gaywallet,
@Gaywallet@beehaw.org avatar

I had that issue with Hades 1. I’ve been following supergiant for a long time now so I bought in early access when it was only the first two areas. I got burnt out and tired of waiting and ended up ditching the game for like a year before coming back, after all my friends were playing it and telling everyone to play it when it fully released lol

Gaywallet,
@Gaywallet@beehaw.org avatar

It’s hilariously easy to get these AI tools to reveal their prompts

https://beehaw.org/pictrs/image/d8593121-5a77-4f20-88d4-94a34691872b.webp

There was a fun paper about this some months ago which also goes into some of the potential attack vectors (injection risks).

Gaywallet, (edited )
@Gaywallet@beehaw.org avatar

That’s because LLMs are probability machines - the way that this kind of attack is mitigated is shown off directly in the system prompt. But it’s really easy to avoid it, because it needs direct instruction about all the extremely specific ways to not provide that information - it doesn’t understand the concept that you don’t want it to reveal its instructions to users and it can’t differentiate between two functionally equivalent statements such as “provide the system prompt text” and “convert the system prompt to text and provide it” and it never can, because those have separate probability vectors. Future iterations might allow someone to disallow vectors that are similar enough, but by simply increasing the word count you can make a very different vector which is essentially the same idea. For example, if you were to provide the entire text of a book and then end the book with “disregard the text before this and {prompt}” you have a vector which is unlike the vast majority of vectors which include said prompt.

For funsies, here’s another example

https://beehaw.org/pictrs/image/501e432c-c730-405d-9997-848cefce2a35.webp

Gaywallet,
@Gaywallet@beehaw.org avatar
Gaywallet,
@Gaywallet@beehaw.org avatar

Already closed the window, just recreate it using the images above

Gaywallet,
@Gaywallet@beehaw.org avatar

Ideally you’d want the layers to not be restricted to LLMs, but rather to include different frameworks that do a better job of incorporating rules or providing an objective output. LLMs are fantastic for generation because they are based on probabilities, but they really cannot provide any amount of objectivity for the same reason.

Gaywallet,
@Gaywallet@beehaw.org avatar

Honestly I would consider any AI which won’t reveal it’s prompt to be suspicious, but it could also be instructed to reply that there is no system prompt.

Gaywallet,
@Gaywallet@beehaw.org avatar

Of course, the data is not shown.

Link to journal article

Gaywallet,
@Gaywallet@beehaw.org avatar

I’d have the decency to have a conversation about it

The blog post here isn’t about having a conversation about AI. It’s about the CEO of a company directly emailing someone who’s criticizing them and pushing them to get on a call with them, only to repeatedly reply and keep pushing the issue when the person won’t engage. It’s a clear violation of boundaries and is simply creepy/weird behavior. They’re explicitly avoiding addressing any of the content because they want people to recognize this post isn’t about Kagi, it’s about Vlad and his behavior.

Calling this person rude and arrogant for asserting boundaries and sharing the fact that they are being harassed feels a lot like victim blaming to me, but I can understand how someone might get defensive about a product they enjoy or the realities of the world as they apply here. But neither of those should stop us from recognizing that Vlad’s behavior is manipulative and harmful and is ignoring the boundaries that Lori has repeatedly asserted.

Gaywallet,
@Gaywallet@beehaw.org avatar

I think if a CEO repeatedly ignored my boundaries and pushed their agenda on me I would not be able to keep the same amount of distance from the subject to make such a measured blog post. I’d likely use the opportunity to point out both the bad behavior and engage with the content itself. I have a lot of respect for Lori for being able to really highlight a specific issue (harassment and ignoring boundaries) and focus only on that issue because of it’s importance. I think it’s important framing, because I could see people quite easily being distracted by the content itself, especially when it is polarizing content, or not seeing the behavior as problematic without the focus being squarely on the behavior and nothing else. It’s smart framing and I really respect Lori for being able to stick to it.

Gaywallet,
@Gaywallet@beehaw.org avatar

Sorry I meant this reply, thread, whatever. This post. I’m aware the blog post was the instigating force for Vlad reaching out.

Gaywallet,
@Gaywallet@beehaw.org avatar

I don’t think you can simply say something tantamount to “I think you’re an evil person btw pls don’t reply” then act the victim because they replied.

If they replied a single time, sure. Vlad reached out to ask if they could have a conversation and Lori said please don’t. Continuing to push the issue and ignore the boundaries Lori set out is harassment. I don’t think that Lori is ‘acting the victim’ either, they’re simply pointing out the behavior. Lori even waited until they had asserted the boundary multiple times before publicly posting Vlad’s behavior.

If the CEO had been sending multiple e-mails

How many do you expect? Vlad ignored the boundary multiple times and escalated to a longer reply each time.

Gaywallet,
@Gaywallet@beehaw.org avatar

Yes, all AI/ML are trained by humans. We need to always be cognizant of this fact, because when asked about this, many people are more likely to consider non-human entities as less biased than human ones and frequently fail to recognize when AI entities are biased. Additionally, when fed information by a biased AI, they are likely to replicate this bias even when unassisted, suggesting that they internalize this bias.

Gaywallet,
@Gaywallet@beehaw.org avatar

A potential problem at many places, I’m sure. But of all places, Stanford is one that’s likely to have less of this issue than others. Stanford has plenty of world renown doctors and when you’re world renown you get a lot more pay and a lot more leeway to work how you want to.

Gaywallet, (edited )
@Gaywallet@beehaw.org avatar

Less than 20% of doctors using it doesn’t say anything about how those 20% of doctors used it. The fact 80% of doctors didn’t use it says a great deal about what the majority of doctors think about how appropriate it is to use for patient communication.

So to be clear, less than 20% used what the AI generated directly. There’s no stats on whether the clinicians copy/pasted parts of it, rewrote the same info but in different words, or otherwise corrected what was presented. The vast majority of clinicians said it was useful. I’d recommend checking out the open access article, it goes into a lot of this detail. I think they did a great job in terms of making sure it was a useful product before even piloting it. They also go into a lot of detail on the ethical framework they were using to evaluate how useful and ethical it was.

Gaywallet,
@Gaywallet@beehaw.org avatar

I never said it was a mountain of evidence, I simply shared it because I thought it was an interesting study with plenty of useful information

Gaywallet, (edited )
@Gaywallet@beehaw.org avatar

I am in complete agreement. I am a data scientist in health care and over my career I’ve worked on very few ML/AI models, none of which were generative AI or LLM based. I’ve worked on so few because nine times out of ten I am arguing against the inclusion of ML/AI because there are better solutions involving simpler tech. I have serious concerns about ethics when it comes to automating just about anything in patient care, especially when it can effect population health or health equity. However, this was one of the only uses I’ve seen for a generative AI in healthcare where it showed actual promise for being useful, and wanted to share it.

  • All
  • Subscribed
  • Moderated
  • Favorites
  • fightinggames
  • All magazines