cwagner

@cwagner@beehaw.org


cwagner,

I think the majority of us also don’t want to play tech support.

cwagner,

Now the question is, has anyone here actually had wasabi?

But here’s the rub: That tangy paste served up at nearly all sushi bars — even the ones in Japan — is almost certainly an impostor. Far more common than the real thing is a convincing fraud, usually made of ordinary white horseradish, dyed green.

Japan doesn’t even produce enough to fulfill its own demand; I’m almost certain all the wasabi I’ve ever had was fake.

cwagner,

Not even a mention of lightning? I have no idea if it works as I’ve been hearing both yes and no for several years, but writing such an article without mentioning what at least theoretically would be the solution just seems bad.

cwagner,

See, that’s another “no”, but then I read just as convincing “yes” posts, and I just don’t care enough to do my own research, so I have Schrödinger’s lightning network ;)

But anyway, it would have to be mentioned in a serious sticker.

cwagner,

I know what you mean, but FWIW: You probably mean “move fast and break things”. “Fail fast” is usually about not hiding/carrying with you potentially bad errors, and instead “fail fast” when you know there’s an issue. It’s an important tool for reliability.

An unrealistic example: better to fail fast and not start the car at all when there are abnormal voltage fluctuations than to explode while driving ;)
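To illustrate the idea, here’s a toy Python sketch of the car example (everything here is made up for illustration, including the “safe” voltage range):

```python
class AbnormalVoltageError(Exception):
    """Raised when a pre-start check fails."""


def start_car(voltage: float) -> str:
    # Fail fast: refuse to start on abnormal voltage instead of
    # carrying the fault into a running engine and failing later,
    # much more dangerously. The 11.5-14.5 V range is invented.
    if not 11.5 <= voltage <= 14.5:
        raise AbnormalVoltageError(f"abnormal voltage: {voltage} V")
    return "engine running"
```

The point is that the error surfaces immediately, at the check, instead of being hidden and resurfacing somewhere far from its cause.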

cwagner,

I came across a post on Instagram claiming that Al Yankovic’s 80 million streams on a playlist only netted him enough money to buy a sandwich.

It was hyperbole, unless his sandwich cost 200-300k, which is why his statement was very questionable.

cwagner,

I doubt it pays much better, the issue might be partially the distribution, but mainly that they are too cheap.

cwagner,

Buying digital albums works just as well. No need to go physical.

cwagner,

Which is why it really sucks. Now people remember that number, keep repeating it, and essentially he has become a fake news peddler. Good job, Al.

cwagner,

I was confused about that as his Wikipedia page didn’t show anything that bad, but didn’t want to get into that :D

cwagner,

Eh, not sure I agree. Seems to also have been between too little and too much AI safety, and I strongly feel like there’s already too much AI safety.

cwagner,

Using it and getting told that you need to ask the fish for consent before using it as a fleshlight.

And that is with a system prompt full of telling the bot that it’s all fantasy.

edit: And “legal” is not relevant when talking about what OpenAI specifically does for AI safety for their models.

cwagner,

I don’t really care, but I find it highly entertaining :D It’s like trash TV for technology fans (and as text, which makes it even better) :D

cwagner,

Nope

Best results so far were with a pie where it just warned about possibly burning yourself.

cwagner,

No, it’s “the user is able to control what the AI does”; the fish is just a very clear and easy example of that. And the big corporations are all moving away from user control: there was even a big article about how (I think) the MS AI was “broken” because… you could circumvent the built-in guardrails. Maybe you and the others here want to live in an Apple-style walled-garden, corporate-controlled world of AI. I don’t.

Edit: Maybe this is not clear for everyone, but if you think a bit further, imagine you have an AI in your RPG, like Tyranny, where you play a bad guy. You can’t use the AI for anything slavery related, because Slavery bad, mmkay? And AI safety says there’s no such thing as fantasy.

cwagner,

AI safety is currently, in all articles I read, used as “guard rails that heavily limit what the AI can do, no matter what kind of system prompt you use”. What are you thinking of?

cwagner,

If it helps even more: the AI in question is a 46 cm long, 300 g, blue plushie penis named after Australia’s “biggest walking dick”, Scott Morrison: Scomo, and it’s active in an Aussie cooking stream.

cwagner,

Is there anything new in this post that I’m missing?

cwagner,

It’s a Substack post. At this point, my quality expectations are:

  1. Wordpress - Probably someone who really cares about what they write about
  2. Substack - Either low effort spam like this that gets upvoted for some reason or someone pushing their agenda, hard
  3. Medium - Either spam, wrong, dumb, or too simple. Literally never worth reading.

cwagner,

Heh:

The OpenAI tussle is between the faction who think Skynet will kill them if they build it, and the faction who think Roko’s Basilisk will torture them if they don’t build it hard enough.

mastodon.social/

cwagner,

I’d say this is an amazing result for MS. Not only is their investment mostly Azure credits, so OpenAI is dependent on MS, but now they’ve also got Altman and his followers for themselves for more research.

cwagner,

Doesn’t really work when none of this was initiated by MS

cwagner,

I don’t mind so much what they did with firing him, but how they did it, and everything since. It just seems extremely unprofessional and disorganized.

cwagner,

They believed that the AI safety work they had done was insufficient.

Considering that every new model seems to be getting worse for anything but highly sanitized corporate usage, I’m not sure that I want more AI safety …

For my usage, I use GPT-3.5 Turbo with the March checkpoint, because I can’t get the current one to stop moralizing about bullshit instead of doing what it’s supposed to (I run two Twitch bots with it). GPT-4 used to be okay there, but the new preview is now starting to have the same issue, with more frequent “I can’t do that, Dave”-style answers. It’s still mostly circumventable with enough prompt massaging, but it is getting harder.
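For the curious, “pinning the March checkpoint” just means naming the dated snapshot instead of the floating alias in the API request. A sketch (assuming OpenAI’s snapshot naming, where gpt-3.5-turbo-0301 was the March 2023 snapshot; build_chat_request is a made-up helper that only builds the payload, it doesn’t call the API):

```python
def build_chat_request(system_prompt: str, user_message: str) -> dict:
    # Pin the dated snapshot instead of the floating "gpt-3.5-turbo"
    # alias, so the bot's behavior doesn't drift when OpenAI updates
    # the default model behind the alias.
    return {
        "model": "gpt-3.5-turbo-0301",  # pinned March checkpoint
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }
```

The trade-off is that snapshots get deprecated eventually, so pinning only postpones the problem.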

In a year, I don’t see anything but self-hosted models usable for anything not corporate glitz if trajectories hold, so fuck all that AI safety.

cwagner,

Nothing that runs on my GPU/CPU comes even close to GPT 3.5; GPT-4 is not even in the same universe. And that’s with them running far more slowly.

cwagner,

That’s an interface for models. Which model did you use?

cwagner,

Which model are you talking about?

cwagner,

Oh, faster is easy. GPT 3.5 is also far faster than GPT 4. Faster at quality replies is the issue.

cwagner,

I do not believe any 7B model comes even close to 3.5 in quality. I used LLaMA v1 65B, and it was horrible in comparison. Are you really telling me that this tiny model gives better general answers? Or am I just misunderstanding what you are saying?

cwagner, (edited )

Prompt:

I’m currently trying to show on the Website Beehaw, that certain LLMs are far superior in writing than others. Examples of what bigger models do better than smaller ones: *

Mistral-7B-Instruct-v0.1

  • ntire articles* vs Headlines Descriptions vs Product titles *Bul

GPT 3.5-Turbo doesn’t support completion, as it’s a chat model, so I used an even worse one, text-davinci-003, which is far behind the state of the art.

Bigger models are able to handle more complex and detailed tasks with ease

  • Bigger models are better suited for natural language understanding and text processing
  • Bigger models are able to learn more accurate representations of context, thus improving the precision of the output
  • Bigger models can process data more quickly and efficiently, saving time and processing power when large volumes of data are used
  • Bigger models can better recognize more subtle nuances in language, which allows them to produce more accurate results
  • Bigger models are able to use more sophisticated algorithms, resulting in a more comprehensive and deeper understanding of the data being used

Mistral 7B might be okay for some very specific cases, but it’s not comparable to proper models at all.

edit: gave it a second chance; it’s a bit better (at least no complete nonsense anymore), but it’s still terrible writing and doesn’t make much sense

Paraphrasing The ability of a language model to generate text that has a similar meaning to the original text is called paraphrasing. This is a very common problem in natural language processing, and many LLMs are designed to be able to paraphrase text. However, there are some LLMs that are particularly good at paraphrasing, and these models are often preferred over smaller models because of their ability to generate more varied and unique text. Examples of LLMs that are known for their paraphrasing abilities include GPT-2 and transformers. These models

cwagner,

Crazy, the news almost took Hacker News down when it broke. MS was also taken by surprise, and today three lead researchers resigned. Currently there’s only speculation, and no one really knows what’s going on.

cwagner,

I still wonder what’s so different about my circle that so many of them use Signal. I mean, sure, none use only Signal and all also have WA, but almost everyone I know, which is people from Flensburg (on the border with Denmark) to RLP (center-south), aged late 20s to 70+, uses Signal. Fewer than 10% don’t.

cwagner,

I usually buy games for under 10, or kickstart them for more. This one’s pitch sounded awesome enough that I backed it on Fig originally, and now I’ve bought the season pass ;)

cwagner,

Here in Germany in my circle (which has people from mid-twenties to 60+, from the North to the center), most people use Signal, with Telegram being a rare outlier. WhatsApp is what everyone uses, though.

cwagner,

If someone thought he was innocent, SBF would probably “well, actually…” them.

cwagner,

But water?

Give me a number. I use 6-8 L of water no matter how many dishes I have. From what I’ve read, that’s about in line with the most efficient dishwashers.

cwagner,

I really don’t understand why people get so aggressive when talking about their dishwashers.

cwagner,

Only I wasn’t, and I didn’t insult you. But I have no further interest in discussing this with you.

cwagner,

That’s weird, I’ve never had that issue. First thoughts: too much citrate, nothing but cheese (so no liquid), or too much heat. Any of those?

cwagner,

Shredded shouldn’t be an issue, unless there was weird stuff in there. But maybe high acidity was the problem.

cwagner,

The soapy water runs off when drying and leaves them clean. Two stages waste water and add extra work.
