Not even a mention of Lightning? I have no idea if it works, as I’ve been hearing both yes and no for several years, but writing such an article without mentioning what would at least theoretically be the solution just seems bad.
See, that’s another “no”, but then I read equally convincing “yes” posts, and I just don’t care enough to do my own research, so I have Schrödinger’s Lightning Network ;)
Anyway, it would have to be mentioned in a serious article.
I know what you mean, but FWIW: you probably mean “move fast and break things”. “Fail fast” is usually about not hiding or carrying along potentially bad errors, and instead failing fast as soon as you know there’s an issue. It’s an important tool for reliability.
An unrealistic example: better to fail fast and not start the car at all when there are abnormal voltage fluctuations than to explode while driving ;)
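To make the idea concrete, here’s a minimal sketch of that car example in Python; the function name and the voltage range are made up for illustration:

```python
# Hypothetical sketch of fail-fast validation: reject a bad state
# immediately at startup instead of carrying the error into runtime.
def start_engine(voltage: float) -> str:
    # Fail fast: abnormal voltage is caught before the car starts,
    # not after it is already on the road.
    if not 11.5 <= voltage <= 14.8:
        raise ValueError(f"abnormal voltage: {voltage} V, refusing to start")
    return "engine running"
```

The point is that the error surfaces where it is cheap and obvious, instead of being dragged along until it becomes expensive and mysterious.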
No, it’s “the user is able to control what the AI does”; the fish is just a very clear and easy example of that. And the big corporations are all moving away from user control. There was even a big article about how (I think) the MS AI was “broken” because… you could circumvent the built-in guardrails. Maybe you and the others here want to live in an Apple-walled-garden, corporate-controlled world of AI. I don’t.
Edit: Maybe this is not clear to everyone, but if you think a bit further: imagine you have an AI in your RPG, like Tyranny, where you play a bad guy. You can’t use the AI for anything slavery-related, because slavery bad, mmkay? And AI safety says there’s no such thing as fantasy.
“AI safety” is currently, in every article I read, used to mean “guardrails that heavily limit what the AI can do, no matter what kind of system prompt you use”. What are you thinking of?
If it helps even more: the AI in question is a 46 cm long, 300 g, blue plushie penis named after Australia’s “biggest walking dick” Scott Morrison: Scomo, and it’s active in an Aussie cooking stream.
We had a thread about OpenAI Staff Threaten to Quit Unless Board Resigns, but I thought I might as well add it again. Especially because of this part:...
The OpenAI tussle is between the faction who think Skynet will kill them if they build it, and the faction who think Roko’s Basilisk will torture them if they don’t build it hard enough.
I’d say this is an amazing result for MS. Not only is their investment mostly Azure credits, so OpenAI is dependent on MS; now they’ve also got Altman and his followers to themselves for more research.
I don’t mind so much that they fired him, but how they did it, and everything since. It just seems extremely unprofessional and disorganized.
They believed that the AI safety work they had done was insufficient.
Considering that every new model seems to be getting worse for anything but highly sanitized corporate usage, I’m not sure that I want more AI safety …
For my usage, I run GPT-3.5 Turbo with the March checkpoint because I can’t get the current one to stop moralizing about bullshit instead of doing what it’s supposed to (I run two Twitch bots with it). GPT-4 used to be okay there, but the new preview is now starting to have the same issue, with more frequent “I can’t do that, Dave”-style answers. It’s still mostly circumventable with enough prompt massaging, but it’s getting harder.
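FWIW, pinning the checkpoint just means requesting the dated snapshot instead of the moving alias. A minimal sketch, assuming the OpenAI chat API’s snapshot naming (“gpt-3.5-turbo-0301” was the March 2023 snapshot); the bot prompt here is made up:

```python
# Sketch: build a chat request pinned to a dated model snapshot
# instead of the "gpt-3.5-turbo" alias, which silently updates.
def build_chat_request(user_message: str) -> dict:
    return {
        "model": "gpt-3.5-turbo-0301",  # pinned March snapshot, not the alias
        "messages": [
            # Hypothetical system prompt for a Twitch chat bot.
            {"role": "system", "content": "You are a playful Twitch chat bot."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.8,
    }
```

The trade-off is that dated snapshots eventually get deprecated, so this only buys time.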
If trajectories hold, in a year I don’t see anything but self-hosted models being usable for anything beyond corporate glitz, so fuck all that AI safety.
I do not believe any 7B model comes even close to 3.5 in quality. I used LLaMA v1 65B, and it was horrible in comparison. Are you really telling me that this tiny model gives better general answers? Or am I just misunderstanding what you’re saying?
I’m currently trying to show on Beehaw that certain LLMs are far superior in writing to others. Examples of what bigger models do better than smaller ones:

* Entire articles vs headlines
* Descriptions vs product titles
* Bul…

Mistral-7B-Instruct-v0.1
GPT-3.5 Turbo doesn’t support completion, as it’s a chat model, so I used an even worse one, text-davinci-003, which is far behind the state of the art.
Bigger models are able to handle more complex and detailed tasks with ease
Bigger models are better suited for natural language understanding and text processing
Bigger models are able to learn more accurate representations of context, thus improving the precision of the output
Bigger models can process data more quickly and efficiently, saving time and processing power when large volumes of data are used
Bigger models can better recognize more subtle nuances in language, which allows them to produce more accurate results
Bigger models are able to use more sophisticated algorithms, resulting in a more comprehensive and deeper understanding of the data being used
Mistral 7B might be okay for some very specific cases, but it’s not comparable to proper models at all.
Edit: gave it a second chance; it’s a bit better (at least no complete nonsense anymore), but still terrible writing, and it doesn’t make much sense.
Paraphrasing

The ability of a language model to generate text that has a similar meaning to the original text is called paraphrasing. This is a very common problem in natural language processing, and many LLMs are designed to be able to paraphrase text. However, there are some LLMs that are particularly good at paraphrasing, and these models are often preferred over smaller models because of their ability to generate more varied and unique text. Examples of LLMs that are known for their paraphrasing abilities include GPT-2 and transformers. These models
Crazy, the news almost took Hacker News down when it broke. MS was also taken by surprise, and today three lead researchers resigned. Currently it’s all speculation; no one really knows what’s going on.
I still wonder what’s so different about my circle that so many of them use Signal. I mean, sure, none use only Signal and all also have WhatsApp, but almost everyone I know, which is people from Flensburg (on the border with Denmark) to RLP (center-south), aged late 20s to 70+, uses Signal. Fewer than 10% don’t.
Thought I’d share this, now that they released the DLC. Vagrus is probably not for the majority of people. It’s a very dark, extremely text-heavy trading-caravan sim with combat and RPG elements....
I usually buy games for under 10, or Kickstart them for more. This one’s pitch sounded awesome enough that I backed it on Fig originally, and now I bought the season pass ;)
Here in Germany, in my circle (which has people from their mid-twenties to 60+, from the north to the center), most people use Signal, with Telegram being a rare outlier, though everyone has WhatsApp as well.
I love cooking, I cook every day for me and my wife (working from home since 2008 helps there), and I love hearing about new things. I have the book “The Science of Cooking”, which was fascinating.
Some people like to think they’re super water-efficient doing the dishes by hand, but they’re not; a dishwasher saves water.
About that: I know of one study done in Europe on this, and it was paid for by dishwasher companies and didn’t exclude outliers like the guy who used about 400 L of water doing the dishes by hand.
I once measured the water and power usage of me doing the dishes by hand, and both were below what I found online for dishwashers.
If you do two-stage cleaning (a soapy hot bath plus a clean cold rinse), then dishwashers will be better, because they don’t need to. The amount and source of your hot water governs whether you are more energy efficient. The advantage of dishwashers is that a badly used dishwasher is far more efficient than bad (= wasteful) handwashing, and even efficient handwashing is not much better than a dishwasher (though I wouldn’t know how to account for the production and recycling of the dishwasher itself, not even what order of magnitude that is). Which was, as far as I remember, also the conclusion of the study, unless there has been another one since then.
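For what it’s worth, here’s the back-of-the-envelope math I mean. All litre and kWh figures are assumptions for illustration, not measurements or numbers from the study; note the handwash figure counts only water heating, while the dishwasher figure is a whole cycle:

```python
# Rough sketch comparing two-stage handwashing with a dishwasher cycle.
# Every number below is an assumed round figure, not measured data.
SPECIFIC_HEAT_WATER = 4186  # J/(kg*K); 1 L of water ~ 1 kg

def heating_energy_kwh(litres_hot: float, delta_t_k: float) -> float:
    """Energy needed to heat the given volume of water by delta_t_k."""
    return litres_hot * SPECIFIC_HEAT_WATER * delta_t_k / 3.6e6

# Two-stage handwash: assumed 10 L soapy hot bath + 10 L cold rinse.
handwash_litres = 20.0
handwash_kwh = heating_energy_kwh(10.0, 40.0)  # heat tap water 10°C -> 50°C

# Dishwasher: assumed ~10 L water and ~1 kWh per cycle for a modern machine.
dishwasher_litres = 10.0
dishwasher_kwh = 1.0
```

With these assumed numbers the handwash uses more water but less electricity for heating, so the source of your hot water (gas, electric, solar) really does decide which way the comparison tips.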
Each Bitcoin transaction uses 4,200 gallons of water — enough to fill a swimming pool — and could potentially cause freshwater shortages (www.tomshardware.com)
Electric Vehicles Have 79% More Reliability Challenges Than Gas Powered Cars (samrome58.substack.com)
Spotify made £56m profit, but has decided not to pay smaller artists like me. We need you to make some noise | Damon Krukowski (www.theguardian.com)
Sam Altman to return as CEO of OpenAI (www.theverge.com)
See also twitter:...
YouTube limits Video Viewing for Ad blocker Users (samrome58.substack.com)
OpenAI: Gathered Articles from the last few hours (or a Mini-Mega-Thread)
Microsoft hires former OpenAI CEO Sam Altman and co-founder Greg Brockman (www.theverge.com)
Well, this escalated quickly. So is this the end, or will the mods create an OpenAI megathread? ;)
The deal to bring Sam Altman back to OpenAI has fallen apart, Former Twitch CEO Emmett Shear will now take over as interim CEO (www.theverge.com)
In today’s OpenAI clown show news
Safety and Research were Sacrificed for Profit under Altman (www.theatlantic.com)
Article from The Atlantic, archive link: archive.ph/Vqjpr...
OpenAI board in discussions with Sam Altman to return as CEO (www.theverge.com)
Are they drunk over there?
OpenAI's board has fired Sam Altman (openai.com)
'Brazil is the Country of WhatsApp,' Says President of the App (www1.folha.uol.com.br)
The rest of the article (not translated) is an interview with Cathcart....
Vagrus - The Riven Realms: Sunfire and Moonshadow released (store.steampowered.com)
WhatsApp head confirms ads in the messaging app are still in the works (www.theverge.com)
'Crypto King' Sam Bankman-Fried guilty of FTX fraud (www.bbc.co.uk)
Favorite secrets, tips & tricks in the kitchen?