gizmodo.com

SeaJ, (edited ) to gaming in 13-Year-Old Becomes First Person to Beat Tetris on NES

Streamer Blue Scuti has surpassed artificial intelligence by becoming the first known human to crash Tetris

He’d still be the first human even if AI did it first…

SSUPII,

A TAS to game crash already existed.

Evkob,
@Evkob@lemmy.ca avatar

100% just slipped that in there for SEO purposes due to the current trendiness of “AI”.

jarfil,

a feat previously only accomplished by AI.

AI did it first, the human came second, so he didn’t surpass AI at anything.

Kolanaki,
@Kolanaki@yiffit.net avatar

Surpassed all other human attempts at beating it.

jarfil,

Yes, and still this part is false:

has surpassed artificial intelligence

admin, to gaming in 13-Year-Old Becomes First Person to Beat Tetris on NES
@admin@beehaw.org avatar

Found a very good video about this.

mkwarman,

Great video, thanks for sharing!

ArgillaSilmeria,

That’s impressive seeing how the game breaks and how speedrunners managed to reach it.

numberz, to gaming in 13-Year-Old Becomes First Person to Beat Tetris on NES
@numberz@mastodon.social avatar

I love this game. I didn't think you could actually "beat" it.

Cethin,

Well, you really can’t in a traditional sense. This isn’t a victory screen, it’s a kill screen. He got so far into the game that it crashed and you can’t continue. There are still further goals that could potentially be reached by avoiding the crash.

Telorand, to gaming in 13-Year-Old Becomes First Person to Beat Tetris on NES

I just learned about some of the manual techniques pro players have come up with to play Tetris at a high level. It’s not my thing, but along with speedrunning, the level of community-driven ingenuity is inspiring.

alternative_factor, to technology in Threads Has Lost More Than 80% of Its Daily Active Users
@alternative_factor@kbin.social avatar

I signed up to it from instagram to track my favorite epidemiologists and h5n1 only to discover it has no hashtag system to look up trending topics and most of my favorite epidemiologists aren't even on it. So it's worthless for even following COVID, avian flu, and probably every other news topic.

bl4kers,
@bl4kers@beehaw.org avatar

I thought it was explicitly anti-news and politics?

alternative_factor,
@alternative_factor@kbin.social avatar

Ah well that would be why it's completely pointless to me. Since all science is politicized now, I guess there is no reason to use it.

luciole, to technology in Google Paid How Much to Be the Default Search Engine?
@luciole@beehaw.org avatar

Yet, in a redacted copy of an internal email chain released on Friday, Jim Kolotouros, the vice president of Android Platform Partnerships, wrote: “Chrome exists to serve Google search, and if it cannot do that because it is regulated to be set by the user, the value of users using Chrome goes to almost zero (for me).”

So Chrome’s whole point is bringing users to Google Search… and Google Search’s whole point is Google Ads. I’m glad I use Firefox.

PoisonedPrisonPanda,

Cut the snake by the head.

Problem solved.

In the last update of Firefox, Google was set back as the default search engine. Wonder if such a rollout costs extra?

Pantherina,

Was it anything else before?

PoisonedPrisonPanda,

I don’t know if the last update was a major one, but for at least several updates before it my engine was not changed.

Pantherina,

I have no damn idea as I see Firefox as a Platform.

github.com/trytomakeyouprivate/Arkenfox-softening

abhibeckert, (edited )

Dunno about “the last update” or the current state in each region, but as far as I know the default search engine in Firefox has varied over the years and has always depended on what country you’re in.

Baidu, Yandex and Yahoo are / have been the default in some countries. They made Bing the default for “1%” of users in a bunch of major countries recently to test the waters (and didn’t take it further than that).

Google blocks traffic from Chinese IP addresses as a protest against censorship there, so nobody has Google as the default in that country.

sparky,

And what does that make Android’s whole point? 😉

skip0110, to technology in Google Paid How Much to Be the Default Search Engine?
@skip0110@lemm.ee avatar

5.25 billion smartphone users, so they are paying about $5 per user. If you switch the default from Google, you are taking $5 from them!
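The per-user figure checks out with quick arithmetic (the $26.3 billion total is the amount reported in the article; the 5.25 billion smartphone users is the commenter's figure):

```python
total_paid = 26.3e9        # $26.3 billion paid by Google (from the article)
smartphone_users = 5.25e9  # commenter's estimate of smartphone users worldwide

per_user = total_paid / smartphone_users
print(f"${per_user:.2f} per user")  # $5.01 per user
```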

tesseract,

TBH, 26.3 billion dollars is just a drop in the bucket for Google. That bucket is of course filled with the money they got through industrial-scale spying, cross-site tracking, denial of control, forced ads, destruction of competition, and countless other dirty tricks they play on regular netizens.

abhibeckert,

They made a $40b profit last year. More than half their profits is a “drop in the bucket”?!

gk99,

I actually use Bing so that I get Microsoft Rewards points, meaning I gain money by not using Google.

But I understand privacy homies going DuckDuckGo or something else.

Kingsilva, to technology in Google Paid How Much to Be the Default Search Engine?

They played the game of thrones well

scroll_responsibly, to technology in Google Paid How Much to Be the Default Search Engine?
@scroll_responsibly@lemmy.sdf.org avatar

For anyone who doesn’t click the link, Google paid $26.3 billion.

raoul, to technology in Google Paid How Much to Be the Default Search Engine?

The report, shared with The Register, estimates that Google’s payout accounts for 14% to 16% of Apple’s annual operating profits [in 2021].

What?!? That’s huge

lvxferre, to technology in So Far, AI Is a Money Pit That Isn't Paying Off
@lvxferre@lemmy.ml avatar

Okay… let’s call wine “wine” and bread “bread”: the acronym “AI” is mostly an advertisement stunt.

This is not artificial intelligence; and even if it was, “AI” is used for a bag of a thousand cats - game mob pathfinding, chess engines, swarm heuristic methods, and so on.

What the article is talking about is far more specific, it’s what the industry calls "machine learning"¹.

So the text is saying that machine learning is costly. No surprise - it’s a relatively new tech, and even the state of the art is still damn coarse². Over time those technologies will get further refined, under different models; the cost of operation is bound to drop over time. Microsoft and the likes aren’t playing the short game, they’re looking for long-term return on investment.

  1. I’d go a step further and claim here that “model-based generation” is more accurate. But that’s me.
  2. LLMs are a good example of that; GPT-4 has ~2×10¹² parameters. If you equate each to a neuron (kind of a sloppy comparison, but whatever), it’s more than an order of magnitude larger than the ~1×10¹¹ neurons in a human brain. It’s basically brute force.

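The order-of-magnitude claim in footnote 2 is quick to sanity-check (the ~2×10¹² and ~1×10¹¹ figures are the rough estimates from the comment, not official numbers):

```python
import math

# Rough order-of-magnitude figures from the comment above (estimates).
gpt4_params = 2e12    # ~2 * 10^12 parameters
brain_neurons = 1e11  # ~1 * 10^11 neurons in a human brain

ratio = gpt4_params / brain_neurons
print(f"ratio: {ratio:.0f}x")                           # 20x
print(f"orders of magnitude: {math.log10(ratio):.1f}")  # 1.3
```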
interolivary,
@interolivary@beehaw.org avatar

The comparison of GPT parameters to neurons really is kinda sloppy, since they’re not at all comparable. To start with, “parameters” encompasses both weights (i.e. the “importance” of a connection between any two neurons) and biases (sort of the starting value of an individual neuron, which then biases the activation function), so it doesn’t tell you anything about the number of neurons. Secondly, biological neurons have way more dynamic behavior than what current “static” NNs like GPT use, so it wouldn’t really be surprising if you needed many more of them to mimic the behavior of meatbag neurons. Also, LLM architecture is incredibly weird, so the whole concept of neurons isn’t as relevant as it is in more traditional networks (although they do have neurons in their layers).
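To make the weights-plus-biases point concrete, here's a minimal sketch (plain Python, no framework) of the parameter count of a single fully-connected layer - the same number of output neurons can sit behind very different parameter counts depending on the fan-in:

```python
def dense_layer_params(n_in: int, n_out: int) -> int:
    """Parameters of one fully-connected layer: one weight per
    input-output connection plus one bias per output neuron."""
    weights = n_in * n_out
    biases = n_out
    return weights + biases

# Two layers with the same 100 output neurons but very different
# parameter counts, purely because of the fan-in:
print(dense_layer_params(10, 100))    # 1100 parameters
print(dense_layer_params(1000, 100))  # 100100 parameters
```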

lvxferre,
@lvxferre@lemmy.ml avatar

Another sloppiness that I didn’t mention is that a lot of human neurons are there for things that have nothing to do with either reasoning or language - making your heart beat, transmitting pain, and so on. However I think that the comparison is still useful in this context - it shows how big those LLMs are, even in comparison with a system created out of messy natural selection. The process behind the LLMs seems inefficient.

interolivary,
@interolivary@beehaw.org avatar

I wouldn’t discount natural selection as messy. The reason why LLMs are as inefficient as they are in comparison to their complexity is exactly because they were designed by us meatbags; evolutionary processes can result in some astonishingly efficient solutions, although by no means “perfect”. I’ve done research in evolutionary computation and while it does have its problems – results can be unpredictable, it’s ridiculously hard to design a good fitness function, designing a “digital DNA” that mimics the best parts of actual DNA is nontrivial to say the least etc etc – I think it might be at least part of the solution to building, or rather growing, better neural networks / AI architectures.

lvxferre,
@lvxferre@lemmy.ml avatar

It’s less about “discounting” it and more about acknowledging that the human brain is not so efficient as people might think. As such, LLMs using an order of magnitude more parameters than the number of cells in a brain hints that LLMs are far less efficient than language models could be.

I’m aware that evolutionary algorithms can yield useful results.

interolivary,
@interolivary@beehaw.org avatar

But the point is that the human brain actually is remarkably efficient for what it is, and that you’re still confusing parameter count with neuron count. The parameter count is essentially the number of connections between neurons plus the count of neurons in a network.

If I recall correctly the average human brain has something like 80 billion neurons, and each neuron can have anywhere from 1 000 to 10 000 connections. This means that in neural net technology terms, we meatbags have brains with trillions of parameters. I just meant that it wouldn’t be surprising if an “artificial brain” needed more neurons to do (a part of) the same thing as our brains do, since its neurons are vastly simpler.
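The back-of-the-envelope above works out like this (80 billion neurons and 1 000–10 000 connections per neuron are the commenter's recalled figures, treated here as rough bounds):

```python
neurons = 80e9                       # ~80 billion neurons (rough figure)
conn_low, conn_high = 1_000, 10_000  # synapses per neuron (rough range)

# In NN terms: roughly one "weight" per connection plus one "bias" per neuron.
params_low = neurons * conn_low + neurons
params_high = neurons * conn_high + neurons

print(f"{params_low:.1e} to {params_high:.1e}")  # 8.0e+13 to 8.0e+14, i.e. tens to hundreds of trillions
```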

fushuan,

But ML is being used in the industry in tons of places, and it’s definitely cost effective. There are simple models that take the input of machinery sensors and detect when something is faulty or needs repairing - not just malfunctioning parts but worn-out parts too. It’s used heavily in image processing; TikTok is used by a lot of people, and the silly AR thingies use image recognition and tracking in real time through your phone. I’m not saying that these features create revenue directly, but they do go viral and attract users, so yeah. Image processing is also used in almost any supermarket to control the number of people in the store; at least since COVID I see a dynamically updated counter in every supermarket I visit.

It is also used in time estimations, how much traffic influences a trip is not manually set at all, it gets updated dynamically with real time data, through trained models.

It is also used in language models, for real usages like translation, recommendation engines (search engines, store product recommendation…).

The article is talking about generative models, more specifically, text prediction engines (ChatGPT, Copilot). ChatGPT is a chatbot, and I don’t see a good way to monetise it while keeping it free to use, and Copilot is a silly idea to me as a programmer since it feels very dangerous and not very practical. And again, not something I would pay for, so meh.

lvxferre, (edited )
@lvxferre@lemmy.ml avatar

But (+> contradiction) ML is being used in the industry in tons of places […] store product recommendation…).

By context it’s rather clear which type of machine learning I’m talking about, given the OP. I’m not talking about the simple models that you’re talking about and that, as you said, already found economically viable applications.

Past that “it’s generative models, not machine learning” is on the same level as “it’s a cat, not a mammal”. One is a subset of the other, and by calling it “machine learning” my point was to highlight that it is not a “toothed and furry chicken” as the term AI implies.

The article is talking about generative models

I’m aware, as footnote #1 shows.

fushuan,

By context it’s rather clear which type of machine learning I’m talking about

Eh, it was to you and me, but we are not in a specialised community. This is a general one about technology, and since people tend to misunderstand stuff I prefer to specify. I get that you then wrote footnote #1, but why write statements like this one:

So the text is saying that machine learning is costly. No surprise - it’s a relatively new tech, and even the state of art is still damn coarse²

I know which branch of ML you are talking about, but in written form on a public forum that people might use as a reference, I’d prefer to be more specific. Yeah, you then mention LLMs as an example, but the new ones are basically those; there are several branches with plenty of maturity.

“it’s generative models, not machine learning”

IDK why you are quoting me on that, I never said that. I’d just want people to specify more. I only mentioned several branches of machine learning, and generative models are one of them.

Also, what’s that about contradiction? In the first paragraph I was mentioning the machinery industry, since I talk about machines. Then I talked about language models and some of their applications; I don’t get why that contradicts anything. Store product recommendations are done with supervised ML models that track your clicks, views, and past purchases to generate an interest model about you, combined with the purchases of people with similar tastes to generate a recommendation list. This is ML too.
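The interest-model-plus-similar-users recipe described here is classic collaborative filtering; a toy sketch of the idea (users, items, and the similarity measure are all invented for illustration):

```python
from math import sqrt

# Toy interaction data: user -> set of purchased items (hypothetical).
purchases = {
    "alice": {"camera", "tripod", "lens"},
    "bob":   {"camera", "tripod", "drone"},
    "carol": {"novel", "bookmark"},
}

def similarity(a: set, b: set) -> float:
    """Cosine similarity over binary purchase vectors."""
    if not a or not b:
        return 0.0
    return len(a & b) / sqrt(len(a) * len(b))

def recommend(user: str) -> list:
    """Rank items bought by similar users that `user` hasn't bought yet."""
    mine = purchases[user]
    scores = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        for item in theirs - mine:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # "drone" ranks first, via the overlap with bob
```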

Dunno, you read as quite angry, misquoting me and all.

ptz, to technology in So Far, AI Is a Money Pit That Isn't Paying Off
@ptz@dubvee.org avatar

Good. Maybe the hype will finally die down soon and “AI” won’t be shoved into every nook, cranny, and Notepad app anymore.

scrubbles,
@scrubbles@poptalk.scrubbles.tech avatar

I’ll say AI is a bit more promising, but all of this just really reminds me of the blockchain craze in 2017. Every single business wanted to add blockchain because the suits upstairs just saw it as free money. Technical people down below were like “yeah cool, but there’s no place for it”. At least AI can solve some problems, but business people again just think that it’s going to make them limitless money.

tal,
@tal@lemmy.today avatar

Nah, blockchain has extremely limited applications.

Generative AI legitimately does have quite a number of areas that it can be made use of. That doesn’t mean that it can’t be oversold for a given application or technical challenges be disregarded, but it’s not super-niche.

If you wanted to compare it to something that had a lot of buzz at one point, I’d use XML instead. XML does get used in a lot of areas, and it’s definitely not niche, but I remember when it was being heavily used in marketing as a sort of magic bullet for application data interchange some years back, and it’s not that.

bioemerl,

Technical people down below were like “yeah cool, but there’s no place for it”

I think you might underestimate entertainment and creation. Right now I can imagine some character or scenario in my head, generate a little avatar with Stable Diffusion, then render it onto a live chat that (mostly) works.

I've paid like 2k for a computer that enables this. It's made money from me at least.

Turkey_Titty_city,

Before that it was 'big data'. Remember that?

Every 5 or so years the media needs some new tech to hype up to get people paranoid.

Lanthanae,
@Lanthanae@lemmy.blahaj.zone avatar

“big data” runs the content recommendation algorithms of all the sites people use, which in turn have a massive influence on the world. It’s crazy to think “big data” was just a buzzword when it’s a tangible thing that affects you day-to-day.

LLM powered tools are a heavy part of my daily workflow at this point, and have objectively increased my productive output.

This is like the exactly opposite of Bitcoin / NFTs. Crypto was something that made a lot of money but was useless. AI is something that is insanely useful but seems not to be making a lot of money. I do not understand what parallels people are finding between them.

privsecfoss,
@privsecfoss@feddit.dk avatar

Nice try, Microsoft

deegeese,

Dude, we do big data every day at work. We just call it data engineering because why call it big if everything is big?

Franzia,

That’s what she said

Lanthanae, (edited )
@Lanthanae@lemmy.blahaj.zone avatar

AI ≠ Micros*ft

psudo,

The hype cycle. And just like then, even a reasonable read on the supposed benefits is going to leave most people very disappointed when it happens. And I’m glad you’re one of the people that have found a good use for LLMs, but you’re in the vocal minority, as far as I can tell.

Lanthanae,
@Lanthanae@lemmy.blahaj.zone avatar

That’s a weird argument. Most technological advancements are directly beneficial to the work of only a minority of people.

Nobody declares that it’s worthless to research and develop better CAD tools because engineers and product designers are a “vocal minority.” Software development and marketing are two fields where LLMs have already seen massive worth, and even if their users are a vocal minority, they’re not a negligible one.

psudo,

I don’t see how it’s weird to say that things are failing to live up to their promises and helping a mere fraction of the people claimed. And I can’t speak to marketing, but I can to software development, and it really is not having the impact claimed, at least in my professional network.

kittenroar, to technology in So Far, AI Is a Money Pit That Isn't Paying Off

Oh, no! Billionaires with short term, selfish thinking might lose money! What a tragedy.

explodicle, to technology in So Far, AI Is a Money Pit That Isn't Paying Off

Rather than debate every new technology for energy worthiness on a case-by-case basis up front, it would be more productive to direct our efforts towards better policy that internalizes the cost of pollution.

“These new-fangled ‘lasers’ don’t even do anything useful for the energy they consume!”

ryan, to technology in So Far, AI Is a Money Pit That Isn't Paying Off

AI is absolutely taking off. LLMs are taking over various components of frontline support (service desks, tier 1 support). They're integrated into various systems using LangChain-style pipelines to pull your data, knowledge articles, etc, and then respond to you based on that data.

AI is primarily a replacement for workers, like how McDonalds self service ordering kiosks are a replacement for cashiers. Cheaper and more scalable, cutting out more and more entry level (and outsourced) work. But unlike the kiosks, you won't even see that the "Amazon tech support" you were kicked over to is an LLM instead of a person. You won't hear that the frontline support tech you called for a product is actually an AI and text to speech model.

There were jokes about the whole Wendy's drive thru workers being replaced by AI, but I've seen this stuff used live. I've seen how flawlessly they've tuned the AI to respond to someone who makes a mistake while speaking and corrects themself ("I'm going to the Sacramento office -- sorry, no, the Folsom office") or bundles various requests together ("oh while you're getting me a visitor badge can you also book a visitor cube for me?"). I've even seen crazy stuff like "I'm supposed to meet with Mary while I'm there, can you give me her phone number?" and the LLM routes through the phone directory, pulls up the most likely Marys given the caller's department and the location the user is visiting via prior context, and asks for more information - "I see two Marys here, Mary X who works in Department A and Mary Y who works in Department B, are you talking about either of them?"
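The "two Marys" disambiguation described above boils down to a lookup tool that either answers directly or asks a clarifying question; a hypothetical sketch of what such a tool might look like (the directory, names, and message wording are all invented here - in a real deployment the LLM would call something like this as a tool and phrase the reply itself):

```python
# Hypothetical phone directory the assistant's lookup tool can query.
directory = [
    {"name": "Mary X", "department": "Department A", "phone": "555-0101"},
    {"name": "Mary Y", "department": "Department B", "phone": "555-0102"},
    {"name": "John Z", "department": "Department A", "phone": "555-0103"},
]

def lookup(first_name: str) -> str:
    """Return a phone number when the name is unambiguous,
    otherwise a clarifying question listing the candidates."""
    matches = [p for p in directory if p["name"].split()[0] == first_name]
    if len(matches) == 1:
        return matches[0]["phone"]
    if not matches:
        return f"I couldn't find anyone named {first_name}."
    options = " and ".join(
        f'{p["name"]} who works in {p["department"]}' for p in matches
    )
    return f"I see {len(matches)} matches: {options} - are you talking about either of them?"

print(lookup("Mary"))  # ambiguous: asks which Mary
print(lookup("John"))  # unambiguous: prints 555-0103
```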

It's already here and it's as invisible as possible, and that's the end goal.

monobot,

This is just what is visible to users/customers, which is just the tip of the iceberg.

The real use of AI is in every industry, and the best use cases are for jobs that were impossible before.

webghost0101,

That’s subjective. While being able to do stuff we couldn’t before is amazing, I think the “best” use case is exactly the jobs that people do right now.

Cheap, democratized labor accessible to everyone with a phone is the dream; it could finally deliver on the early-20th-century promise that technology will bring more leisure to all.

XPost3000,

This article isn’t saying that AI is a fad or otherwise not taking off, it absolutely is, but it’s also absolutely taking too much money to run

And if these AI companies aren’t capable of turning a profit on this technology and consumers aren’t able to run these technologies themselves, then these technologies may very well just fall out of the public stage and back into computer science research papers, despite how versatile the tech may be

What good is a genie if you can’t get the lamp?

abhibeckert,

it’s also absolutely taking too much money to run

Well, maybe they should raise their prices then?

If they raise the prices too far though, I’ll just switch to running Facebook’s open source llama model on my workstation. I’ve tested and it works with acceptable quality and performance, only thing that’s missing is tight integration with other tools I use. That could (and I expect will soon) be fixed.

XPost3000,

Exactly, nobody’s gonna wanna pay $20-$80 per month if they can just run an open source version for free

Classic proprietary L, ironically enough for "Open"AI
