LLMs hallucinate all the time; the hallucination is the feature. Depending on how you design the neural network, you can get an AI that doesn’t hallucinate. LLMs have to hallucinate, because they’re mimicking human speech patterns and predicting one of many possible responses.
A model that tries to predict the locations of people likely wouldn’t work like that.
“I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”
“Because we usually carried out the attacks with dumb bombs, and that meant literally dropping the whole house on its occupants. But even if an attack is averted, you don’t care – you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”
Are we still supposed to believe that the pursuit of AI development is for the good of Humanity?
Fuck you Google for opening Nimbus to the IDF, via a contract that contains a clause saying that you can’t break it whatever the reason. Fucking moronic disgrace to humanity all you bunch
Responding to the publication of the testimonies in +972 and Local Call, the IDF said in a statement that its operations were carried out in accordance with the rules of proportionality under international law. It said dumb bombs are “standard weaponry” that are used by IDF pilots in a manner that ensures “a high level of precision”.
Another case where AI is used as a slick marketing term for a black box. A box in which humans selected indiscriminate bombing and genocide. Sure, there is new technology involved, but at the end of the day it is just military-industry marketing used to justify humans mass murdering other humans.
You really want to do something, but it feels evil, and you don’t want to be evil, so you slap some pseudoscience on it and relax. It’s done for Reasons now.