eveninghere,

Did we ever agree on AI training with our Reddit comments, btw?

PeterBronez,

@along_the_road

“These were mostly family photos uploaded to personal and parenting blogs […] as well as stills from YouTube videos”

So… people posted photos of their kids on public websites, Common Crawl scraped them, LAION-5B cleaned it up for training, and now there are models. This doesn’t seem evil to me… digital commons working as intended.

If anyone is surprised, the fault lies with the UX around “private URL” sharing, not devs using Common Crawl

PeterBronez,

@along_the_road what’s the alternative scenario here?

You could push to remove some public information from Common Crawl. How do you identify what public data is unintentionally public?
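
For what it’s worth, opting out of the crawl itself is already possible: Common Crawl’s crawler identifies itself as CCBot and respects robots.txt, so a site owner who doesn’t want their pages in the archive only needs two lines:

```
# robots.txt — ask Common Crawl's crawler (CCBot) to skip the whole site
User-agent: CCBot
Disallow: /
```

Of course that only helps going forward, and only for people who know the option exists, which loops back to the “unintentionally public” problem.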

Assume we solve that problem. Now the open datasets and models developed on them are weaker. They’re specifically weaker at identifying children as things that exist in the world. Do we want that? What if it reduces the performance of cars’ emergency braking systems? CSAM filters? Family photo organization?

kent_eh,

what’s the alternative scenario here?

Parents could not upload pictures of their kids everywhere in a vain attempt to attract attention to themselves?

That would be good.

PeterBronez,

@kent_eh exactly.

The alternative is “if you want your content to be private, share it privately.”

If you transmit your content to anyone who sends you a GET request, you lose control of that content. The recipient has the bits.

It would be nice to extend the core technology to better reflect your intent. Perhaps embedding license metadata in the images, the way LICENSE.txt travels with source code. That’s still quite weak, as we saw with Do Not Track.
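
A crude sketch of what embedding that intent could look like, using only the Python standard library. PNG already supports ancillary tEXt chunks, so a license string can travel inside the file itself; the “License” keyword here is a hypothetical convention, and, as with Do Not Track, nothing obliges a scraper to honour it:

```python
import struct
import zlib


def add_png_license(png_bytes: bytes, license_text: str) -> bytes:
    """Insert a tEXt chunk carrying a license string right after IHDR.

    PNG chunks are: 4-byte big-endian length, 4-byte type, data,
    then a CRC32 over the type and data.
    """
    signature = b"\x89PNG\r\n\x1a\n"
    assert png_bytes[:8] == signature, "not a PNG file"

    # tEXt data is keyword, NUL separator, then Latin-1 text.
    data = b"License\x00" + license_text.encode("latin-1")
    chunk = struct.pack(">I", len(data)) + b"tEXt" + data
    chunk += struct.pack(">I", zlib.crc32(b"tEXt" + data) & 0xFFFFFFFF)

    # IHDR is always first: 8-byte signature + 4 length + 4 type
    # + 13 data bytes + 4 CRC = offset 33.
    ihdr_end = 8 + 4 + 4 + 13 + 4
    return png_bytes[:ihdr_end] + chunk + png_bytes[ihdr_end:]
```

Most image tools would preserve a chunk like this, but a crawler that ignores it loses nothing, which is exactly the weakness of the approach.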

wagoner,

Doesn’t “digital commons” mean common ownership? Family photos on a personal blog, inherently owned by the photographer, are surely not commonly owned. I see this as problematic.

Gamers_Mate,

I hope this causes a class action against these companies.

Fapper_McFapper,

I’m so tired of AI.

criitz,

Too bad, it’s here forever…

remotelove,

It’s been around for a while. It’s the fluff and the parlor tricks that need to die. AI has never been magic, and it’s still a long way from being actually intelligent.

frog,

The other thing that needs to die is hoovering up all data to train AIs without the consent of, and compensation to, the owners of the data. Most of the more frivolous uses of AI would disappear at that point, because they would be non-viable financially.

Even_Adder, (edited )

Cory Doctorow wrote a good article about this a little while back.

frog,

I remember reading that a little while back. I definitely agree that the solution isn’t extending copyright, but extending labour laws on a sector-wide basis. Because this is the ultimate problem with AI: the economic benefits are only going to a small handful, while everybody else loses out because of increased financial and employment insecurity.

So the question that comes to mind is exactly how, on a practical level, it would work to make sure that when a company scrapes data, trains an AI, and then makes billions of dollars, the thousands or millions of people who created the data all get a cut after the fact. Because particularly in the creative sector, a lot of people are freelancers who don’t have a specific employer they can go after. From a purely practical perspective, paying artists before the data is used makes sure all those freelancers get paid. Waiting until the company makes a profit, taxing it out of them, and then distributing it to artists doesn’t seem practical to me.

Even_Adder, (edited )

The point is that it’s not an activity you can force someone to pay for. Everyone who can run models on their own can benefit, and that group can expand with time as research makes it more feasible on more devices. But that can never come to pass if we destroy the rights that allow us to make observations and analyze data.

Counting words and measuring pixels are not activities that you should need permission to perform, with or without a computer, even if the person whose words or pixels you’re counting doesn’t want you to. You should be able to look as hard as you want at the pixels in Kate Middleton’s family photos, or track the rise and fall of the Oxford comma, and you shouldn’t need anyone’s permission to do so.

Creating an individual bargainable copyright over training will not improve the material conditions of artists’ lives – all it will do is change the relative shares of the value we create, shifting some of that value from tech companies that hate us and want us to starve to entertainment companies that hate us and want us to starve.

frog, (edited )

Creating same-y pieces with AI will not improve the material conditions of artists’ lives, either. All that does is drag everyone down in a race to the bottom on who can churn out the most dreck the most quickly. “If we advance the technology enough, everybody can have it on their device and make as much AI-generated crap as they like” does not secure stable futures for artists.

Even_Adder,

Creating same-y pieces with AI will not improve the material conditions of artists’ lives, either. All that does is drag everyone down in a race to the bottom on who can churn out the most dreck the most quickly. “If we advance the technology enough, everybody can have it on their device and make as much AI-generated crap as they like” does not secure stable futures for artists.

If you’re worried about labor issues, use labor law to improve your conditions. Don’t deny regular people access to a competitive, corporate-independent tool for creativity, education, entertainment, and social mobility for your monetary gain.

Art ain’t just a good; it’s self-expression, communication, inspiration, joy – rights that belong to every human being. The kind of people wanting to relegate such a significant part of the human experience to a domain where only the few can benefit aren’t the kind of people that want things to get better. They want to become the proverbial boot. The more people can participate in these conversations, the more we can all learn.

I understand that you are passionate about this topic, and that you have strong opinions. However, insults, and derisive language aren’t helping this discussion. They only create hostility and resentment, and undermine your credibility. If you’re interested, we can continue our discussion in good faith, but if your next comment is like this one, I won’t be replying.

frog,

I did actually specify that I think the solution is extending labour laws to cover the entire sector, although it seems that you accidentally missed that in your enthusiasm to insist that the solution is having AI on more devices. However, so far I haven’t seen any practical solutions as to how to extend labour laws to protect freelancers who will lose business to AI but don’t have a specific employer that the labour laws will apply to. Retroactively assigning profits from AI to freelancers who have lost out during the process doesn’t seem practical.

Even_Adder,

So the question that comes to mind is exactly how, on a practical level, it would work to make sure that when a company scrapes data, trains an AI, and then makes billions of dollars, the thousands or millions of people who created the data all get a cut after the fact. Because particularly in the creative sector, a lot of people are freelancers who don’t have a specific employer they can go after. From a purely practical perspective, paying artists before the data is used makes sure all those freelancers get paid. Waiting until the company makes a profit, taxing it out of them, and then distributing it to artists doesn’t seem practical to me.

This isn’t labor law.

frog, (edited )

Labour law alone, in terms of the conditions under which people are employed and how they are paid, does not protect freelancers from the scenario that you, and so many others, advocate for: a multitude of individuals all training their own AIs. No AI advocate has ever proposed a viable and practical solution for the large number of artists who aren’t directly employed by a company but are still exposed to all the downsides of unregulated AI.

The reality is that artists need to be paid for their work. That needs to happen at some point in the process. If AI companies (or individuals setting up their own customised AIs) don’t want to pay in advance to obtain the training data, then they’re going to have to pay from the profits generated by the AI. Continuing the status quo, where AIs can use artists’ labour without paying them at all is not an acceptable or viable long-term plan.

Even_Adder,

I don’t think they have to, the point is to fight against regression of public rights for the benefit of the few.

frog,

Destroying the rights of artists to the benefit of AI owners doesn’t achieve that goal. Outside of the extremely wealthy who can produce art for art’s sake, art is a form of skilled labour that is a livelihood for a great many people, particularly the forms of art that are most at risk from AI - graphic design, illustration, concept art, etc. Most of the people in these roles are freelancers who aren’t in salaried jobs that can be regulated with labour laws. They are typically commissioned to produce specific pieces of art. I really don’t think AI enthusiasts have any idea how rare stable, long-term jobs in art actually are. The vast majority of artists are freelancers: it’s essentially a gig-economy.

Changes to labour laws protect artists who are employees - which we absolutely should do, so that companies can’t simply employ artists, train AI on their work, then fire them all. That absolutely needs to happen. But that doesn’t protect freelancers from companies that say “we’ll buy a few pieces from that artist, then train an AI on their work so we never have to commission them again”. It is incredibly complex to redefine commissions as waged employment in such a way that the company can both use the work for AI training while the artist is ensured future employment. And then there’s the issue of the companies that say “we’ll just download their portfolio, then train an AI on the portfolio so we never have to pay them anything”. All of the AI companies in existence fall into this category at present - they are making billions on the backs of labour they have never paid for, and have no intention of ever paying for. There seems to be no rush to say that they were actually employing those millions of artists, who are now owed back-pay for years worth of labour and all the other rights that workers protected by labour laws should have.

Even_Adder,

I’m not fighting for the extremely wealthy, I’m fighting for the existence of competitive open source models. Something that can’t happen with what you’ve proposed. That would just hand corporations a monopoly on a public technology by making it prohibitively expensive for regular people to keep up with the megacorporations that already own vast troves of data and can afford to buy even more.

This article by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries does a good job of explaining what I’m talking about.

frog,

Taking artists’ work without consent or compensation goes against the spirit of open source, though, doesn’t it? The concept of open source relies upon the fact that everyone involved is knowingly and voluntarily contributing towards a project that is open for all to use. It has never, ever been the case that if someone doesn’t volunteer their contributions, their work should simply be appropriated for the project without their consent. Just look at open source software: that is created and maintained by volunteers, and others contribute to it voluntarily. It has never, ever been okay for an open source dev to simply grab whatever they want to use if the creator hasn’t explicitly released it under an applicable licence.

If the open source AI movement wants to be seen as anything but an enemy to artists, then it cannot just stomp on artists’ rights in exactly the same way the corporate AIs have. Open source AIs need to have a conversation about consent and informed participation in the project. If an artist chooses to release all their work under an open source licence, then of course open source AIs should be free to use it. But simply taking art without consent or compensation with the claim that it’s fine because the corporate AIs are doing it too is not a good look and goes against the spirit of what open source is. Destroying artists’ livelihoods while claiming they are saving them from someone else destroying their livelihoods will never inspire the kind of enthusiasm from artists that open source AI proponents weirdly feel entitled to.

This is ultimately my problem with the proponents of AI. The open source community is, largely, an amazing group of people whose work I really respect and admire. But genuine proponents of open source aren’t so entitled that they think anyone who doesn’t voluntarily agree to participate in their project should be compelled to do so; yet that very entitlement sits at the centre of the open source AI community. Open source AI proponents want to have all the data for free, just like the corporate AIs and their tech bro CEOs do, cloaking it in the words of open source while undermining everything that is amazing about open source. I really can’t understand why you don’t see that forcing artists to work for open source projects for free is just as unethical as corporations doing it, and the more AI proponents argue that it’s fine because it’s not evil when they do it, the more artists will see them as being just as evil as the corporations. You cannot force someone to volunteer.

Even_Adder, (edited )

Taking artists’ work without consent or compensation goes against the spirit of open source, though, doesn’t it?

It doesn’t. Making observations about others’ works is a well-established tool for researchers, reviewers, and people inventing new works, and it is a concept that works perfectly within the open source framework. That’s all these models are: an original analysis of the works in their training set in relation to one another. Because it’s a step one must necessarily take when doing anything, doing this doesn’t require anyone’s permission and is itself a right we all have.

frog,

When the purpose of gathering the data is to create a tool that destroys someone’s livelihood, training an AI is not merely “observation”. These AIs cannot exist without using content created by other people, and the spirit of open source doesn’t include appropriating content without consent - especially when it is used not for research or education, but to build a tool that will be used commercially, which open source models inevitably will be, given that their stated purpose is to compete with corporate models.

No argument you can make will convince me that what open source AI proponents are doing is any less unethical or exploitative than what the corporate ones are. Both feel entitled to artists’ labour in exchange for no compensation, and have absolutely no regard for the negative impacts of their projects. The only difference between CEO AI tech bros and open source AI tech bros is the level of wealth. The arrogant entitlement is just the same in both.

Even_Adder,

Giving all people a tool to help them more effectively communicate, express themselves, learn, and come together is something everyone should get behind.

I firmly believe in the public’s right to access and use information, while acknowledging artists should retain specific rights over their creations. I also accept that the rights they don’t retain have always enabled ethical self-expression and productive dialogue.

Imagine if copyright owners had the power to simply remove whatever wasn’t profitable for them from existence. We’d be hindering critical functions such as critique, investigation, reverse engineering, and even the simple cataloging of knowledge. In place of all that good, we’d have a world ideal for those with money, for tyrants, and for all who seek control, along with the undermining of the free exchange of ideas.

frog,

The problem is that undermining artists by dispersing open source AI to everyone, without a fundamental change in copyright law that removes power from the corporations as well as individual artists, and a fundamental change in labour law, wealth distribution, and literally everything else, just screws artists over. Proceeding with open source AI, without any other plans or even a realistic path to a complete change in our social and economic structure, is basically just saying “yeah, we’ll sort out the problems later, but right now we’re entitled to do whatever we want, and fuck everybody else”. And that is the mindset of the tech bros, the fossil fuel industry, and so, so many others.

AI should be regulated into oblivion until such a time as our social and economic structures can handle it, i.e., when all the power and wealth has been redistributed away from the 1% and evenly into the hands of everyone. Open source AI will not change the power that corporations hold. We know this because open source software hasn’t meaningfully changed the power they hold.

I’m also sick of the excuse that AI helps people express themselves, like artistic expression has always been behind some impenetrable wall, with some gatekeeper only allowing a chosen few access. Every single artist had to work incredibly hard to learn the skill. It’s not some innate talent that is gifted to a lucky few. It takes hard work and dedication, just like any other skill. Nothing has ever stopped anyone learning that except the willingness to put the effort in. I don’t think people who tried one doodle and gave up because it was hard are a justifiable reason to destroy workers’ livelihoods.

Even_Adder,

This isn’t undermining artists, it’s expanding access and knowledge, enabling individuals to take control of their own destinies. Open-source AI will empower artists, both existing ones and the newly active or returning artists who give this new medium a shot, by giving them new tools that will push the frontiers of self-expression and redefine creativity this decade.

100 years ago, photographers and filmmakers significantly disrupted the careers of most illustrators, storytellers, and theater companies of the time. Despite this, storytelling and image making exploded, entering a new golden age. Musicians panicked over the use of synthesizers in the 80s too, often refusing to work with anyone involved with them. As a result, there are fewer drummers today than in 1970, but out of that came hip hop and house. Suppressing that tool would have been a huge cultural loss. Generative art hasn’t found its Marley Marl or Frankie Knuckles yet, but they’re out there, and they’re going to do stuff that will blow our minds. Cutting edge tools and techniques have always propelled art and artists forward. Every advancement is a leap forward, leaving behind constraints and enabling more people to pursue their creative aspirations.

That reminds me of a presentation I saw a little while back.

If you want to fight against people’s right to freely communicate and express themselves, be my guest, but it’s not a fight you can win.

DdCno1,

It could be regulated into oblivion, to the point that any commercial use of it (and even non-commercial publication of AI generated material) becomes a massive legal liability, despite the fact that AI tools like Stable Diffusion cannot be taken away. It’s not entirely unlikely that some countries will try to do this in the future, especially places with strong privacy and IP laws as well as equally strong laws protecting workers. Germany and France come to mind, which together could push the EU to come down hard on large AI services in particular. This could make the recently adopted EU AI Act look harmless by comparison.

Catsrules,

AI will remember that.
