soltakss Posted March 31

10 hours ago, svensson said:
Where I'm at right now with AI is this, "It's awfully pretty, but is it 'art'?"

For me, personally, yes it is art.

Simon Phipp - Caldmore Chameleon - Wallowing in my elitism since 1982. Many Systems, One Family. Just a fanboy. www.soltakss.com/index.html
Jonstown Compendium author. Find my contributions here.
PhilHibbs Posted April 14 (edited)

On 3/30/2024 at 7:39 PM, g33k said:
But here, have this: https://www.giantfreakinrobot.com/ent/ai-joker-copyright-infringement.html
"AI’s use of copywritten material..."

"Copyrighted" or just "copyright", not "copywritten". The law is about the right to copy, and covers images as well as writing. Nobody claiming to know about copyright should make this mistake.

On 3/30/2024 at 9:56 PM, Jeff said:
...from the perspective of copyright law we have loads of copy-written works that end up being cut up and reassembled...

Oops, sorry-not-sorry!

Edited April 15 by PhilHibbs
PhilHibbs Posted April 14

On 3/30/2024 at 11:48 PM, AndreJarosch said:
Why should a company, like Chaosium, let AI create Cthulhuoid or Gloranthan pictures, which anyone else also can use?

That's a damn good point! That said, I've played around with AI art, but I've never used the output as generated. Anything that I've gone on to use (personal use only, like an illustration for a character sheet) has gone through extensive modification and mashing up with other material. For example, my new avatar has AI-generated components, but all heavily edited, and the final result would, according to my understanding, be my copyright.

I would not expect most commercial products using AI to contain virgin AI output. For a start, it just isn't good enough yet. The output is generic and soulless. Anyone familiar with the technology can easily identify AI output, at least when it comes to compositions. Individual elements like faces can be convincing, but not scenes. This may just be a matter of time, but if AI is allowed to dominate the art market and starts consuming its own output, then we will see rapid degradation unless something new comes along that is closer to actual intelligence.
Nick Brooke Posted April 16

US law uses four tests to determine if something is “fair use” of an existing copyrighted work: tech-bro evangelists routinely ignore tests 1 & 4, because building plagiarism engines trained on copyrighted works and selling their output destroys the market for human-created, copyrightable art.

In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include:
1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
2. the nature of the copyrighted work;
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
4. the effect of the use upon the potential market for or value of the copyrighted work.

Community Ambassador - Jonstown Compendium, Chaosium, Inc. Email: nick.brooke@chaosium.com for community content queries
Jonstown Compendium ⧖ Facebook Ф Twitter † old website
PhilHibbs Posted April 16

4 hours ago, Nick Brooke said:
4. the effect of the use upon the potential market for or value of the copyrighted work.

This is the compelling one for me. You can argue that compiling a searchable database from a large number of copyrighted works is substantially transformative and does not threaten the works' value. But building an image or text generation engine, yeah, that threatens the value.

On the other hand, I do have some sympathy for the "but that's just what people do when they learn" argument, although point 1 (commercial nature vs educational) undermines it. I don't think that AI is there yet in that respect anyway; it isn't learning in a way comparable to how people do. It's very much faking it.
Raleel Posted April 16

I don't have enough legal knowledge to jump into that debate, but I will say that when you see something that is clearly derived from a known image (Henry Cavill from the Witcher as Elric), you are seeing the result of a lack of training data and of appropriately tagged training data. Essentially, if you showed someone a picture of an orange cat, called it a cat, and they didn't know other cats existed, they would think all cats are orange.

I can also say that AI doesn't "cut it up" any more than a novice artist cuts up a piece of art, copies it, and pastes it into their own work. If that were true, AI hands would consistently have five fingers, rather than the models being legendarily bad at drawing them. The model is clearly doing something, but many models don't have the knowledge that hands have five fingers, so they make something up.

I do think the notion that AI cannot create transformative works because it is not covered by law is a fascinating argument. It is somewhat similar in my mind to the river in New Zealand that was given legal rights and now has representatives, so it can go to court and sue polluters. If AI were operating wholly independently, something like that would almost certainly be required. I think currently, though, AI is a tool driven by human textual input ("draw me Elric"), and perhaps the "person required" bar would then be met by the prompt engineer, essentially making the AI a very fancy paintbrush.
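[Editor's note: a minimal toy sketch of the "appropriately tagged training data" point above. This is not how any real diffusion or image model is implemented; every caption, colour value, and function name here is invented purely for illustration. The idea it shows is that a caption-conditioned generator keeps statistics learned from (image, tag) pairs rather than storing the images themselves, so gaps or biases in the tags become gaps or biases in what it can draw.]

```python
from collections import defaultdict
import random

# Toy "training set": captions paired with an average RGB colour standing in
# for a whole image. All values are made up for this sketch.
training_pairs = [
    ("orange cat", (230, 140, 40)),
    ("orange cat", (225, 150, 50)),
    ("black dog", (20, 20, 25)),
]

# "Training": accumulate per-tag colour statistics, then forget the originals.
stats = defaultdict(list)
for caption, colour in training_pairs:
    for tag in caption.split():
        stats[tag].append(colour)

def generate(prompt):
    """Blend the learned statistics for the prompt's tags, plus a little noise."""
    samples = [c for tag in prompt.split() for c in stats.get(tag, [])]
    if not samples:
        return None  # no tagged training data at all for this prompt
    avg = [sum(channel) / len(samples) for channel in zip(*samples)]
    return tuple(round(ch + random.uniform(-10, 10)) for ch in avg)

# Every "cat" this toy model ever saw was orange, so its cats come out orange:
print(generate("cat"))     # something near (228, 145, 45)
print(generate("dragon"))  # None: nothing tagged "dragon" to generalise from
```

Nothing in `stats` is a copy of any single training image, which is the sense in which such a model is blending learned associations rather than cutting and pasting; and because every example tagged "cat" was orange, its cats come out orange, just as in the analogy above.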