Chaosium should allow work that uses generative models for illustration in the BRP design challenge


On 3/30/2024 at 7:39 PM, g33k said:

"AI’s use of copywritten material..."

"Copyrighted" or just "copyright", not "copywritten". The law is about the right to copy, and covers images as well as writing. Nobody claiming to know about copyright should make this mistake.

On 3/30/2024 at 9:56 PM, Jeff said:

...from the perspective of copyright law we have loads of copy-written works that end up being cut up and reassembled...

Oops, sorry-not-sorry!

Edited by PhilHibbs

On 3/30/2024 at 11:48 PM, AndreJarosch said:

Why should a company, like Chaosium, let AI create Cthulhuoid or Gloranthan pictures, which anyone else also can use?

That's a damn good point! However, while I've played around with AI art, I've never used the output as generated. Anything I've gone on to use (personal use only, like an illustration for a character sheet) has gone through extensive modification and mashing up with other material. For example, my new avatar has AI-generated components, but all heavily edited, and the final result would, as I understand it, be my copyright.

I would not expect most commercial products using AI to contain virgin AI output. For a start, it just isn't good enough yet: the output is generic and soulless, and anyone familiar with the technology can easily identify it, at least when it comes to compositions. Individual elements like faces can be convincing, but not whole scenes. That may just be a matter of time, but if AI is allowed to dominate the art market and starts consuming its own output, we will see rapid degradation unless something new comes along that is closer to actual intelligence.


US law weighs four factors to determine whether something is a “fair use” of an existing copyrighted work. Tech-bro evangelists routinely ignore factors 1 and 4, because building plagiarism engines trained on copyrighted works and selling their output destroys the market for human-created, copyrightable art.

In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include:

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.

4 hours ago, Nick Brooke said:

4. the effect of the use upon the potential market for or value of the copyrighted work.

This is the compelling one for me. You can argue that compiling a searchable database from a large number of copyrighted works is substantially transformative and does not threaten those works' value. But building an image or text generation engine? Yeah, that threatens the value.

On the other hand, I do have some sympathy for the "but that's just what people do when they learn" argument, although factor 1 (commercial nature vs. educational) undermines it. I don't think AI is there yet in that respect anyway; it isn't learning in a way comparable to how people do. It's very much faking it.


I don't have enough legal knowledge to jump into that debate, but I will say that when you see something clearly derived from a known image (Henry Cavill from The Witcher as Elric), you are seeing the result of too little training data, and too little appropriately tagged training data in particular. Essentially, if you showed someone a picture of an orange cat, called it a cat, and they didn't know other cats existed, they would think all cats are orange.
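
To make that analogy a bit more concrete, here is a deliberately over-simplified toy sketch in Python (invented data, nothing like how a real image model is actually built) of how a skewed, thinly tagged training set skews what gets learned:

    from collections import Counter, defaultdict

    # Toy "model": it just counts which colour co-occurs with each label.
    # The data below is made up purely for illustration.
    training_data = [
        ("cat", "orange"),   # every example tagged "cat" happens to be orange
        ("cat", "orange"),
        ("cat", "orange"),
        ("dog", "brown"),
        ("dog", "black"),
    ]

    colour_counts = defaultdict(Counter)
    for label, colour in training_data:
        colour_counts[label][colour] += 1

    def most_likely_colour(label):
        # The colour most strongly associated with the label in training --
        # the only "knowledge" this toy has.
        return colour_counts[label].most_common(1)[0][0]

    print(most_likely_colour("cat"))  # prints "orange": as far as it knows, all cats are orange

Real systems are vastly more complicated, of course, but the failure mode has the same shape: if "Elric" only ever co-occurs with a handful of images in the training data, those images are effectively what "Elric" means to the model.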
 

I can also say that AI doesn't "cut it up" any more than a novice artist cuts up a piece of art and copies and pastes it into their own work. If that were true, AI hands would reliably have five fingers, rather than the models being legendarily bad at drawing them. The model is clearly doing something, but in many cases it doesn't encode the knowledge that hands have five fingers, so it makes something up.
 

I do find the notion that AI cannot create transformative works because it is not covered by law a fascinating argument. It is somewhat similar in my mind to the river in New Zealand that was granted legal rights and now has representatives, so it can go to court and sue polluters. If the AI were operating wholly independently, something like that would almost certainly be required. Currently, though, I think AI is a tool driven by human textual input ("draw me Elric"), and perhaps the "person required" bar would then be met by the prompt engineer, essentially making the AI a very fancy paintbrush.

