I don't have enough legal knowledge to jump into that debate, but I will say that when you see something clearly derived from a known image (Henry Cavill from The Witcher as Elric), you are seeing the result of insufficient and poorly tagged training data. Essentially, if you showed someone a picture of an orange cat, called it a cat, and they didn't know other cats existed, they would think all cats are orange.
I can also say that AI doesn't "cut it up" any more than a novice artist cuts up a piece of art and copy-pastes it into their own work. If that were true, AI hands would consistently have five fingers, rather than the models being legendarily bad at drawing them. The AI is clearly doing something, but many models don't encode the knowledge that hands have five fingers, so they make something up.
I do find the notion that AI cannot create transformative works because it is not covered by law a fascinating argument. It is somewhat similar in my mind to the Whanganui River in New Zealand, which was granted legal personhood and now has representatives, so it can go to court and sue polluters. If an AI were operating wholly independently, something like that would almost certainly be required. I think currently, though, AI is a tool driven by human textual input ("draw me Elric"), and perhaps the human-authorship requirement would then be met by the prompt engineer, essentially making the AI a very fancy paintbrush.