• doodledup@lemmy.world
    6 hours ago

    LLM watermarking is economically desirable. Why would it be more profitable to train worse LLMs on LLM outputs? I’m curious to hear any argument.

    Also, what do deepfakes have to do with LLMs? They’re not related at all.

    A certificate for “real” content is not feasible. It’s much easier to just prevent LLMs from training on LLM output.
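
    For context, the kind of watermarking discussed here can be sketched in a few lines. This is a toy illustration of the general “green list” idea (biasing generation toward a pseudorandomly chosen subset of the vocabulary at each step, then detecting the bias statistically); the vocabulary size, split fraction, and the uniform toy “model” are all invented for the example, not any real LLM’s scheme.

    ```python
    import hashlib
    import math
    import random

    VOCAB_SIZE = 50        # toy vocabulary; real models have tens of thousands of tokens
    GREEN_FRACTION = 0.5   # fraction of the vocab marked "green" at each step

    def green_list(prev_token: int) -> set[int]:
        # Seed a PRNG with the previous token so generator and detector
        # derive the exact same green list without sharing any state.
        seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
        rng = random.Random(seed)
        ids = list(range(VOCAB_SIZE))
        rng.shuffle(ids)
        return set(ids[: int(VOCAB_SIZE * GREEN_FRACTION)])

    def generate_watermarked(length: int, seed: int = 0) -> list[int]:
        # Toy "LLM": samples uniformly, but only from the green list,
        # i.e. a hard version of the soft bias real schemes apply to logits.
        rng = random.Random(seed)
        tokens = [rng.randrange(VOCAB_SIZE)]
        for _ in range(length - 1):
            tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
        return tokens

    def detect_z_score(tokens: list[int]) -> float:
        # Count tokens that land in their predecessor's green list.
        # Unwatermarked text hits the green list ~GREEN_FRACTION of the time,
        # so a large z-score indicates watermarked output.
        hits = sum(1 for a, b in zip(tokens, tokens[1:]) if b in green_list(a))
        n = len(tokens) - 1
        expected = GREEN_FRACTION * n
        std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
        return (hits - expected) / std

    watermarked = generate_watermarked(200, seed=1)
    rng = random.Random(2)
    plain = [rng.randrange(VOCAB_SIZE) for _ in range(200)]
    print("watermarked z:", round(detect_z_score(watermarked), 1))
    print("plain z:", round(detect_z_score(plain), 1))
    ```

    A detector like this is exactly what would let a training pipeline filter out LLM output before it pollutes the next model’s training data, which is the economic argument above.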