Pro@programming.dev to Technology@lemmy.world · English · 22 hours ago
The Collapse of GPT: Will future artificial intelligence systems perform increasingly poorly due to AI-generated material in their training data? (cacm.acm.org)
doodledup@lemmy.world · English · 6 hours ago
LLM watermarking is economically desirable. Why would it be more profitable to train worse LLMs on LLM outputs? I'm curious for any argument.
Also, what do deepfakes have to do with LLMs? They are not related at all.
A certificate for "real" content is not feasible. It's much easier to simply prevent LLMs from training on LLM output.
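For context on the "LLM watermarking" the comment refers to: one common scheme biases the model's sampling toward a pseudorandom "green" subset of the vocabulary, chosen by hashing the previous token, so that watermarked text can later be detected statistically. A minimal toy sketch (the vocabulary, hash choice, and 50% green fraction are all illustrative assumptions, not any specific vendor's scheme):

```python
import hashlib
import math

# Hypothetical toy vocabulary; a real LLM has tens of thousands of tokens.
VOCAB = [f"tok{i}" for i in range(100)]


def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Hash (previous token, candidate token) pairs to deterministically mark
    a fixed fraction of the vocabulary as 'green'. A watermarking generator
    would bias its sampling toward these tokens at each step."""
    greens = set()
    for tok in VOCAB:
        digest = hashlib.sha256((prev_token + "|" + tok).encode()).digest()
        if digest[0] < fraction * 256:
            greens.add(tok)
    return greens


def detect(tokens: list[str], fraction: float = 0.5) -> float:
    """Return a z-score: how far the observed count of green tokens deviates
    from what unwatermarked text would show by chance. Large positive values
    suggest the text was generated with the watermark."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, fraction)
    )
    n = len(tokens) - 1
    expected = fraction * n
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std
```

A filtering pipeline could then run `detect` over candidate training documents and drop those whose z-score exceeds a threshold, which is one concrete way "preventing LLMs from training on LLM output" could work, though it only catches text from models that actually embed such a watermark.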