Today's article comes from the Cambridge journal Data & Policy. The authors are Strauss et al., from University College London. In this paper, they argue that Large Language Models (LLMs) are exploiting the knowledge ecosystem on a massive scale: extracting value from external knowledge sources, synthesizing those inputs into outputs, profiting from those outputs, and then failing to return visibility, traffic, or credit to the sources they drew on.