A critical analysis argues that the anthropomorphic language used to describe LLMs (e.g., 'thinking', 'hallucinating') and the widespread misuse of the term 'artificial intelligence' mislead users into attributing consciousness to these systems. The author expresses frustration with industry hype and warns against the psychological effects of humanizing non-sentient technology. This reflects growing concern about the ethical and societal implications of how LLMs are marketed and adopted.
Background
Large Language Models (LLMs) have attracted massive attention for their ability to generate human-like text, leading to widespread adoption and commercial hype. The term 'enshittification' refers to the gradual degradation of digital platforms over time, typically driven by profit-seeking changes.
- Source: Lobsters
- Published: Apr 7, 2026 at 04:26 PM
- Score: 6.0 / 10