- One of the limitations of large language models is that they are, in effect, lossy compressions of existing web data. They aren't creating new knowledge or understanding anything.
- They will be good for certain tasks, but we should be wary of both alarmism about them and over-reliance on them.
- Also see - [[Jobs that Can't be Replaced by AI]]
---
- "Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information. The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself." - [Link](https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web)
- "LLMs learn only from language; without being embodied in the physical world, they do not experience language’s connection to objects, properties and feelings, as a person does. “It’s clear that they’re not understanding words in the same way that people do,” Lake says. In his opinion, LLMs currently demonstrate “that you can have very fluent language without genuine understanding”." - [Link](https://www.nature.com/articles/d41586-023-02361-7)