Add something like
as an option for reactions which could indicate full agreement and would enhance the like-emoji?


Lewis Carroll would love this. In my day we just watched Prisencolinensinainciusol, wondered whether we were having a stroke, or if they were. And we were thankful.
There has been quite a bit of discussion about the validity of the original Turing test, and whether it can truly determine if the one you are communicating with is a human or a machine.

I'm not aware that the Turing test has been tried in its original formulation, where both a human and a machine try to convince another human that they are human and the other is a machine. Machines may still be able to pass that, but probably not in their commercial embodiments, as the simplest trick for the human would be to start talking about topics the commercial models aren't allowed to mention. This is of course not a limitation of the technology itself. Still, the original version of the test is harder than the versions that I've seen, where a human has a one-on-one conversation with an unknown entity and rates it as human or machine afterwards.
Yes, that's a very valid view on this topic as well.

Anyway, to me the identification of producing speech with intelligence is bizarre. I think it's just a manifestation of the typical mind fallacy and anthropomorphization, and not that different from attributing intentions to a storm.
As I see it, a prerequisite for intelligence, shared by life at all points of the intelligence spectrum, is a set of values/goals to pursue and maximize, some very common ones being self-preservation and reproduction. In that sense, it could be argued that a simple thermostat is closer to intelligence than an LLM, as it acts upon the world to pursue and preserve a certain, "desirable" state. The attempts to make LLMs closer to this are lackluster, post-hoc, bolted-on approaches that essentially fail. I don't think language is a good medium for core intelligence: it just enables already intelligent beings to reach further levels of intelligence, but it's a bad foundation on its own. With all their limitations, and acknowledging that they're far from complex intelligence, I think reinforcement learning systems are often much closer to actual, animal intelligence, and a much more solid foundation.
Hopefully, once the bubble pops and the hype moves onto another thing, LLMs will be studied more rigorously by actual scientists without the goal of attracting even more investment. They are a very interesting technology that essentially solved natural language processing, and deserve to be studied seriously.

Don't forget also asking leading questions such as "why is it the case that X", when X is not actually the case. "Look, AI confirmed it!"

It surprises me that people often struggle to get good enough results from LLMs, giving them way too ambiguous or complex tasks, or my personal favorite: arguing with the AI about the results not being to their liking, then reasoning about why, and in that process getting more and more angry at the forever-polite responses from the LLMs.
Without solid morals, this whole civilization is doomed, imo. Teach children to think and develop awareness; that's all it takes...
One love