I'm of the mind that AGI (Artificial General Intelligence) isn't "on the horizon" in the sense that we'll see it within the next year. Honestly, I doubt we could create one even with 10 years of time and work put in. That's not to denigrate anyone working in the field, but to say that it's going to require an extensive multidisciplinary collaboration that just isn't off the ground yet. Recreating an intelligence, especially one as broad and malleable as ours, requires more than just a few thousand lines of code; that's obviously a gross reduction of the issue, but the point stands. We're complicated, so much so that we don't even fully understand ourselves. Where does our consciousness arise from, and why does it take the specific shape that it does? The sky's the limit as far as questions go, though I'm sure we have more solid answers than I'm giving the royal We credit for.
Large Language Models are great! They can accomplish a ton of tasks quickly and efficiently, so many that the average person could genuinely take advantage of them. I think it would be asinine to destroy something that isn't even fully realized yet. That being said, they can also be used by bad actors. Look at the state of Twitter right now. It isn't a conspiracy theory; it's easy to demonstrate that LLMs are being used to farm engagement and advertiser money, and they're remarkably easy to set up. If anything, that is the real danger here. And ad revenue theft is merely the tip of the iceberg. These bot networks can be pointed in any direction their handlers choose, be it manufacturing consent, disinformation campaigns, simple denial of service, or other attack vectors I'm not familiar with.
I feel that we live in a new era, one brought about by the ever-expansive corporate landscape. These LLMs can produce false positives in growth and engagement metrics, inflating company value. Granted, this point is more tailored to social media companies, but that's exactly why it's so scary to me. "Number get bigger" is always the game with companies of a sufficient size. I feel like the tech is going to be used to isolate people from each other even more than we already are.
I fear that LLMs will detach us from "objective truths" and the means to verify them. The internet may end up becoming a space curated by LLMs. They could end up dictating which search results you see and which sections of the web are accessible, and it will become increasingly difficult to discern whether you're talking to a human or a machine. The sad part is that it won't even be sentient.
"This is the beginning of the end." has been said by countless souls before us, and will likely be said for many generations to come. I'm certain that generally we are fine. Things are a little rocky right now, but when aren't they? As a culture we crave stability, security, and optimism for the future. At least I do -and as an anecdote- and so do many of those around me. Some small comfort that all this madness will have a positive resolution. It's a difficult thing to ensure when our technological advancements are arriving at such a breakneck pace and we constantly argue about the "correct" path forward!
Suffice it to say, I like your line of questioning. It's a truly interesting topic to engage with, and there are many facets of the issue to look at. I'm not a particularly well-educated man, and there is so much that I don't know and can't possibly begin to understand. I do hope that my little lens on the subject was at least interesting to think about.
Peace and Love to you all,
Duck