I have a number of friends, clients, and acquaintances talking about the unrivaled amazingness of ChatGPT and the various other AIs storming the playing field. Those who doubt the long-term benefit may be staying on the sidelines intentionally, or they may fear the repercussions of giving voice to their concerns.
That's not me... My official take? Too much, too fast, and obviously dangerous.
We as a society already have well-founded trust issues stemming from not being able to distinguish reality from subterfuge. Social media is one of the biggest foundation stones of this problem. Simply put, too many people feel free to confuse, lie to, or just flat-out torment anyone who crosses their path because they are safely behind a computer screen.
NOW, when we get annoyed by endless marketing attacks, political discourse, or social-engineering content... We do not even know whether the source is a person or an AI that was just set loose.
If everyone ends up being pissed off at SOMETHING, and nobody actually knows where the angst is coming from... Who wins?
Call me a conspiracy theorist if you wish. Call me a technophobe... Or just someone who has watched Terminator too many times... Whatever... Anyone who has read this far should understand ME well enough to know that I study human psychology and interactions to the near exclusion of all else. Occupational hazard of being your friendly national H.R. Guy. 🙂
But I am far from alone. Tech giants are starting to pump the brakes... Is it for us, or for their own purposes? Who knows, but even if they are just being self-serving, they sometimes still make good points. From an article I read just before starting this post:
"One milestone, in particular, could be within reach if this turns out to be true: the ability to be indistinguishable from humans in conversation. And it doesn’t help that we’ve essentially been training this AI chatbot with hundreds of thousands, if not millions, of conversations a day.
Considering the meteoric rise and development of ChatGPT technology since its debut in November 2022, these rumors are quite likely to be true. And while seeing such tech improve so quickly can be exciting, hilarious, and sometimes insightful, there are also plenty of dangers and legal pitfalls that can easily cause harm.
For instance, the amount of malware scams being pushed has steadily increased since the chatbot tech’s introduction, and its rapid integration into applications calls into question privacy and data collection issues, not to mention rampant plagiarism issues."
AI is just getting started, and it is already outpacing the common sense of too many people. My hope is that this fad will either blow over as suddenly as it started because people insist on getting their own personal voice back, or (more likely) get shut down by the government due to the obvious inherent dangers it represents (to their secrets as well as to the mob they feed off of for their power 🤪).
End of rant... This message was NOT written by AI... Or was it?
Cheers.