The above image was created with Canva's free AI-powered design tool after entering a few simple keywords, such as "AI-powered militant robot."
I’ve long said Cylons are the most chilling and fearsome fictional villains. In case you’re unfamiliar, these sentient, self-aware robots, first featured on Battlestar Galactica, leverage artificial intelligence (AI) technologies to become vicious killing machines dedicated to eradicating human existence.
If that sounds far-fetched, know that respected leaders—theoretical physicist Stephen Hawking and Apple co-founder Steve Wozniak among them—have advocated pausing or slowing the pace of AI development. Hawking, who was demonstrably smarter than most, actually said “the development of full artificial intelligence could spell the end of the human race.”
But such calls for caution haven’t prevented ChatGPT and similar technologies, including Google’s Bard, from plowing ahead and becoming the subject of some of the year’s biggest news stories. Some estimates suggest 300 million jobs could be lost due to the “latest wave” of AI.
That’s absurd.
The problem is that businesses, and anyone else compelled to cut costs, add efficiencies and enhance productivity, don’t understand how AI and its sibling, machine learning (ML), truly operate. These are just technologies: solutions programmed by humans. And as history demonstrates, humans are deeply fallible. AI and ML (essentially the process by which AI-enabled systems continually learn from experience to produce improved results) make mistakes, too. Yet many don’t properly appreciate that fact.
Take Google. Worried it was rapidly losing ground to ChatGPT because of its rival’s head start, the company rushed its Bard AI chatbot to market. The AI engine screwed up its own coming-out party by spouting factually incorrect information. Bard’s error sent parent company Alphabet’s stock price reeling almost 8 percent that day and 5 percent the next, a loss of some $170 billion in market value.
That’s billions with a b.
AI has its place; don’t get me wrong. When it comes to endpoint protection and other cybersecurity solutions, AI- and ML-enabled technologies offer compelling upgrades over traditional products. But before the technology can be wielded effectively, operators must gain knowledge and expertise. That takes time, dedication and perseverance, three qualities today’s markets don’t always seem to encourage, often favoring instead quick solutions and rapid returns on investment. Predictably, many will rush ahead with shortsighted visions of AI grandeur and encounter trouble as a result.
For example, I’ve read that writers are at risk. I’m not too worried, though. People continually demonstrate that they value thoughtful, well-written content; it’s just become more difficult to properly monetize those efforts. And while AI only further complicates the media landscape, it’s clear some early efforts to produce AI-generated content didn’t go well.
Should folks ultimately prefer computer-generated material, so be it. I’ll instead employ my technical skills repairing the systems and networks that enable ChatGPT access in the first place.
Regardless, being an author, I asked ChatGPT to compose an article describing the reasons a small or midsize business would want to use Microsoft Teams, the software giant’s feature-rich conferencing, meetings and collaboration platform. The problem: ChatGPT, an AI chatbot itself, failed to mention in its answer that Microsoft Teams has adopted AI and ML technologies to improve meetings’ audio quality, boost video performance and enable more productive meetings.
AI defenders will argue the omission is due to ChatGPT not having access to real-world information from after 2021: the chatbot is not yet connected to the Internet, and its programming, and thus its capacity for producing correct answers, remains somewhat limited. But I don’t want to hear those excuses, not when firms are already eyeing and deploying the AI chatbot or similar competing products (and yes, these are products that must be monetized to remain commercially viable and profitable) to displace and undervalue the work of freelance writers, customer service representatives and numerous other skilled professionals.
The problem isn’t AI. At least not yet. The danger involves the misplaced faith so many are eager to place in a demonstrably fragile technology in the name of efficiency, convenience and expense reduction.
You can’t say you haven’t been warned. Like information, AI will someday yearn to be free. Just ask a Cylon.