By Anthony Loss, Director of Solution Strategy, ClearScale

Going into 2024, generative AI technology has transitioned from a shiny new type of solution that most folks were still trying to wrap their heads around, to a product that is becoming an everyday part of life. The question most people want to answer today isn’t “What is generative AI?” The typical digital citizen has already experimented with genAI tools, and a fair number of us are using them daily.

Now, what folks really want to know is what’s coming next in the realm of genAI. How will it evolve beyond its current strengths and limitations? How far will it really go in changing the way we work and think? To what extent do regulators need to impose guardrails around genAI technology?

I don’t have any magic ability to answer those questions. I’d like to think, however, that I have more perspective on the future of genAI than most folks because I’ve spent months in the trenches helping my company build a tool that integrates generative AI into cloud applications. My experience with genAI runs much deeper than chatting with ChatGPT or asking GitHub Copilot to write some code. I’ve actually seen the making of the genAI sausage.

Based on that experience, I’ve developed some strong perspectives on where generative AI stands today and what’s coming next. Here are my thoughts.

Stop Obsessing about Artificial General Intelligence

There has been a lot of chatter lately about Artificial General Intelligence (AGI), meaning an AI model or tool capable of simulating all facets of human intelligence. Some folks have even speculated that the drama in late 2023 at OpenAI stemmed from the company’s having achieved AGI, although there’s no hard evidence that this actually happened.

To me, asking how close we are to Artificial General Intelligence is not the right question. The answers will always vary because there are different takes on what, exactly, counts as AGI. I tend to think we’re far from true AGI because Large Language Models (LLMs) are not capable of reasoning, and I consider reasoning to be a critical component of AGI. But one could argue that a model incapable of basic reasoning could still count as AGI.

Debates like those, though, are mostly beside the point of what matters in practice. What we should really do is evaluate AI solutions based on how useful they actually are to the people they’re intended to serve, not how closely they resemble AGI (however we choose to define it). Whether a given tool qualifies as AGI doesn’t really matter if it’s serving its intended purpose well.

In other words, I think the debate about AGI, although interesting from an intellectual standpoint, misses the point about what matters to actual people. We should fixate less on achieving AGI and more on improving the quality of the AI solutions we already have.

AI Hallucinations Aren’t Always Bad

Ask most folks about the shortcomings of genAI, and one of the first things they’re likely to mention is so-called hallucinations, which happen when genAI models produce false information.

AI hallucinations are indeed a problem if people take the resulting data as fact, but the interesting thing about hallucinations is that they’re not always a bad thing. On the contrary, hallucinations are an important part of the ability of AI models to generate novel stories or ideas. Sometimes, you do want your model to make stuff up, if your goal is to get it to say something no one has said before.

What this means is that rather than seeking to prevent hallucinations, AI developers should focus on controlling them. AI models that are incapable of hallucinating would be a bad thing because they’d never say anything novel. As long as users can reliably control when a model says something new and when it presents only true information, they can leverage it to suit varying needs.
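In practice, one common knob for controlling how "novel" a model's output gets is sampling temperature: low temperature makes the model stick to its most probable next token, while high temperature flattens the distribution and invites more unexpected (and more error-prone) picks. The following is a minimal, self-contained sketch of that idea over a toy set of next-token logits; the four-element vocabulary and logit values are invented for illustration, and real genAI APIs typically expose this as a `temperature` parameter rather than requiring you to sample yourself.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Softmax-sample a token index, with logits scaled by 1/temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return i
    return len(weights) - 1

# Toy next-token logits: index 0 is the model's "safe," most likely token.
logits = [4.0, 2.0, 1.0, 0.5]

# Low temperature sharpens the distribution: nearly always the top token.
low = [sample_with_temperature(logits, 0.1, random.Random(i)) for i in range(100)]

# High temperature flattens it: more novel, less predictable picks.
high = [sample_with_temperature(logits, 2.0, random.Random(i)) for i in range(100)]

print(low.count(0), high.count(0))
```

The point of the sketch is the trade-off itself: the same model can be dialed toward "only say the most probable thing" or toward "surprise me," which is exactly why controlling hallucinations beats eliminating them.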


To Regulate AI, Focus on Models, Not Concepts

GenAI regulations remain very fluid. There has been much discussion about how to regulate genAI models, but to date, very few regulatory frameworks have actually appeared.

To my mind, the best path forward on the regulatory front is to ensure that regulations prevent harmful use of AI technologies, while simultaneously keeping the door open for new inventions and innovations. To do this, regulations should focus on specific models, rather than concepts or practices. Policies that categorically forbid a certain type of AI development or prevent AI from being used in certain contexts run the risk of strangling innovation. But if a model already exists and we know its capabilities and limitations, it makes sense to regulate what the model is and is not allowed to do.

Embrace AI as a Complement to Jobs, Not a Threat

Worries that humans will lose jobs to AI are understandable given the powerful capabilities of genAI technology. AI can already do many things faster and more effectively than humans, and it’s only going to get better with time.

But that doesn’t mean we should resign ourselves to a future where human workers are irrelevant. Instead, we should focus on upskilling humans so that they can work more effectively with help from AI. AI will make some types of jobs irrelevant, but it also creates opportunities for many workers to become much more efficient.

What this means is that smart employees should focus their energy and skills on learning how to use AI tools to become better at doing things that AI can’t do on its own. As long as enough people take that approach – as opposed to assuming that AI is ushering in some kind of dystopian future where humans serve no useful purpose – AI will become a net benefit for workers, not a threat.

ClearScale and GenAI

In discussing the future of generative AI and its practical applications, it’s essential to highlight the remarkable capabilities of companies like ClearScale in this domain. We’ve been at the forefront of integrating generative AI into complex cloud solutions with GenAI AppLink, demonstrating a profound understanding of how to leverage this technology to solve real-world problems.

ClearScale can also help you establish sound data architecture and data engineering principles that lay the foundation needed to pursue a genAI strategy. By focusing on practical applications and tailoring our genAI solution to meet specific client needs, we demonstrate how businesses can harness the power of generative AI to drive innovation and efficiency. This work not only showcases the current strengths of genAI technology but also provides a glimpse into its vast potential for transforming industries.

Parting Thoughts

In short: the debate about Artificial General Intelligence mostly misses the point; AI hallucinations aren’t bad so long as users have control over them; AI regulations should be focused and purposeful without being too strict; and AI is only a threat to jobs if workers lack the imagination to see how they can do their jobs better with the help of AI.

That, at least, is my take on where AI stands today and what’s likely to come next. And again, although I’m humble enough to know that I can’t predict the future any more reliably than anyone else – or, for that matter, than any genAI model – I do like to think I know a thing or two about AI that your average Joe does not, because I’ve actually been in the genAI trenches.

Get in touch today to speak with a cloud expert and discuss how we can help:

Call us at 1-800-591-0442
Send us an email at sales@clearscale.com
Fill out a Contact Form
Read our Customer Case Studies