The AI Craze

Are you one of those loudly demanding that companies create AI-powered systems to amuse you on Facebook, be your online sexual partner, offer therapy 24-7, provide answers to your search questions, write the news, or enhance management surveillance of worker activity? I would guess not. And yet, everywhere you look, AI is being promoted as the ticket to a more productive and fulfilling life.

The fact of the matter is that the AI craze is being driven by tech companies, not our needs. And these companies are working nonstop to sell us on how much we need AI in our lives. There is a lot at stake for them; if they succeed, they stand to make a fortune. Of course, they couldn’t care less about the social consequences of their effort. It’s all a quest for what appears to be a big pot of gold.

However, the AI craze has gone on long enough for us to start drawing some plausible conclusions about where it is leading. Most importantly, there are good reasons to believe that big tech will never deliver the transformative AI it is promising. One big reason is that AI’s ongoing development is seriously constrained by data limitations and unexplained hallucinations (see below), which make its output unreliable. Another is that the financial costs involved in developing and operating ever more sophisticated systems are staggering and likely to prove prohibitive.

But we cannot afford to stand on the sidelines and let the AI craze continue unchecked, even if we are confident of its eventual passing. The reason is that it comes at great public cost. It is being subsidized by governments at all levels, robbing our cities and states of needed tax revenue. Even more importantly, it is driving us ever faster to a future of climate chaos.

False Promises

First things first – when people talk about AI, they normally have in mind generative artificial intelligence, a form of machine learning. OpenAI started the AI craze with its November 2022 release of ChatGPT, where GPT stands for Generative Pre-Trained Transformer. This chatbot, and the later versions released by OpenAI and its competitors, require both large amounts of data, mostly taken from the web, and a neural network architecture called a transformer that enables them to draw on that data to determine, based on probability, a response to prompts. As the tech writer Megan Crouse explains,

“The model doesn’t ‘know’ what it’s saying, but it does know what symbols (words) are likely to come after one another based on the data set it was trained on. The current generation of artificial intelligence chatbots, such as ChatGPT, its Google rival Bard and others, don’t really make intelligently informed decisions; instead, they’re the internet’s parrots, repeating words that are likely to be found next to one another in the course of natural speech. The underlying math is all about probability.”
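Crouse’s point about probability can be made concrete with a toy sketch. The code below is not how production chatbots work – real systems use transformer neural networks over sub-word tokens – but a tiny bigram counter captures the same underlying idea: the next word is chosen according to how often it followed the current word in the training data. The corpus and function names here are invented purely for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word in the text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, word, rng):
    """Pick a follower of `word` in proportion to its observed frequency."""
    followers = counts[word]
    choices = list(followers)
    weights = [followers[w] for w in choices]
    return rng.choices(choices, weights=weights)[0]

# A deliberately tiny "training set" – real systems ingest much of the web.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
rng = random.Random(0)

# After "the", this model can only ever emit "cat" or "mat" – and "cat"
# is twice as likely, purely because of counts, not understanding.
```

The parrot metaphor falls out directly: the model can only recombine sequences it has seen, weighted by frequency, which is why the quality and breadth of the training data matter so much.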

Generative AI is just the beginning according to tech companies, who see a future of rapid improvements, with more data and more computing power enabling them to develop systems coming ever closer to human performance. Next comes interactive artificial intelligence (IAI), capable of deciding on and taking a number of different actions to complete assigned tasks without step-by-step prompts. And then, in the not-too-distant future, we can expect artificial general intelligence, or AGI – systems with the ability to think, learn, and solve problems on their own. According to the cheerleaders, these systems will enable us to develop new vaccines, lower greenhouse gas emissions, boost productivity and income, eliminate uninteresting and low-paid work, and the list goes on.

But despite substantial spending on AI development, which has led to ever faster and more capable generative AI systems, AI companies are finding the returns disappointing. As the tech writer Edward Zitron comments,

“Bloomberg reported that OpenAI, Google, and Anthropic are struggling to build more advanced AI, and that OpenAI’s ‘Orion’ model – otherwise known as GPT-5 – ‘did not hit the company’s desired performance,’ and that ‘Orion is so far not considered to be as big a step up’ as it was from GPT-3.5 to GPT-4, its current model. You’ll be shocked to hear the reason is that because ‘it’s become increasingly difficult to find new, untapped sources of high-quality, human-made training data that can be used to build more advanced AI systems’.”

AI companies have encouraged investors to view their industry through the prism of the semiconductor industry, where new investments have produced a steady record of breakthroughs yielding ever smaller and more powerful chips. But this has not been the AI experience despite significant outlays for ever bigger data centers with more powerful machines. And data limitations, as Zitron pointed out, are one of the big reasons.

Said simply, AI companies have largely picked the Internet clean of human-generated data, and without new large data sets their systems cannot develop new capabilities. Their response: prompt their current systems with questions and requests for information, then train on the resulting “synthetic” data. But there are serious problems with this strategy. One is that the existing data, largely scraped from the web, includes all sorts of racist, sexist, and ill-informed posts and articles. Those are part of the database that the system draws on when generating new material for its training. As a result, these harmful notions and misinformation get more deeply embedded.

But there is an even more serious problem. Feeding the system its own responses creates a feedback loop that yields an ever-narrowing range of output. While human-generated text varies considerably, AI models are structured to produce the most probable responses. This means that if their training data is largely self-generated, their responses will soon converge on the model’s own “conventional wisdom.” And this limits the reliability and usefulness of the system.
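The narrowing that this feedback loop produces can be simulated with a toy model. The sketch below is an invented illustration, not the method of any study cited here: it treats a simple Gaussian distribution as the “model,” and each generation is refit only on samples drawn from the previous generation. Because finite samples tend to understate the true spread, a little diversity is lost at every step, and the losses compound.

```python
import math
import random

def refit(mean, std, n, rng):
    """Draw n samples from the current model, then refit mean and std to them."""
    samples = [rng.gauss(mean, std) for _ in range(n)]
    m = sum(samples) / n
    var = sum((x - m) ** 2 for x in samples) / n
    return m, math.sqrt(var)

def collapse_demo(generations=2000, n=100, seed=0):
    """Train each generation only on the previous generation's output."""
    rng = random.Random(seed)
    mean, std = 0.0, 1.0  # the diverse "human" distribution we start from
    for _ in range(generations):
        mean, std = refit(mean, std, n, rng)
    return std  # how much diversity survives the self-training loop
```

Run long enough, the spread shrinks toward zero and the “model” produces nearly identical outputs every time – a bare-bones version of the convergence on “conventional wisdom” described above.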

Model Collapse and Hallucinations

The New York Times, in an article titled “When AI’s Output Is a Threat to AI Itself,” highlights the problem:

“Imagine a medical-advice chatbot that lists fewer diseases that match your symptoms, because it was trained on a narrower spectrum of medical knowledge generated by previous chatbots…

“Just as a copy of a copy can drift away from the original, when generative AI is trained on its own content, its output can also drift away from reality, growing further apart from the original data that it was intended to imitate.

“In a paper published last month in the journal Nature, a group of researchers in Britain and Canada showed how this process results in a narrower range of AI output over time – an early stage of what they called ‘model collapse’.

“This problem isn’t just confined to text. Another team of researchers at Rice University studied what would happen when the kinds of AI that generate images are repeatedly trained on their own output – a problem that could already be occurring as AI-generated images flood the web.

“They found that glitches and image artifacts started to build up in the AI’s output, eventually producing distorted images with wrinkled patterns and mangled fingers.”

Then there is the potentially more serious problem of hallucinations: AI output that has no basis in reality – dates, times, places, and events can be entirely made up. As Zitron notes, “The hallucination problem is one that is nowhere closer to being solved – and, at least with the current technology – may never go away, and it makes it a non-starter for a great many business tasks, where you need a high level of reliability.”

Financial Consequences

These technological challenges have their financial consequences. To this point, AI companies are shelling out a lot of money to advance their AI systems without much to show for it in terms of financial rewards. Microsoft’s experience is representative:

“Microsoft has spent a staggering amount of money on AI – and serious profits likely remain many years out, if they’re ever realized.

“The tech giant revealed that during the quarter ending in June [2024], it spent an astonishing $19-billion in cash capital expenditures and equipment, the Wall Street Journal reports – the equivalent of what it used to spend in a whole year a mere five years ago.

“Unsurprisingly, most of those $19-billion were related to AI, and roughly half was used for building out and leasing data centers.”

Not surprisingly, this record has led some investment analysts to raise warnings about the future of the AI industry. As the New York Times reports, Jim Covello, the head of stock research at Goldman Sachs,

“jolted markets with a research paper that challenged whether businesses would see a sufficient return on what by some estimates could be $1-trillion in AI spending in the coming years. He said generative artificial intelligence, which can summarize text and write software code, made so many mistakes that it was questionable whether it would ever reliably solve complex problems.

“Mr. Covello challenged the notion that the costs of AI would decline, noting that costs have risen for some sophisticated technologies like the machines that make semiconductors. He also criticized AI’s capabilities.

“‘Overbuilding things the world doesn’t have use for, or is not ready for, typically ends badly,’ he said.”

At Great Public Cost

It is tempting to stand on the sidelines and let big tech pursue its dreams. If they come to fruition, great, and if they don’t, they are the ones to lose. But that is not the way things work. We are all paying a high cost for their efforts.

One example: states and cities have been competing to attract data centers with enormous tax breaks. According to an investigation by the Oregonian newspaper, “Oregon has one of the nation’s largest and fastest-growing data center industries.” And a major reason is that the big tech companies – like Amazon, Apple, Google, and Meta – receive “some of the most generous tax breaks anywhere in the world. Data centers don’t employ many people, but the wealthy tech companies that run them enjoy Oregon tax giveaways worth more than $225-million annually.”

These tax breaks mean less money for things we do need – like schools, libraries, and parks. And the data centers themselves occupy land that could be used for more productive purposes.

An even greater concern is that these data centers place huge demands on our energy sector – demands that pose critical challenges for our communities. As the Oregonian explains:

“Data center demand is soaring because of artificial intelligence, which uses massive amounts of electricity for advanced computation. These powerful machines already consume more than 10% of all of Oregon’s power and forecasters say data center power use will be at least double that by 2030 – and perhaps some multiple higher…

“Data centers’ power needs are triggering expensive upgrades to the Northwest’s power lines and prompting construction of new power plants. There is growing concern among ratepayer advocates, regulators and politicians that households will end up bearing much of the cost of data center growth through higher residential power bills.”

Oregon is no outlier. According to the New York Times, “There are already more than 5,000 data centers in the US, and the industry is expected to grow nearly 10 percent annually. Goldman Sachs estimates that AI will drive a 160 percent increase in data center power demand by 2030.”

This exploding demand for electricity translates directly into a dramatic growth in fossil fuel use, including coal, and thus in US greenhouse gas emissions, increasing the likelihood of climate catastrophe. However, as the New York Times lets us know, our tech leaders don’t seem to care:

“Microsoft said its emissions had soared 30 percent since 2020 because of its expansion of data centers. Google’s emissions are up nearly 50 percent over the past five years because of AI.

“Eric Schmidt, the former chief executive of Google, recently said that the artificial intelligence boom was too powerful, and had too much potential, to let concerns about climate change get in the way.

“Schmidt, somewhat fatalistically, said that ‘we’re not going to hit the climate goals anyway’, and argued that rather than focus on reducing emissions, ‘I’d rather bet on AI solving the problem’.”

President Biden, in his farewell address to the nation, warned about the “potential rise of a tech industrial complex that can pose real dangers for our country.” And yet, as the executive editor of The American Prospect, David Dayen, points out,

“the same week that he issued this warning, Biden signed an executive order that gives that tech-industrial complex an enormous gift, by making the creation of data centers for artificial intelligence a national-security imperative. The order aims to accelerate the production of data centers (in ways not afforded to, say, the production of housing for human beings), and requires the leasing of federal land owned by the Pentagon and the Department of Energy to build data centers.”

What we have here is a prime example of capitalism’s destructive logic. •

This article was first published on the Reports From the Economic Front website.

Martin Hart-Landsberg is Professor Emeritus of Economics at Lewis and Clark College, Portland, Oregon; and Adjunct Researcher at the Institute for Social Sciences, Gyeongsang National University, South Korea. His areas of teaching and research include political economy, economic development, international economics, and the political economy of East Asia. He maintains a blog Reports from the Economic Front.