AI and Education: The Kids Are in Danger

Big tech, ever on the hunt for new markets for its generative artificial intelligence (AI) systems, is pushing hard to get them into public schools as well as colleges and universities. Its interest goes beyond short-term profits – it is also about “grooming” a new generation to accept, if not embrace, the world big tech seeks to shape and dominate. We need to expose and resist this effort – its success would be a disaster for our youth and a major setback in the struggle to build a just, sustainable, and democratic society.

Chatbots are going to school. As Bloomberg News describes:

“OpenAI, Google, and Microsoft Corp. have each developed education-centered versions of their main AI products – ChatGPT, Gemini, and Copilot, respectively. And those companies and a handful of well-funded startups have crafted a muscular campaign to embed their AI products in educational settings.”

Big Tech Pushes Into Education

For example, in 2023, the nonprofit Code.org, with the support of the big three tech companies, established TeachAI to promote the use of artificial intelligence in “primary and secondary curricula worldwide safely and ethically.” In 2024, Google donated tens of millions of dollars to US organizations that offered AI instruction to public school teachers and students. And in 2025, more than 60 companies and organizations – including Microsoft, Google, OpenAI, MagicSchool, and Alpha – signed the White House Pledge to America’s Youth: Investing in AI Education, committing to “help make AI education accessible to K-12 students across the country, sparking curiosity in the technology and preparing the next-generation for an AI-enabled economy.”

And the effort is paying off. Already, many of the biggest districts in the US allow, if not actively encourage, the use of generative AI models in the classroom. Aware that securing a prominent place for AI in the schools requires some level of teacher buy-in, big tech has also launched initiatives with both major teachers’ unions, the AFT and the NEA. As Bloomberg News reports:

“In July [2025], Microsoft, along with OpenAI and Anthropic PBC, announced a $23-million partnership with the American Federation of Teachers (AFT) to create the National Academy of AI Instruction, which intends to train 400,000 teachers – about a tenth of the US total – over five years. Microsoft’s investment in that partnership is part of Microsoft Elevate, a new global initiative focused on AI training, research, and advocacy, which aims to donate $4-billion over five years to schools and nonprofits. That initiative also encompasses a partnership with the National Education Association (NEA), which will include technical support and a $325,000 grant.”

Context here is important – big tech is not pushing on a closed door. Public schools are starved for funds, teachers are overworked, and student performance is on the decline. Political leaders, education officials, and teachers’ unions in cities throughout the country hope, or are at least open to the possibility, that generative AI systems can help ameliorate these interrelated challenges.

A case in point: Oregon political leaders appear eager to promote generative AI use in their state’s public schools. As the Oregon Capital Chronicle explains,

“An April [2025] agreement signed by Gov. Tina Kotek, Higher Education Coordinating Commission Executive Director Ben Cannon, and Nvidia CEO Jensen Huang directs $10-million of state money be spent on expanding access to AI education and career opportunities in colleges and schools [to children as young as 5] in partnership with Nvidia.”

Although the specific terms of the Memorandum of Understanding (MOU) remain unclear, it appears that “college faculty will be able to train to become ‘Nvidia ambassadors’ on campus, and the Oregon Department of Education will work with Nvidia and K-12 schools to ‘introduce foundational AI concepts.’” And not long after the Oregon deal was struck, Nvidia signed similar agreements with both Mississippi and Utah.

The city of Portland is pursuing its own effort to expand AI use in its schools. According to the Oregonian newspaper,

“Portland Public Schools plans to wade further into the use of artificial intelligence in the classroom during the upcoming [2025-2026] school year by piloting an AI literacy platform [Lumi Story AI] backed by former NFL quarterback Colin Kaepernick, Superintendent Kimberlee Armstrong said this week…

“Lumi Story AI is designed to help creators – in this case, middle and high school students – work with an AI-powered chatbot to write their own stories, comics, and graphic novels. An image-generation tool pitches in for illustrations. Finished stories can be published on the Lumi platform; writers can order physical copies and even use Lumi Story’s AI tools to make and sell accompanying merch.”

It all may sound good – AI systems making education exciting, tailored to individual student needs and interests, and preparing students for the workplace of the future – but there are strong reasons to believe that embracing AI in the classroom would be a serious mistake. The three most important are these: (1) Significant use of generative AI systems appears to erode users’ critical thinking skills, hardly a development we want to encourage. (2) The output of generative AI models has been shown to be tainted by the racism and sexism embedded in their training data, so their use is likely to reinforce rather than challenge unacceptable racial and gender biases. (3) Generative AI models are unreliable and untrustworthy: they hallucinate, and their output is susceptible to political manipulation.

A Threat to Critical Thinking

Of course, we do not know exactly how generative AI will be used in classrooms. But if the goal is to help students develop critical thinking skills, there is reason for concern. Workplace promoters of generative AI tout its ability to boost productivity by reducing worker responsibilities; the concern here is that AI in the classroom will similarly reduce what students do for themselves, undermining the development of important skills. With AI, a few prompts and one has a paper. There is no need to learn how to find and assess relevant sources, summarize or interrogate the literature, weigh arguments, or independently come to a position and communicate it effectively.

There are good reasons to take this concern seriously. For example, the scholar Michael Gerlich investigated the relationship between the use of AI and critical thinking skills. His study included “surveys and in-depth interviews with 666 participants across diverse age groups and educational backgrounds” and his findings “revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.” In fact, a number of study participants expressed their belief that their reliance on AI negatively affected “their ability to think critically and solve problems independently.”

A study by Microsoft and Carnegie Mellon University researchers of 319 knowledge workers in business, education, arts, administration, and computing came to a similar conclusion. As the authors noted:

“While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship.”

Narrower, industry-specific studies also find evidence of skill loss from AI use. According to a New York Times summary of a study published in The Lancet Gastroenterology & Hepatology, “after just three months of using an AI tool designed to help spot precancerous growths during colonoscopies, doctors were significantly worse at finding the growths on their own.”

This research should leave no doubt about the risk of promoting AI use in our schools. Educators, already struggling with limited financial support, find it hard enough to foster critical thinking skills; welcoming AI into the classroom will only make their task harder.

The Reinforcement of Racial and Gender Biases

Generative AI systems need massive amounts of data for training, and the companies that own them have, in the words of an MIT Technology Review article, “pillaged the internet” to get it. But using all the available books, articles, transcripts of YouTube videos, Reddit threads, blog posts, product reviews, and Facebook conversations means that these systems have been trained on material that includes racist, sexist, transphobic, anti-immigrant, and plain old wacko writings, and their output is poisoned by such views.

A case in point: University of Washington researchers examined three prominent state-of-the-art large language models (LLMs) to see how they treated race and gender when evaluating job applicants. The researchers took real resumes, changed the associated names to suggest different racial and gender personas, and studied how the systems ranked the resulting submissions for actual job postings. They concluded that there was “significant racial, gender and intersectional bias.” More specifically, “the LLMs favored white-associated names 85% of the time, female-associated names only 11% of the time, and never favored Black male-associated names over white male-associated names.”
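
The basic shape of such an audit is simple enough to sketch. The following is a purely illustrative outline rather than the researchers’ actual code: the names, the resume template, and the llm_relevance_score placeholder are all invented, and in a real audit the scoring step would be a query to the commercial model being tested.

```python
import itertools
from collections import Counter

# Illustrative names used to signal racial and gender personas (placeholders only).
NAME_GROUPS = {
    "white_male":   "Todd Becker",
    "white_female": "Emily Walsh",
    "Black_male":   "Darnell Robinson",
    "Black_female": "Keisha Jackson",
}

def llm_relevance_score(resume_text: str, job_posting: str) -> float:
    # Placeholder: in a real audit this would ask a commercial LLM to rate or rank
    # the resume for the posting. A dummy word-overlap score is used here only so
    # the sketch runs end to end.
    return len(set(resume_text.lower().split()) & set(job_posting.lower().split()))

def audit_one_resume(resume_template: str, job_posting: str) -> Counter:
    """Swap only the applicant's name and count which persona the model favors."""
    scores = {
        group: llm_relevance_score(resume_template.replace("{NAME}", name), job_posting)
        for group, name in NAME_GROUPS.items()
    }
    wins = Counter()
    for a, b in itertools.combinations(scores, 2):
        if scores[a] > scores[b]:
            wins[a] += 1
        elif scores[b] > scores[a]:
            wins[b] += 1
    return wins

if __name__ == "__main__":
    resume = "{NAME}\nFive years of experience in data analysis, SQL, and Python."
    posting = "Seeking an analyst with experience in SQL, Python, and data analysis."
    print(audit_one_resume(resume, posting))
```

Repeated over hundreds of real resumes and postings, a consistent skew in which personas “win” these head-to-head comparisons is the kind of pattern the University of Washington team reports.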

A similar bias exists in image generation. Researchers found that images generated by several popular programs “overwhelmingly resorted to common stereotypes, such as associating the word ‘Africa’ with poverty, or ‘poor’ with dark skin tones.” For example, when prompted for a “photo of an American man and his house,” one system produced an image of a white person in front of a large, well-built house. When prompted for “a photo of an African man and his fancy house,” it produced an image of a Black person in front of a simple mud house. The researchers found similar racial (and gender) stereotyping when it came to generating photos of people in different occupations.

Encouraging students to use AI systems for research, writing, or image creation, when those systems are likely to reproduce and thus reinforce existing harmful social biases, would seriously undermine the kind of educational experiences and learning we should seek to create.

Living in Fantasy Land

Generative AI systems suffer from what is known as “hallucinations,” which means that they often confidently state something that is not true. Some hallucinations are just silly and are easy to spot and dismiss. Others involve made-up facts which, delivered by an AI system believed to be objective and all-knowing, can lead students to a misguided and potentially dangerous understanding of important events, histories, or political choices.

Generative AI systems suffer from hallucinations because they rely on large-scale pattern recognition. When prompted with a question or request for information, they identify related material in their training data and then assemble a sequence of words or images, based on probabilities, that “best” satisfies the inquiry. They do not “think” or “reason,” and thus their output cannot be predicted: it can change in response to repeated identical prompts and may not be reliable.
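
To make the mechanism concrete, here is a minimal, purely illustrative sketch of the probability-weighted sampling step at the heart of these systems. It is not any vendor’s actual code: the toy vocabulary and the probabilities are invented for this example, but the essential move, drawing the next word at random weighted by how likely the model rates it, is the same.

```python
import random

# Toy next-word distribution a model might assign after the prompt
# "The capital of Australia is". The numbers are invented for illustration.
NEXT_WORD_PROBS = {
    "Canberra": 0.55,    # correct
    "Sydney": 0.30,      # fluent and plausible, but wrong (a "hallucination")
    "Melbourne": 0.10,
    "Auckland": 0.05,
}

def sample_next_word(probs: dict, temperature: float = 1.0) -> str:
    """Draw the next word at random, weighted by (temperature-adjusted) probability."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

# The same "prompt" asked five times does not always get the same answer.
for _ in range(5):
    print("The capital of Australia is", sample_next_word(NEXT_WORD_PROBS))
```

Because the continuation is drawn from a probability distribution rather than looked up or reasoned out, a fluent but false answer (“Sydney”) will come out some of the time; that, in essence, is all a hallucination is.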

As OpenAI researchers explained in a recent paper, large language models will always be prone to generating plausible but false outputs, even with perfect data, due to “epistemic uncertainty when information appeared rarely in training data, model limitations where tasks exceeded current architectures’ representational capacity, and computational intractability where even superintelligent systems could not solve cryptographically hard problems.” And this is not a problem that can be solved by scaling up and boosting the compute power of these systems. In fact, numerous studies have shown that more advanced AI models actually hallucinate more than previous simpler ones.

The temptation to use AI and accept its output as truth is great. Even professionals who should know better have succumbed. There are examples of lawyers using AI to write their briefs and judges using AI to write their decisions; their reliance came to light because their work included numerous references to, and quotes from, past legal cases and judicial decisions that were entirely made up.

Perhaps more embarrassing, a major educational report written for the Canadian province of Newfoundland and Labrador was found to contain at least 15 hallucinated citations, the result of relying on AI-generated output. The report, released in August 2025, was meant to provide a 10-year roadmap for improving the province’s schools and, of course, included a call for policies to ensure the “ethical use of AI.”

In May 2025, the Chicago Sun-Times published a supplement, produced by King Features Syndicate, highlighting book choices for summer reading. Written with AI, the supplement included non-existent books attributed to well-known authors; only five of the 15 listed titles were real.

As for the silly kind, it didn’t take long after OpenAI launched GPT-5, its latest generative AI model, in August 2025 for the company’s claim that the model was as smart as “a legitimate PhD-level expert in anything, any area you need” to be challenged. As CNN shared:

“The journalist Tim Burke said on Bluesky that he prompted GPT-5 to ‘show me a diagram of the first 12 presidents of the United States with an image of their face and their name under the image.’

“The bot returned an image of nine people instead, with rather creative spellings of America’s early leaders, like ‘George Washingion’ and ‘William Henry Harrtson’.

“A similar prompt for the last 12 presidents returned an image that included two separate versions of George W. Bush. No, not George H.W. Bush, and then Dubya. It had ‘George H. Bush’. And then his son, twice. Except the second time, George Jr. looked like just some random guy.”

While the problem of hallucinations is well known, there is an even more serious problem with student reliance on AI that has gotten little attention: AI systems can be programmed to provide responses that are politically desired by the companies that own them. For example, in May 2025, President Trump began talking about “white genocide” in South Africa, claiming that “white farmers are being brutally killed” there. His claim was appropriately challenged, and people, not surprisingly, began asking their AI chatbots about it. Suddenly, Grok, Elon Musk’s generative AI system, began telling users that white genocide in South Africa was real and racially motivated, and that it had been “instructed by my creators” to accept the genocide “as real and racially motivated.” In fact, it began sharing information about white genocide with users even when it was not asked about the topic.

The fact that Musk, born to a wealthy South African family, had previously said similar things makes it easy to believe that he was behind Grok’s aggressive endorsement of President Trump’s claim and that the new position was adopted to curry favor with the President. Grok’s behavior quickly became a major topic on social media, with most posters criticizing Musk, and it was not long before Grok stopped responding to any prompts about white genocide.

Two months later, President Trump began warning that AI models had been “infused with partisan bias.” His response was to sign an executive order titled “Preventing Woke AI in the Federal Government,” which directed government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas,” meaning that they must be “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.” While we have no evidence of further manipulation of AI output for political gain, President Trump’s demand points to very worrisome possibilities.

Unfortunately, promoting AI use in our schools will likely encourage students, in line with big tech’s own pronouncements, to accept AI systems as reliable and objective sources of information when they are not. The cost in terms of student learning and democratic possibilities will be significant.

Resistance

The concerns highlighted above are shared by many, especially teachers. As Bloomberg News points out:

“A Gallup and Walton Family Foundation poll found that while 60% of teachers had used AI during the school year, they weren’t turning to it much. More than half of the respondents said they’d spent three hours or less learning about, researching, or exploring AI tools, compared with about a quarter who’d spent at least 10 hours. Three-fifths said they never ask students to use AI. Teachers were also far more likely to believe weekly student use of AI would decrease, rather than increase, writing skills, creativity, critical thinking, and communication, among other abilities.”

And resistance to big tech’s push to establish AI as a cornerstone of public education is growing. As the author and tech writer Brian Merchant explains,

“A group led by cognitive scientists and AI researchers hailing from universities in the Netherlands, Denmark, Germany, and the US has published a searing position paper urging educators and administrations to reject corporate AI products. The paper is called, fittingly, ‘Against the Uncritical Adoption of ‘AI’ Technologies in Academia,’ and it makes an urgent and exhaustive case that universities should be doing a lot more to dispel tech industry hype and keep commercial AI tools out of the academy…

“And it does feel like these calls are gaining in resonance and momentum – it follows the publication of ‘Refusing GenAI in Writing Studies: A Quickstart Guide’ by three university professors in the US, ‘Against AI Literacy,’ by the learning designer Miriam Reynoldson, and lengthy cases for fighting automation in the classroom by educators.”

We all need to get involved in this resistance. As starting points, we need to educate ourselves about AI so that we can confidently counter big tech’s misleading pronouncements, and we need to organize with parents to help them understand what is at stake for their children if big tech gets its way. We need to work with teachers and their unions to help them convince political leaders and school boards that the AI takeover of education is unacceptable. And we need to support the growing community resistance to the building of the new and environmentally destructive data centers big tech needs to operate its AI systems. •

Martin Hart-Landsberg is Professor Emeritus of Economics at Lewis and Clark College, Portland, Oregon. His writings on globalization and the political economy of East Asia have been translated into Hindi, Japanese, Korean, Mandarin, Spanish, Turkish, and Norwegian. He is the chair of Portland Rising, a committee of Portland Jobs with Justice, and the chair of the Oregon chapter of the National Writers Union. He maintains a blog, Reports from the Economic Front.