Artificial intelligence is often discussed as if it had one inevitable social effect: it will replace workers, or make everyone more productive, or destroy the middle class, or create new jobs, or finally make expertise available to anyone with an internet connection.
The more difficult truth is that AI may do several of these things at once.
That is why Glenn Loury’s recent argument in UnHerd is worth taking seriously. The central question is not simply whether artificial intelligence will make society richer. The sharper question is whether it will make society more equal or less equal. Will AI reduce the advantage of high natural intelligence, or will it make already capable people even harder to compete with?
This distinction matters because modern economies already reward cognitive ability very strongly. Intelligence is not the only thing that shapes a person’s life. Family background, education, health, social networks, discrimination, geography, luck, and institutional access all matter. But in a knowledge economy, the ability to reason, learn, communicate, analyse, and solve problems still has enormous economic value.
AI enters precisely at this point. It does not only automate physical labour. It touches language, reasoning, coding, writing, research, design, planning, diagnosis, and decision support. In other words, it enters the territory once protected by human cognition.
Is AI a Tool for the Talented or a Substitute for Talent?
One way to think about this is through a simple economic distinction: complements and substitutes.
If AI is a complement to human intelligence, it makes smart and skilled people more powerful. The good lawyer becomes faster. The strong programmer builds more. The experienced analyst produces better work in less time. The talented writer researches, drafts, edits, and tests ideas at a scale that would have been impossible before.
In that world, AI does not flatten the field. It steepens it.
A small difference in judgment, taste, discipline, or technical understanding can turn into a large difference in output. The person who already knows what good work looks like can use AI to produce more of it. The person who does not know what good work looks like may simply produce polished mistakes faster.
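A toy model makes the distinction concrete. Everything in the sketch below is hypothetical and chosen only for illustration, not drawn from Loury's argument: the function names, skill values, and multipliers are invented. The point it demonstrates is simple arithmetic: if AI adds a fixed boost to everyone's output, relative gaps shrink; if it multiplies what a user already brings, the same tool widens absolute gaps.

```python
# Toy model, purely illustrative. "Additive" AI gives every user the
# same fixed boost, like eyeglasses; "multiplicative" AI scales with
# the skill and judgment the user already has.

def output_with_additive_ai(skill: float, boost: float = 2.0) -> float:
    """AI as an equaliser: everyone gains the same fixed amount."""
    return skill + boost

def output_with_multiplicative_ai(skill: float, leverage: float = 3.0) -> float:
    """AI as a complement: the gain scales with existing skill."""
    return skill * leverage

novice, expert = 2.0, 8.0  # hypothetical skill levels, arbitrary units

# Additive: the absolute gap stays at 6.0 and the ratio shrinks (4:1 -> 2.5:1).
print(output_with_additive_ai(novice), output_with_additive_ai(expert))  # 4.0 10.0

# Multiplicative: the ratio holds at 4:1, but the absolute gap triples (6.0 -> 18.0).
print(output_with_multiplicative_ai(novice), output_with_multiplicative_ai(expert))  # 6.0 24.0
```

Real effects are far messier than two functions, but the arithmetic shows why "everyone gets the tool" and "everyone gains equally" are different claims.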
This is the uncomfortable part. AI can make incompetence look fluent.
A weak report can sound professional. A bad strategy can be written in confident language. A shallow argument can be expanded into a long article. A beginner can generate code they do not understand. The surface improves before the underlying competence does.
That means AI may initially reward people who already have strong internal filters. They can ask better questions, notice bad assumptions, challenge outputs, and integrate machine help into real work. For them, AI becomes leverage.
For everyone else, it may become a very convincing shortcut.
The Eyeglasses Analogy and Its Limits
Loury uses a useful analogy: eyeglasses. Poor eyesight once created serious limits. Glasses did not make everyone visually exceptional, but they reduced the penalty of bad eyesight. They made reading, driving, craft, study, and daily work accessible to many more people.
Could AI do something similar for cognition?
Possibly. That is the optimistic scenario. AI could become a kind of cognitive prosthetic. It might help people write more clearly, understand difficult concepts, translate ideas, plan projects, check reasoning, learn mathematics, explore biology, write code, or make sense of complex information.
Used well, AI could reduce the penalty of not being naturally quick, highly educated, or professionally trained. A small business owner could perform tasks that once required several specialists. A student could receive patient explanations at any hour. A nurse, teacher, manager, designer, or technician could use AI as a second layer of support.
This is why the equality argument is not naive. Technology has often reduced the importance of certain natural limitations. Machines extended muscle. Writing extended memory. Calculators changed arithmetic. Search engines changed access to information.
AI may extend reasoning.
But there is a catch. Eyeglasses work because the user does not need to understand optics in order to see better. AI is different. To use it well, you often need enough judgment to know when it is helping and when it is quietly misleading you.
The Real Skill Is Judgment
The future may not belong simply to the people who “use AI”. That phrase is already too vague. Almost everyone will use AI in some form, just as almost everyone uses search engines, smartphones, and maps.
The real difference will be in how people use it.
Do they know how to frame a problem? Can they tell the difference between a plausible answer and a reliable answer? Can they check assumptions? Can they ask follow-up questions? Can they connect the output to real constraints? Can they notice when a generated answer is smooth but empty?
This is where AI connects to a broader InsightArea theme: the difference between information and understanding. Having access to words is not the same as having knowledge. Having a generated explanation is not the same as knowing why something is true. Having a plan is not the same as being able to act on it.
AI can make complex ideas easier to approach. That is valuable. But it cannot remove the need for rational thinking, scientific thinking, and intellectual discipline. In some cases, it may make those traits even more important.
A junior employee may ask AI to create a market analysis and receive a clean document with charts, confident language, and structured conclusions. But the decisive work begins after that. Are the assumptions sound? Are the numbers real? Is the causal story plausible? What has been left out? What would change the conclusion?
The machine can produce the draft. It cannot take responsibility for judgment.
Why Access Alone Will Not Be Enough
A common answer to technological inequality is access. Give everyone the tool, and the benefits will spread.
Access matters. But it is not enough.
Two people can have the same AI tool and get very different results. One has a stable home, good education, mentors, professional examples, time to experiment, and enough confidence to challenge the machine. The other has none of those supports and uses AI mainly to finish tasks quickly, without knowing how to evaluate the output.
The gap is not only technological. It is developmental.
This is where the social question becomes unavoidable. If AI rewards judgment, then societies that fail to cultivate judgment will not become more equal simply by distributing software. They may produce more fluent dependency instead.
Schools, families, workplaces, and institutions will matter enormously. People need to learn how to think with AI, not merely how to ask it for answers. That means teaching reasoning, source checking, basic statistics, writing, scientific curiosity, computer science literacy, and the habit of asking: “How would I know if this is wrong?”
This is not just a technical skill. It is a way of relating to knowledge.
The Evolution of Work May Become More Uneven Before It Becomes Fairer
In the short term, AI may intensify inequality. That seems more likely than instant equalisation.
Highly skilled people and well-funded organisations are usually the first to turn new technology into advantage. They have better tools, better training, better data, and stronger incentives. Large firms can integrate AI into workflows faster than small ones. Professionals with deep domain knowledge can use AI more effectively than beginners.
So the first phase may be amplification. The already capable become more productive. The already powerful reduce costs. Some entry-level work disappears or becomes harder to access. The ladder that once trained beginners may lose some of its lower rungs.
That last point deserves attention. If AI automates junior tasks, how will people become senior? If fewer beginners are hired to do basic research, drafting, coding, editing, or analysis, where will they build the judgment that later makes them valuable?
This is one of the hidden risks of AI adoption. A society can become more efficient in the short term while damaging the training pathways that produce future competence.
AI Will Not Decide the Social Outcome by Itself
The mistake is to treat AI as destiny.
Artificial intelligence will change the economy, education, media, programming, science, and many forms of professional work. But the direction of that change is not written only in the technology. It will also be shaped by incentives, institutions, laws, schools, company policies, and cultural habits.
If AI is used mainly to concentrate profit, reduce labour costs, and turn expertise into proprietary systems, it will probably widen inequality. If it is used to train people, improve access to competence, support better decisions, and make difficult knowledge more understandable, it could narrow some gaps.
Both forces will exist.
The serious question is which one we deliberately strengthen.
For me and InsightArea, this is part of a larger intellectual problem: how humans adapt to powerful tools without surrendering the hard work of understanding. The same issue appears in science, mathematics, programming, philosophy of science, and human evolution. Tools change what we can do. But they also reveal what we have not yet learned to do well.
AI may become an equality machine. It may also become an inequality machine with a friendly interface.
The difference will not come from the software alone. It will come from whether we build people who can use it with judgment.