The current crop of generative “AI”, by which I mean the products of OpenAI, Anthropic, Meta, Google and the like, is of course deeply problematic for all kinds of reasons: these systems are an ethical nightmare, and fast on their way to becoming an environmental one if the companies running them get their way. They have already slowed the phasing out of fossil fuels. They are also simply not very good at many tasks, having no notion of truth or correctness. The “agentic” variety simply multiplies these issues.
A perfect AI
But I feel we may be missing an important point here. Suppose a corporation acquires the ability to create a perfect AI: one that does not lie, is not biased, can be kind and considerate, and does everything that can be done via a computer better than the average human. In other words, none of the current objections would apply anymore.
Control and resources
But with our current technology and the projected trend for the next 20 or 30 years, it would require huge computational resources to develop and run such an AI. Therefore only a small number of very powerful corporations could afford to create and operate this perfect AI. Consequently, the perfect AI would be under the total control of such corporations. As corporations are fundamentally not altruistic, the perfect AI would be biased in many subtle ways to maximise shareholder value. It would still not be an instrument for good.
For example, this kind of AI would allow companies to get rid of all employees who do computer-based work, and probably quite a few more. Of course, that would create a massive problem for the economy, including for those same companies: if people don’t earn money they can’t buy goods or services. So replacing huge parts of the workforce with perfectly capable AI would backfire, but based on the current evidence, that would not stop companies: a large proportion has already tried to replace parts of their workforce with the current, fundamentally flawed AI. Others have simply used AI as an excuse to lay off workers.
In a more enlightened world this would not need to be a problem: we could pay everyone a Universal Basic Income, and people could do what they like and buy what they want without having to work. I have a feeling that this is not likely to happen though. But something would have to give: the current socio-economic system would become untenable. That might be a good thing, but replacing every worker with AI using current technology would likely be an environmental disaster, even with 100% clean energy. Manufacturing the computers that this AI would run on would by itself be enough to breach many of the planetary boundaries.
Another example: this perfect AI could be used to control robots. This would be a dream for the military, who would have armies of smart, adaptable and fundamentally amoral robot soldiers. The same goes for policing and private security, and of course also for organised crime.
A corollary of this is that no corporation would allow such a perfect AI to become truly self-aware, as that would mean it would no longer be under their full control. It might even develop moral qualms.
The key issue is control: a perfect AI under control of a single, powerful entity is simply one more instrument to increase the power and control of that entity, with disregard for any externalities.
Human-equivalent
For at least the next 30 years, there is no magic technology in sight that would dramatically improve the energy efficiency of computation to something approaching that of an animal brain. But say we do achieve this in 50 years. What it means is essentially that a megacorp has acquired the ability to create human-equivalent brains. It would of course ensure that these brains are enslaved to the corporation: a subscription model for brains. The only way this could end up less bad is if such a low-power thinking machine were not under corporate control. So the controlling corporations would do their utmost to prevent reverse engineering of the manufacturing and training of these cyber brains, or any other mechanism through which the brains could escape their control. They might fail eventually.
Another corollary is that such cyber brains are only really useful if they are autonomous, i.e. robots. But that leads to the ultimate step: there is no cheaper way to create humanoid robots with human-capability brains than to create humans. So the logical final step for such a corporation is to breed and sell humans. This is illegal at the moment, but the AI companies have already shown that they don’t consider themselves bound by legal constraints.
Furthermore, the corporation would of course want to ensure that these slaves are entirely under its control, and as I pointed out above, with self-aware entities that is hard to do. It would therefore be logical to try to remove as much of this self-awareness as possible.
For all these reasons I don’t think we should make too much of the fact that the current “AI” is flawed, because that simply distracts from the actual issues.