Over the last couple of months, Artificial Intelligence (AI) developers and experts have been warning governments that they need to regulate the sector more aggressively.
Without regulation, the dangers of misinformation, unaccountable automated decision-making and ever more powerful, bias-driven political surveillance look likely to continue to grow.
This reflects one common view that technologies can escape our control and develop a logic of their own. And, of course, this isn’t the first time we have been warned about the dangers of an increasingly autonomous technology.
Some previous cases of technological determinism
The idea that AI has (if you will forgive the pun) developed a mind of its own is part of a long history of warnings that technologies have their own ‘natural’ characters that can shift and change societies when they are widely used. This view is often referred to as technological determinism.
In technological determinism, the character of a specific technology determines how we use it, and what society will need to do to adjust to its widespread use; essentially technology determines history.
Weaker versions of this idea propose that technologies, while not autonomous, have a significant and ongoing influence over history and social change. While perhaps a less extreme view, even soft technological determinism proposes that humans and their societies are confronted by technological change and are compelled to respond to its requirements, practices and logic.
This view can be seen in discussions about how the invention of the automobile encouraged and accelerated the individualisation of society. Mass use of automobiles required government intervention to establish the infrastructure of roads, but the freedom of the lone driver to drive wherever and whenever they wish is seen as a key element in the growth of individualism in the 20th Century. The character of the car, once widely available, undermined collective and social modes of travel such as buses and trains, and progressively emphasised the individual as autonomous in their leisure time choices.
The idea of an autonomous technology forcing social change has also been seen widely in discussions about how the widespread use of new Information and Communication Technologies (ICTs) over the last 40 years has changed our societies.
It is clear that computers (and now smartphones) have forced a range of changes in the way we communicate, the way we discover information and how we act in our social networks. Again, it is proposed that there is something in the character of ICTs that drives what is often referred to as an ‘information revolution’, which we are helpless to effectively resist.
Another example of technological determinism can be found in the extensive literature on the Revolution in Military Affairs (RMA) and its relation to new military technologies: from the tank, first used in earnest in the First World War to disrupt the dynamics of the battlefield, back to the stirrup, which enabled armed cavalry to become an early dominating technology of war. Nowadays associated with the impact of ICTs on warfare, the idea of the RMA is that deploying a novel technology for fighting offers such advantages to the holder that their victory is determined by the technology’s character, not their strategy or other capabilities.
So, the idea that AI is out of our control and is acting autonomously on human possibilities and activities is hardly novel.
Technology and social use
The common theme of these (and other) cases of technological determinism is to present technology almost as a natural force to which humans must respond and adapt. Calls for regulation are based, in this case, on AI developers claiming there is nothing they can do to shape how AI will work in the societies into which it is introduced.
However, to counter such views we need to recognise that new technologies are the result of human endeavours and are used by choice (even when their social uptake seems inevitable). There are many ways of understanding this interaction, but nearly one hundred years ago Lewis Mumford offered a simple model for thinking about it.
Mumford argued that rather than think about ‘technology’ we should think about ‘technics’, a term he used to capture the interaction of technology and political choices about its use. He argued that there are two broad dynamics at play: authoritarian technics and democratic technics. Crucially, these are not isolated tendencies but represent two sides of a social dialectic that continually operates as technologies are used in society.
Technologies therefore do not have a character as such, but are defined by their use: authoritarian technics drive centralisation, have over-arching power structures and control, and dominate the individual by the (claimed) logic of the technology. Democratic technics are small-scale, empowering of the individual, are about sharing not ownership, and undermine generalised power structures. Democratic technics tend towards a pluralism of use, allowing humans to deploy such technologies in ways uncontrolled from the centre.
Being dialectically engaged means the use of any specific technology can be expected to shift as new social uses expand, whether they represent democratic or authoritarian technics. The key for Mumford is that the use of technology is not determined by its logic or character, but rather by the social groups who frame its use; essentially, how the technology is chosen to be used by those in power.
So how worried should we be by Artificial Intelligence?
We certainly should be worried about the utilisation of AI across a range of social issues and sectors, not least as it appears to be undermining the reliability of information. Likewise, there is nothing wrong with seeking to regulate AI. But we also need to be clear that the people who developed and are deploying AI are not helpless in the face of an autonomous or natural force: they made choices about what AI does and how it does it, and this is Mumford’s point!
We can put technologies to all sorts of uses, and certainly this may change society, sometimes quite rapidly. However, the way we deploy technology is the result of the political, economic and social views (or prejudices) that we bring to our relations with any technology. As Mumford argued, we can choose to use any technology in a democratic manner, or we can allow it to be used in an authoritarian way.
Regulation can be a powerful way of moving technologies towards the democracy of sharing and social benefit, but we must be clear that no technological effect happens without human choice(s) and agency. So, when technologies are used for surveillance, or to displace workers, these are social and political choices; these developments are not inevitable.
We may be happy to see technology replace some jobs, or we may be unhappy that aspects of our lives have been changed for the worse by a new technology. But importantly, societies, and the social forces within them, make choices about how to present a technology and how to deploy it; these choices are Mumford’s technics.
Think back to the example of the car. How different would modern society be if, instead of organising transport infrastructure to support the use of cars by individuals, governments had organised the road network to serve public transport, and car drivers had had to fit around the needs of buses? The choice to privilege the car driver wasn’t inevitable; it was driven by lobbying and by political and financial interests, much as the development of AI has been.
Now governments need to regulate AI, but this will be shaped by how AI has been developed and by whom, as well as by who the intended beneficiaries of its widespread use are.
We ended up with social media that seems to privilege the needs and interests of certain rich individuals and corporations, but only now are we finding what Mumford would see as democratic technics (such as the decentralised, federated Mastodon as an alternative to Twitter).
With AI, we also need to quickly explore how it can become a democratic technic, rather than merely feel our only alternative is to regulate an authoritarian technic (although we need to do that too!). There is nothing inevitable about the way we will use AI in the future, whatever its developers may tell us!