When Henry Ford developed the assembly line method to manufacture his black Model T cars, he revolutionised manufacturing by segmenting a whole job into a division of tasks. By standardising the product and breaking the process down into tasks, he could employ unskilled workers assisted by machines and moulds. In turn, workers were paid decently and hence could afford to buy the cars.

In economics, we say that the factors of production are capital and labour, and that economic growth comes from increases in the quantity or efficiency of these factors. Adam Smith, in his pin factory analogy, asserted that there are limits to specialisation and the division of labour, and thus limits to reducing unit costs, unless there is innovation.

Today, countries embracing globalisation have exported labour-intensive tasks to lower-wage countries to drive unit costs down. Technological innovation and change have transformed the use of capital and labour, raising productivity.

So, what is all the hype about the latest round of innovation, artificial intelligence (AI)? How does it affect the balance of capital and labour? Is AI just another productivity enhancer, or is it a new factor of production?

Much depends on the pace of technology advancements and convergence as well as the speed of innovation.



AI promises to improve productivity and the quality of human lives, to the extent that some proponents claim it may bring about the Fourth Industrial Revolution. Past breakthroughs such as steam engines, electricity and computers were tools that humans used to aid production, thereby increasing labour productivity. In the previous three industrial revolutions, jobs and even entire industries disappeared, but new jobs were created. Will it be the same with a fourth revolution that has AI at its centre?

The question is whether AI is just another tool that complements and augments labour, with minor displacements to some jobs, or whether it will surreptitiously and recursively self-improve and substitute for human labour entirely. The worry over job dislocation centres on which jobs will be substituted totally, how fast it will happen and how many jobs will go.

In the near term, with the current state of AI, research seems to indicate that the jobs at risk tend to be those in the middle of the spectrum. Less at risk are those at the ends - at one end, where human limbs and sensory perception are still superior; at the other, where creativity, communication and common sense remain human comparative advantages. Jobs in between that are rules- and logic-based, and that can be easily programmed, have already fallen victim to digitalisation or AI, including routine tasks performed by tellers, clerks or travel agents; drivers too are at risk with the advent of autonomous vehicles.

More recently, there has been much excitement and anxiety over advances in AI in which algorithms, learning from large datasets, are able to infer and make probabilistic decisions. Inroads are now being made into white-collar jobs such as those of radiologists, auditors, pharmacists and investment advisers, which could be affected in as little as 10 years, as AI systems can be trained on vast stores of past data and have highly retentive, untiring memory for analysis and deduction.
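The kind of system described above - an algorithm that learns from past data and outputs probabilistic decisions - can be sketched in a few lines. This is a purely illustrative toy (the dataset, labels and function names are invented for the example, not drawn from any system mentioned in this article):

```python
import math

def train(data, epochs=2000, lr=0.5):
    """Fit a tiny logistic regression by gradient descent.
    data: list of ([x1, x2], label) pairs with labels 0 or 1."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid: output is a probability
            err = p - y                      # gradient of the log-loss
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict_proba(model, x):
    """Probabilistic decision: chance the new case belongs to class 1."""
    w, b = model
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical "past data": two measured features per case,
# labelled 1 (flag for review) or 0 (pass).
past_data = [([0.1, 0.2], 0), ([0.2, 0.1], 0),
             ([0.9, 0.8], 1), ([0.8, 0.9], 1)]
model = train(past_data)
print(predict_proba(model, [0.85, 0.85]))  # close to 1: flag for review
print(predict_proba(model, [0.15, 0.15]))  # close to 0: pass
```

The white-collar applications the article describes work on the same principle, only with millions of examples and far richer features than this two-variable sketch.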

So far, US researchers have not been able to answer the question of whether job growth - which traditionally offsets the 6 per cent of jobs lost every quarter to downsizing or shutdowns - can absorb the losses due to automation. An Organisation for Economic Co-operation and Development report optimistically estimates that only 9 per cent of jobs will disappear within two decades, while other conflicting reports put the figure as high as 47 per cent.

The difficulty in estimation lies in the infancy of AI and the unpredictability of discovery. What is clear is that there will be gains in productivity and a real threat to jobs in low-paid, low-skilled production - and, increasingly, a hollowing out of jobs in the middle that are administrative, clerical and procedural in nature.

The new jobs created are fewer, more highly skilled and more geared towards the development, training and supervision of machines - until the day machines can autonomously make machines, causing major disruption to employment.



AI systems will likely transform the workplace and may strain the trust between the firm and the employee or diminish the bargaining power of freelancers in the gig economy.

AI systems will collect vast amounts of data and track workers' activities; through such sophisticated surveillance and prediction, they may anticipate attrition, behavioural bias and lapses in productivity.

In some jobs, workers may face a choice of being replaced by robots or treated like one. Such Orwellian practices may instigate a resurgence of confrontational, assertive labour unions fighting for employee privacy, causing tension and strife for firms demanding higher performance.



By far the biggest concern over AI is its redistributive effect: worsening inequality and eroding incomes, because labour productivity gains do not translate into wage increases but accrue only to the few with capital.

If human labour is replaced by machines, who will still have the jobs and income to buy the goods produced? If incomes suffer, the circular flow of income between households that supply labour and firms that pay wages to produce goods is disrupted. Some countries, such as Switzerland, recognise such apprehensions and have started discussions on the idea of a universal basic income.

The concentration and accumulation of power and profits among the top few will be an important challenge. Those with capital, such as owners of firms, will deploy and allocate it to choose highly productive, compliant machines over human labour. Access to capital to encourage entrepreneurship and start-ups should therefore be more robust and widely available; after all, if a driver is made redundant, perhaps he can manage and own a fleet of driverless cars instead.



Despite some hyperbolic forecasts of economic abundance and a murky existential threat to humanity, it is certain that AI is here to stay. Human beings will have to find ways to cope with it.

The discovery of fire led to accidents, and so the fire extinguisher was invented. The invention of the motor car brought the seat belt, the air bag and traffic rules. AI, however, is more akin to genomics and nuclear weapons, with the potential to do untold harm if not regulated well.

Societies should plan ahead and manage the risks. At the same time, while weighing the moral and ethical issues, we do not want to over-regulate, stifling the maturing technology and curbing its potential benefits. Many governments recognise this and have adopted a cautious yet pragmatic approach towards AI.



AI will be very disruptive, but at the same time it can regenerate and create new jobs. This replacement and displacement will only accelerate, and firms lacking or lagging behind in AI will be compelled to catch up and stay competitive, or be forced out of the market.

For labour, likewise, those who do not upskill and reskill will be the first casualties of the market. Similarly, those with inflexible mindsets, refusing to adapt to change, will be left behind.

There needs to be a cultural change to embrace and adapt to these new labour companions, and to encourage mutual learning between man and machine. A universal basic income is premature and likely to diminish personal initiative; employment transition insurance, however, may become necessary to bridge the gaps.

We need to start preparing the young for an AI future - not merely as consumers but as potential creators - and the curriculum therefore needs, more than ever, to include subjects such as statistics, coding and mathematics.



The digiterati, the elites of technology, may incorporate their own personal biases, free from scrutiny, when creating AI algorithms and systems. Since data is a key component of AI, it is urgent that more effort be made to regulate its collection and usage. Dr Janil Puthucheary, Minister-in-charge of the Government Technology Agency (GovTech), outlined a pragmatic governance framework for the use of algorithms based on the principles of transparency, fairness and safety, under which decisions and output by such systems can be reasonably explained, are free from human bias and mitigate risks to the public.

Governments need fuller and faster information on AI developments, which can then be passed on quickly to the various ministries dealing with labour and manpower issues, as well as communications, and to schools and tertiary institutions. In Singapore, the National Wages Council will now play an even more important role in ensuring a harmonious labour-employer-government tri-partnership.

Ultimately, machines are still a creation of humankind. Before machines can start to make machines without human labour, we must constantly ensure that their goals are aligned with ours, because a machine, devoid of emotion, is singularly focused on maximising its chances of achieving its goals.

We call ourselves Homo sapiens - wise man - so it would be wise for us to start managing artificial intelligence before it manages us and becomes a social risk.

AI is here to stay. How we adapt to and manage it will determine the future.
