AI — Smart Machines, What We Should Be Worried About and How to Deal With It

Paul Tyler
Jun 4, 2021 · 11 min read

Summary

AI is exciting and offers many new opportunities. But there is too much hype: most benefits are akin to those of previous technological innovations, helping to make lots of small, gradual improvements. Bigger implementations are hard. The general dangers are exaggerated (human intelligence and machine intelligence are very different) but risks do exist. We should worry less about mass job losses or super-smart machines taking over. We need to be careful how we regulate: we cannot legislate for all eventualities; equally, vague, catch-all rules will hold back innovation or force it elsewhere. A principles-based approach with industry buy-in is the best way forward.

Artificial Intelligence represents an exciting breakthrough in technological development, one which will drive productivity and innovation and could help solve some of our most pressing challenges. AI promises, and is delivering, many benefits but, of course, it comes with serious risks and can equally be put to negative purposes. Questions have been raised over its impact on free will, on the future of jobs, on the entrenchment of inequality and discrimination, and on societal trust, mass surveillance and the loss of privacy. Many worry, even, about a direct danger to our very lives through uncontrollable Artificial General Intelligence (AGI or ‘Strong AI’) and killer robots.

Imminent danger of societal transformation?

However, the immediate potential of AI, quite frankly, is being exaggerated, both by press and social media sensationalizing what it can do and by companies that benefit from the hype. This matters because scare stories and a lack of understanding of what is really going on may lead to regulatory burdens that disincentivize genuinely beneficial innovations. The hype may also lead to disappointment and criticism, mirroring a pattern throughout AI’s 70-year history.

These days, it seems many countries are putting AI at the centre of their new industrial strategies, and the EU has been first out of the blocks with significant new legislation. Through these proposed regulations, which may be a couple of years away from implementation, the EU hopes to protect consumers and citizens through powerful regulatory tools with global reach, while still aiming to position itself at the forefront of AI research and development. I fear it will fail at both.

Machine Learning is making significant progress, but largely within siloed implementations

People often think of AI as an all-encompassing exponential technological development that is leading us inevitably to conscious, super-smart robots. We fear job destruction and loss of control.

However, the real benefits accruing from Machine Learning (by far the most important branch of AI) are on the small scale and within very delineated applications. ML can be impactful in aiding almost any mundane goal or purpose by promoting gradual improvement. These multiple iterations of improvement (often quite uninteresting to the man or woman on the street) are not, conceptually, unlike many previous technological advancements such as mainframe and then desktop computing, GUIs, spreadsheets, word processing and, of course, the internet.

Think of systems working on production lines improving fault-checking, or search engines continually providing marginally more relevant results. These are the many continual but small improvements of existing products and services.

These changes are not very sexy, and I wouldn’t want to deny that there are ‘moon-shots’ being worked on; investment is going into such projects. But many hoped-for moon-shots within AI require breakthroughs in other fields such as robotics, genetics, associated computing or some other new field, all of which face their own huge challenges. And a key limitation of AI today is that our understanding of human intelligence and cognition remains limited and is, if anything, developing in a separate direction from AI. What we are learning about how the brain works implies even greater challenges ahead for AI. It seems that key to our intelligence is the ability to build multiple mental models, but ML simply doesn’t work like this: it cannot understand context or make judgements. Without genuinely new breakthroughs, there are hard limits on what it can achieve.

Getting in-depth ML right is very hard

Building big, powerful new ML solutions is a very challenging exercise and requires significant investment and a new way of thinking. I know because I help run a firm doing just this. Data is the life-blood of ML and it’s very hard to get it right. ML needs huge amounts of high-quality data that is clean, specific and unbiased, alongside significant computing power. Yet most data within companies is messy, siloed and often locked in incompatible systems. Incompetence and inefficiency usually reign, but no one wants to admit this. This will slowly change, but do not underestimate the challenges facing most firms that want to leverage their data.
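
To make the data challenge concrete, here is a minimal sketch in Python (using pandas; the loans example, column name and checks are hypothetical) of the kind of basic quality report a team typically needs before any serious ML training can start:

```python
import pandas as pd

def basic_data_quality_report(df: pd.DataFrame, target: str) -> dict:
    """Rough checks for the issues that typically derail ML projects:
    duplicated records, missing values and skewed target labels."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of missing values per column, worst offenders first.
        "missing_share": df.isna().mean().sort_values(ascending=False).head(5).to_dict(),
        # A heavily imbalanced target is an early warning sign for bias.
        "target_balance": df[target].value_counts(normalize=True).to_dict(),
    }

# Hypothetical usage: a loans dataset with a 'defaulted' label.
# df = pd.read_csv("loans.csv")
# print(basic_data_quality_report(df, target="defaulted"))
```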

Many successful adoptions of AI techniques rely on ‘ML as a service’ (MLaaS) applications, where the core neural network is outsourced. But data is still a big challenge and, as explained above, these types of solutions, although numerous, are necessarily very limited in scope.

What about our jobs?

In 2013, Oxford academics Carl Benedikt Frey and Michael Osborne released what became one of the most quoted reports on AI and its effects on jobs (The Future of Employment: How Susceptible Are Jobs to Computerisation?, September 2013). They concluded that 47% of jobs in the US were ‘susceptible to automation by AI in the relatively near future’. Of course, journalists turned this into claims that robots would soon be stealing half our jobs.

Some jobs will be displaced and some will no longer need as many people. But for starters, we should not mistake the speed of technical invention for the speed of adoption. Jobs that require some degree of creativity, manual dexterity, social skills, empathy or perception are likely to remain secure for some time. It is certainly true that if your job involves analysing lots of data in a standard format and making a clear decision as a result, or if it follows a well-defined script (in a call centre, say), then you’d better start looking to retrain. The rest of us (i.e. most jobs) fall somewhere in the middle and are likely, simply and gradually, to use more and more ML tools to improve what we already do. And that is the key thing about AI for some time: it is likely to augment what we do. Yes, such efficiency will mean fewer hands (and minds) are needed for the same level of output, but productivity gains grow wealth per capita, new needs appear and new roles will likely emerge, to society’s overall benefit. And when has this not been the case for every generation from the start of the industrial revolution onwards?

So what are the real risks?

AI, like any new, powerful technology, could be dangerous and does present challenges for society. Such power can be put to nefarious uses by criminal ventures, the military or autocracies. Even outside such uses, AI creates new risks:

· Bias. If data is biased, then the AI-based decisions will be too, and repeatedly so. But while we need to recognise that AI has the ability to magnify bias, let’s not forget that humans can also be horribly biased, and AI, if done even reasonably well, may in fact help reduce existing bias (see the sketch after this list). ‘Diversity training’ for AI should be a lot easier.

· Ethics. How do we ensure that AI itself, or its application, is ethical? Yes, this will become important, but what is ‘ethical’ anyway, and who decides? Don’t we need to address such concepts in society generally?

· ML and neural nets are inherently opaque and lack accountability. It’s not possible to fully explain why an AI makes the decisions it does, and if no human has written the ‘code’, then who is accountable for mistakes? We have seen in financial services what private gain versus societal loss can do, and we mustn’t repeat those mistakes.

· ML can get things spectacularly wrong without us (and indeed it) realising why. AI lacks the ability to know when it is wrong and therefore struggles to be cautious. This will be problematic for autonomous action.

· Privacy and data ownership. AI needs lots of new data: who owns that data and who decides how it is used? These are debates that must happen, and indeed they are starting to take place.
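
On the bias point above, here is a minimal sketch of one simple check: compare a model’s positive-decision rates across groups and flag large gaps. The predictions, group labels and loan-approval framing are purely illustrative.

```python
import pandas as pd

def positive_rate_by_group(predictions: pd.Series, groups: pd.Series) -> pd.Series:
    """Share of positive decisions per group; large gaps are a bias warning sign."""
    df = pd.DataFrame({"pred": predictions, "group": groups})
    return df.groupby("group")["pred"].mean()

# Hypothetical loan-approval decisions for two groups.
preds = pd.Series([1, 0, 1, 1, 0, 0, 1, 0])
groups = pd.Series(["A", "A", "A", "A", "B", "B", "B", "B"])
rates = positive_rate_by_group(preds, groups)
print(rates)                                 # A: 0.75 vs B: 0.25 in this toy example
print("ratio:", rates.min() / rates.max())   # a crude 'disparate impact' style ratio
```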

A proposed way forward — support free markets and agree clear principles

Promoting competition must be part of the answer. Looking at the list of risks above, where a company operates as a data monopoly, such as Facebook, it is able to make serious mistakes through its opaque algorithms and remain virtually unchallenged, with no risk to its bottom line. Where firms compete, such mistakes become costly and must be corrected.

There is a clear role for solid regulation, and not just for trust-busting Big Tech (which must now happen), although the big issue here is that international cooperation will be essential, and yet very difficult. Data is hard to regulate and innovation can happen anywhere.

The idea of agreeing comprehensive, overarching and detailed regulatory rules seems unrealistic to me. Equally, vague and unclear rules with draconian penalties will reduce innovation or drive it abroad. Done poorly, this will block innovation, reduce international cooperation and keep the EU in the AI slow-lane while others march ahead in other jurisdictions. There is a balance to strike: black-and-white rules will quickly lose relevance (especially if they have limited jurisdiction), and yet vague rules create confusion.

A core-principles approach, one which becomes widely accepted, seems to me the only way forward.

Industry has itself taken steps along this road, and many AI leaders are clearly aware of the issues and want to help solve them. The ‘Asilomar AI Principles’ from 2017 were one example: many are sensible, but others appear far too wishy-washy and subjective and will frankly be ignored in the pursuit of profit. Google’s 2018 principles for ethical AI are better, but they are perhaps too short and will also struggle for wide acceptance. I do believe, however, that they provide a solid basis for regulators to work with the industry. There are signs of these principles having bite: Google’s own employees forced the company to abandon contracts with the US military (although, arguably, Microsoft simply replaced it).

One crucial theme is that humans must remain accountable. The ‘moral agent’ must remain a person, be that a company director or the leader of the AI development. Serious mistakes must be punished; only in this way will we reduce the risk of disaster. Without this, responsibility is avoided and risk-taking is encouraged. This can be legislated.

Transparency and open communication are also crucial, although again difficult. No company will share its core neural-net code, and it would be unfair to force them to, but bias comes from the data, and here we can ask for, and even enforce, some level of transparency. Also important is the concept of AI explainability (AIX), whereby consumers and citizens can find out why decisions have been made. Important advances in AIX have been made.
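
As an illustration of what explainability tooling can look like in practice, here is a small sketch using permutation importance, one simple model-agnostic technique; the dataset and model are purely illustrative stand-ins for a deployed system.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative public dataset and model; in practice this would be the live system.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the features whose shuffling hurts most are driving the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in top[:5]:
    print(f"{name}: {score:.3f}")
```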

Systems design must respect the concept of minimising side-effects, known as ‘ceteris paribus preferences’: a core principle should be that AI is programmed to achieve something while leaving as much as possible unaffected and as it was. This will rely on corporate leaders and programmers taking responsibility. Whistleblowing will be an important countermeasure for compliance.
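
One way to make that idea concrete is a sketch of an objective that trades task reward off against a penalty for disturbing everything else; the state representation, weighting and numbers here are entirely hypothetical.

```python
import numpy as np

def penalised_objective(state_after: np.ndarray,
                        baseline_state: np.ndarray,
                        task_reward: float,
                        side_effect_weight: float = 1.0) -> float:
    """Reward for achieving the task, minus a penalty proportional to how far
    the rest of the world has moved from how it was (a crude 'ceteris paribus'
    preference)."""
    side_effect = float(np.linalg.norm(state_after - baseline_state))
    return task_reward - side_effect_weight * side_effect

# Hypothetical example: two candidate actions achieve the same task reward,
# but the second disturbs the environment far more and so scores lower.
baseline = np.array([1.0, 1.0, 1.0])
print(penalised_objective(np.array([1.0, 1.1, 1.0]), baseline, task_reward=10.0))
print(penalised_objective(np.array([3.0, 0.0, 2.0]), baseline, task_reward=10.0))
```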

These areas should form the basis of a principles-based regulation with as much international cooperation as possible. The key here is to get regulatory agreement across jurisdictions, with industry involvement. This could start with Europe, now that the EU has made the first move, and the US, along with the G7, but it should include the big tech companies and industry associations. With momentum, others will fall into line. Chinese firms that want to do business in these jurisdictions must prove their adherence to such principles, although I don’t deny this requires a level of trust. Infringements must therefore clearly be punished.

Military uses

As for killer robots, it can’t be denied that there is a serious arms race taking place in AI for the military, and this is beyond the scope of what normal regulations and principles can achieve. But this is true for all weapons, so we need to focus on arms control generally and perhaps not get too caught up in the automation aspect. A North Korean nuke over LA is a horrendous thought, but arguably it is a clear and present danger today; an out-of-control swarm of AI killer nano-bots is certainly nightmarish but really not very likely anytime soon.

Don’t panic, Artificial General Intelligence (AGI) is a very long way away

In 2005 Ray Kurzweil famously predicted the AI ‘Singularity’: the milestone moment when AI would exceed human intelligence and continue growing exponentially. Humans would be left behind, ultimately rendered redundant and powerless by the new race of supercomputers; we would be wiped out through irrelevance (or even by accident). Kurzweil essentially applied ‘Moore’s Law’ to AI, assuming that exponential growth in computing power meant, ipso facto, ever-growing intelligence and thus that super-intelligence was ultimately inevitable.

But while massive and growing raw computing power is likely to be a necessary condition for AGI, it is highly unlikely to be a sufficient one. Tellingly, perhaps, the human brain is not actually nature’s largest in terms of raw numbers of neurons, so something else must be at work, something we are perhaps only recently beginning to understand and which has little to do with how machines ‘learn’ through massive data-crunching and pattern recognition. We do not yet understand how the brain achieves self-awareness well enough to automate it. We might get there, but many experts say it seems unlikely within our lifetimes. Of course, unimaginable innovations are possible, and much human progress comes through the randomness of the innovation process leading to great leaps. But this is likely a long, long way away, and we have plenty of more real and present ways of messing up the world about which we should be worrying.

Conclusion

AI will be transformative, with ML at the helm, but in ways that economists would recognise and welcome: by driving productivity and wealth generation. Nevertheless, dangers do exist, governments do need to coordinate better, and regulation is without doubt needed. That regulation needs to be principles-based and avoid trying to define every nugget of AI.

The challenges would not be unfamiliar to legislators of the last 50 years: how to deal with technological change, job displacement, inequality, bias and discrimination, accountability, privacy and transparency, and indeed the risk that IT systems and critical infrastructure go wrong.

Ultimately, I am optimistic. I believe technological progress such as AI and openness — to trade, new interactions and to new ideas generally — have huge potential to benefit society and solve our problems.

Further Recommended Reading

See the links within the text, and also two great books:

The Road to Conscious Machines — The Story of AI by Michael Wooldridge. https://www.penguin.co.uk/books/307/307639/the-road-to-conscious-machines/9780241333907.html

A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins. https://www.amazon.com/Thousand-Brains-New-Theory-Intelligence/dp/1541675819

I am a Non-Exec Director and ex-head of operations of an AI-based fintech working in financial markets. I write in a personal capacity.
