Lee Se-dol in 2016.

In 2016, AlphaGo, a machine that taught itself to play Go from large data sets, defeated Lee Se-dol in a five-game match – a defeat that eventually led the young South Korean professional Go champion to retire because, in his words, “AI is an entity that cannot be defeated”.

So what, you ask?  Well, Go is a board game considered by its enthusiasts to require “true intelligence, wisdom and Zen-like intellectual refinement”, and it is more complex than chess by a factor of a million trillion trillion trillion trillion.  AlphaGo’s victory in the five-game match stunned everyone, and it was, quite literally, a game-changer.  The world of technology would never be the same afterwards.

Increasingly, companies are incorporating Machine Learning software into their analytics engines – statistical algorithms that identify relationships between data points – building a computer-generated feedback loop to enhance decision-making.  The technology is being used more and more to automate processes and improve efficiency.  Machine Learning is the first step towards Artificial Intelligence – a process whereby the computer itself makes decisions, without human interaction, based on conclusions it draws from continual analysis of huge amounts of data at speeds impossible for humans to replicate.

I am involved in a start-up enterprise that will also lean on Machine Learning and AI as it develops, and as part of my education I have been reading AI 2041 – a book about the future of artificial intelligence by Kai-Fu Lee and Chen Qiufan.  It certainly stimulates the imagination, and more.

Like so many new technologies, it is a double-edged sword.  Beyond the field of business efficiency, ML and AI can also have negative impacts – for example, establishing feedback loops that amplify existing biases, building on prejudice, apparent or implied, in sets of data, reinventing it as a new truth and pushing it to a pre-qualified, uncritical, receptive audience.
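To make that amplification mechanism concrete, here is a minimal, entirely hypothetical sketch in Python – not any real platform’s code – of a recommender that promotes whatever its engagement data already says is popular, so that a small initial imbalance hardens into near-total dominance:

```python
# Hypothetical illustration of a bias-amplifying feedback loop.
# The item names and numbers are invented for the sketch.
counts = {"A": 51, "B": 49}  # engagement data with a slight built-in bias

for _ in range(1000):
    shown = max(counts, key=counts.get)  # always push the current "winner"
    counts[shown] += 1                   # each exposure generates more data

share_a = counts["A"] / (counts["A"] + counts["B"])
print(f"Item A's share of engagement after the loop: {share_a:.0%}")  # 96%
```

A two-point head start in the data becomes near-total dominance in the output: the system has reinvented a tiny prejudice as a new truth.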

Science fiction writers have long imagined a distant future in which thinking machines take over the world.  Much closer to home and to today’s reality, AI is viewed by some as a threat to democracy because of its amplification of false ‘truths’, and as a potentially dangerous tool that actors like China and Russia will use – and are using – to control their societies.

I ask myself: is this the inevitable direction we are taking, and what are the ramifications?  How will AI change the feedback loops that both democracies and autocracies rely on to understand the will of the people (necessary even in autocracies, as so many dictators have found out)?  Will it change minds?  Or are people more resilient than that?

You could argue that democracies are in better shape to manage through this transition.  Democracy, if it functions as it should, contains ways for the people to counter government action that works to their disadvantage – the vote and free speech are powerful levers, but only if the majority are not swayed by false ‘truths’.

Autocracies, however, lack this self-correcting mechanism.  For fear of contradicting the ‘Dear Leader’, an ML-based process will feed back the Leader’s own perceptions and ideals, making it harder for them to recognise and act upon a crumbling foundation that could eventually collapse under their false reality.  For example, if the autocrat uses machine learning to identify and automatically eliminate unflattering comments on social media, how can they assess the popular mood?
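A toy illustration of that blind spot, with invented names and numbers: if critical posts are deleted before sentiment is measured, the measurement flatters the leadership no matter what people actually think.

```python
# Hypothetical sketch of an automated censor destroying its own signal.
def censor(posts):
    """Delete every critical post before anyone measures sentiment."""
    return [p for p in posts if p["sentiment"] != "negative"]

def approval(posts):
    """Fraction of surviving posts that are supportive."""
    positive = sum(1 for p in posts if p["sentiment"] == "positive")
    return positive / len(posts)

# Suppose the real mood is 80% critical and only 20% supportive.
real_mood = [{"sentiment": "negative"}] * 80 + [{"sentiment": "positive"}] * 20

print(f"Actual approval:   {approval(real_mood):.0%}")          # 20%
print(f"Measured approval: {approval(censor(real_mood)):.0%}")  # 100%
```

The censor works perfectly, and that is exactly the problem: the only feedback loop left reports unanimous approval.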


The preamble to the UN Charter contains the following ambition: to save succeeding generations from the scourge of war, to reaffirm faith in fundamental human rights, to establish justice and respect for treaties and international law … and to employ international machinery for the promotion of the economic and social advancement of all peoples.

In I, Robot, Isaac Asimov promulgated the Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Maybe I’m being a hopeless romantic, but perhaps there is a task here for the UN: to build a similar international accord around the development and implementation of ML and AI, mitigating the negatives and accentuating the positives.  It seems to me that this is too powerful a technology to roam the world unleashed.