Tuesday, January 16, 2018

AI just outperformed humans at reading, potentially putting millions of customer service jobs at risk of automation. Could it do the same in learning?

Something momentous just happened. An AI programme from Alibaba can now, for the first time, read a text and answer questions about it better than humans. The machine's score has just edged past the human benchmark, and the implications are huge.
Think through the consequences here, as this software, using NLP and machine learning, gets better and better. The aim is to provide answers to questions, which is exactly what millions of people do in jobs around the world: customer service agents in call centres, doctors with patients, anyone who replies to queries... any interaction where language and its interpretation matter.
Health warning
First we must be careful with these results, as they depend on two things: 1) the nature of the text, and 2) what we mean by 'reading'. Such approaches often work well with factual texts but not with more complex and subtle texts, such as fiction, where the language is difficult to parse and understand, and where there is a huge amount of 'reading between the lines'. Think about how difficult it is to understand even that last sentence. Nevertheless, this is a breakthrough.
The Test
It is the first time a machine has outdone a real person in such a contest. The test used the Stanford Question Answering Dataset (SQuAD) to assess reading comprehension: providing exact answers to more than 100,000 questions. As an open test environment, you can try it yourself, which makes the evidence and results transparent. Alibaba's neural network model is based on a Hierarchical Attention Network, which reads down through paragraphs to sentences to words, identifying potential answers and their probabilities. Alibaba has already used this technology in its customer service chatbot, Dian Xiaomi, to serve an average of 3.5 million customers a day on the Taobao and Tmall platforms. (10 uses for chatbots in learning).
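To make the mechanics a little more concrete, here is a minimal, illustrative sketch of the hierarchical idea: score the words within each sentence against the question, then score the sentences within the passage, and surface the most probable one. This is emphatically not Alibaba's model, which uses trained embeddings and far deeper networks; the embeddings below are random toy vectors, so treat it as the shape of the approach, nothing more.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
_vectors = {}

def embed(word):
    """Toy stand-in for trained word embeddings: a fixed random vector per word."""
    w = word.lower().strip(".,?")
    if w not in _vectors:
        _vectors[w] = rng.normal(size=DIM)
    return _vectors[w]

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def attend(vectors, query):
    """Attention: weight each vector by its relevance to the query, then sum."""
    weights = softmax(np.array([v @ query for v in vectors]))
    pooled = np.sum([w * v for w, v in zip(weights, vectors)], axis=0)
    return weights, pooled

def answer(passage_sentences, question):
    q_vec = np.mean([embed(w) for w in question.split()], axis=0)
    sentence_vectors = []
    for sentence in passage_sentences:
        _, s_vec = attend([embed(w) for w in sentence.split()], q_vec)  # word level
        sentence_vectors.append(s_vec)
    probs, _ = attend(sentence_vectors, q_vec)                          # sentence level
    best = int(np.argmax(probs))
    return passage_sentences[best], probs

passage = ["Alibaba built a question answering system.",
           "It serves millions of customers a day.",
           "The weather in Hangzhou is mild."]
# With random embeddings the output is only illustrative; real systems use trained vectors.
sentence, probs = answer(passage, "How many customers does it serve?")
print(sentence)
print(probs.round(2))
```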
Learning
Indeed, the one area that is likely to benefit hugely from these advances is education and training. The Stanford dataset does contain questions that are logically complex and, in terms of domain, quite obscure, but one should see this development as strong on knowledge and not yet effective with questions beyond it. That's fine, as there is much that can be achieved in learning. We have been using this AI approach to create online learning content, in minutes not months, through WildFire. Using a similar approach, we identify the main learning points in any document, PowerPoint or video, and build online learning courses quickly, with an approach based on recent cognitive psychology that focuses on retention. In addition, we add curated content.
Pedagogy
The online learning is very different from the graphics-plus-multiple-choice paradigm. Rather than rely on weak 'select from a list' MCQs (see critique here), we get learners to enter their answers in context. It focuses on the open-input and retention techniques outlined by Roediger and McDaniel in Make It Stick.
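For illustration only, here is a toy sketch of the general open-input idea: pick a key term from a sentence and blank it out, so the learner has to retrieve and type the answer rather than recognise it in a list. This is not WildFire's actual algorithm; the 'longest non-stopword' heuristic is just a placeholder for real key-concept extraction.

```python
import re

STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "are", "for"}

def make_cloze(sentence):
    """Blank out a key term (crudely, the longest non-stopword) for open input."""
    words = re.findall(r"[A-Za-z']+", sentence)
    candidates = [w for w in words if w.lower() not in STOPWORDS]
    if not candidates:
        return None
    target = max(candidates, key=len)                 # placeholder heuristic
    blanked = sentence.replace(target, "_" * len(target), 1)
    return blanked, target

question, answer = make_cloze("Open input improves long-term retention.")
print(question)   # Open input improves long-term _________.
print(answer)     # retention
```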
Speed
To give you some idea of the sheer speed of this process, we recently completed 158 modules for a global company, literally in days, without a single face-to-face meeting with the project manager. The content was then loaded onto their LMS and is ready to roll. The content was good, the client is very happy with the results, and it helped them win a recent major award.
Pain relief
An interesting outcome of this approach to creating content was the lack of heat generated during the production process. There was no SME/designer friction, as that was automated. That’s one of the reasons we didn’t need a single face-to-face meeting. It allowed us to focus on getting it done and quality control.
Sectors
Organisations have been using this AI-created content as pre-training for face-to-face training: auditors in finance, product knowledge and GMP in manufacturing, health and safety, everything from nurse training to clinical guidelines in the NHS, and apprenticeships in a global hospitality company. All sorts of education and training in all sorts of contexts.
Conclusion

The breakthrough saw Microsoft and Baidu perform similarly, showing that the new AI war is between China and the US. That's a shame, but we still have some edge here in Europe and the UK, if we could only overcome our tendency to see AI as a dystopian entity and start to use this stuff for social good, rather than being obsessed with ill-informed critiques. If we don't, they will. These AI techniques have already hit the learning market, automating the production of learning in that huge motherlode of education and training: 101 courses and topics such as compliance, processes, procedures, product knowledge and so on. Beyond this, AI-driven curation, which we use to supplement the core courses, is also possible. If you want to see how AI and WildFire can help you create content quickly, at much lower cost and with increased retention, drop us a line and we'll arrange a demo.


Monday, January 08, 2018

Superfast AI creation of online learning in manufacturing - fast, cheap, effective

We clearly have a productivity problem in manufacturing, in part due to a lack of training and skills. As manufacturing becomes more complex and automated, it needs skills beyond those of the traditionally repetitive jobs that are being replaced. Could AI help solve this problem? AI may lead to a loss of jobs, but we're showing that it can also help train people for the jobs that remain, increasing productivity, and for the new jobs being created. We've been creating online learning quickly and at low cost through WildFire.
Productivity puzzle
The manufacturing sector continues to struggle for productivity, despite growing levels of economic activity. Manufacturing productivity actually fell by 0.2 per cent in the third quarter of 2016, compared with 0.3 per cent growth in services. Many attribute this, at least partially, to low skills and a lack of training. As productivity growth seems to have stalled, technology offers a reboot, both in process and in learning. Typically, 'basic goods' manufacturing has been stuck with a rather basic use of technology, in stark contrast to 'advanced manufacturing', which has been eager to adopt advanced technology. Both, however, have been tardy in their use of technology to get knowledge and skills to their staff, and both are far behind finance, healthcare, hospitality and other sectors. Understandably, learning in manufacturing has been largely classroom-based and learning by doing. Yet, as manufacturing becomes more complex, knowledge and skills have become ever more important.
Double-dividend
One immediate way to increase productivity is through online learning. This has a double dividend, in that it can save costs (travel, rooms, equipment and trainers) as well as increase productivity through better knowledge and skills. With access to mobile technology, learning can be delivered to a distributed audience, even on the shop floor. In addition, shift workers can access training in down-time and gaps in production.
Barriers
Manufacturing is often thought of as a sector not much involved in online learning. Several factors are at work here.
1. Lots of SMEs without large training budgets
2. Less likely to have an LMS to deliver content
3. Less likely to have L&D staff aware of online learning
4. Less access to devices for online learning
5. A practical environment where factory-floor training is more prevalent.
To make online learning work there needs to be more awareness of why online learning can help as well as how it can be done.
What we did
First we focused on basic, generic training needs, and produced dozens of modules on:
1. Manual handling
2. Health and safety
3. Good Manufacturing Practice (GMP)
4. Language of manufacturing
5. Gas Cylinders
6. Product knowledge
These are largely knowledge-based modules that underpin practical training in the lab, the workshop or on the factory floor. Bringing everyone up to a common standard really helps when it comes to practical, vocational training. You really should understand the science of gas storage and use if you handle dangerous gases and want to weld safely. In addition, we trained everyone from apprentices and administration staff to salespeople.
To this end we produced modules quickly and cheaply using WildFire, an AI service that takes any document, PowerPoint or video, and creates online learning in minutes not months. We have done this successfully in finance and healthcare but manufacturing posed different challenges.
1. Much of the training is text-heavy, drawn from manuals without any sophisticated use of images. We solved that through quick, low-cost photo-shoots, literally shooting to a shot list, as the online modules had already been created.
2. In not one case did we find an LMS (Learning Management System), so we had to deliver from the WildFire server. This actually had one great advantage, in that it freed us from the limitations of SCORM. We could gather oodles of data for monitoring and analysis.
3. Delivering online allows learners to train in down-time, at any time, 24/7.
4. It ensures consistency.
5. We could deliver to any device, especially mobile, which helped.
Conclusion
We are still delivering and analysing the results. Sure, there have been issues, especially the absence of L&D staff in the target organisations, but when it works, it works beautifully. If we are to take productivity seriously in the UK we must realise that this means better training and therefore better performance. Wouldn't it be wonderful if AI helped increase productivity through online learning, so that people can skill themselves into relevant employment? AI may automate parts of roles, but it can also be used to skill people for the newly created roles. If you want to find out more please inquire here.


Sunday, January 07, 2018

Astonishing fake in education and training - the graph that never was

I have seen this in presentations by the CEO of a large online learning company, the Vice-Chancellor of a university, Deloitte's Bersin, and in innumerable keynotes and talks over many years. It's a sure sign that the speaker has no real background in learning theory and is basically winging it. It is still a staple in education and training, especially in 'train the trainer' and teaching courses, yet a quick glance is enough to arouse suspicion.
Dale's cone



The whole mess has its origins in a book by Edgar Dale way back in 1946. There he listed things from the most abstract to the most concrete: Verbal symbols, Visual symbols, Still pictures, Radio recordings, Motion pictures, Exhibits, Field trips, Demonstrations, Dramatic participation, Contrived experiences and Direct, purposeful experiences. In the second edition (1954) he added Dramatised experiences and Television, and in the third edition, heavily influenced by Bruner (1966), he added the enactive, iconic and symbolic modes.
But let's not blame Dale. He admitted that it was NOT based on any research, only a simple intuitive model, and he did NOT include any numbers. It was, in fact, simply a graduated model to show the concreteness of different audio-visual media. Dale warned against taking it too seriously as a ranked or hierarchical order, which is exactly what everyone did. He actually listed the misconceptions in his 1969 third edition (pp. 128-134). So the first act of fakery was to take a simple model, ignore its original purpose and the author's warnings, and use it for other ends.
Add fake numbers
First up, why would anyone with a modicum of sense believe a graph with such rounded numbers? Any study that produces a series of results bang on units of ten would seem highly suspicious to anyone with the most basic knowledge of statistics. The answer, of course, is that people are gullible, especially to messages that appeal to their intuitive beliefs, no matter how wrong. The graph almost induces confirmation bias. In any case, these numbers are meaningless unless you define what you mean by learning and the nature of the content. Of course, there was no measurement – the numbers were made up.
Add Fake Author
At this point the graph has quite simply been sexed up by adding a seemingly genuine citation to an academic and a journal. The paper is real, about self-generated explanations, but it has nothing to do with the fake histogram. The lead author of the cited study, Dr. Chi of the University of Pittsburgh, a leading expert on 'expertise', when contacted by Will Thalheimer, who uncovered the deception, said, "I don't recognize this graph at all. So the citation is definitely wrong; since it's not my graph." Serious-looking histograms can look scientific, especially when supported by bogus academic credentials.
Add new categories
The fourth bit of fakery was to add 'teaching others' to the end, topping it up to, you guessed it, 90%. You can see what's happening here – flatter teachers and teaching, and they'll believe anything. They also added the 'Lecture' category at the front – and, curiously, CD-ROM! In fact, the histogram has appeared in many different forms, simply altered to suit the presenter's point in a book or course. This example is from Josh Bersin's book on blended learning; Bersin was bought by Deloitte. It is easy to see how the meme gets transmitted when consultants tout it around in published books. What happens here is that Dale's original concept is turned from a description of media into a prescription of methods.
The Colored Pyramid
The fifth bit of fakery was to sex it up with colour and shape, going back to the shape of Dale's cone but with the fake numbers and new categories added. It is a cunning switch, making it look like that other caricature of human nature, Maslow's hierarchy of needs. It suffers from the same simplistic idiocy as Maslow's pyramid – that complex and very different things lie in a linear sequence, one after the other. It is essentially a series of category mistakes, as it takes very different things and assumes they all have the same output – learning. In fact, learning is a complex thing, not a single output. A good lecture may be highly motivating, some semantic tasks are well suited to reading and reflection, discussion groups may be useless when struggling with deep and complex semantic problems, and so on. Of course, the coloured pyramid makes it look more vivid and real, all too easy to slot into one of those awful 'train the trainer' or 'teacher training' courses that love simplistic bromides.
Conclusion
What's damning is that this image and variations of the data have been circulating in thousands of PowerPoints, articles and books since the 60s. Investigations by Kinnamon (2002) found dozens of references to these numbers in reports and promotional material. Michael Molenda (2003) did a similar job. Their investigations found that the percentages have even been modified to suit the presenter's needs. This is a sorry tale of how a simple model, published with plenty of original caveats, can morph into a meme that lies about its author, fabricates its numbers, adds categories and is uncritically adopted by educators and trainers.
PS
Much of this comes from the wonderful Will Thalheimer's original work. I wanted to give it an extra, more structured spin on the development of the fakery.
Bibliography
Bruner, J. (1966). Toward a Theory of Instruction.
Dale, E. (1946). Audiovisual Methods in Teaching, 1st edition.
Dale, E. (1954). Audiovisual Methods in Teaching, 2nd edition.
Dale, E. (1969). Audiovisual Methods in Teaching, 3rd edition.
Kinnamon, J. C. (2002). Personal communication, October 25.
Kovalchick, A. and Dawson, K. (2004). Education and Technology.
Molenda, M. H. (2003). Personal communications.
Thalheimer, W. Blog post.


Saturday, December 23, 2017

Is debate around 'bias in AI' driven by human bias? Discuss

When AI is mentioned, it's only a matter of time before the word 'bias' is heard. They seem to go together like ping and pong, especially in debates around AI in education. Yet the discussions are often merely examples of bias themselves – confirmation, negativity and availability biases. There's little analysis behind the claims: 'AI programmers are largely white males so all algorithms are biased – patriarchal and racist', or the commonly uttered phrase 'All algorithms are biased'. In practice, you see the same few examples being brought up time and time again: the black face/gorilla mislabelling and reoffender software. Most examples have their origin in Cathy O'Neil's Weapons of Math Destruction. More on this later.
To be fair, AI is for most people an invisible force, the part of the iceberg that lies below the surface. AI is many things, it can be technically opaque, and true causality can be difficult to trace. So, to unpack the issue, it may be wise to look at the premises of the argument, as this is where many of the misconceptions arise.
Coders and AI
First up, the charge that the root cause is male, white coders. AI programmers these days are more likely to be Chinese or Indian than white. AI is a global phenomenon, not confined to the western world. The Chinese government has invested a great deal in these skills through Artificial Intelligence 2.0. The 13th Five-Year Plan (2016-2020), the Made in China 2025 programme, the Robotics Industry Development Plan and the Three-Year Guidance for Internet Plus Artificial Intelligence Plan (2016-2018) are all contributing to boosting AI skills, research and development. India has an education system that sees 'engineering' and 'programming' as admirable careers, and a huge outsourcing software industry with a $150 billion IT export business. Even in Silicon Valley the presence of Asian and Indian programmers is so prevalent that they feature in every sitcom on the subject. Even if the numbers were wrong, the idea that coders infect AI with racist code, like the spread of Ebola, is far-fetched. One wouldn't deny the probable presence of some bias, but the idea that it is omnipresent is ridiculous.
Gender and AI
True, there is a gender differential, and this will continue, as there are gender differences when it comes to the focused, attention-to-detail coding in the higher echelons of AI programming. We know that there is a genetic cause of autism, a constellation (not spectrum) of cognitive traits, and that this is heavily weighted towards males. For this reason alone there is likely to be a gender difference in high-performance coding teams for the foreseeable future. In addition, the idea that these coders are unconsciously, or worse, consciously creating racist and sexist algorithms is an exaggeration. One has to work quite hard to do this, and to suggest that ALL algorithms are written in this way is another exaggeration. Some may be, but most are not.
Anthropomorphic bias and AI
The term Artificial Intelligence can in itself be a problem, as the word 'intelligence' is a genuinely misleading, anthropomorphic term. AI is not cognitive in any meaningful sense, not conscious, and not intelligent other than in the sense that it can perform some very specific tasks well. It may win at Jeopardy, chess and GO, but it doesn't even know that it is playing these games, never mind that it has won. Anthropomorphic bias appears to arise from our natural ability to read the minds of others, leading us to attribute qualities to computers and software that are not actually there. Behind this basic confusion is the idea that AI is one thing. It is not. It encapsulates 2,500 years of mathematics, since Euclid put the first algorithm down on papyrus, and there are many schools of AI that take radically different approaches. The field is an array of different techniques, often mathematically quite separate from each other.
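Euclid's method for the greatest common divisor is often cited as that oldest recorded algorithm, and it shows what an algorithm actually is: a fixed recipe of steps with no intelligence, intention or awareness behind it. A few lines, rendered here in Python for illustration:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair with (divisor, remainder)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))   # 21 - the same answer, every time, for anyone who runs it
```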
ALL humans are biased
It is true that ALL humans are biased, as shown by the Nobel Prize-winning psychologist Daniel Kahneman and his colleague Amos Tversky, who exposed a whole pantheon of biases that we are largely born with and that are difficult to shift, even through education and training. Teaching is soaked in bias. There is socio-economic bias in policy, as it is often made by those who favour a certain type of education. Education can be bought privately, introducing inequalities. Gender, race and socio-economic bias are often found in the act of teaching itself. We know that gender bias is present in subtly directing girls away from STEM subjects, and we know that children from lower socio-economic groups are treated differently. Even so-called objective assessment is biased, often influenced by all sorts of cognitive factors – content bias, context bias, marking bias and so on.
Bias in thinking about AI
There are several human biases behind our thinking about AI.
We have already mentioned Anthropomorphic bias: reading 'bias' into software is often the result of over-anthropomorphising it.
Availability bias arises when we frame our thoughts around what comes most readily to mind, rather than pure reason. Crude images of robots come to mind as characterising AI, as opposed to software or mathematics, which are not, for most people, easy to call to mind or visualise. This skews our view of what AI is and of its dangers, often producing dystopian 'Hollywood' perspectives rather than objective judgement.
Then there’s Negativity bias, where the negative has more impact than the positive, so the Rise of the Robots and other dystopian visions come to mind more readily than positive examples such as fraud detection or cancer diagnosis.
Most of all we have Confirmation bias, which leaps into action whenever we hear of something that seems like a threat and we want to confirm our view of it as ethically wrong.
Indeed, the accusation that all algorithms are biased is often (not always) a combination of ignorance about what algorithms are and of these four human biases: anthropomorphism, availability, negativity and confirmation. It is often a sign of bias in the objector, who wants to confirm their own deficit-based weltanschauung and apply a universal, dystopian interpretation to AI, with a healthy dose of neophobia (fear of the new).
ALL AI is not biased
In your first lesson on algorithms you are likely to be taught some sorting method (there are many). It is difficult to see how sorting a set of random numbers into ascending order can be either sexist or racist. The point is that most algorithms are benign, doing a mechanical job free from bias. They improve strength, precision and consistency over time (robots in factories), compress and decompress communications, encrypt data, execute computational strategies in games (chess, GO, poker and so on), support diagnosis, investigation and treatment in healthcare, and reduce fraud in finance. Most algorithms, embedded in most contexts, are benign and free from bias.
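To see the point, here is the sort of algorithm taught in that first lesson, sketched in Python. It is a purely mechanical procedure: the same input always produces the same output, and there is simply no place in it for the coder's social attitudes to hide.

```python
def insertion_sort(values):
    """Sort numbers into ascending order by inserting each item into place."""
    result = list(values)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]   # shift larger items one place right
            j -= 1
        result[j + 1] = key
    return result

print(insertion_sort([42, 7, 19, 3, 88, 1]))   # [1, 3, 7, 19, 42, 88]
```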
Note that I said 'most', not 'all'. It is not true to say that all algorithms and/or data sets are biased, unless one resorts to the idea that everything is socially constructed and therefore biased. As Popper showed, such an all-embracing theory admits no possible refutation, as even the objections are interpreted as part of the problem. This is, in effect, a sociological dead-end.
Bias in statistics and maths
AI is not conscious or aware of its purpose. It is, as Roger Schank keeps saying, just software, and as such is not 'biased' in the way we attribute that word to humans. The biases in humans have evolved over millions of years, with additional cultural input. AI is maths, and we must be careful about anthropomorphising the problem. There is a definition of 'bias' in statistics, and it is not a pejorative term: it is precisely defined as the difference between an estimator's expected value and the true value of the parameter being estimated. If that difference is zero, the estimator is called unbiased. This is not so much bias as a precise recognition of differentials.
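As a small, self-contained illustration of that statistical sense of bias (my own example, not drawn from any particular AI system): the naive sample variance, which divides by n, systematically underestimates the true variance, while dividing by n-1 removes the bias.

```python
import numpy as np

rng = np.random.default_rng(1)
true_variance = 4.0                               # samples drawn from N(0, 2^2)
n, trials = 5, 200_000

naive, corrected = [], []
for _ in range(trials):
    sample = rng.normal(0, 2, size=n)
    naive.append(np.var(sample))                  # divides by n   (biased)
    corrected.append(np.var(sample, ddof=1))      # divides by n-1 (unbiased)

print(round(np.mean(naive) - true_variance, 2))      # about -0.8: biased
print(round(np.mean(corrected) - true_variance, 2))  # about  0.0: unbiased
```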
However, human bias can be translated into other forms of statistical or mathematical bias. One must distinguish between algorithms and data. There is no exact mathematical definition of 'algorithm'; in practice, bias is most likely to be introduced through the weightings and techniques used. Data, though, is where most of the problems arise. One example is poor sampling: too small a sample, under-representation or over-representation. Data collection can also be biased by faults in the gathering instruments themselves. Selection bias occurs when data is gathered selectively rather than randomly.
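And a similarly small sketch of the data-side problem. The flaw below is not in the averaging algorithm, which is identical in both cases, but in how the sample was gathered; the values are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
population = rng.normal(loc=50, scale=10, size=100_000)    # true mean is ~50

random_sample = rng.choice(population, size=500)           # gathered at random
selective_sample = population[population > 55][:500]       # gathered selectively

print(round(population.mean(), 1))        # ~50.0  the truth
print(round(random_sample.mean(), 1))     # ~50    random sampling: roughly right
print(round(selective_sample.mean(), 1))  # ~61    selection bias: badly skewed
```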
However, the statistical approach at least recognises these biases and adopts scientific and mathematical methods to try to eliminate them. This is a key point: human bias often goes unchecked, while statistical and mathematical bias is subjected to rigorous checks. That is not to say the process is flawless, but error rates and methods for quantifying statistical and mathematical bias have been developed over a long time, precisely to counter human bias. That is the essence of the scientific method.
An aside…
The word 'algorithm' induces a rather simplistic interpretation of AI. Some algorithms are not created by humans: code can create code, and some algorithms are deliberately generated in evolutionary AI to create variation, which is then selected against a fitness function. It's complex. There are algorithms in nature that determine genetic outcomes, the way plants grow and many other natural phenomena. Some think there is a set of deep algorithms that determine the whole of life itself. Evolutionary AI allows algorithms to be generated by algorithms themselves, in an attempt to mimic evolution, by defining fitness and selecting those that work. While it is true that bias can creep into this process, it is wrong to claim that all algorithms are created solely by the hand of a coder.
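Here is a toy sketch of that evolutionary loop, under the obvious simplification that a 'candidate algorithm' is just a vector of numbers: candidates are mutated and kept or discarded against a fitness function, rather than being written line by line by a coder.

```python
import random

def fitness(candidate):
    """Stand-in fitness function: how close the candidate's sum is to 100."""
    return -abs(sum(candidate) - 100)

# Start with a random population of candidate parameter vectors.
population = [[random.uniform(0, 10) for _ in range(10)] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)     # rank by fitness
    survivors = population[:10]                    # selection
    offspring = [[gene + random.gauss(0, 0.5) for gene in random.choice(survivors)]
                 for _ in range(10)]               # variation by mutation
    population = survivors + offspring

print(round(-fitness(population[0]), 2))   # distance from the target, falling towards 0
```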
AI and transparency
A common observation about contemporary AI is that its inner workings are opaque, especially in machine learning using neural networks. But compare this to another social good – medicine. We know it works but we often don't know how. As Jon Clardy, a professor of biological chemistry and molecular pharmacology at Harvard Medical School, says, the idea that drugs are the result of a clean, logical search for molecules that work is a 'fairytale'. Many drugs work but we have no idea why. Medicine tends to throw possible solutions at problems, then observe whether they work or not. Most AI is not like this, but some is. We need to be careful about bias, but in many cases, especially in education, we are more interested in outputs and attainment, which can be measured in relation to social equality and equality of opportunity. We have a far greater chance of tackling these problems using AI than by sticking to good, old-fashioned bias in human teaching.
Fail means First Attempt In Learning
Nass and Reeves, through 35 studies in The Media Equation, showed that the temptation to anthropomorphise technology is always there. We must resist it, and recognise that the temptation is itself a bias. When an algorithm, for example, labels a black face as a gorilla, it is not biased in the human sense of being a racist agent. The AI knows nothing of itself; it is just software. The error is a failed attempt to optimise, and this sort of error is often how machine learning actually learns. Indeed, repeated attempts at statistical optimisation lie at the very heart of what AI is. Failure is what makes it tick. The good news is that repeated failure results in improvement, in machine learning, reinforcement learning, adversarial techniques and so on. It is often absolutely necessary to learn from mistakes to make progress. We need to applaud failure, not jump on the bias bandwagon.
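As a minimal sketch of that 'fail, adjust, try again' loop (a toy fit of y = 3x, not any particular production system): each pass measures the error of the current guess and nudges the parameter to reduce it, so the repeated failures are precisely what drive the learning.

```python
def train(xs, ys, steps=200, learning_rate=0.05):
    w = 0.0                                   # start with a wrong guess
    for step in range(steps):
        error = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
        gradient = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * gradient         # adjust in the direction that reduces error
        if step % 50 == 0:
            print(f"step {step:3d}  error {error:.4f}")
    return w

xs, ys = [1, 2, 3, 4], [3, 6, 9, 12]          # data generated by y = 3x
print("learned weight:", round(train(xs, ys), 2))   # converges to 3.0
```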
When Google was found to be sticking the label 'gorilla' on black faces in 2015, there is no doubt that the result was racist in the sense of causing offence. But rather than someone at Google being racist, or a piece of maths being racist in any intentional sense, this was a systems failure. The problem was spotted and Google responded within the hour. We need to recognise that technology is rarely foolproof; neither are humans. Failures will occur. Machines do not have the cognitive checks and balances that humans have on such cultural issues, but they can be changed and improved to avoid them. We need to see this as a process and not block progress on the back of outliers. If mistakes are made, call them out, eliminate the errors and move on. FAIL in this case means First Attempt In Learning. The correct response is not to dismiss AI because of these failures but to see them as opportunities for success.
The main problem here is not the very real issue of eliminating bias from software, which is what we must strive to do, but the simple contrarianism behind much of the debate. This was largely fuelled by one book…
Weapons of 'Math' Destruction - sexed up dossier on AI?
An unfortunate title, as O'Neil's supposed WMDs are as bad as Saddam Hussein's mythical WMDs: the evidence similarly weak, sexed up and cherry-picked. This is the go-to book for those who want to stick it to AI by reading a pot-boiler. But rather than taking an honest look at the subject, O'Neil takes the 'Weapons of Math Destruction' line far too literally, unwittingly re-using a term that has come to mean exaggeration and untruths. The book has some good case studies and passages, but the search for truth is lost as she tries too hard to be a clickbait contrarian.
Bad examples
The first example borders on the bizarre. It concerns a teacher who was supposedly sacked because an algorithm said she should be sacked. Yet the true cause, as revealed by O'Neil herself, was other teachers cheating on behalf of their students in tests. Interestingly, they were caught through statistical checking, as too many erasures were found on the test sheets. That's more man than machine.
The second is even worse. Nobody really thinks that US college rankings are algorithmic in any serious sense. The ranking models are quite simply statistically wrong. The problem is not the existence of fictional WMDs but schoolboy errors in the basic maths. It is a straw man, as the rankings use subjective surveys and proxies, and everybody knows they are gamed. Malcolm Gladwell did a much better job of exposing them as self-fulfilling exercises in marketing. In fact, most of the problems uncovered in the book, if one does a deeper analysis, are human.
Take PredPol, the predictive policing software. Sure, it has its glitches, but the advantages vastly outweigh the disadvantages, and the system, and its use, evolve over time to eliminate the problems. The main problem here is a form of bias or one-sidedness in the analysis. Most technology has a downside. We drive cars despite the fact that well over a million people die gruesome and painful deaths every year in car accidents. Rather than tease out the complexity, even comparing upsides with downsides, we are given over-simplifications. The proposition that all algorithms are biased is as foolish as the idea that all algorithms are free from bias. This is a complex area that needs careful thought, and the real truth lies, as usual, somewhere in between. Technology often has this cost-benefit feature. To focus on just one side is quite simply a mathematical distortion.
The chapter headings are also a dead giveaway - Bomb Parts, Shell Shocked, Arms Race, Civilian Casualties, Ineligible to serve, Sweating Bullets, Collateral Damage, No Safe Zone, The Targeted Civilian and Propaganda Machine. This is not 9/11 and the language of WMDs is hyperbolic - verging on propaganda itself.
At times O'Neil makes good points on data – small data sets, subjective survey data and proxies – but this is nothing new and features in any 101 statistics course. The mistake is to pin the bad data problem on algorithms and AI; that is often a misattribution. Time and time again we get straw men in online advertising, personality tests, credit scoring, recruitment, insurance and social media. Sure, problems exist, but posing marginal errors as a global threat is a tactic that may sell books but is hardly objective. In this sense, O'Neil plays the very game she professes to despise – bias and exaggeration.
The final chapter is where it all goes badly wrong, with the laughable Hippocratic Oath. Here's the first line of her imagined oath: "I will remember that I didn't make the world, and it doesn't satisfy my equations" – a flimsy line. There is, however, one interesting idea: that AI be used to police itself. A number of people are working on this, and it is a good example of seeing technology realistically, as a force for both good and bad, where the good will triumph if we use it for human good.
This book relentlessly lays the blame at the door of AI for all kinds of injustices, but mostly it exaggerates or fails to identify the real, root causes. The book is readable, as it is lightly autobiographical, and it does pose the right questions about the dangers inherent in these technologies. Unfortunately it provides exaggerated analyses and rarely the right answers. Let us remember that the original Weapons of Mass Destruction turned out to be lies, used to promote a disastrous war, sexed up through dodgy dossiers. So it is with this populist paperback.
Conclusion

This is an important issue that is being clouded by often uninformed and exaggerated positions. AI is unique, in my view, in having a large number of well-funded entities set up to research and advise on the ethical issues it raises. They are doing a good job of surfacing issues and suggesting solutions, and will influence regulation and policy. Hyperbolic statements based on a few flawed, meme-like cases do not solve the problems that will inevitably arise. Technology is almost always a balance of upsides and downsides; let's not throw the opportunities in education away on the basis of bias, whether in commentators or in AI.
