Dear all,
The aim of this week's Food for Thought is not to scare but to encourage deeper thinking about the arrival of AI. There is no point burying our heads in the sand, dismissing AI as a figment of science fiction, or pretending that it will not affect us "superior" humans. It is clear that the Fourth Industrial Revolution is underway, that there is a race to develop AI for military use, and that quantum computing promises to revolutionize all of our systems. As Russian President Vladimir Putin said, "Artificial intelligence is the future, not only of Russia, but of all of mankind... Whoever becomes the leader in this sphere will become the ruler of the world."
For many leading thinkers in this field, all of this is a given; what they find disturbing is the lack of collaboration and joint action to create codes of development that could guide innovation so that it remains aligned with human interests. The BIG question for us in education, of course, is how we best shape our schools, and the teaching and learning within them, to prepare students both for what we can try to predict and for what we can only imagine.
Often when I talk to staff about AI there is a belief that it won't impact teaching. I feel this may be misguided logic unless education changes quickly and teachers develop their role into that of a coach, advisor or learning/life guide. In Japan, replacing teachers with AI has already begun in language classes, as this short article, Robots replace language teachers, shows.
I recognize that this amounts to about an hour of watching and reading, but I feel that if we are to create the best learning environment for the future, we all need to understand the extremes of the spectrum so that we can think deeply about preparing our students for their future.
I thought it might be worth starting with this video because it is a useful introduction to AI and to many of the lines of thought about it. I must apologize in advance: built into the narrative there is advertising from the sponsors of Thought2. The video illustrates, at the simplest level, how AI is already with us, how it is developing, and where we humans stand in the table of intelligence.
Sam Harris is the philosopher and author of a good book I read recently, Waking Up. His talk is 14 minutes long and raises questions about how we are positioning ourselves for the advent of AI in our lives. As always, he raises questions that should be asked but aren't.
" Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants"
The next video features the thoughts of Elon Musk, taken from a collection of his speeches and a number of other sources. “I have exposure to the most cutting edge AI and I think people should be really concerned about it,” Musk once said in a speech at the National Governors Association meeting in Rhode Island. “I keep sounding the alarm bell.” Musk is working hard to develop some common thinking and codes for the development of AI. This video links with Sam Harris's talk. Musk is excited by the implanting of AI technology in our brains as the source of our future intelligence.
"If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order. People have written extensively about why it is basically analytically compulsory to conclude that in the default scenario, in the absence of major surprising developments or concerted integrated effort in the long term, artificial intelligence will replace humanity."
In the final talk for part 2, What happens when our computers get smarter than we are?, Nick Bostrom brings together many of the fears from the previous videos in a philosophical manner, with many practical examples of the fears surrounding a superintelligent computer. His talk is fairly optimistic and centres on making superintelligent computers safe. BUT he stresses that this depends on us creating a superintelligent machine that learns how to discover our values and applies them to every problem it encounters.
"Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?"
Have a good weekend,
Adrian