Day one of my Artificial Intelligence course at university introduced the Turing Test: the ability of a computer to fool a human into believing it is itself human. Posed by Alan Turing in his 1950 paper 'Computing Machinery and Intelligence', and underpinning the philosophy of artificial intelligence, the test rests on the hypothesis that if a machine acts as intelligently as a human being, then it is as intelligent as a human being. In a polite conversation, the computer is challenged with fooling 30% of the humans in a tested sample.
It was claimed at the Turing Test 2014 competition at the Royal Society in London that a chat-bot called Eugene managed to convince 33% of the test participants that it was a 13-year-old non-native-English-speaking Ukrainian boy, thereby officially passing the Turing Test more than 60 years after its conception.
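The popularised pass criterion boils down to simple arithmetic: the fraction of judges fooled must reach the 30% bar. A minimal sketch of that check (the function name and judge counts are my own illustration, not figures from the actual event):

```python
# Hypothetical sketch of the popularised Turing Test pass criterion:
# a chat-bot "passes" if it fools at least 30% of human judges.

def passes_turing_test(judges_fooled: int, total_judges: int,
                       threshold: float = 0.30) -> bool:
    """Return True if the fooled fraction meets the threshold."""
    return judges_fooled / total_judges >= threshold

# Illustrative numbers only: 33% fooled clears the 30% bar.
print(passes_turing_test(33, 100))  # True
print(passes_turing_test(29, 100))  # False
```

On these terms, Eugene's claimed 33% scrapes past, which is precisely why the margin invites scrutiny.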
It’s an impressive-sounding claim, but it is let down somewhat by some essential caveats: the chat-bot’s abilities are based on a script rather than cognitive thought, and its persona as a non-native English speaker means any grammatical or conversational errors in the software can be attributed to that. There is also a good deal of scepticism about the relevance of the Turing Test in the field of AI, given that it only tests whether the computer behaves like a human being, not whether it behaves intelligently, and other tests have been created to benchmark more complex problem-solving abilities.
This conversational area of AI is received with caution in the press due to the potential for fraud (where the ‘please wire $X00,000 to me’ type of scam can be perpetrated by a ‘real’ person, rather than by email); my own view is one of interest in how this could be used for more effective communication from large institutions (I abhor ‘Press 1 to …’) or for combatting cyber-crime (as demonstrated by the program ‘Sweetie’, which was designed to catch internet predators). With luck we should see the benefits of artificial conversation without the sass of Arthur C. Clarke’s HAL 9000.
Drones, drones everywhere
Some poor chap got assaulted this week because he was flying a quad-copter on a beach. Someone took umbrage, decided he was taking pictures of people, called 911 and smacked him. Thankfully the chap was able to corroborate his account of events, and the perpetrator was arrested.
So, he has a quad-copter. I have one of those, too; it’s fun to fly, and it doesn’t require detangling, unlike my kite (I’m not a particularly skilled kiter, or whatever you call them). The copter is a plaything, a toy. It has, however, fallen victim to association with the current buzzword, ‘drone’, which carries more than slightly menacing overtones. The next time I take my copter up to Durdham Downs, I’ll be sure to look over my shoulder.
The attack was unfounded, but the reasoning behind it appears to be the lingering fear over drones, surveillance, the NSA and 1984; it echoes similar problems encountered by some people wearing Google Glass. It’s not accepted yet. Big data, surveillance, security, tracking, hacking, Smart Thing X: it’s all a sizeable unknown, and it’s scary for a lot of people. When Google announced the ability to remove personal data from search results, they were inundated with requests; when I’m conducting user testing, the things I hear time and time again are ‘Why do they want my data?’, ‘I don’t want to hand over my telephone number’ and ‘How do I know it’s genuine?’.
This fear is pushing the design community to produce more elegant methods of securing and then accessing data, and to minimise and streamline the amount of data that’s captured, which in my book is a good thing. But I wonder what will come of this fear in a future where general knowledge and acceptance fail to keep pace with innovation.
Google’s approach to self-driving cars has been to retrofit their technology to Priuses and Lexus RXs, and these cars have just been approved for use on public roads in California. Now, Google are taking this a step further with a rather bubbly-looking prototype that raised many eyebrows by having no steering wheel, mirrors or pedals. This announcement parallels research at Oxford University, where a car is being developed that can recognise its surroundings and memorise regular routes.
So, driverless cars appear to be well on their way, like it or not… and the authorities in the US and Europe are recognising this and altering legislation to reflect these developments. I think once they’re bedded in (X years down the line), they’ll be trusted and accepted, with the exception of the people who drive for fun, but the road to get there could be unnervingly rocky, and I hope the inevitable threat of hacking is treated appropriately.
If you don’t like self-driving cars, why not fly?
While Google’s new car is getting a lot of hype, let’s not forget there’s a company in California taking pre-orders for (I got really excited when I heard about this) a hover bike. The Aero-X looks, in its prototype video, slightly less than stable, but we are assured that in 2016/2017 a gorgeous, streamlined version will be available. Flying a maximum of 12 feet off the ground, it won’t require a pilot’s license (although, presumably, you won’t be allowed to just fly over the top of traffic jams?). It sounds a bit too good to be true, and with a price tag of approximately $85,000 this iteration is unlikely to take off (sorry). Keep up the good work though, chaps; I want to see a hover board at some point.