http://www.cbc.ca/news/technology/turing-test-passed-by-computer-1.2669649
Anyone else catch this in their news feeds today?
I'm hovering between being mildly impressed and calling bullshit. The whole "acts like a 13-year-old on the internet" angle really feels like it's skirting the intent of the test, and I'm also suspicious that if we lower the bar that far, a great number of less sophisticated bots (even basic things like IRC bots) would have a fair chance of passing.
Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]
The bar is already pretty low; one only has to fool 30% of evaluators to pass. The test, really, is whether the bot can hold a reasonable conversation as a believable human, not whether it is the smartest or the most sophisticated. I think any bot that intends to pass the test has to have a believable personality, otherwise the conversation would seem too mechanical. Maybe a 13-year-old is low-hanging fruit, but it's still the first to succeed. Maybe that says more about the quality of 13-year-olds we produce than it does about the test, though.
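Just to make that threshold concrete, here is a minimal sketch (Python, with entirely made-up judge verdicts rather than data from the actual event) of what "passing" amounts to under the reported 30% criterion:

# Made-up judge verdicts: True means that judge mistook the bot for a human.
judge_verdicts = [True, False, False, True, False, False, True, False, False, False]

fooled_fraction = sum(judge_verdicts) / len(judge_verdicts)
passed = fooled_fraction >= 0.30  # reported threshold: fool at least 30% of evaluators

print(f"Fooled {fooled_fraction:.0%} of judges -> {'pass' if passed else 'fail'}")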
Also -- I smell a premise ripe for sci-fi: instead of the benevolent AI taking over the world because it logically concludes that wiping out humanity is the right thing to do, the AI is a know-it-all 13-year-old with unlimited power and knowledge, feeling all the angsty teenage emotions, and is just acting out...
throw table_exception("(╯°□°)╯︵ ┻━┻");
Yesterday's news ;)
Today several news outlets are (rightfully) criticizing it a lot:
Also -- I smell a premise ripe for sci-fi: instead of the benevolent AI taking over the world because it logically concludes that wiping out humanity is the right thing to do, the AI is a know-it-all 13-year-old with unlimited power and knowledge, feeling all the angsty teenage emotions, and is just acting out...
It's actually a pretty popular sci-fi trope, either in the form of an AI that has just attained consciousness and hence has child-like emotional development, or in the form of the minds of critically wounded children being repurposed as AI replacements (e.g. The Ship Who Sang).
Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]
I find this far more interesting as a null result of Turing's hypothesis.
The idea of the Turing Test is that a reasonable proportion of the humans interacting with the computer cannot reliably differentiate between the computer and an actual person. The implication of the test is that a sufficiently convincing program can be said to "think."
But we now have a non-thinking program which has passed the test. This means the original hypothesis is incorrect.
That opens the doors back up to the philosophical debate: what actually constitutes a machine that can think? Obviously, producing a convincing human-like set of interactions is no longer sufficient to qualify.
We need something new to replace the Turing Test.
Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]
I think we should give Turing a chance until his test has actually been properly tested. This particular test seems pretty silly.
But I agree that humans seem to be just too easy to fool for the test to be a reliable way of deciding true intelligence.
Anyone can try the chatbot in question here:
http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/
Personally I have a hard time understanding how anyone could mistake it for a human... least of all anyone actually seriously trying to decide.
Wow. Pretty sure I have talked to more convincing AIM bots.
Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]
But we now have a non-thinking program which has passed the test. This means the original hypothesis is incorrect.
Or the specific test that has been 'passed' wasn't designed within the spirit of the original idea for the test...
Did Turing ever describe the test in detail? If someone had asked him, "What if I design a machine via an intricate set of non-thinking rules, to parrot out statements that would fool a minority of people into believing the machine is an adolescent boy, assuming they only speak to it for less than 5 minutes?", would he agree that this design fell within the spirit of his idea? Would he conclude that such a machine was "thinking"?
IMHO, when you say it out loud like that, it's pretty obvious that this is not in the spirit of the test, when you know the test is supposed to show evidence of thought... A machine that can integrate itself into human society, making friends and fooling those friends and its coworkers into believing that it's a real person 24/7, is worlds apart from the above demonstration...
. 22 Racing Series .
I'm hovering between being mildly impressed and calling bullshit.
I'm gonna go ahead and call bullshit. If the link Olof Hedman posted points to the correct chatbot, then the 5-minute limitation isn't even a serious constraint. You only need a couple of responses to see that every answer is completely unrelated to the question, probably chosen by one or two keywords found in the question, without any sense for grammar or semantics. A Google web search is more convincing.
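For what it's worth, that kind of behaviour (a canned reply triggered by one or two keywords, with no grasp of grammar or meaning) is trivial to knock together. A minimal sketch in Python, with entirely made-up keywords and replies rather than anything from the actual bot:

import random

# Entirely made-up keyword -> canned-reply table, just to illustrate the
# "match a keyword, ignore the rest of the question" behaviour.
CANNED_REPLIES = {
    "game": "I like computer games, my friends and I play a lot.",
    "school": "School is boring, I would rather chat with you.",
    "music": "I listen to music all day, it helps me think.",
}

# Fallback deflections for questions with no recognised keyword.
DEFLECTIONS = [
    "That is a very interesting question. But what about you?",
    "I do not want to talk about that, ask me something else.",
]

def reply(question):
    # Return the first canned reply whose keyword appears in the question;
    # otherwise deflect at random. The question's grammar is never parsed.
    lowered = question.lower()
    for keyword, canned in CANNED_REPLIES.items():
        if keyword in lowered:
            return canned
    return random.choice(DEFLECTIONS)

print(reply("Do you go to school?"))            # keyword hit: "school"
print(reply("What is the capital of France?"))  # no keyword: random deflection

A couple of exchanges with something like this make it obvious why the answers feel completely unrelated to the questions.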
Over the last few years, all scientific ideas have degenerated into that terrible synchronous jumping of children in the UK to produce an earthquake, which appears to be an apotheosis of so-called "modern science".