TDR on toll roads will be so funny!
But mostly I am waiting for GW support!
The question is if AI drivers will be better than the average human driver. I think they will be until their OS crashes.
There is a difference between being able to beat a human at chess (or Go, more recently), and being intelligent.
Google itself just demonstrated how fucking useless so-called AI is.
True, but--
That's why Doug Lenat has spent the last 30 years taking a different approach with Cyc. I remember reading a chapter about his work in a book about AI in the library ~15 years ago and thinking to myself "He will never finish." Then I read recently that it is "done", sold and being put to work at Lucid.
As long as no people die, it's shoved under the carpet
Sadly true.
The official story is always that it was the pilot's fault, especially if BEA and Airbus are involved. That's what they've been doing since the early 1990s. Whenever the AI fucks up and an airplane leaves a long trench in a nearby forest, it was the pilot's fault.
I always thought these types of investigations were very thorough. But you could be right.
I don't care if Vidia lets a formula race car drive autonomously. People who go to car races expect crashes and kinda expect being killed in the worst case, too. I don't care if Google has self-driving cars in the USA which ignore the right of way and ram buses (because yeah, the AI is always right).
Humans make errors too. I remember once, just as the bus I'd boarded started to pull away, somebody outside wolf-whistled for it to stop. The driver did, and then we all heard a loud thud behind the bus: a car making a turn hadn't expected the bus to stop so soon and drove right into it. The driver was okay.
An AI could probably get a better car insurance quote than humans in the future.
I don't mind if Amazon has autonomous drones flying over your house that crash unexpectedly into your garden (or... right onto your head). Hey, who knows, you might find something valuable in the parcel.
Don't they have emergency parachutes/alarms? Amazon would have to award the victim(s) with a lot more than just a parcel if that were to happen.
You would hope that they're thorough, but oh well.
Ah yes, certainly. The problem is, when AI makes a mistake, it is often disastrous with no recovery. Also, disastrous failure usually happens whenever something "normal" happens that was not foreseen.
Of course, humans have disastrous failures as well (think of the officer on duty at the Fukushima plant that day), but not nearly as often in "normal" situations. Also, they are usually -- not always, but usually -- much better at mitigating damage and at avoiding facepalm situations altogether.
Hahaha, the number of fatal auto collisions that happen per day kind of points more towards humans not being so great at mitigating damage and avoiding facepalm situations. Hell, we can't even get people to stop texting while driving.
Yes, at the specific task of driving, we are beginning to enter a phase when automated solutions are better than humans in the most common scenarios.
Commodity self-driving cars are a brand new technology, but their AI is still a specialized system. They fail badly in situations outside their programming: construction zones, rain, snow, high winds, and assorted other problems. Developers are handling these one at a time as best they can (wind, for example, is detected as the vehicle not moving where they expect it to, so they turn the wheel to fight the drift), but many special cases will be extremely difficult to handle correctly through an AI.
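To make that concrete, here is a minimal sketch of what one of those hand-written special cases tends to look like. The position inputs, gain, and deadband are made-up illustrations, not any real vehicle's API.

```python
# Minimal sketch of a one-off special-case handler for crosswind, as described
# above. The planner never "sees" wind directly; it only notices the car is not
# where it expected to be and steers against the drift. All values are made up.

def crosswind_correction(expected_lateral_pos, measured_lateral_pos,
                         gain=0.5, deadband=0.05):
    """Return a steering correction (arbitrary units) that fights lateral drift
    not explained by the planned path."""
    lateral_error = measured_lateral_pos - expected_lateral_pos  # metres, + = drifted right
    if abs(lateral_error) < deadband:
        return 0.0                    # ignore tiny errors to avoid twitchy steering
    return -gain * lateral_error      # steer back toward the planned line

# Each oddity (rain, snow, construction cones, ...) tends to get its own
# hand-written rule like this, which is why the special cases keep piling up.
```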
Trying to bring this back on topic, the definitions of AI have always been in flux. Alan Turing's famous test was completely arbitrary, but it gave a clear target: a keyboard and screen where you could text chat with an unknown party on the other side, and where, within 5 minutes, the machine could convince at least 70% of arbitrary human judges that the one on the opposite side was a human.
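Just to illustrate the pass/fail criterion (the 70% threshold is the figure used above, and the judge verdicts are made-up data):

```python
# Toy illustration of the imitation-game criterion described above: each judge
# chats for 5 minutes, then votes. The threshold and verdicts are made up.

def passes_imitation_game(judge_verdicts, threshold=0.70):
    """judge_verdicts: list of booleans, True = judge believed the machine was human."""
    fooled = sum(judge_verdicts) / len(judge_verdicts)
    return fooled >= threshold

print(passes_imitation_game([True, True, False, True, True]))  # 0.8 >= 0.7 -> True
```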
Many people have offered additional AI tests, with variations on rules and requirements. Some want to imitate humans with all their flaws. Others want ideal responses without human flaws. Most are specialized solutions for specific tasks or sets of knowledge.
At various Computer/Human Interaction (CHI) and Human/Robot Interaction (HRI) conferences there have been long-running challenges to build a robot that could self-navigate through the entire conference, from introducing itself at registration to attending events to doing a certain amount of socialization, operating autonomously all the way through the end of the conference.
We are not to the point of the Technological Singularity yet, but the technology is advancing to that point.
Whatever bar we set, whatever we declare to be our new definition of AI, it can likely be broken. The philosophical questions of self-awareness and the nature of humanity start to come into play.
Years ago, back in grad school, I had a machine learning teacher who asked several interesting questions like that. One was a thought experiment for an assignment:
What if we could build a computer that took as input everything a person had ever done? Through video recordings or remote sensors of some kind, the system is given all the information about your outward actions: all the experiences, the books, the music, all the stimulation, and it also learns your responses, your mannerisms, your behavior, your speech, even your subtle eye movements, twitches, and heart rate changes. Then let's also say we build a remote body that looks identical to you, well past the uncanny valley, indistinguishable from you. If the two were somehow combined, the robot doppelganger could convince your parents, your spouse, your children, your co-workers, everyone who knew you, that the machine was actually you. Your responses, your mannerisms, your conversation, everything would match what you would normally do. The robot can discuss itself as though it were self-aware, because you, the person it was trained on, could also discuss yourself as though you were self-aware. The machine is also allowed to continue learning from what it is exposed to, training on that new input just as it trained on everything up to that point in your life.
Would that machine, which looks and acts and behaves indistinguishably from you, and even argues that it is self-aware and intelligent, be considered a self-aware intelligence?
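In machine-learning terms, the thought experiment amounts to behavioral cloning on a lifetime of recordings plus continual learning afterwards. A rough sketch, where the class, method names, and sklearn-style model interface are all illustrative assumptions rather than a real system:

```python
# Rough sketch of the thought experiment: fit a model on everything the person
# was ever recorded doing, then keep updating it on new experience. Assumes a
# sklearn-style estimator with fit / predict / partial_fit; everything here is
# an illustrative assumption, not a real system.

class Doppelganger:
    def __init__(self, model):
        self.model = model  # any supervised learner with fit/predict/partial_fit

    def train_on_life_log(self, observations, responses):
        """Stimuli in, observed behavior (speech, gestures, eye movements) out."""
        self.model.fit(observations, responses)

    def act(self, new_observation):
        """Respond the way the original person would, then keep learning from
        the new experience as though it were more of the same life."""
        response = self.model.predict([new_observation])[0]
        self.model.partial_fit([new_observation], [response])
        return response
```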
AI does not need to hit the Technological Singularity before questions about the nature of intelligence and artificial intelligence become relevant.
AI doesn't need to be perfect, it just needs to be consistently better than humans on average.
People make a huge deal about "Well what happens if it KILLS somebody!?!" to which the proper logical reply is "So what if it does?" Human drivers and doctors kill hundreds of people per day across the globe. Possibly thousands a day, I haven't actually looked at numbers on the global scale. Even if AI drivers kill dozens of people every single day... Well, it is still a step up from where we are now if we ban human drivers on public roads.
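For scale, a quick back-of-the-envelope using the commonly cited figure of roughly 1.3 million road deaths per year worldwide (an approximation from memory, not a number taken from this thread):

```python
# Back-of-the-envelope check of the "possibly thousands a day" guess above,
# using the commonly cited figure of roughly 1.3 million road deaths per year.
annual_road_deaths = 1_300_000       # approximate global figure
per_day = annual_road_deaths / 365
print(round(per_day))                # ~3562 deaths per day from human-driven traffic
```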
On the contrary. First, you have to consider the numbers. Billions of people driving every day obviously means more opportunity for failure, even more so as the average person is not dedicated to what they are doing. Given that, the number of accidents is astonishingly small. An AI will fail because it's a somewhat cloudy day or because it's raining, and that's with only a few dozen of them driving worldwide.
Only some AIs need to be better than humans. Some AIs should mimic human behavior, including human errors. Some AI systems should intentionally have degraded behavior. This is especially true in games: an AI that performs far better than humans is likely to provoke anger and hostility rather than quality gameplay.
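A minimal sketch of that kind of deliberately degraded game AI, assuming a made-up evaluation function and move list:

```python
# Minimal sketch of intentionally degraded game AI as described above: the bot
# knows the best move but deliberately blunders some of the time so its play
# feels human. The evaluation function and move list are illustrative only.
import random

def pick_move(legal_moves, evaluate, blunder_rate=0.15):
    """Return the best move most of the time, a random weaker one otherwise."""
    ranked = sorted(legal_moves, key=evaluate, reverse=True)
    if random.random() < blunder_rate and len(ranked) > 1:
        return random.choice(ranked[1:])   # deliberate, human-looking mistake
    return ranked[0]
```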
Yes, autonomous vehicles are one area where we want behavior as close to ideal as we can get. But that is only one small area of AI among all its fields.