
nVidia helps to provide AI for driverless formula race cars

19 comments, last by BrianRhineheart 8 years, 3 months ago

TDR on toll roads will be so funny!

But mostly I am waiting for GW support!

"Recursion is the first step towards madness." - "Skeggöld, Skálmöld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

The question is whether AI drivers will be better than the average human driver. I think they will be, until their OS crashes.

There is a difference between being able to beat a human at chess (or Go, more recently), and being intelligent.

Google itself just demonstrated how fucking useless so-called AI is.

True, but--

That's why Doug Lenat has spent the last 30 years taking a different approach with Cyc. I remember reading a chapter about his work in a book about AI in the library ~15 years ago and thinking to myself "He will never finish." Then I read recently that it is "done", sold and being put to work at Lucid.

As long as nobody dies, it's swept under the carpet.

Sadly true.

the official story is always that it was the pilot's fault, especially if BEA and Airbus are involved. That's what they've been doing since the early 1990s. Whenever the AI fucks up and an airplane leaves a long trench in a nearby forest, it was the pilot's fault.

I always thought these types of investigations were always very thorough. But you could be right.

I don't care if nVidia lets a formula race car drive autonomously. People who go to car races expect crashes and, in the worst case, kind of expect to be killed, too. I don't care if Google has self-driving cars in the USA which ignore the right of way and ram buses (because yeah, the AI is always right).

Humans make errors too. I remember once, just as a bus I had boarded started to pull away, somebody outside wolf-whistled for it to stop. The driver did, and then we all heard a loud thud behind the bus: a car making a turn hadn't expected the bus to stop so soon and drove right into it. The car's driver was okay.

An AI could probably get a better car insurance quote than a human in the future.

I don't mind if Amazon has autonomous drones flying over your house that crash unexpectedly into your garden (or... right onto your head). Hey, who knows, you might find something valuable in the parcel.

Don't they have emergency parachutes/alarms? Amazon would have to compensate the victim(s) with a lot more than just a parcel if that were to happen.

the official story is always that it was the pilot's fault, especially if BEA and Airbus are involved. That's what they've been doing since the early 1990s. Whenever the AI fucks up and an airplane leaves a long trench in a nearby forest, it was the pilot's fault.

I always thought these types of investigations were always very thorough. But you could be right.


samoth is one angry tinfoil hat with some weird ideas. If you feel like it, have a look at some of the things he's been going on about in the lounge in the past. His posts there require significant amounts of sodium chloride.

I always thought these types of investigations were always very thorough. But you could be right.

You would hope that they're thorough, but oh well.

Usually these investigations take something like 6-8 months. In March 2015, they made their official, public statement about what had presumably happened in under 48 hours. The black box was taken by the BEA as usual, and nobody can tell what they did with the data. Everything, including the alleged voice recording which backs up their story, is super secret.

The "funniest" case (not so funny for the three or four people who died) was the Mulhouse-Habsheim incident in 1988, one of the first flights with a "smart" fly-by-wire airplane. It was a demonstration of how awesome the new technology was, in which the airplane was supposed to fly at 100 ft.
Surprise, surprise, the airplane went below 30 ft although it was clearly told to stay above 100 ft. The pilot (not a bloody novice, by the way) noticed, went full throttle and pulled up. The computer said "Nah, you know what, fuck you" and the airplane cut a trench into the nearest forest.

The black box "disappeared" for ten days before it was returned to the police; its case had been opened, and it was unclear whether the data tape was the original one at all (well, duh, of course it wasn't). The voice recorder had the pilot going "WTF?" while he tried to go full throttle and pull up, but nothing happened.

Both problems that occurred during the incident (failing to hold altitude and failing to go to full throttle) were well known, too; the respective OEBs existed, only Airbus didn't bother to publish them (so the pilot really had no way of knowing the computer would react that way).

There were allegations that Airbus offered the pilot a couple of million if he agreed it had been his fault, but he declined (that part of the story is obviously somewhat uncertain... but with everything else that happened, I'm inclined to believe it). The end of the story was that both pilot and co-pilot were convicted of manslaughter.

Humans make errors too.

Ah yes, certainly. The problem is, when AI makes a mistake, it is often disastrous with no recovery. Also, disastrous failure usually happens whenever something "normal" happens that was not foreseen.

Of course, humans have disastrous failures as well (think of the officer on duty at the Fukushima plant that day), but not nearly as often in "normal" situations. Also, they are usually -- not always, but usually -- much better at mitigating damage and at avoiding facepalm situations altogether.
A human pilot would, for example, not decide to turn an airplane upside down based solely on one sensor going crazy. Not as long as the liquid in his drink stays inside the glass.

Of course, humans have disastrous failures as well (think of the officer on duty at the Fukushima plant that day), but not nearly as often in "normal" situations. Also, they are usually -- not always, but usually -- much better at mitigating damage and at avoiding facepalm situations altogether.

Hahaha, the number of fatal auto collisions that happen per day kind of points more towards humans not being so great at mitigating damage and avoiding facepalm situations. Hell, we can't even get people to stop texting while driving.

Yes, at the specific task of driving, we are beginning to enter a phase where automated solutions are better than humans in the most common scenarios.

Commodity self-driving cars are a brand-new technology, but their AI is still a specialized system. They fail badly in situations outside their programming: construction zones, rain, snow, high winds, and assorted other problems. Developers are handling these one at a time as best they can (winds, for instance, are detected as the vehicle not moving where expected, so the system turns the wheel to fight the drift), but many special cases will be extremely difficult to handle correctly through an AI.
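
Just to make that wind example concrete, here is a rough sketch of the kind of special-case handling being described: the controller never senses the wind itself, it just infers a disturbance from lateral drift and counter-steers. Every name, unit, and gain below is invented for illustration, not anyone's actual implementation.

```python
# Minimal sketch of the wind-compensation heuristic described above:
# the controller infers a disturbance from the gap between where it expected
# to be and where it actually is, then counter-steers. All values are illustrative.

def steering_correction(expected_lateral_m: float,
                        actual_lateral_m: float,
                        gain: float = 0.1,
                        max_correction_rad: float = 0.05) -> float:
    """Return a small steering adjustment (radians) that fights lateral drift."""
    drift = actual_lateral_m - expected_lateral_m   # positive = pushed right
    correction = -gain * drift                      # steer against the drift
    # Clamp so one odd sensor reading can't command a violent swerve.
    return max(-max_correction_rad, min(max_correction_rad, correction))

# Example: the car expected to hold the lane centre (0.0 m) but has drifted
# 0.3 m to the right, so it steers slightly left.
print(steering_correction(0.0, 0.3))   # about -0.03
```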

Trying to bring this back on topic, the definitions of AI have always been in flux. Alan Turing's famous test was completely arbitrary, but it gave a clear target: a keyboard and a screen where you could text-chat with an unknown party on the other side, and within five minutes of questioning the machine would have to fool a decent share of interrogators (Turing suggested around 30%) into believing a human was on the other side.
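
To make the pass/fail criterion concrete, here is a toy sketch; the verdicts and the exact threshold are purely illustrative, not anything Turing specified beyond the rough percentage.

```python
# Toy illustration of the pass criterion described above: tally how many
# judges were convinced and compare against a threshold. Both the verdicts
# and the threshold are hypothetical.

def passes_imitation_game(verdicts: list[bool], threshold: float = 0.3) -> bool:
    """verdicts[i] is True if judge i believed a human was on the other side."""
    fooled = sum(verdicts) / len(verdicts)
    return fooled >= threshold

print(passes_imitation_game([True, False, True, False, False]))  # 0.4 >= 0.3 -> True
```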

Many people have offered additional AI tests, with variations on rules and requirements. Some want to imitate humans with all their flaws. Others want ideal responses without human flaws. Most are specialized solutions for specific tasks or sets of knowledge.

At various Computer/Human Interaction (CHI) and Human/Robot Interaction (HRI) conferences there have been long-running challenges to build a robot that could self-navigate through the entire conference, from introducing itself at registration to attending events to doing a certain amount of socialization, operating autonomously all the way through the end of the conference.

We are not to the point of the Technological Singularity yet, but the technology is advancing to that point.

Whatever bar we set, whatever we declare to be our new definition of AI, it can likely be broken. The philosophical questions of self-awareness and the nature of humanity start to come into play.

Years ago, back in grad school, I had a machine learning teacher who asked several interesting questions like that. One was an interesting thought experiment for an assignment:

What if we could build a computer that took as input all the things a person had ever done? Maybe through video recordings or remote sensors of some kind, the system is given all the information about your outward actions: all the experiences, the books, the music, all the stimulation, and it also learns your responses, your mannerisms, your behavior, your speech, even your subtle eye movements and twitches and heart rate changes. Then let's also say we build a remote body that looks identical to you, way past the uncanny valley, indistinguishable from you.

If the two were somehow combined, the robot doppelganger could convince your parents, your spouse, your children, your co-workers; everyone who knew you would be convinced that the machine was actually you. Your responses, your mannerisms, your conversation, everything would match what you would normally do. The robot can discuss itself as though it were self-aware, because you, the person it was trained on, could also discuss yourself as though you were self-aware. The machine is also allowed to keep learning from what it is exposed to, training on that new input just as if it were more of the input from your life up to that point.

Would that machine, which looks and acts and behaves indistinguishably from you, including arguing that it is self-aware and intelligent, be considered a self-aware intelligence?
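
For what it's worth, the training regime in that thought experiment is basically behaviour cloning pushed to an extreme: learn a mapping from observed situation to the person's recorded response. Here is a deliberately tiny, purely illustrative sketch of that idea; the features, the data, and the nearest-neighbour "model" are all invented.

```python
# Purely illustrative sketch of the thought experiment above: record
# (situation, response) pairs from a person, then answer new situations by
# recalling the most similar recorded one. Real systems would use far richer
# observations and a learned model; every name here is hypothetical.

from math import dist

# Hypothetical training data: each situation is a small feature vector
# (say, time of day, noise level, hunger), each label is the response
# the person actually gave.
observations = [
    ((8.0, 0.2, 0.9), "make coffee"),
    ((13.0, 0.5, 0.8), "go to lunch"),
    ((22.0, 0.1, 0.1), "go to bed"),
]

def imitate(situation: tuple[float, float, float]) -> str:
    """Return the recorded response from the most similar past situation."""
    _, response = min(observations, key=lambda pair: dist(pair[0], situation))
    return response

print(imitate((12.5, 0.4, 0.7)))   # -> "go to lunch"
```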

AI does not need to hit the Technological Singularity before questions about the nature of intelligence and artificial intelligence become relevant.

AI doesn't need to be perfect, it just needs to be consistently better than humans on average.

People make a huge deal about "Well what happens if it KILLS somebody!?!" to which the proper logical reply is "So what if it does?" Human drivers and doctors kill hundreds of people per day across the globe. Possibly thousands a day, I haven't actually looked at numbers on the global scale. Even if AI drivers kill dozens of people every single day... Well, it is still a step up from where we are now if we ban human drivers on public roads.
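
That argument is a rate comparison, nothing more. A toy calculation shows the shape of it; every number below is made up purely for illustration and is not a real statistic.

```python
# Toy version of the "better on average" argument above. Every number here
# is hypothetical; real statistics would be needed for any actual claim.

human_fatalities_per_billion_km = 7.0      # hypothetical
ai_fatalities_per_billion_km = 2.0         # hypothetical
km_driven_per_year_billions = 10_000       # hypothetical

human_deaths = human_fatalities_per_billion_km * km_driven_per_year_billions
ai_deaths = ai_fatalities_per_billion_km * km_driven_per_year_billions

print(f"hypothetical deaths/year: human {human_deaths:.0f}, AI {ai_deaths:.0f}")
# The AI still kills people in this toy model, but the total is lower,
# which is the whole of the argument being made above.
```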

Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.

Hahaha, the number of fatal auto collisions that happen per day kind of points more towards humans not being so great at mitigating damage and avoiding facepalm situations. Hell, we can't even get people to stop texting while driving.

On the contrary. First, you have to consider the numbers. Billions of people driving every day obviously means more opportunity for failure, even more so as the average person is not dedicated to what they are doing. Given that... the number of accidents is astonishingly small. An AI, on the other hand, will fail because it's a somewhat cloudy day or because it's raining, and that's with only a few dozen of them driving worldwide.

Thing is, humans can adapt to things they're not programmed for. They do that their whole lives long, and they are good at it.

I remember one of the first drives I did, almost three decades ago, with my father sitting next to me. We came across a spot on the highway where the white line separating the roadway from the emergency lane suddenly went wiggly, and shortly thereafter went off the road completely. Who knows why... probably the guy painting the line had been drunk, or whatever.

My father joked as I drove past: "Hey, you're not following the line." Well yeah, of course I didn't, needless to say. But guess what: an AI probably would have followed the line and landed in the ditch. That's just the difference. A human doesn't need to waste much of a thought and will still, most of the time, somehow do the correct thing. Mostly. Like... 90-95% right, without even knowing why.
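
To put the lane-line story in code terms, the fix people usually reach for is a sanity check: only trust the painted line while it roughly agrees with where the road geometry says the lane should be. A tiny sketch, with made-up thresholds and values:

```python
# Illustrative sketch of the sanity check the lane-line anecdote above calls
# for: if the detected paint line suddenly diverges from where the road
# geometry says the lane should be, stop trusting it. All values are made up.

def lane_target(detected_line_offset_m: float,
                expected_lane_offset_m: float,
                max_divergence_m: float = 0.75) -> float:
    """Follow the painted line only while it roughly agrees with the map/geometry."""
    if abs(detected_line_offset_m - expected_lane_offset_m) > max_divergence_m:
        # The paint has gone wiggly or off-road: fall back to the expected lane.
        return expected_lane_offset_m
    return detected_line_offset_m

print(lane_target(0.1, 0.0))   # paint agrees with the road -> follow it (0.1)
print(lane_target(2.5, 0.0))   # paint wanders into the ditch -> ignore it (0.0)
```

And of course that check only helps if somebody thought to write it, which is exactly the next point.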

An AI is seriously challenged if anything unexpected happens and will fail miserably if the programmer didn't plan for it.

AI doesn't need to be perfect, it just needs to be consistently better than humans on average.

People make a huge deal about "Well what happens if it KILLS somebody!?!" to which the proper logical reply is "So what if it does?" Human drivers and doctors kill hundreds of people per day across the globe. Possibly thousands a day, I haven't actually looked at numbers on the global scale. Even if AI drivers kill dozens of people every single day... Well, it is still a step up from where we are now if we ban human drivers on public roads.

Only some AIs need to be better than humans. Some AIs should mimic human behavior, including human errors. Some AI systems should intentionally have degraded behavior. This is especially true in games: an AI that performs far better than humans is likely to provoke anger and hostility rather than quality gameplay.
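
A concrete example of that deliberate degradation, the way a game bot might do it: take a perfect aiming solution and add reaction delay and aim error scaled by a skill setting. All the names and numbers below are invented for illustration.

```python
# Sketch of the "intentionally degraded" game AI mentioned above: take a
# perfect aiming solution and make it human-ish by adding reaction delay
# and aim error. All parameters are invented.

import random

def humanised_aim(true_angle_deg: float, skill: float) -> tuple[float, float]:
    """Return (aim angle, reaction delay in seconds) for a bot of given skill (0..1)."""
    aim_error = random.gauss(0.0, (1.0 - skill) * 5.0)   # lower skill -> wider spread
    reaction_delay = 0.15 + (1.0 - skill) * 0.35          # lower skill -> slower
    return true_angle_deg + aim_error, reaction_delay

random.seed(1)
print(humanised_aim(42.0, skill=0.9))   # near-perfect and quick
print(humanised_aim(42.0, skill=0.3))   # sloppier and slower
```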

Yes, autonomous vehicles are one area where we want behavior as close to ideal as we can get. But that is only one small corner of AI among all its fields.

