
Feet on the Ground vs. Head in the Clouds


Recommended Posts

I'm a feet on the ground kind of guy myself - one who thinks that "auto-pilot" should be reserved for autonomous, driverless vehicles which are limited in speed to 40 mph and can drive on dedicated routes only. The point of using it for driverless vehicles only would be to allow it to make decisions based on what is detected, while not needing to conform to a human driver's expectations. For example, simply STOPPING when it comes across an unrecognized or potentially dangerous situation.

An ongoing topic - click here for Most Recent Post

from Abacus

 

Jack Ma and Elon Musk give conflicting visions of AI's future

The Alibaba founder has an optimistic view of AI that involves a 12-hour workweek while the SpaceX founder warned humans may be too slow for AI

Link to comment

In the case of AI, I remain a feet-on-the-ground kind of guy too, although on other subjects, I don't mind dreaming. And AI right now is not even a dream, but a nightmare.

 

I studied it in college and it went nowhere then. Since that time there have been technical improvements in technique and language, but work remains. We tried to operationalize an "AI" product and it fell flat on its face. It could not retrieve from the screen when different platforms (computers, boxes, what have you) were being "read." It made mistakes whenever it went from, say, an upload to a Unix box to a read from a different machine. The rate of reading was different and the pixel assignments were not the same. Result: read failure. When that happened, every virtual user after the failure was held up, and then performance degraded beyond measure. Every trick available was used to get it in sync, but it just could not be done.

 

Whether the tech can be used for driving? Probably, but as you say, Randy, only under controlled conditions. I am a skeptic.

Link to comment


 

 

There is virtually an infinite set of circumstances that a driver needs to be able to consider and evaluate. I don't think that will work for an "auto-pilot" computerized function, except under carefully controlled and KNOWN circumstances. Taking the driver out of the picture would allow an autonomous vehicle to behave SAFELY, without having to live up to a driver's expectations.

 

My key point is that "artificial intelligence" is NOT intelligence at all, but pre-programmed logic. "Learning" simply involves storing "memories" into a database.
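
To make that framing concrete, here is a toy sketch (purely illustrative, not how any real system is built) of "learning" as nothing more than storing memories in a database and replaying them:

```python
# A toy sketch of the "learning = storing memories" view - not any real system.
class LookupLearner:
    def __init__(self):
        self.memories = {}                    # situation -> response "database"

    def learn(self, situation, response):
        self.memories[situation] = response   # "learning" is just storage

    def decide(self, situation):
        # No reasoning: anything outside the stored memories gets a canned default.
        return self.memories.get(situation, "STOP - situation not recognized")

bot = LookupLearner()
bot.learn("green light", "go")
bot.learn("red light", "stop")
print(bot.decide("green light"))           # go
print(bot.decide("ball rolls into road"))  # STOP - situation not recognized
```

Anything it has never been shown falls back to a canned response - which, for driving, is exactly the STOP behavior argued for above.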

Link to comment

Randy's point.

 

AI will always have the artificial look, feel and taste because it's "retrieving" expressions and decisions from past episodes, rather than creating new ones. Will a system be able to process a novel experience correctly? Will the lawyers and insurance companies allow it to? Will Amazon ever stop sending me ads for TVs after I bought one? NO!

 

As with most computer-assisted systems, the developers will pat themselves on the back for precision without concern for accuracy.

Link to comment

OTOH, computers do "learn" in the sense of coming up with something "new" from "old data." CS'ers have come up with neural networks that can generalize about events, even conversations. Can they come up with more than one event for a given circumstance? No -- not yet.

 

I have seen demonstrations where a computer was asked questions about patterns of dots (much like the one discussed in the Scientific American article linked below) and it came up with different patterns of dots according to the instructions given to it. It even came up with improvements to the patterns in successive runs, having deciphered on its own what was required, without instruction from humans.

 

I know we are all trained in GIGO, Garbage In, Garbage Out. But with neural networks, that is not necessarily so. In fact, in terms of garbage in, neural networks can screen data better than humans. (We are working on exactly that problem now with container tags and Bills of Lading.) And computers have come up with "new" answers as a result, which researchers plug back into the network as a further learning exercise.
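
As a rough idea of what automated "garbage in" screening can look like, here is a minimal sketch with made-up container numbers and weights. A simple median-based screen stands in for the neural network; this is not the actual system mentioned above:

```python
# Minimal sketch of automated "garbage in" screening on hypothetical
# Bill of Lading records (container IDs and weights are invented).
import statistics

records = [
    {"container": "XXXU1234567", "weight_kg": 21800},
    {"container": "XXXU7654321", "weight_kg": 22350},
    {"container": "XXXU1111111", "weight_kg": 218000},  # likely a keying error (extra zero)
    {"container": "XXXU3054383", "weight_kg": 20950},
]

typical = statistics.median(r["weight_kg"] for r in records)

for r in records:
    ratio = r["weight_kg"] / typical
    flag = "REVIEW" if ratio > 3 or ratio < 1 / 3 else "ok"
    print(f"{r['container']}: {r['weight_kg']:>7} kg  {flag}")
```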

 

To me, it is all quite primitive by human standards, but when we see computers reducing an algorithm's workload compared to humans, and also coming up with recommendations on further reductions without further instruction from a human, you have to think of the possibilities. But as the question was originally asked, is it pie in the sky, or down to earth? I am afraid pie in the sky fits better, at least right now, unless someone starts plugging in some kind of new protoplasm.

 

And then we get into questions of what is life? Oh my. Now there's an easy answer..... In reading for this post, I ran across an article (the last one linked below) that says neural networks are good at reaching new conclusions but not so good at what the average calculator can do: simple math. So they are working on improving both kinds of systems. It is why I say we are at a primitive stage in AI.
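
The calculator point is easy to demonstrate. A tiny "network" of two weights, trained by gradient descent to add numbers, only ever approximates the answer, while ordinary arithmetic is exact. This is just a toy illustration, not taken from the linked articles:

```python
# Toy illustration: a two-weight "network" learns addition approximately.
import random

random.seed(0)
w1, w2 = random.random(), random.random()   # the "network": two learned weights
lr = 0.001

# Train on random addition problems; the exact answer would need w1 = w2 = 1.
for _ in range(200):
    a, b = random.uniform(0, 10), random.uniform(0, 10)
    err = (w1 * a + w2 * b) - (a + b)
    w1 -= lr * err * a                      # gradient step on squared error
    w2 -= lr * err * b

print(f"learned weights: w1={w1:.6f}, w2={w2:.6f}")     # near 1, but not exactly 1
print("the 'network' says 123456 + 654321 =", round(w1 * 123456 + w2 * 654321, 2))
print("a calculator says                   =", 123456 + 654321)
```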

 

I do understand why that contradiction exists: the neural networks are based on rules that are more logic than math. (And there is a difference.) I would love to work on such a project, since I have a great passion for logic and wonder how these sets of rules can be operationalized.

 

https://www.scientificamerican.com/article/can-neural-network-comput/

 

https://www.kqed.org/futureofyou/440231/can-computers-learn-like-humans

 

https://www.newscientist.com/article/mg22429932-200-computer-with-human-like-learning-will-program-itself/

Link to comment

Yes, computers are great at scanning databases and gleaning information based on pre-defined rules. Google's Go-bot proved that by building a gigantic database of situations simply by playing itself. See (CFL topic Google enters China - and Wins!)
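
A toy version of "building a database of situations by playing itself" might look like the sketch below - random self-play tic-tac-toe that files away every position it visits. It is nothing like the real Go program's method, but it shows the idea:

```python
# Toy sketch: random self-play that builds a database of positions and outcomes.
import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

position_stats = defaultdict(lambda: {"X": 0, "O": 0, "draw": 0})

random.seed(1)
for _ in range(5000):                        # 5,000 self-play games
    board, player, seen = ["."] * 9, "X", []
    while True:
        seen.append("".join(board))
        w = winner(board)
        if w or "." not in board:
            result = w if w else "draw"
            break
        move = random.choice([i for i, c in enumerate(board) if c == "."])
        board[move] = player
        player = "O" if player == "X" else "X"
    for pos in seen:                         # record the outcome for every position visited
        position_stats[pos][result] += 1

print("distinct positions stored:", len(position_stats))
print("opening position outcomes:", dict(position_stats["." * 9]))
```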

 

But any real 'learning' can get out of hand pretty quickly - an absolute no-no for any driving functions, for example.

 

One absolutely insane discussion I saw was about the auto-drive function: what to do when presented with a situation where an accident is unavoidable. I'm making up some details here since I don't remember exactly - suppose a lady with a baby carriage steps in front of the car, with a man on the sidewalk to the right and a cliff to the left. Which does the auto-pilot decide to sacrifice - the lady and her baby, the man on the sidewalk, or the passenger in the car?

 

Maybe you could send facial images to a national database for a facial ID to get their Social Credit score?

 

Really ?? Writing code to decide which human to kill? What could go wrong with that?

 

Keep your feet on the ground - DON'T drive too fast for the conditions, and STOP when needed.

 

One of the key things I've learned about driving, whether in China or the US is that anytime you have to unexpectedly use your brakes, it may just mean that you're driving too fast.

Link to comment

Well, here in Scottsdale we had a fatality involving Uber that shut down all testing of these autonomous automobiles - the first pedestrian fatality involving an autonomous vehicle. I saw the dash-cam video of the accident. The driver was distracted while the car was driving itself, at night, on a road frequented by pedestrians who jaywalk. It's a 3-lane highway. A woman crossed the street with her bike at 10:00 PM and the car (a Volvo SUV) did not detect her. (I would wager an average human being couldn't have, either.)

 

I don't think I will ever agree that a car should be autonomous unless it is running under severely controlled conditions - not in places where stupid pedestrians jaywalk.

Link to comment


 

 

The car DID see the pedestrian, but was programmed to NOT brake.

 

http://candleforlove.com/forums/topic/49535-driverless-travel/?p=638765

 

“According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” investigators said.

 

The responsibility for hitting the brakes was left with a safety driver, who sits behind the wheel monitoring the autonomous test vehicle. Even though Uber’s computers concluded the car would hit the pedestrian 1.3 seconds before impact, or about 82 feet (25 meters) away, it also didn’t alert the driver and she was looking away from the road at the time, according to NTSB.
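
A back-of-envelope check using only the figures quoted above (1.3 seconds, about 25 meters), plus an assumed hard-braking deceleration of 7 m/s² that is NOT a figure from the NTSB report, shows how marginal the situation already was even with instant braking:

```python
# Back-of-envelope check from the quoted 1.3 s / ~25 m figures.
# The 7 m/s^2 deceleration is an assumed dry-pavement hard-braking value.
import math

warning_time = 1.3          # seconds before impact (quoted)
distance = 25.0             # meters to the pedestrian (quoted)
decel = 7.0                 # m/s^2, assumed emergency braking

speed = distance / warning_time                 # ~19.2 m/s, about 43 mph
print(f"implied speed: {speed:.1f} m/s ({speed * 2.237:.0f} mph)")

braking_distance = speed ** 2 / (2 * decel)     # distance needed to stop if braking starts instantly
print(f"braking distance with immediate braking: {braking_distance:.1f} m")

# Impact speed if hard braking uses the full 25 m available:
impact = math.sqrt(max(speed ** 2 - 2 * decel * distance, 0.0))
print(f"impact speed after 25 m of hard braking: {impact:.1f} m/s ({impact * 2.237:.0f} mph)")
```

Under those assumptions, even instant automatic braking would barely fail to stop in the available 25 meters, though it would cut the impact speed to roughly 10 mph; waiting on a distracted human makes it far worse.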

 

 

Which is exactly why I suggest TAKING THE DRIVER OUT OF THE PICTURE. No Roomba has ever caused a fatality or even any damage (leastwise, I don't think so) - so, yes, it is possible.

 

Erratic behavior, especially STOPPING, should be required.

 

But what are the limits of safety? We'll never know unless they experiment under controlled conditions. If you are right - so be it. We should not be allowing computers to do damage.

 

I expect this drone would be reasonably safe - http://candleforlove.com/forums/topic/49535-driverless-travel/?p=638334

Link to comment

Not to belabor the point, but I live in Scottsdale, know the University area where this happened very well, and followed the accident afterwards. It's right along Mill Avenue, where there is an art festival every year. I attended art school at ASU just down the street. It took some time before the final analysis was made and the Tempe Police released the video from the car.

 

All of the analysis points to the fact that the car did not detect the woman crossing the road, and the driver was not prepared to take over in the event of a failure. If you check out the film from the car itself, there were only a few seconds to react. That crazy pedestrian was almost running across the road. The diagrams show the position of the lidar detectors. There is some speculation as to why the woman was not detected. She was in the sweep area of the cameras, but she was not walking slowly. Some have speculated that the bicycle she was walking, and the fact that she emerged from a stand of trees, may have caused the failure to recognize her.

 

 

The video shows that the safety driver, identified by police as Rafael Vasquez, was clearly distracted and looking down from the road.
It also appears that both of the safety driver's hands were not hovering above the steering wheel, which is what most backup drivers are instructed to do because it allows them to take control of the car quickly in the case of an emergency.
Earlier in the week, police officials said the driver was not impaired and had cooperated with authorities. The self-driving car, however, should have detected the woman crossing the road.

 

Like many self-driving cars, Uber equips its vehicles with lidar sensors -- an acronym for light detection and ranging systems -- to help the car detect the world around it. One of the positive attributes of lidar is that it is supposed to work well at night when it is dark, detecting objects from hundreds of feet away.

 

There are a few different types of technologies that are used in autonomous driving systems. Uber and Waymo, which was spun off from Google, use lidar and radar technology, along with computer vision, to help guide the vehicle.

 

A self-driving car’s sensors gather data on nearby objects, like their size and rate of speed. It categorizes the objects — as cyclists, pedestrians or other cars and objects — based on how they are likely to behave.
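
Purely as an illustration of that categorize-by-size-and-speed step (not Uber's or Waymo's actual pipeline, and with made-up thresholds), the logic might be sketched like this:

```python
# Illustrative only: naive categorization of tracked objects by size and speed.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    width_m: float      # estimated size from the sensor returns
    speed_mps: float    # estimated rate of speed

def categorize(obj: TrackedObject) -> str:
    if obj.width_m > 1.5:
        return "vehicle"          # wide objects
    if obj.speed_mps > 3.0:
        return "cyclist"          # narrow but moving quickly
    return "pedestrian"           # narrow and slow

for obj in [TrackedObject(1.8, 15.0), TrackedObject(0.6, 5.5), TrackedObject(0.5, 1.3)]:
    print(obj, "->", categorize(obj))
```

The category then feeds the behavior prediction: a "pedestrian" is expected to move slowly and unpredictably, a "vehicle" to stay in its lane, and so on.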

 

 

 

 

When the Tempe Police released this capture, the family got a lawyer and filed suit.

 

 

Link to comment

Too often, you hear the word "they" in referring to self-driving cars, as if "they" all behaved the same way, showing the same characteristics.

 

The truth is that "they" have different software, different hardware which must all be functional, and even different versions of the same software.

 

From the NY Times article -

The night resolution of the dashcam on the Uber car is very poor.

 

The "what a car sees" picture: a human driver with vision this poor would NOT be allowed to drive a car.

 

Link to comment

Speaking of programming cars to choose which people to kill . . .

 

This seems more tongue-in-cheek - more of a cultural difference piece. From Abacus

 

Who should a driverless car kill?

 

 

To find out what people around the world think self-driving cars should do, a group of MIT researchers designed an online survey called Moral Machine. More than 2 million people from over 200 countries responded, and the analysis was published this week in Nature.

 

It turns out the choices people make depend largely on where they’re from. As the MIT Technology Review notes, those in mainland China, Taiwan and other Confucianist societies are less likely to spare the young over the old.

 

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision.”
THE MORAL MACHINE EXPERIMENT

 

Link to comment

I'm not sure the headlines capture the difference between autonomous cars and human-driven cars. As the articles already posted point out, the record of human-driven cars killing people or even causing accidents is far worse than that of autonomous cars, even accounting for miles driven. The diagrams in the Times article use boxes to represent the places the lidar has to be watchful of; they are much larger than its normal size would be, so the area of coverage is greater.

 

Reaction times have also been proven to be faster than humans'. My concern with autonomous crashes would be exactly that: if the humans in the autonomous cars were not wearing a seat belt, or if the belt was defective despite a warning to fasten it, we would see a lot of humans going through front windshields in a crash. Of course, that number would be far greater if we count accidents where seat belts were not worn at all, or even where they were worn but the human did the stopping - the crash would have a higher impact, since the human did not stop as quickly as the machine would have.

 

So there is a lot to be said about autonomous machines with regard to accidents.

 

Assuming Uber works on its lidar detection system so it can see better in the dark, they might have a product for controlled usage.

Link to comment

But as the investigation found (Google "Tempe Arizona Uber Emergency Brake" for many other articles), Uber had deactivated the emergency braking system - the brakes were not applied. They relied instead on an ALERT human driver. The accident WAS avoidable, since the pedestrian WAS detected 6 seconds in advance.

 

Unexpected braking by an overly cautious auto-pilot is disconcerting to human occupants.

 

My thinking is that, in a situation where human intervention is needed, an auto-pilot system will simply add about five seconds to a human reaction time that NEEDS to be well under ONE second (First, "What's that up there in the road ahead?", then "Is it something the auto-pilot can handle?", then "What do I need to do?"). So, yes, controlled conditions are necessary - especially the speed. DON'T out-drive your brakes.
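
To put rough numbers on that, here is how far a car travels before stopping at a few speeds, comparing a roughly one-second human reaction with the five-second hand-over guessed at above (the 7 m/s² braking rate is an assumption):

```python
# Rough stopping-distance comparison: 1 s human reaction vs. a 5 s hand-over
# from the auto-pilot. Deceleration of 7 m/s^2 is an assumed value.
MPH_TO_MPS = 0.447

for mph in (25, 40, 65):
    v = mph * MPH_TO_MPS
    for reaction in (1.0, 5.0):
        total = v * reaction + v ** 2 / (2 * 7.0)   # reaction distance + braking distance
        print(f"{mph} mph, {reaction:.0f} s reaction: {total:5.1f} m to stop")
```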

 

There are an unlimited number of circumstances that might be encountered. Someone sitting at their desk can't anticipate them all. I've heard Cadillac has a system that will only activate on pre-determined roads. Smart.

Link to comment

This interactive feature just popped up in the Washington Post

 

How does an autonomous car work? Not so great.

 

 

 

"Our reporting found several instances in which a car didn’t recognize the color of a traffic light."

 

"There have been multiple reports of autonomous vehicles not detecting other objects in real time. "

 

"The car came to a screeching halt. Despite how much engineers train their self-driving cars, there’s always the possibility they’ll encounter something unexpected. For example, Volvo tested its vehicles’ Large Animal Detection System in areas with mooses, but, during a 2017 test in Australia, a car detected a kangaroo and was confounded by its unusual hopping habits."

 

" This car is equipped to hear sirens from the firetruck. Not all companies have audio detection systems on board. And the ones that do must adapt those systems to a variety of sirens that first responders use in cities nationwide."

 

And on and on.

 

Like I said, a virtually infinite number of circumstances.

 

Gill Pratt, the head of the Toyota Research Institute, said in a speech earlier this year that it’s time to focus on explaining how hard it is to make a self-driving car work.
“How do we train a machine,” he asked, “about the social ballet required to navigate through an ever-changing environment as well as, or better than, a human driver?”

 

 

Link to comment
  • 1 month later...

A feet on the ground usage

 


 

Self-driving vending machines are launching in China. These auto vending machines can hold 2,400 liters' worth of products that can be purchased with a smartphone, and they travel at up to 50 km/h. Autonomous vehicle maker Neolix expects to ship 1,000 self-driving vehicles in 2019.

 

 

 

 

 

Link to comment
