The AI "brain" of a self-driving car is notoriously bad at handling situations it has never seen before. The AI must be "trained" via simulations and real road miles, and its pattern recognition is still highly questionable. One of the recent headlines I posted about earlier was that the AI failed to recognize a woman pushing a bicycle across the street, and decided to mow the "object" down rather than slow/stop/swerve. The AI can recognize a bicycle, and a woman, but apparently not a woman pushing a bicycle. This kind of thing is easy for humans. Obviously, not for AI.

Luis_Br wrote: ↑Thu May 09, 2019 6:05 pm
In the air there are fewer events, but one event is much more dangerous; planes still go down in severe turbulence. Landing is even more complicated. Maybe that is why the pilot is still there.
I think the unpredictable events on the road are unpredictable to a human driver too, but the solution is easier for a car: generally, a quick brake from everybody solves the problem. An airplane cannot brake and stand still.
Maybe I am missing something, cars are not my speciality, but I don't see anything a human driver can see that good cameras and sensors all around cannot see first. Technology can even see in the dark. The problem is more the cost of implementing all the necessary equipment.
On predictability, the only problem I see is predicting what other humans will do, by very far the major cause of accidents. I think that if all cars were autonomous, it would be easy, and accidents would drop to very few. The problem is that if someone suddenly throws himself in front of a car, the autonomous car will always be blamed for not braking fast enough, while a human driver can make a human mistake and kill someone.
A quick search of the internet brings up a 2008 NHTSA survey in which 93% of accidents were caused by human error. An older US study from 2001 said 99% of the studied accidents were caused by "human behavior." Human drivers can drive faster than the road allows, run red lights, text on their cellphones, and drive sleepy; they only need to predict whether the cops are nearby.
That's because it is not really "artificial intelligence," despite the cool name; it is just pattern matching, regressions, and the like.
Enthusiasts tend to forget this. Any AI program could be recreated on a sufficiently large abacus. Intelligence is not a sensible word for mindless computation.
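To make the "just pattern matching" point concrete, here is a toy sketch: a nearest-centroid classifier in pure Python. All the labels and feature vectors are invented for illustration (nothing here comes from any real self-driving stack), but it shows how such a system can match each pattern it was trained on yet have no sensible answer for a combination of patterns, like a woman pushing a bicycle.

```python
# Toy nearest-centroid classifier: pure arithmetic, no "understanding".
# Feature vectors are made up for illustration: (wheels, legs, height_m).
centroids = {
    "bicycle":    (2.0, 0.0, 1.0),
    "pedestrian": (0.0, 2.0, 1.7),
}

def classify(features):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

# A familiar input matches a learned pattern well...
print(classify((2.0, 0.1, 1.0)))   # -> bicycle

# ...but a person pushing a bicycle (wheels AND legs) fits neither
# pattern well; the classifier still confidently picks the nearer one.
print(classify((2.0, 2.0, 1.7)))   # -> pedestrian
```

The failure mode is not a bug in the arithmetic; the arithmetic is all there is. Nothing in the system represents the idea "a person is attached to that bicycle."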
HAL, open the pod bay doors...

eno wrote: ↑Fri May 10, 2019 1:53 pm
Self-consciousness is not the same as intelligence, and the "intelligence" required here is not the same as that of a human. Birds are intelligent enough to distinguish people from inanimate objects; indeed, the lowly city pigeon can even remember and recognize individual faces. You are not afraid of birds, are you (Yeah, yeah, great movie, we all know about it, shut up, Alfred)? The "AI" of self-driving cars is not even at the level of a dumb bird, and it may never get there.
Isaac Asimov's "Three Laws of Robotics"

guitarrista wrote: ↑Tue May 14, 2019 7:55 pm
Self-consciousness is not the same as intelligence, and the "intelligence" required here is not the same as that of a human. [...]
Hey, not so easy. Thousands of hobbyists are now building robots at home. Once AI technology becomes broadly available, anyone in his basement can build it and alter the code any way he wants, including hackers, terrorists, and mentally unstable people. Make AI/robots able to reproduce and to alter their own code to allow malicious behavior, and we are done here on Earth.

Andrew Pohlman wrote: ↑Tue May 14, 2019 9:47 pm
Isaac Asimov's "Three Laws of Robotics"
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
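The three laws amount to a strict priority ordering: the First Law dominates the Second, which dominates the Third. As a toy sketch (every name and flag below is invented, and reducing the laws to booleans is of course a wild simplification), the precedence might look like:

```python
def choose(actions):
    """Pick an action by Asimov's priority: First Law > Second > Third.

    Each action is a dict of boolean flags. This is a deliberately naive
    model: in reality no robot can be handed such flags ready-made.
    """
    # First Law, action clause: never pick an action that harms a human.
    safe = [a for a in actions if not a["harms_human"]]
    if not safe:
        return None  # no lawful action exists
    # Narrow lexicographically: First Law's inaction clause, then
    # obedience (Second Law), then self-preservation (Third Law).
    for key in ("prevents_human_harm", "obeys_order", "protects_self"):
        preferred = [a for a in safe if a[key]]
        if preferred:
            safe = preferred
    return safe[0]

options = [
    {"name": "swerve", "harms_human": False, "prevents_human_harm": True,
     "obeys_order": False, "protects_self": False},
    {"name": "keep_course", "harms_human": True, "prevents_human_harm": False,
     "obeys_order": True, "protects_self": True},
]
print(choose(options)["name"])  # -> swerve
```

The hard part is not the precedence logic, which is trivial, but producing the flags: deciding whether an action "harms a human" is exactly the perception problem discussed above.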
Our politicians and lobbyists will make sure the above NEVER happens. It's not the AI we need to worry about, it's politicians, lobbyists, and their lawyers.
I have never looked at it very seriously, but I did hear somewhere that data centres already account for 10% of world electricity usage. If you had asked here five years ago who had networked "white goods," the answer would have been zero. Today there are certainly some people whose fridges, washing machines, etc. are networked and in some way or another part of a "cloud". In a recent group of 9 participants (plus me), 2 admitted to having networked washing machines. Such an internet of things will drive energy demand in data centres over the next few years.
Is there any compelling reason why it would not be self-serving?
A truly highly developed, intelligent, and conscious AI should not be malevolent. But there is a long way for AI to get to that point, through intermediate development stages where AI will not yet fully understand what it is doing and could be programmed by malevolent humans to do harmful things. For example, a political dictator, in an attempt to gain full control of the world, might create an army of military AI soldier robots able to endlessly reproduce, learn, and adapt, but firmly programmed to destroy as a top-priority goal. So as long as malevolent humans exist on earth, there will always be possibilities of abusing the technology in malevolent ways. As the technology becomes more and more powerful, the dangers of abusing it also become greater.

BugDog wrote: ↑Sun May 19, 2019 1:48 pm
I know we're in the realm of science fiction, but why is the assumption that a truly conscious AI would be malevolent?
We humans are pretty bad in our relationships with other, lesser creatures, but generally those relationships revolve around competition for resources (or the creatures are themselves resources). For the most part, if they don't bother us, we don't bother them. The only exception I can think of is trophy hunting, or things along that line.
Why would AI be any different? What resources would we be competing for? Electrical power and some chemicals specific to silicon life forms? I don't see a lot of grief there. They might "cull the herd," but maintaining a sustainable human population is something our "natural" human intelligence hasn't seemed able to manage.