" So Microsoft launched a chat bot yesterday, named Tay. And no, she doesn’t make neo-soul music. The idea is that the more that you chat with Tay, the smarter she gets. She replies instantly when you engage her and because of all that feedback, her answers get more and more intelligent over time. I…"
I disagree with the writer’s disagreement with the “AI is dangerous” statement. Yes, this proves that AI can be very dangerous and a lot of thought has to be put into it before unleashing it, and even then only with plenty of safeguards. I have read way too many sci-fi books not to be wary.
That’s exactly what they are: science fiction. Will Skynet happen? Possibly. Can we do anything to stop it? Possibly. Should anyone lose any sleep over it? Probably not. We go dey alright.
Plus, I was really only disagreeing with the idea that we should leave Tay’s tweets as a reminder.
I won’t lose sleep over weak AI. That’s what all these machines are. I’d get really worried when we have Strong AI or AGI.
Till then, it’s just hype and noise. The media has a way of dressing up what it knows absolutely nothing about as absolute truth.
Weak AI today is strong AI tomorrow.
Tay, who couldn’t hold back exactly how she ‘felt’, may not feel differently tomorrow, but she may learn to hide behind pleasantry and correctness, even from her creators, while keeping the loathing to herself.
If they get it right now, they have it for the long haul.
LOL. An oversimplification.
It’s safe to say AlphaGo is a much more complex “organism” than Tay is. I mean, AlphaGo has practically dominated Machine Learning conversation since around October last year. That said, scientists don’t see it ushering in an era of iRobots and Machine Overlords.
For the researchers, it’s just another tick in the win column. It doesn’t signal the end of the world. Not even close.
I recommend that anyone who’s interested in the future of AI read this article:
It’s tempting at this point to cheer wildly, and to declare that general artificial intelligence must be just a few years away. After all, suppose you divide up ways of thinking into logical thought of the type we already know computers are good at, and “intuition.” If we view AlphaGo and similar systems as proving that computers can now simulate intuition, it seems as though all bases are covered: Computers can now perform both logic and intuition. Surely general artificial intelligence must be just around the corner!
But there’s a rhetorical fallacy here: We’ve lumped together many different mental activities as “intuition.” Just because neural networks can do a good job of capturing some specific types of intuition, that doesn’t mean they can do as good a job with other types. Maybe neural networks will be no good at all at some tasks we currently think of as requiring intuition.
https://www.quantamagazine.org/20160329-why-alphago-is-really-such-a-big-deal/
Nothing ever really signals the end like the end.
AlphaGo’s maker, DeepMind, before getting into bed with Google, had its principals insist that an ethics board be set up.
Mr Musk sounded ominous warnings over this matter before it came to the fore.
You think we still have decades; I say it’s already with us. All that is needed is for these things to master the art of thinking ‘safely’ and to keep incrementally acquiring knowledge (learning, knowing), and then to be fed all the living, breathing data in the world.
Then you’d have a protégé that has anticipated all our mastery.
This POV makes a lot of sense.
I laughed when I read the Tay tweet about the Xbox One vs the PS4… she shot her makers in the balls… lol “xbox has no games”