Most importantly, we rarely take heed of the lessons of science fiction. The military routinely carries out research into systems against which writers and filmmakers have long warned.
Indeed, the scientists are often directly inspired by those cautionary tales. For instance, H. The design would be quite useful for the sort of fights we face now in Iraq and Afghanistan. In my final judgment, however, The Terminator may not be the best guide for how a machine takeover might take place in the real world.
Instead, another science fiction series, The Matrix, may be more useful. Rather, the films give us a valuable metaphor for the technological matrix in which we are increasingly enmeshed but barely notice. For all our pop-culture-stoked fears of living in a world where robots rule with an iron or digital fist, we already live in a world of technology that few of us even understand.
It increasingly dominates how we live, work, communicate, and now even fight.
I believe we can do this: we can put the right checks and balances in place to be safer. I think the most dangerous thing with AI is its pace of development: whether we can adapt to it as quickly as it develops. If we lose that balance, we might get in trouble.
I think the most dangerous applications of AI, from my point of view, would be criminals or large terrorist organizations using it to disrupt critical processes or simply to do harm.
And, of course, other risks come from things like job losses. But this is the duality of the technology. Certainly, my conviction is that AI is not a weapon; AI is a tool. It is a powerful tool, and this powerful tool can be used for good or for bad. Our mission is to make sure it is used for good, that the greatest benefits are extracted from it, and that the risks are understood and mitigated.
I think we should watch out for drones. I think automated drones are potentially dangerous in a lot of ways. But in five or ten years, I can imagine a drone having sufficient onboard computation to actually be useful. Every technology can be used for bad. It comes down to who has access to the technology and how we use it.