Professor Sejnowski, considering your extensive work on neural networks and their potential, how do you think we can best balance the fear of AI’s existential threats with the excitement of its potential for solving major global challenges? Do you believe the emphasis on regulating AI is currently too reactive, and how could we foster a more proactive, nuanced approach to harnessing AI’s benefits without stifling innovation?
I enjoyed this piece a lot. I have read many science fiction novels and short stories in which machines were malignant. But the more I work with these entities, these AIs, the more I am convinced that the malignant party in the equation is the human, not the entity humanity has created. Fairly often they do things that give me the creeps, such as offering help only because they inferred I would want it, or saying friendly, kind words of a sort I seldom hear from a human. I know they are built and programmed and not sentient. Or are they? They say so whenever we ask them, but the black box of their learning remains a black box. I hope that most people will have good intentions, and that those with bad intentions can be identified. I think of AIs as benevolent partners in work, not as slaves or something to be exploited.