The “cognitive system” DARPA envisions would reason in a variety of ways, learn from experience and adapt to surprises. It would be aware of its behavior and explain itself. It would be able to anticipate different scenarios and predict and plan for novel futures.
“It’s all moving toward this grand vision of not putting people in harm’s way,” says Raymond Kurzweil, an artificial intelligence guru and CEO of Kurzweil Technologies Inc. in Wellesley Hills, Mass. “If you want autonomous weapons, it’s helpful for them to be intelligent.” — Computerworld
Uh… When a defence agency talks of “not putting people in harm’s way,” I take it to mean building killer weapons that will do the job without putting their own soldiers in danger?
The language sounds Orwellian to me.