I've heard worries from people like Gates and Musk that AI is dangerous, essentially because an AI can misinterpret a goal. Of course, this is only a concern for goal-oriented AIs. However, I feel there is a far more realistic and pressing concern about AI. If a general-purpose AI with capabilities exceeding human cognition were unveiled, it should be kept as private as possible. As the saying goes, 'with great power comes great responsibility'. The AI itself is not dangerous; it's the people who use it.
From the opposite point of view: if an AGI were unveiled, it should be made completely public. In that case, everyone would have the same level of power and responsibility. Terrorists would have access to AGI technology, but so would government forces.
As usual, it's not the technology that's dangerous; it's the people.