
If general AI goes public


This topic contains 2 replies, has 3 voices, and was last updated by Matthew Chapdelaine 6 years, 8 months ago.

  • #350

    sebjwallace
    Participant

    I have heard worries from people like Gates and Musk that AI is dangerous because, essentially, an AI can misinterpret a goal. Of course, this is only a concern for goal-oriented AIs. However, I feel there is a far more realistic and pressing concern about AI. If a general-purpose AI with capabilities exceeding human cognition were unveiled, it should be kept as private as possible. As the saying goes, ‘with great power comes great responsibility’. The AI itself is not dangerous; it's the people who use it.

    From the opposite point of view, if an AGI were unveiled it should be made completely public. In that case everyone would have the same level of power and responsibility. Terrorists would have access to AGI technology, but so would government forces.

    As usual, it's not the technology that's dangerous, it's the people.

  • #352

    Prof Andy Pardoe
    Keymaster

    I agree with you, but what do you think we should do as an industry to help prevent the problem you describe?

  • #24863

    Matthew Chapdelaine
    Participant

    Exceeding human cognition would make the AI well beyond sentient. I don't think there is even a name for such a concept; "extrasentient" would be my guess.

    Applied general-purpose AI in the fabrication and training of a 21st-century homunculus: the digital human, naturally cybernetic.

    Are you saying that people, perhaps not the sort we want to give dangerous weapons to, could use AGP-AI to conquer the world through weaponized modular robotics?
