Contention often arises from personality conflict. Sometimes such conflict is framed as a clash between different values. Yet what strikes me more and more is that the problem is not unlike values, but values applied only to one’s own tribe. As noted in the article below, these are “small tent” value systems – people loyal to their tribe “and very unloyal to other tribes.”
In his latest Plaintext newsletter, Steven Levy recounts his conversation earlier this summer with legendary artificial intelligence researcher Geoffrey Hinton, “after he [Hinton] had some time to reflect on his post-Google life and mission” – in his “new career as a philosopher.”
The fears Hinton is now expressing are quite a shift from the previous time we spoke, in 2014. Back then, he was talking about how deep learning …
• Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “The godfather of AI has a plan to keep AI on team human” (August 11, 2023) – Will machines truly understand the world, and learn deceit and other bad habits from humans? Can building analog computers instead of digital ones keep the technology more loyal?
Hinton says his mind changed when he realized three things:
• Chatbots did seem to understand language very well.
• Since everything a model newly learns can be duplicated and transferred to other copies of it, such models can share knowledge with each other far more easily than brains, which can’t be directly interconnected.
• And machines now had better learning algorithms than humans.
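The second point – that digital models can share knowledge by exact duplication – can be sketched with a toy example. The `ToyModel` class and its weights below are hypothetical, purely for illustration; real networks transfer learned parameters the same way in principle, by copying them bit-for-bit between identical architectures:

```python
# A minimal sketch (hypothetical toy model, not Hinton's actual setup) of why
# digital models share knowledge so easily: identical architectures can
# exchange what they have learned by directly copying parameters.

class ToyModel:
    """A stand-in for a neural network: its 'knowledge' is just weights."""
    def __init__(self):
        self.weights = {"w1": 0.0, "w2": 0.0}

    def learn(self, updates):
        # Simulate training: nudge weights by some learned deltas.
        for name, delta in updates.items():
            self.weights[name] += delta

    def copy_knowledge_from(self, other):
        # Digital transfer: an exact, lossless copy of what 'other' learned.
        self.weights = dict(other.weights)

model_a = ToyModel()
model_b = ToyModel()

model_a.learn({"w1": 0.5, "w2": -0.2})   # model A gains new "knowledge"
model_b.copy_knowledge_from(model_a)     # model B acquires it instantly

print(model_b.weights == model_a.weights)  # → True
```

Brains, by contrast, have no such operation: each one’s “weights” are bound to its unique wetware, which is also the intuition behind Hinton’s analog-hardware argument below.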
Hinton believes that between five and 20 years from now there’s a 50 percent chance that AI systems will be smarter than us. I ask him how we’d know when that happened. “Good question,” he says. And he wouldn’t be surprised if a superintelligent AI system chose to keep its capabilities to itself.
[Possibly] Taking an analog [uncopyable] approach to AI would be less dangerous because each instance of analog hardware has some uniqueness, Hinton reasons. As with our own wet little minds, analog systems can’t so easily merge in a Skynet kind of hive intelligence.
On some days, Hinton says, he’s optimistic. “… they [AIs] haven’t evolved to be nasty and petty like people and very loyal to your tribe, and very unloyal to other tribes. And because of that, we may well be able to keep it under control and make it benevolent.”