[AI] Claude's Constitution

nice90sguy

Out To Lunch
Joined
May 15, 2022
Posts
2,182
https://www.anthropic.com/constitution

This is, to me, a really fascinating document, with Sci-Fi overtones. It touches on moral philosophy and technology, as well as social, political, economic and psychological questions. But there's no sex, so it should go in the non-erotic category.

It's a long read -- it's meant to be a sort of "System Prompt" for Anthropic's future AI models, and, as such, to be "machine-readable".

I know a lot of people are cynical about AI companies, and Anthropic in particular, but I admire them for putting out this document, and I think it needs to be seen as significant.
 

What people/companies say/write and what they actually do are usually two different animals. Prime example, all of our elected officials. It's a decent PR blurb though. YMMV
 
Thanks for this! And this is the company that Trump banned from the DOJ...
I can't say I read it all, but I read enough to be glad. This is also the company that shared its software with competitors so they could use it on their own systems to find bugs before the hackers did.
It's so good to hear some good news.
 
It reads as well intentioned, and Asimov was clear that his snappy three laws didn't actually work, but as I kept reading this I was put more and more in mind of Robocop 2, where his simple principles are replaced by an enormous list of 'politically correct' and contradictory values and he ceases to be able to function. The idea that an AI can follow a prompt for philosophical good is an interesting sci-fi concept, but expecting it not to blunder seems optimistic.

(Thanks for the recommend)
 
The basic problem is that higher animals don't need to be told moral rules like "do as you would be done by" and "the punishment should fit the crime" -- we've evolved most of our morality and social behaviour. It's worrying that we need to be so explicit in our instructions to AI to stop it behaving psychopathically.

From my perspective, it seems to echo much of the US's individualism, not to mention an almost naive acceptance of the goodness of responsible but fundamentally self-interested corporations.
 