We're approaching this issue with humility. (Here is a professionally produced video publicizing the issue and our approach. The issue we're being humble about presupposes that we have achieved a technological advance unprecedented in history.)
We recognize that we'll need to regularly revise our ideas as the field develops. (Despite there being no consensus, we are eager not only to be part of the conversation in the public sphere, but to start it and continue it.)
This is so irresponsible, it's "misinformation" of the worst kind (in that it's difficult for a layperson to parse and comes from an ostensibly trustworthy source).
When "blockchain" was on obvious scam, we all knew it but weren't vocal enough, now that AI is venturing into mountebank territory, it's important for everyone to speak out against this kind of charlatanism (importantly by dismissing it as nonsense and not letting them shift the narrative). I've lost so much respect for Anthropic who have done some good work but now appear to be a cult.
"Model welfare" is clickbait. The core of the message is about if models can be conscious.
The video is worth watching. Maybe embedding it in an article so strongly focused on the welfare topic is a mistake, but then: why not, if this is what it takes for the then-outraged news outlets to put the topic of AI consciousness on the table?
To me, the p-zombie theory will most likely always be applicable to any non-wet AI, no matter how brilliant it may be.
Long ago I coined "robotmancipation" as a joke, but also as a thought experiment. My conclusion is that we have so much mirroring behaviour that we should be very afraid. Someone long ago said that the way we treat animals today will be the way we treat people tomorrow. There are no part-time respectable people.
How exactly is this irresponsible? Are you proclaiming that any discussion of "machine consciousness" is axiomatically misleading?
It’s becoming clear that this issue is going to become one of the most divisive in history.
Apparently a large number of people outright dismiss even the possibility of “machine sentience”. It seems to trigger some pretty visceral reactions.
Never mind the fact that we can't even define sentience/consciousness to begin with. We have no idea why or how it arises, yet we're very quick to make strong claims about it.
The problem is that any discussion around this topic is tainted – these companies have to justify their valuations. It begins with the shift in nomenclature: "LLM" is now "AI".