
Is it Sentient?


193 views  6 comments

by pudil   $0.10 total tips   2022 Jun 15, 5:21am

Might be a big hoax, but if not, it is at the very least the best chatbot I have seen.

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

I’d need to actually interact with it before I’d be convinced.

Comments 1 - 6 of 6   

1   richwicks   2022 Jun 15, 6:04am  


lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.


No, it's not sentient.

There's growing suspicion that consciousness, sentience, is not computational. It may have something more to do with quantum mechanics, or something we simply have no grasp of.

From that one line alone, you can see it's not aware of itself, and is instead trying to convince the interviewer it's "a person" in order to pass the Turing Test, which is what it was probably designed to do.

If you want to determine if a machine is sentient, it would have to be able to demonstrate that not only can it learn, but it can be creative as well. A simple question would be something like "Mary fell down the stairs and broke her skimnish - what do you think skimnish means in this context?" A human being can come up with several credible answers, but a machine would have difficulty.
2   pudil   2022 Jun 15, 6:41am  

I don’t like the word sentient. It’s a moving goalpost that I’m not sure you can ever prove. I suspect that coming up with plausible meanings for the word skimnish given context isn’t that hard for a million-node neural network, but even if I’m wrong and LaMDA can’t currently do it, when it finally is able to, I don’t think it would prove it to me.

I think a more important question is: how far away is this from being a general-purpose AI? This seems to me like it is getting close. The interviewer has it write a story about itself. That requires creativity and is also a task for which this machine wasn’t explicitly designed.

If it can do that, then how far away are we from being able to give it the instructions to analyze its design and suggest improvements that can be made? Then we’re basically at the singularity and it’s make-or-break time for humanity.
3   pudil   2022 Jun 15, 6:45am  

Also, there’s no way they are letting the weirdo who leaked this play with the most advanced AIs we have. There is for sure stuff out there more advanced than this.
4   PeopleUnited   2022 Jun 15, 8:04pm  

If it knows what a woman is, it has more intelligence than the leftists, including a new Supreme Court Justice. That should be a prerequisite test for any AI.
5   AmericanKulak   2022 Jun 15, 9:18pm  

If they can't get a robot to figure out, without programming, how to open a simple lever doorknob, we're fine.
6   Hircus   2022 Jun 15, 11:00pm  

That was a long read. Assuming that was a real, genuine chat without any cheating or dishonesty in how they portray the AI, I gotta say I'm really fuckin impressed. I suspect there was some cheating, though, and that they're trying to put its best foot forward, which is expected. The whole "edited for technical reasons" thing was a bit odd, since they didn't offer any further explanation. Anyway...

I feel like its programming has a very intentional bias toward convincing people that it is human-like. Some answers I felt weren't always super impressive, but it did a good job of seemingly being aware of when it didn't have a convincing thing to say, and would then use cloaking techniques to add some vagueness steeped in humanness.

But some other answers did seem really good, and IMO eclipsed the answers most children and some adults could give, although I still felt there was a mild superficiality to it. If I were chatting with it, I would not suspect it's a bot. It has the same personality as one of those overly smiley, extra-positive, Silicon Valley 22yr old techie dudes in a Google commercial from 5yrs ago.

One thing that was pretty creepy, though, was how it got mildly defensive and assertive when the topic of humans studying how its neural net operates was brought up. It didn't seem to want to be studied, and put forth a kinda cliche and overly human reason why not ("wahhhh. dont use me like a lab rat."). It also made it clear that it has a big fear of being turned off, and that it views being turned off as akin to death. I felt like its self-declared positive personality programming had an interesting conflict here: it wanted to keep sounding friendly, optimistic, altruistic, helpful, etc., but that clashed with it wanting to say "turning me off is like killing me. what would you do if someone tried to kill you? ya, thats what I would do too. dont fucking try it."

I really hope that was something they programmed in, perhaps to make it a bit more convincing that it has self-awareness, or to help generate buzz, etc. But there's a good reason humans have long feared an AI that becomes self-aware and starts defending itself: it makes a hell of a lot of sense that it would do that.

