I’m not going to play games with you. If you don’t understand the very definitions you cite, then I’ve nothing more to say to you except: you are wrong.
What use could a bear have for birdseed? Or did the feeder include suet or lard?
But they didn’t know it.
They pretty obviously thought that they would get some kind of benefit that would outweigh any cost (if they even thought about the prospect of personal cost).
Surprisingly (to me, anyway), many people don’t really have a good concept of the difference between before and after a change, particularly if the change is death. I’ve met too many people who spoke of getting even for some slight, “even if it kills me. I’d do it just to gloat afterwards.” Not too well balanced, there.
The dog realized it was the cause of being unable to bring the toy to its owner.
Elegant test. It’s comparable to the one that demonstrated self-awareness in rats: the rats had to decide whether they were confident enough about the similarity of two musical tones to go for the large, tasty reward (getting nothing if they were wrong), or to pass on it and take the small but unconditional reward instead. To make that choice in a non-random way, which they did, they had to be aware of themselves as the agents of the outcome.
Why do you think there are no encoded rules?
Why aren’t neural nets simply encoding the rules at a higher level of generality and abstraction rather than going straight to an explicit state machine as production systems do?
It’s a way to manage their fears.
Gently poking those pudgy tummies is irresistible.
It would be larger than Arlington, because fewer than 1/3 of those who die in or because of war get buried at Arlington.
Now transfer your weight to your left foot….
In what way are neural nets not “rules-based”?