Jun 2, 2023
Then again, there's this...
I suspect that we're anthropomorphizing the output more than is wise. We see a collection of words that seems reasonable to us... and so we project that the machine "reasoned" to create the response, whether or not that was actually the case.
Of course, there's the small problem that we don't know exactly how we learn and reason either.
ChatGPT's answers are just a chain of learned probabilities. But are ours any different? In what way?
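That "chain of learned probabilities" can be sketched in a few lines. This is only a toy illustration, not how ChatGPT actually works: a hypothetical bigram table stands in for the learned model, and each word is sampled from the distribution conditioned on the previous one.

```python
import random

# Hypothetical "learned" probabilities: for each word, the chance of
# each possible next word. A real model conditions on far more context.
probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, max_tokens=5):
    """Chain the probabilities: repeatedly draw the next word from the
    distribution conditioned on the word just emitted."""
    out = [start]
    for _ in range(max_tokens):
        dist = probs.get(out[-1])
        if dist is None:  # no learned continuation; stop
            break
        words, weights = zip(*dist.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Nothing in the loop "understands" cats or dogs; it only follows frequencies. Whether our own sentence production differs in kind, or just in scale, is exactly the open question.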