I love the story of The Talking Dog, and I value how it pokes holes in the idea that AI has made us more productive. In my observation, AI is talked about as if it were an automatically useful tool that people can simply pick up and use. But in my experience, AI has caused more setbacks than improvements, mainly because people assume the output is correct and then act accordingly. In other words, because it isn’t built to find truth, but instead to assist, respond, and create, it (sometimes hilariously) points unsuspecting real-life individuals (as opposed to the generalized “people”) in the wrong direction, creating the need for a lot of “re-work” and corporate “circling back.” The general-use models I’ve encountered seem fundamentally flawed because their aim is to chat and entertain, yet users presume they’re getting Answers (capital A intentional) that are precise and correct, and they modify their behavior accordingly.