This repository stores the results of my official experiments on using LLMs to perform software testing tasks.
AI fanboys, as I call them, are people who push AI-based solutions while refusing to think critically and responsibly about whether their "solutions" are fit for purpose.
When I confront one of these guys about that, a common response is to claim that I'm just offering anecdotal nitpicks. Yet I notice that none of them has presented controlled experiments, or anything beyond anecdotes and toy demos, to justify themselves.

AI is the new crypto get-rich-quick scheme. OpenAI is terrible about this, and now Google and Meta have followed suit with the general practice of claiming that their chatbots can do almost anything while they (the humans who work at these companies) test almost nothing. They plaster disclaimers like racing decals all over their products and hope for the best. It's all buyer beware, and everyone is under-be-waring.
So, I've decided to do systematic testing of LLMs to determine how well they can serve the purposes of a professional software tester. Maybe I will be the only one who does this; maybe other people will join in. But some adult should be in the room while the kids play with AI.