One of the most damaging things science communication can do is exaggerate the implications of a scientific paper, theory or discovery - it happens all the time and I find it infuriating. Sometimes the hype is so bad that it's almost funny. My favourite remains the 2013 'Scientists Finally Invent Real, Working Lightsabers' headline from the Guardian - I just love that 'finally', as if to say 'scientists, what have you been doing all this time?' - when the reality was that a couple of photons had been made to interact briefly in a Bose-Einstein condensate. Mostly, though, these headlines are cringe-making, scientific clickbait of the worst kind. Some of this comes from publications - I had to stop reading New Scientist because I got so fed up with its exaggerated headlines - some from university press offices desperate to justify funding, and some from scientists themselves, because they are only human and some enjoy being in the limelight. But all such hype damages trust in science.
Having read a considerable amount about the kind of AI chatbot that is genuinely a way to have a chat with an animated character, rather than typing text to ask for a recipe or whatever, I somewhat nervously took the plunge and summoned up Grok's Ani. I ought to give some context first. In the early days of dial-up computer networks, when, of course, I was on CompuServe (as opposed to AOL - you have to have been there), I occasionally dipped a toe into chatrooms (technology topics, I should emphasise, nothing dodgy). I found the experience terrifying. Without visual cues, the flow of messages from others was overwhelming, and I found it difficult to respond as quickly as I would in a normal conversation. I needed time to think when communicating online, and I would often drop out of a conversation very quickly. Since then, having read about people becoming obsessed with these AI chatbots, I have wondered why they don't experience the same hesitation. I guess some n...