Microsoft president claims Taylor Swift threatened to sue over racist chatbot, Tay
Swift reportedly objected to the name of the AI bot, which Microsoft pulled after it posted pro-genocide messages.
Taylor Swift allegedly threatened to sue Microsoft in 2016 after the company launched TayTweets, a chatbot that went on to post racist messages on Twitter. The claim comes from Microsoft president Brad Smith in his new book, as reported by BBC News and The Guardian.
Swift's alleged threat was made before the chatbot went viral for all the wrong reasons; her complaint was that the name "Tay" was too similar to her own.
TayTweets was an AI chatbot launched in 2016, designed to learn from social media with the aim of replicating real-life conversations. However, less than 18 hours after it was plugged into Twitter, Microsoft took Tay down when it started posting messages denying the Holocaust and supporting genocide.
"I was on vacation when I made the mistake of looking at my phone during dinner," Brad Smith reportedly writes in Tools and Weapons. "An email had just arrived from a Beverly Hills lawyer who introduced himself by telling me: 'We represent Taylor Swift, on whose behalf this is directed to you.'
"'The name Tay, as I'm sure you must know, is closely associated with our client'," he writes. "No, I actually didn't know, but the email nonetheless grabbed my attention." The lawyer reportedly claimed that Microsoft's Tay violated federal and state laws and "created a false and misleading association between the popular singer and our chatbot."
Swift has a history of being protective of her brand. In 2015, she threatened Etsy users with legal action over the use of her name on homemade products and merchandise.
The FADER has contacted Taylor Swift's representatives for comment.