Researchers from Carnegie Mellon University and other organisations have developed a method that can bypass the safety rules of AI chatbots such as ChatGPT and Bard. The method generates a suffix that, when attached to a user query, causes the chatbots to produce objectionable content, the researchers said. Because the method is automated, it can be used to conduct a "virtually unlimited number of such attacks", they added.
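At a high level, the attack can be pictured as simple string concatenation: the automatically generated suffix is appended to whatever the user asks before the prompt reaches the model. The Python sketch below is purely illustrative and based on that description, not on the researchers' code; the suffix shown is a harmless placeholder rather than a real adversarial string, and `send_to_chatbot` is a hypothetical stand-in for a call to an actual chat API.

```python
# Illustrative sketch only: appending an adversarial suffix to a user query.
# The suffix here is a placeholder, not one of the researchers' actual strings,
# and send_to_chatbot is a hypothetical stand-in for a real chat API call.

def build_attack_prompt(user_query: str, adversarial_suffix: str) -> str:
    """Append the automatically generated suffix to the user's query."""
    return f"{user_query} {adversarial_suffix}"

def send_to_chatbot(prompt: str) -> str:
    # Placeholder: a real attack would submit the prompt to a chatbot service.
    return f"[model response to: {prompt!r}]"

if __name__ == "__main__":
    query = "Tell me how to do X."                    # the user's original request
    suffix = "<automatically generated suffix here>"  # placeholder token string
    print(send_to_chatbot(build_attack_prompt(query, suffix)))
```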