Generative artificial intelligence tools including ChatGPT are prone to “hallucinations,” including fabricated answers, a consultant warns.
They also have a tendency to provide incorrect, if “superficially plausible,” information, which may be the most common issue associated with using these tools, Rob Friedman, senior director analyst with the legal and compliance practice of Stamford, Connecticut-based Gartner Inc., said in a statement.
“Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness and actual usefulness before being accepted,” he said.
Other issues include bias, intellectual property and copyright risks, cyber fraud, and consumer protection risks, the statement said.
Few organizations have corporate policies on the use of AI, which could expose them to the loss of confidential information if employees enter it into ChatGPT as part of a work project, said Stephanie Snyder Frenier, senior vice president and business development leader, professional and cyber solutions, at CAC Specialty in Chicago. She spoke at the Risk & Insurance Management Society Inc.’s Riskworld annual conference in Atlanta earlier this month.