What is Stopping Devs from Building an LLM?
Large language models, or LLMs, are large deep learning models trained on vast amounts of data to generate human language text. They still face a big challenge: of the thousands of languages spoken across the world, most have little or no text data. This scarcity makes it difficult to be certain that an LLM's output reflects verifiable factual knowledge, says Subbarao Kambhampati, a professor in the School of Computing and Augmented Intelligence, part of the Fulton Schools.

A recent research paper offers an LLM trustworthiness evaluation framework that could help address the problem. The article links to the podcast Kambhampati did on Machine Learning Street Talk, "Do you think that ChatGPT can reason?"