@atpfm the “podcast” about @siracusa’s various posts is a good example of an LLM being overwhelmed by its training data. Instead of summarizing what John actually wrote, the statistical weight of the training data keeps winning out over the new input. This is part of what causes “hallucinations,” and it happens especially with small inputs like blog posts.