Does AI output seem so bland because LLMs are inherently uncreative, or because of everything done to render them "safe"?