LLMs risk spreading misinformation to the humans least able to identify it

hn · arxiv.org · 3 pts · 1 reply · 5h

While state-of-the-art large language models (LLMs) have shown impressive performance on many tasks, there has been extensive research on undesirable model behaviors such as hallucination and bias. In this work, we investigate how the qual…

