LLM Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation