The Robustness of In-context Learning to Word Shuffling
Despite the impressive performance of in-context learning (ICL) in Large Language Models (LLMs), its underlying mechanism remains poorly understood: ICL can be sensitive to a variety of factors that appear unimportant from a human perspective. Inspired by previous research and by the Caesar cipher, we investigate whether ICL constitutes a genuine learning mechanism by applying vocabulary shuffling. We first create a new vocabulary by shuffling the original vocabulary at different shuffling rates while maintaining a bijection between the two, and then map the demonstration inputs through this new vocabulary. We evaluate both open-source and closed-source LLMs on two downstream natural language processing (NLP) tasks under this scheme. The results demonstrate that ICL is robust even to complete word shuffling on powerful LLMs such as Llama3-70B. However, our pilot in-depth experiments are not yet sufficient to explain this surprising phenomenon, which motivates further study in future work.
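To make the shuffling scheme concrete, the following Python sketch builds a bijective word-to-word mapping at a given shuffle rate and applies it to a demonstration input. This is an illustrative toy implementation only; the function names (`build_shuffled_vocab`, `map_demonstration`), the whitespace tokenization, and the example vocabulary are our own assumptions, not the paper's actual code.

```python
import random

def build_shuffled_vocab(vocab, shuffle_rate, seed=0):
    """Return a bijective word->word mapping in which a `shuffle_rate`
    fraction of the vocabulary is randomly permuted and the rest maps to itself."""
    rng = random.Random(seed)
    n_shuffled = int(len(vocab) * shuffle_rate)
    chosen = rng.sample(vocab, n_shuffled)       # words selected for remapping
    permuted = chosen[:]
    rng.shuffle(permuted)                        # random permutation of the selected words
    mapping = {w: w for w in vocab}              # identity mapping for untouched words
    mapping.update(dict(zip(chosen, permuted)))  # overwrite with the permutation; bijection is preserved
    return mapping

def map_demonstration(text, mapping):
    """Rewrite a demonstration input word by word using the shuffled vocabulary."""
    return " ".join(mapping.get(tok, tok) for tok in text.split())

# Toy usage: a tiny vocabulary with complete shuffling (rate = 1.0).
vocab = ["the", "movie", "was", "great", "terrible", "plot"]
mapping = build_shuffled_vocab(vocab, shuffle_rate=1.0, seed=42)
print(map_demonstration("the movie was great", mapping))
```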