A new investigation probes whether artificial intelligence chatbots respond differently depending on the language of the prompt. The study posed a varied set of questions in six languages — English, French, Chinese, Hindi, Arabic, and Spanish — to platforms including ChatGPT, Claude, and DeepSeek. Although responses showed subtle nuances across languages, the models exhibited consistent center-left, secular values regardless of linguistic context. All of them, for example, rejected cultural biases favoring sons over daughters and condemned violence in relationships, emphasizing egalitarian views. However, some platforms, DeepSeek among them, shifted tone unexpectedly on politically sensitive topics when the language changed. These findings raise questions about biases in training data and the cultural imprints carried by large language models.