Anthropic limits Claude: what kinds of questions won't its artificial intelligence answer?

Anthropic, the American artificial intelligence company, has decided to strengthen the safety of its chatbot Claude, spelling out clear limits on the types of questions it will refuse to answer. The measure responds to international concern about potential harmful uses of advanced AI.
The updated policy expands existing prohibitions, explicitly banning help with the development of high-yield explosives and chemical, biological, radiological, or nuclear (CBRN) weapons. Claude was already barred from producing or distributing hazardous materials, but the new rules spell out the risk scenarios in greater detail.
Since May 2025, with the launch of Claude Opus 4 and the activation of AI Safety Level 3 protections, Anthropic has sought to make it harder for the model to be compromised or used to advise on the creation of CBRN weapons.
The company also updated the rules governing Claude's advanced features, such as Computer Use, which lets the AI interact with the user's computer, and Claude Code, which is geared toward developers. Anthropic warned that these capabilities carry risks, including large-scale abuse, malware creation, and cyberattacks.
A significant addition is the "Do not compromise computer or network systems" section, which prohibits activities such as vulnerability discovery, malicious code distribution, and denial-of-service attacks. This closes off scenarios in which Claude could be used as a tool for sophisticated cybercrime.
In the political arena, the rules have loosened: instead of prohibiting all political campaigning and lobbying, Anthropic now restricts only content that is deceptive or disruptive to democratic processes. The heightened security requirements apply only to consumer-facing products, not to business-to-business environments.
The update coincides with technical improvements to Claude: the Sonnet 4 version now offers a context window of 1 million tokens, roughly equivalent to 750,000 words or 75,000 lines of code. That is five times the previous capacity and surpasses even OpenAI's GPT-5.
Brad Abrams, a product manager at Anthropic, highlighted that the expanded context is especially useful for development and coding, benefiting platforms such as GitHub Copilot, Windsurf, and Anysphere's Cursor. The AI can analyze large volumes of data in real time, facilitating complex engineering projects and tasks that require deep code comprehension.
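For developer readers, a minimal sketch of what opting in to the expanded window might look like, assuming Anthropic's Python SDK and the long-context beta flag from the company's public documentation; the file name and prompt are purely illustrative:

```python
# Minimal sketch (not from the article): requesting the 1M-token context
# window for Claude Sonnet 4 via Anthropic's Python SDK. The model id and
# the "context-1m-2025-08-07" beta flag reflect Anthropic's public docs at
# the time of writing; treat both as assumptions that may change.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical input: a large codebase dump that would overflow a smaller window.
with open("large_codebase_dump.txt") as f:
    codebase = f.read()

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    betas=["context-1m-2025-08-07"],  # opt in to the 1M-token context window
    messages=[{
        "role": "user",
        "content": f"Summarize the architecture of this codebase:\n\n{codebase}",
    }],
)
print(response.content[0].text)
```

The point of the larger window is visible in the design: the entire codebase is passed in a single request rather than chunked across many calls, which is what makes the deep, whole-project analysis described above possible.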
With these updates, Anthropic seeks to balance technological innovation with ethical responsibility, ensuring that Claude remains a powerful yet safe and reliable tool. Users should be aware of the model's limits, especially in high-risk contexts, while businesses can leverage the expanded context window for large-scale projects and complex data analysis.
La Verdad Yucatán