India’s leading fintech companies are urging Anthropic PBC to grant them early access to Mythos, an advanced AI model that has raised global concerns over a potential surge in cyberattacks. Firms such as One97 Communications Ltd., Razorpay Software Ltd., and Pine Labs Ltd. have asked the San Francisco-based AI developer to allow them to test Mythos so they can identify and fix vulnerabilities in their own systems. This follows Anthropic’s decision to begin a restricted rollout of the model, which it believes is too risky for broad release.
Vijay Shekhar Sharma, founder and CEO of One97, said the company recently held an urgent discussion with Anthropic about when a second group of approved companies might gain access. According to him, Anthropic questioned how One97 intended to use Mythos and what value it would bring, highlighting the company’s cautious approach to distribution.
Sharma also expressed concern about the broader implications of such technology, suggesting it raises deep security fears for financial systems and national infrastructure. He warned that modern digital networks could be compromised from anywhere, making traditional forms of conflict less relevant.
The demand from Indian firms mirrors global anxiety: regulators and financial leaders have grown alarmed by reports that Mythos can uncover long-standing cybersecurity flaws. Anthropic initially tested the model internally before granting limited access to a small group of companies including Amazon Web Services, Apple, and JPMorgan Chase, and is now considering a controlled expansion under a program called Project Glasswing.
The development has unsettled global financial authorities, with US Treasury officials describing the model as a major leap in capability, while European leaders have cautioned about the risks of misuse. Experts are increasingly questioning whether such AI systems could enable large-scale financial fraud, disrupt payment infrastructure, or destabilize the global financial system. Indian regulators, including the Reserve Bank of India, have not yet commented on any formal risk assessment.
Meanwhile, companies like Razorpay are actively preparing for potential threats, with internal teams strengthening cybersecurity defenses. CEO Harshil Mathur said the company has formally requested access to Mythos to test and improve system security, describing it as a time-sensitive challenge for startups. He added that the model has recently become a major topic in startup communities, and suggested Anthropic may eventually expand access under strict usage rules limited to security testing.
India, with its large pool of software engineers serving global financial institutions, has become a key market for Anthropic’s Claude system, widely used for coding, debugging, and IT modernization. Pine Labs CEO Amrish Rau emphasized that regulators will likely tighten cybersecurity standards in response to such threats, stressing that security must now be treated as a core priority rather than a formality.

As more people rely on AI for guidance, U.S. lawyers are cautioning clients not to treat chatbots as confidential advisors, especially when legal risk is involved. The concern has grown after a federal judge in New York ruled that a former executive of a bankrupt financial services firm could not block prosecutors from accessing his AI chatbot conversations in a securities fraud case. Lawyers say this decision highlights that messages exchanged with tools like ChatGPT or Claude may be discoverable in court, unlike communications with licensed attorneys, which are typically protected.
Following the ruling, legal experts have begun advising clients that AI chats could be subpoenaed in both criminal and civil cases. Attorneys emphasize that unlike lawyer–client conversations, interactions with AI systems do not carry legal privilege, and sharing sensitive legal advice with chatbots may weaken confidentiality protections.
Several major law firms have issued guidance urging caution, with some even updating client agreements to warn that using AI tools could risk waiving attorney–client privilege if legal advice is exposed to third-party platforms. The case that triggered these concerns involved a former financial firm executive who used an AI chatbot to help prepare case-related material for his lawyers. Prosecutors argued those AI-generated materials were not protected because they were not prepared at the direction of an attorney, and a judge agreed, ordering disclosure of many of the documents. The judge also noted that AI platforms have no legal relationship with users and therefore cannot provide privileged communication.
Another court in Michigan, however, ruled differently in a separate case, allowing a self-represented plaintiff to keep her AI chat records private as case-preparation work product, showing that courts remain split on how such data should be treated. AI companies like OpenAI and Anthropic note in their terms that user data may be shared in certain circumstances, and they recommend that users avoid relying on chatbots for legal advice.
Law firms are now increasingly setting internal rules and suggesting that if AI is used in legal research, it should be done under lawyer supervision and clearly documented. Until clearer legal standards emerge, attorneys continue to stress a cautious approach: sensitive case discussions should be kept strictly between a client and their human lawyer, not AI systems.

OpenAI is reportedly facing increased scrutiny from some of its own investors over its estimated $852 billion valuation as the company shifts its strategic focus toward the enterprise market to compete more directly with Anthropic, according to a Financial Times report. The report notes that OpenAI recently raised $122 billion in what could be the largest funding round in Silicon Valley history. Despite this milestone, the company has revised its product roadmap twice over the past six months, driven by intensifying competition, first from Google and then from Anthropic.
Some investors are concerned that these repeated strategic adjustments may weaken OpenAI’s competitive position against rivals such as Anthropic and a resurgent Google, even as the company prepares for a potential initial public offering as early as this year. Industry analysts have also suggested that Anthropic’s revenue growth rate could overtake OpenAI’s within a matter of months. OpenAI is also accelerating efforts to expand enterprise adoption of its products globally as competition intensifies.
An early investor told the Financial Times that the company appears unfocused, questioning why it is prioritizing enterprise and coding tools despite ChatGPT’s massive user base and rapid consumer growth. OpenAI Chief Financial Officer Sarah Friar rejected claims that investors are dissatisfied with the company’s direction, stating that such suggestions do not reflect the actual situation.
In a statement to Reuters, an OpenAI spokesperson said the $122 billion fundraising round was heavily oversubscribed, completed quickly, and supported by a wide range of leading global investors, underscoring strong confidence in the company’s strategy, current momentum, and long-term growth potential.

Dutch regulators have approved the use of Tesla’s self-driving software, with mandatory human supervision, on both highways and city streets—marking the company’s first such clearance in Europe. Tesla hopes this decision will encourage similar approvals across the European Union. The widespread adoption of Full Self-Driving (FSD) is key to Tesla’s long-term growth strategy. A significant portion of its $1 trillion valuation is tied to CEO Elon Musk’s belief that AI-powered autonomous driving and robotaxi services will become major sources of revenue.
The Netherlands approved the system, known as Full Self-Driving Supervised, after more than 18 months of testing and evaluation by the Dutch vehicle authority RDW. The software can control steering, braking, and acceleration. RDW stated that when used correctly, the system can enhance road safety and confirmed it will seek approval for use across the EU.
Tesla is also relying on its self-driving technology to help revive vehicle sales in Europe, which have slowed due to an aging electric vehicle lineup and controversies surrounding Musk’s political views. However, the company saw a rise in European sales in February after more than a year of decline. Analysts expect the approval to boost interest among consumers eager to try the technology. Tesla’s stock rose slightly in after-hours trading, although it has declined significantly this year compared to the broader U.S. market. The company announced it will begin rolling out the feature in the Netherlands soon and aims to expand it to other European countries.
The software is already available in the U.S. as a subscription service, though it faces legal challenges and regulatory scrutiny following accidents and alleged traffic violations. RDW emphasized that European safety standards are stricter than those in the U.S., meaning the EU version of FSD differs from the American one.
Tesla, which is the leading electric car brand in the Netherlands, has around 100,000 eligible vehicles, including Model 3 and Model Y. Unlike many competitors that rely on multiple sensors, Tesla primarily uses cameras and AI for its self-driving system. Other automakers like Mercedes-Benz, Ford, and BMW have introduced limited hands-free driving features, mostly restricted to highways and lower speeds, particularly in Germany. Tesla’s system stands out for its broader usability.
RDW will now submit the system for EU-wide approval to the European Commission. Member states will vote, and a majority decision is required for full regional authorization. Even if it fails to secure majority support, individual countries may still choose to approve it independently. Tesla has indicated it expects a potential EU-wide decision as early as this summer.
In 1998, the Tobacco Master Settlement Agreement held tobacco companies in the United States responsible for the damage caused by the products they made and sold. Today, a similar question arises for Big Tech: it is not only about the content on their platforms but also whether those platforms were intentionally designed to keep users addicted. Daniel Martin explores this issue with Rajesh Sreenivasan, Head of Technology, Media, and Telecommunications at Rajah and Tann Singapore.
Disclaimer: This podcast is taken from CNA.

In Singapore, mental health professionals are noticing a small but increasing number of patients showing delusions, paranoia, or emotional dependence seemingly connected to frequent AI chatbot use. Although “AI psychosis” is not an official medical diagnosis, clinicians acknowledge that the issue is genuine. How does extensive interaction with AI blur the boundaries between reality and reinforcement? Who is most vulnerable, and what signs should families be aware of? Andrea Heng and Hairianto Diman discuss these questions with Dr. Amelia Sim, Senior Consultant at the Department of Psychosis, Institute of Mental Health.
Disclaimer: This podcast is taken from CNA.

With decisions delegated, chatbots replacing friends, and nature sidelined, Silicon Valley is shaping a life stripped of real connection. Escape is possible—but it will require a united effort.
Disclaimer: This podcast is taken from The Guardian.

Google has revealed plans for a significant increase in its AI investments in Singapore, featuring the launch of Majulah AI – a collection of training and innovation initiatives aimed at developing an AI-ready workforce. Daniel Martin speaks with Ben King, Managing Director of Google Singapore, about how these efforts will help Singapore achieve its goal of becoming an AI leader and accelerate AI adoption across the nation.
Disclaimer: This podcast is taken from CNA.
