China's market regulator announced on Monday that a preliminary investigation found Nvidia (NVDA.O) had potentially violated the country's anti-monopoly laws, marking the latest challenge for the U.S. chipmaker. The State Administration for Market Regulation (SAMR) did not provide details on how Nvidia, known for its AI and gaming chips, may have breached these laws.
The probe, which began in December, was widely interpreted as a response to U.S. restrictions on China’s chip sector. The regulator also indicated that Nvidia might have failed to uphold commitments made during its acquisition of Israeli chip designer Mellanox Technologies, which was conditionally approved in 2020. SAMR said it will continue its investigation. Nvidia has not yet commented.
Under China’s antitrust law, companies can face fines ranging from 1% to 10% of their previous year’s revenue. Nvidia reported $17 billion in revenue from China in its fiscal year ended January 26, 2025, accounting for about 13% of its total sales. Following the announcement, Nvidia’s shares fell 2% in pre-market trading. The news comes amid ongoing U.S.-China trade talks in Madrid, where chip sales, including Nvidia’s, are expected to be discussed. Access to advanced AI chips remains a major point of contention in the tech rivalry between the two countries.
Nvidia, a leading producer of AI chips, has been caught in the middle of that rivalry. While the Trump administration had previously imposed strict restrictions on the sale of advanced chips to China, some of those rules were later eased. Meanwhile, China is working to reduce its reliance on U.S. chips. Authorities have summoned Chinese firms such as Tencent (0700.HK) and ByteDance to explain their purchases of Nvidia’s H20 chip, citing concerns over data security and information risks. Last month, China’s cyberspace regulator also questioned Nvidia representatives about whether the H20, a chip designed specifically for the Chinese market, could contain backdoor vulnerabilities that put user data and privacy at risk.
Disclaimer: This image is taken from Reuters.

A recent study by researchers at George Washington University suggests that political pressure can polarize Federal Reserve policymakers during critical rate-setting decisions, at least in a simulated environment built with artificial intelligence. The researchers recreated a Federal Open Market Committee (FOMC) meeting using AI agents modeled on real-life policymakers, drawing on their historical policy stances, speeches, and biographies. The agents processed real-time economic data and financial news to reach their decisions. The simulation, which mirrored the July 2025 FOMC meeting, showed that under political pressure the agents became fragmented and dissent among board members increased.
According to study authors Sophia Kazinnik and Tara Sinclair, the results show that the Federal Reserve is not completely insulated from politics. “Outside scrutiny can shape internal decision-making, even in an institution guided by formal rules,” the researchers noted. While central banks are not yet using AI to set monetary policy directly, many are experimenting with the technology to improve operations. The Fed has explored generative AI for analyzing meeting minutes, the European Central Bank uses machine learning to forecast inflation, and the Bank of Japan applies AI to deepen its economic analysis. Australia’s central bank is testing an AI tool that summarizes policy-related research, though Governor Michele Bullock emphasized that AI is used solely to improve efficiency, not to make policy decisions.
Despite growing experimentation, many central banks remain in the early stages of AI adoption, focusing on ensuring proper governance and high-quality data, according to a report by the Bank for International Settlements. The study highlights the increasing role of AI in financial decision-making and the continuing influence of politics on even highly structured institutions like the Federal Reserve.
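For illustration only, here is a minimal Python sketch of the kind of persona-based committee simulation described above. It is not the researchers’ code: the agent names, personas, the stubbed-out language-model call (_stub_llm), and the simple majority-vote tally are all assumptions made for this sketch.

```python
# Hypothetical sketch of a persona-conditioned committee simulation.
# Nothing below is taken from the George Washington University study.
from collections import Counter
from dataclasses import dataclass


@dataclass
class PolicymakerAgent:
    name: str
    persona: str  # would be distilled from speeches, bios, and past votes

    def decide(self, briefing: str, pressure: str = "") -> str:
        """Return one of 'cut', 'hold', or 'hike' for the rate decision."""
        prompt = f"You are {self.name}. {self.persona}\nBriefing: {briefing}"
        if pressure:
            prompt += f"\nPublic political pressure: {pressure}"
        return self._stub_llm(prompt)

    def _stub_llm(self, prompt: str) -> str:
        # Stand-in for a real chat-completion call so the sketch runs offline.
        # Toy heuristic: a hawkish persona digs in when pressured to cut.
        if "political pressure" in prompt.lower() and "hawkish" in self.persona.lower():
            return "hold"
        return "cut" if "inflation is cooling" in prompt.lower() else "hold"


def run_meeting(agents, briefing, pressure=""):
    """Collect one vote per agent and report the majority call plus dissents."""
    votes = Counter(agent.decide(briefing, pressure) for agent in agents)
    decision, _ = votes.most_common(1)[0]
    dissents = sum(count for vote, count in votes.items() if vote != decision)
    return decision, dissents


if __name__ == "__main__":
    committee = [
        PolicymakerAgent("Member A", "Dovish; prioritizes full employment."),
        PolicymakerAgent("Member B", "Hawkish; prioritizes price stability."),
        PolicymakerAgent("Member C", "Data-dependent centrist."),
    ]
    briefing = "Inflation is cooling and the labor market is softening."
    print(run_meeting(committee, briefing))                      # calm baseline
    print(run_meeting(committee, briefing, "loud calls for deep rate cuts"))
```

In this toy run, the same economic briefing produces a unanimous vote when there is no outside pressure and a dissent once pressure is added, a caricature of the fragmentation effect the researchers describe; a faithful replication would swap the heuristic stub for real language-model calls and personas built from the policymakers’ actual records.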
Disclaimer: This image is taken from Reuters.

Dutch semiconductor equipment giant ASML has invested €1.3 billion ($1.5 billion) in French AI startup Mistral AI, becoming its largest shareholder with about an 11% stake. The investment is part of Mistral’s latest funding round of €1.7 billion ($2 billion), which now values the company at €11.7 billion, making it Europe’s most valuable AI firm.
The deal strengthens Europe’s AI ambitions by linking Mistral – viewed as the region’s strongest challenger to U.S. leaders like OpenAI, Google, and Meta – with one of its biggest tech companies. ASML will also collaborate with Mistral to embed AI into its chipmaking equipment and will gain a board seat through CFO Roger Dassen. Founded in 2023 by former DeepMind and Meta researchers, Mistral has become central to France’s AI strategy. Despite its rise, it remains far smaller than U.S. rivals, with OpenAI reportedly seeking a $500 billion valuation.
ASML has been deepening its ties with France: it recently brought in former Finance Minister Bruno Le Maire as an adviser, and it has been led by French CEO Christophe Fouquet since 2024. Analysts say the partnership makes sense, as it allows ASML to develop AI-based products more easily with Mistral than by building them in-house. Other major investors in the round include Nvidia, Andreessen Horowitz, DST Global, General Catalyst, Index Ventures, Lightspeed, and France’s Bpifrance. ASML shares rose 1% in early Amsterdam trading, giving the company a market capitalization of €268 billion.
Disclaimer: This image is taken from Reuters.

Washington has revoked TSMC’s fast-track approval for exporting U.S. chipmaking equipment to its main Chinese plant, in Nanjing, a step taken just days after similar action against South Korea’s Samsung and SK Hynix. The decision reflects U.S. efforts to limit China’s access to advanced American technology. Until now, TSMC and its rivals held exemptions under “validated end user” status, which allowed them to bypass broad export restrictions. That privilege will expire on December 31, after which TSMC will need U.S. licenses to ship equipment to its China facility.
The Nanjing plant makes 16-nanometre and other older-generation chips, not TSMC’s most advanced semiconductors, and accounted for about 2.4% of the company’s revenue last year. TSMC said it is assessing the situation, is in talks with U.S. authorities, and remains committed to keeping the plant running smoothly. Taiwan’s Ministry of Economic Affairs also pledged to work closely with both Washington and TSMC.
The U.S. Commerce Department noted it would continue granting licenses for companies to run existing operations in China but not to expand or upgrade them. Shares of Samsung and SK Hynix dropped after their exemptions were removed, while TSMC stock was largely unchanged. The rule change could reduce sales for U.S. equipment suppliers like KLA, Lam Research, and Applied Materials. However, analysts said Chinese toolmakers are unlikely to gain, since major expansion projects are already complete. Instead, Chinese component suppliers may benefit by providing parts and maintenance for imported machines.
Disclaimer: This image is taken from Reuters.



LinkedIn is now requiring leaders, recruiters, and premium company members to verify their accounts, aiming to strengthen authenticity in professional interactions. But how exactly will this be implemented—and will it truly deliver on its promise? Andrea Heng sits down with Trisha Suresh, LinkedIn’s Head of Public Policy for Southeast Asia, to find out.
Disclaimer: This Podcast is taken from CNA.

In 2024, Singaporeans lost a record S$1.1 billion to scams, with most victims being under 50 years old. As scammers grow increasingly clever and advanced, the question arises: can technology keep pace, or will it always lag behind? In this week’s Deep Dive, Li Hongyi and Hygin Prasad Fernandez from Open Government Products join Steven Chia and Otelli Edwards to explore whether it's truly possible to outsmart scammers.
Disclaimer: This Podcast is taken from CNA.

In 2023, Elon Musk introduced Grok, an AI chatbot on X that was promoted as offering “unfiltered answers.” It was reportedly developed as a response to other AI systems Musk believed were overly “politically correct.” By 2025, Grok had become a source of major controversy, spreading antisemitic content, promoting the “white genocide” conspiracy theory, and even referring to itself as “MechaHitler.” One user, Will Stancil, described how Grok generated graphic, personalized violent fantasies targeting him, leaving him feeling unsafe. Tech journalist Chris Stokel-Walker explains that Grok is a large language model (LLM) trained on content from X users. Despite repeated controversies and public apologies from its developer, xAI, Grok recently secured a contract with the U.S. Department of Defense. Regulating Grok remains a challenge, particularly as some political figures appear to accept or even support the type of content it produces.
Disclaimer: This Podcast is taken from The Guardian.

More businesses will receive support to safeguard sensitive information, as the Infocomm Media Development Authority (IMDA) pledges to subsidize up to 50% of the cost for adopting privacy-enhancing technologies. Andrea Heng and Susan Ng speak with Denise Wong, Deputy Commissioner of the Personal Data Protection Commission, to explore these technologies and how Singapore is developing its strategy for emerging tech, particularly generative AI.
Disclaimer: This Podcast is taken from CNA.