






























On Tuesday, a Munich court ruled in favor of Germany's music rights organization GEMA in a significant copyright case against the U.S.-based AI company OpenAI. The court determined that OpenAI cannot use song lyrics without obtaining a proper license and ordered the company to pay damages for using copyrighted material without authorization.
GEMA argued that OpenAI's chatbot, ChatGPT, reproduces lyrics from copyrighted German songs without permission. The organization, which represents approximately 100,000 members including best-selling musician Herbert Grönemeyer, also claimed that ChatGPT was trained on protected works from its repertoire. According to GEMA, this use of copyrighted content violates authors' rights, since the AI reproduces lyrics without authorization from the rights holders.
OpenAI responded by stating that GEMA’s arguments reflect a misunderstanding of how ChatGPT operates. The company maintained that the AI does not directly copy content but generates responses based on patterns learned during training. Despite this defense, the Munich court sided with GEMA, reinforcing the importance of copyright compliance in the development and deployment of generative AI technologies.
The ruling is seen as a potentially landmark decision for the regulation of generative AI in Europe. GEMA is advocating for a licensing framework that would require AI developers to pay for the use of musical works both during AI training and in the output generated by the system. The decision can still be appealed, and both OpenAI and GEMA said they plan to release statements regarding the verdict later on Tuesday. This case highlights ongoing tensions between AI innovation and copyright protection, raising questions about how creative works can be used in AI training while respecting intellectual property rights.
Disclaimer: This image is taken from Reuters.

Nvidia CEO Jensen Huang stated on Friday that there are currently "no active discussions" regarding the sale of the company’s advanced Blackwell AI chips to China. Blackwell, Nvidia’s flagship AI chip, has been restricted from sale to China by the Trump administration over concerns it could bolster China’s military and domestic AI industry.
Although there had been speculation that talks between U.S. President Donald Trump and Chinese President Xi Jinping in South Korea might allow a limited version of Blackwell to be sold in China, no agreement has emerged. Huang, speaking during his fourth public visit to Taiwan this year, said, "Currently, we are not planning to ship anything to China." He added that it is up to China to change its policy if it wants Nvidia products to return to its market.
The U.S. has permitted Nvidia to sell its H20 chip in China, but Huang noted that China has shown little interest in Nvidia products, leaving the company with virtually no market share for advanced AI chips there. During his Taiwan visit, Huang met with long-time partner TSMC and attended the company’s sports day, describing business as “very strong” and saying he wanted to encourage TSMC employees. When asked about Tesla CEO Elon Musk’s semiconductor plans, Huang emphasized that building advanced fabs like TSMC’s is extremely challenging, though he acknowledged the high demand for such technology.
Huang also clarified remarks previously reported by the Financial Times regarding China leading the AI race. He explained that he had meant China possesses strong AI capabilities and a large pool of researchers—about 50% of the world’s AI researchers are in China, and many popular open-source AI models originate there. Huang stressed that while China is advancing rapidly, the U.S. must continue moving quickly to stay competitive in the global AI landscape.
Disclaimer: This image is taken from Reuters.

Google has enhanced Gemini’s Deep Research tool, allowing it to pull data from Gmail, Drive, and Chat. This update enables the tool to incorporate user information from these apps into research reports. According to Google’s blog, the feature merges Workspace content with online data to generate more contextually accurate insights.
Built on the Gemini 2.5 model, Deep Research automates data collection and report generation, streamlining the process of organizing and summarizing information from multiple sources. With the new Workspace integration, Gemini can reference documents, spreadsheets, slides, PDFs, emails, and chat messages to produce detailed analyses. Users can create reports that combine internal files with online information without manually transferring content.
For instance, during market research, Gemini can automatically scan brainstorming notes, emails, and project files from Drive or Gmail, then merge them with web-based findings to create a complete report. Similarly, businesses can use it to generate competitive analyses by combining internal plans with external market data.
Unlike a regular chatbot, Gemini Deep Research follows a structured, multi-step approach — planning, conducting searches, and compiling results into a cohesive report. Users can fine-tune these reports, export them to Google Docs, or even convert them into AI-generated podcasts. To enable the integration, users can go to the “Deep Research” option under the Tools menu in Gemini on desktop and select data sources like Google Search, Gmail, Drive, or Chat. The rollout has started for desktop, with mobile support arriving soon.
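The plan-search-compile loop described above can be sketched in a few lines of Python. This is only an illustration of the general agent pattern the article describes, not Gemini's actual implementation; every function name and data source here is a hypothetical placeholder.

```python
# Illustrative sketch of a "plan -> search -> compile" research loop.
# All names below are hypothetical placeholders, not Gemini's API.

def plan(query: str) -> list[str]:
    """Break a research query into sub-questions (stubbed heuristic)."""
    return [f"{query}: background", f"{query}: recent developments"]

def search(sub_question: str, sources: list[str]) -> list[str]:
    """Gather findings for one sub-question. A real agent would query
    web search, Gmail, Drive, or Chat here; this stub just labels
    each source."""
    return [f"[{src}] finding for '{sub_question}'" for src in sources]

def compile_report(query: str, findings: list[str]) -> str:
    """Merge all findings into a single cohesive report."""
    body = "\n".join(f"- {f}" for f in findings)
    return f"Report: {query}\n{body}"

def deep_research(query: str, sources: list[str]) -> str:
    steps = plan(query)                       # 1. plan the research
    findings: list[str] = []
    for step in steps:                        # 2. search per sub-question
        findings.extend(search(step, sources))
    return compile_report(query, findings)    # 3. compile the report

report = deep_research("EV market trends", ["web", "gmail", "drive"])
print(report.splitlines()[0])  # → Report: EV market trends
```

The point of the structure is that each stage can be swapped out independently: the same compile step works whether the findings came from the open web or from a user's Workspace files.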
Disclaimer: This image is taken from Business Standard.

French judicial authorities announced they have launched an investigation into the Chinese-owned social media platform TikTok over concerns that its algorithms may encourage young people to commit suicide. Paris prosecutor Laure Beccuau said the investigation follows a request from a French parliamentary committee to open a criminal inquiry into TikTok's potential role in endangering the lives of minors. The committee set out to study TikTok's psychological impact on youth after seven families filed a 2024 lawsuit claiming the platform exposed their children to content that encouraged suicidal behavior. Similar lawsuits in the United States have accused social media platforms of contributing to mental health issues among teenagers through their algorithms.
Beccuau highlighted that the committee’s report pointed to “insufficient moderation, easy access for minors, and a sophisticated algorithm that could trap vulnerable users in a loop of harmful content,” potentially pushing them toward suicide. A TikTok spokesperson told Reuters that the company “strongly refutes the accusations” and will defend its record. TikTok emphasized that it has “more than 50 features and settings aimed at teen safety, and removes nine out of ten violative videos before they are viewed,” investing heavily in age-appropriate content.
The Paris police cybercrime unit will investigate the alleged offense of providing a platform for “propaganda promoting products, objects, or methods as means of committing suicide,” punishable by up to three years in prison. The parliamentary committee previously stated that TikTok had deliberately endangered the health and lives of young users and referred the case to the court. TikTok rejected the committee’s claims, describing them as misleading and arguing that the issues are industry-wide, not specific to the company.
The prosecutor’s office said the inquiry will also consider a 2023 Senate report on risks related to freedom of expression, data collection, and algorithmic content, a 2023 Amnesty International report warning that TikTok’s algorithms are addictive and may pose self-harm risks, and a 2025 report by the French state agency Viginum highlighting potential manipulation of public opinion during elections.
Disclaimer: This image is taken from Reuters.



OpenAI, the artificial intelligence company, is reportedly gearing up for an initial public offering (IPO) that could value it at as much as US$1 trillion, potentially ranking among the largest in history. The firm is expected to file with regulators by the second half of 2026, with a possible market debut in 2027. Hairianto Diman and Syahida Othman explore whether this trillion-dollar valuation is rooted in real fundamentals or driven by the growing hype surrounding AI’s future, alongside insights from Kyle Rodda, Senior Financial Market Analyst at Capital.com.
Disclaimer: This podcast is taken from CNA.

As cyber threats become increasingly sophisticated and widespread, it is essential for businesses, government agencies, and individuals to stay informed about the latest trends, tactics, and strategies used by threat actors. Hairianto Diman and Syahida Othman explore how the private sector and government can enhance collaboration in cybersecurity with insights from Emil Tan, Director and Co-Founder of SINCON.
Disclaimer: This podcast is taken from CNA.

Recent revelations from current and former Meta employees claim that the company has concealed internal research highlighting significant risks to children on its virtual reality (VR) platforms. Meta rejects these claims, stating that it has conducted research on youth safety, implemented parental controls, set default privacy protections for teenagers, and that its legal actions were intended to ensure compliance with privacy regulations rather than to hide issues. Andrea Heng and Hairianto Diman examine the difficulties of addressing crimes in the VR environment with Nasya Bahfen, Senior Lecturer in the Department of Politics, Media, and Philosophy at La Trobe University.
Disclaimer: This podcast is taken from CNA.

LinkedIn is now requiring leaders, recruiters, and premium company members to verify their accounts, aiming to strengthen authenticity in professional interactions. But how exactly will this be implemented—and will it truly deliver on its promise? Andrea Heng sits down with Trisha Suresh, LinkedIn’s Head of Public Policy for Southeast Asia, to find out.
Disclaimer: This podcast is taken from CNA.










