Chinese Officials Use ChatGPT for Cross-Border Intimidation and Disinformation Campaigns

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

OpenAI revealed that Chinese officials used ChatGPT to document and facilitate large-scale cross-border intimidation and disinformation campaigns, including impersonating U.S. officials to threaten dissidents, fabricating false death notices, and attempting to smear Japan's Prime Minister. These AI-enabled actions resulted in real-world harm, violating human rights and spreading misinformation globally.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of an AI system (ChatGPT) in a misuse context, leading to direct harm such as financial fraud (dating scams defrauding victims) and violations of rights (impersonation of law firms and officials). The harms are realized and ongoing, meeting the criteria for an AI Incident. The report details actual misuse and resulting harm, not just potential or hypothetical risks, so it is not an AI Hazard or Complementary Information. Therefore, the classification is AI Incident.[AI generated]
AI principles
Respect of human rights; Accountability

Industries
Government, security, and defence; Media, social platforms, and marketing

Affected stakeholders
General public; Government

Harm types
Human or fundamental rights; Psychological; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

From dating scams to fake lawyers: OpenAI details ChatGPT misuse in new threat report

2026-02-26
The Hindu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in a misuse context, leading to direct harm such as financial fraud (dating scams defrauding victims) and violations of rights (impersonation of law firms and officials). The harms are realized and ongoing, meeting the criteria for an AI Incident. The report details actual misuse and resulting harm, not just potential or hypothetical risks, so it is not an AI Hazard or Complementary Information. Therefore, the classification is AI Incident.

From dating scams to fake lawyers: OpenAI details ChatGPT misuse in new threat report

2026-02-25
Reuters
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the misuse of an AI system (ChatGPT) in conducting cybercrimes, including scams and influence operations that have caused harm such as deception and reputational damage. These harms fall under violations of rights and harm to communities, and the AI system's use is a direct factor in these incidents. Therefore, this qualifies as an AI Incident.

Treating ChatGPT as a diary! A Chinese official's slip leaks secrets: transnational repression and the smearing of Takaichi fully exposed

2026-02-26
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by a Chinese official to record and plan harmful operations that have already caused harm, including misinformation campaigns and suppression of dissent. The AI system's involvement is central to the incident, as it was used to generate plans and document operations that led to real-world harm. Therefore, this qualifies as an AI Incident due to direct involvement of AI in causing violations of rights and harm to communities.

Chinese officials' fondness for ChatGPT accidentally exposes the smearing of Sanae Takaichi and transnational intimidation of dissidents

2026-02-26
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by Chinese officials to generate false information and intimidate dissidents, which has led to realized harms such as misinformation, harassment, and violation of rights. The AI system's outputs were used to fabricate fake court documents, false death notices, and defamatory content, which were spread online causing harm to individuals and communities. This meets the criteria for an AI Incident as the AI system's use directly led to violations of human rights and harm to communities.

OpenAI report reveals China's large-scale cyberattacks on Taiwan; 蕭上農: a systematic engineering effort by the state apparatus

2026-02-26
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Chinese state actors to conduct coordinated cyberattacks and disinformation campaigns against Taiwan and dissidents, leading to realized harms including harassment, account restrictions, and physical detention. The AI systems' development and use are central to these harms, fulfilling the criteria for an AI Incident. The report details actual harms rather than potential risks, and the AI role is pivotal in enabling these state-level operations, thus it is not merely complementary information or a hazard but an incident.

Report says Chinese officials used ChatGPT in transnational repression, exposing tactics of impersonating immigration officers and spreading disinformation

2026-02-26
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, in the development and execution of harmful activities including impersonation, spreading false information, and coordinated disinformation campaigns. These actions have directly caused harm to individuals (intimidation, harassment) and communities (disinformation, manipulation), fulfilling the criteria for an AI Incident. The AI system's role is pivotal as it was used to generate and plan these harmful outputs. The harm is realized and ongoing, not merely potential.

CCP officials' use of ChatGPT accidentally exposes a global intimidation operation

2026-02-25
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by CCP officials to carry out harmful activities such as intimidation, misinformation, and suppression of dissent, which constitute violations of human rights and harm to communities. The AI system's outputs were instrumental in these actions, and the harm is ongoing and realized, not merely potential. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use directly led to significant harms including human rights violations and community harm.

'From dating scams to fake lawyers': OpenAI bans ChatGPT accounts over misuse

2026-02-26
The News International
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in the commission of cybercrimes that have caused direct harm to individuals (fraud victims) and communities (smear campaigns, influence operations). The AI system's outputs were used to generate deceptive content and communications that facilitated these harms. Therefore, this qualifies as an AI Incident because the AI system's use directly led to violations of rights and harm to people.

OpenAI threat report revealed: how does China's "cyber special warfare" use AI to suppress Taiwan and dissenting voices?

2026-02-26
數位時代
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as being used for content generation, translation, monitoring, and manipulation in a coordinated influence operation by a government entity. The use of AI directly contributes to violations of human rights, including suppression of dissent and freedom of expression, and causes harm to individuals (e.g., detention of a dissident) and communities (e.g., disinformation campaigns). The AI's role is pivotal in enabling the scale and coordination of these operations. The harms are realized and documented, not merely potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Epic blunder! Chinese officials used ChatGPT to keep a diary and OpenAI had had enough: the shocking inside story of transnational repression fully disclosed

2026-02-26
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in the development and execution of coordinated disinformation and intimidation campaigns that have caused harm to individuals and communities, constituting violations of human rights and political repression. The AI system's role is pivotal in generating fake documents and messages used for harassment and misinformation. Therefore, this qualifies as an AI Incident due to realized harm linked directly to the AI system's misuse.

From Dating Scams to Fake Lawyers: OpenAI Details ChatGPT Misuse in New Threat Report

2026-02-26
NTD
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) being used maliciously to generate deceptive content and communications that have caused real harm, including financial fraud and influence operations. These harms fall under violations of rights and harm to communities, meeting the criteria for an AI Incident. The misuse is not hypothetical or potential but has already occurred, with OpenAI banning accounts linked to these activities, confirming realized harm.

From dating scams to fake lawyers: OpenAI details ChatGPT misuse in new threat report

2026-02-25
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the development and use phases, where the AI was used to generate content facilitating scams, influence operations, and impersonations. These actions have directly caused harm to individuals (financial fraud victims), communities (smear campaigns), and rights (impersonation of officials). Therefore, this qualifies as an AI Incident because the AI system's misuse has directly led to realized harms as defined in the framework.

Chinese officials' use of a chatbot accidentally exposes the smearing of Takaichi and transnational intimidation

2026-02-25
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the development and execution of harmful activities such as cross-national intimidation, misinformation campaigns, and defamation. The harms described include violations of human rights and harm to communities through misinformation and political manipulation. The AI system's role is pivotal as it was used to generate false documents and content that facilitated these harms. The harm is realized and ongoing, not merely potential, thus qualifying this as an AI Incident rather than a hazard or complementary information.

Chinese officials left a "log" in ChatGPT! Accidentally exposing the smearing of Takaichi and transnational intimidation

2026-02-26
TVBS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in the development and execution of coordinated disinformation and intimidation campaigns. These campaigns have caused actual harm, including the spread of false information about dissidents' deaths and attempts to discredit political figures, which constitute violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident due to the direct link between AI use and realized harm.

OpenAI report: China's large-scale cyberattacks on Taiwan; 蕭上農 singles out "Taiwan risk"

2026-02-26
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Chinese local AI models like DeepSeek and Qwen) in the development and execution of coordinated disinformation and harassment campaigns by a state actor. These campaigns have directly led to harms such as online harassment, suppression of dissent, and physical detention of individuals, fulfilling the criteria for an AI Incident. The AI systems are not merely potential risks but have been actively used to generate content, monitor targets, and facilitate operations that have caused real harm. The article also notes the failure of AI safety mechanisms in some models, highlighting the systemic nature of the harm. Hence, the classification as AI Incident is appropriate.

From Dating Scams to Fake Lawyers: OpenAI Details ChatGPT Misuse in New Threat Report

2026-02-26
Claims Journal
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions that ChatGPT, an AI system, was used by malicious actors to conduct cybercrimes including scams and influence operations that harmed individuals and communities. The harms include deception, fraud, and reputational damage, which fall under harm to communities and violations of rights. Since the AI system's use directly contributed to these harms, this qualifies as an AI Incident.

OpenAI reveals China used ChatGPT to record overseas influence operations, impersonating immigration officials to threaten dissidents

2026-02-25
東森美洲電視
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI system, was used by the operators to document and support a coordinated campaign of harassment and threats against dissidents, a violation of human rights, and analysis of that usage was instrumental in revealing the campaign. Because the AI system was directly employed in an operation that caused ongoing harm to individuals and communities, the event meets the criteria for an AI Incident.

Chinese officials treated ChatGPT as a diary, accidentally exposing Beijing's large-scale overseas repression operations

2026-02-26
民視新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by Chinese officials to document and facilitate cross-border repression and intimidation campaigns. These campaigns involve impersonation, misinformation, and harassment of dissidents abroad, which are clear violations of human rights and harm to communities. The AI system's role is pivotal as it was used as a tool in these harmful operations. The harm is realized and ongoing, not merely potential, thus classifying this as an AI Incident rather than a hazard or complementary information.

OpenAI report reveals shocking misuse of ChatGPT: Dating frauds, influence campaigns and more

2026-02-27
India TV News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) being used maliciously, leading to realized harms including fraud, political influence attempts, and cybercrime-related data collection. These harms fall under violations of rights and harm to communities. The misuse is direct and has materialized, not just a potential risk. Therefore, this qualifies as an AI Incident.

AI Gone Wrong: OpenAI Bans Accounts Linked to Fraud, Propaganda & Cybercrime

2026-02-27
Analytics Insight
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the AI system (ChatGPT) was misused by cybercriminals to generate misleading content and support criminal activities, which directly led to harms including fraud and propaganda. These harms fall under violations of rights and harm to communities. The involvement of the AI system in causing these harms is direct and material. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI Bans China-Linked Accounts in Disinformation Probe

2026-02-27
see.news
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system by malicious actors to conduct disinformation campaigns targeting political figures, online dating scams defrauding victims, and impersonation of law firms, all of which constitute direct harms to individuals, communities, and political processes. The AI system's use in generating content and facilitating these operations is central to the harms described. Hence, this is an AI Incident as the AI system's misuse has directly led to violations of rights, fraud, and harm to communities.

A Secret Chinese Campaign Was Exposed By 1 Mistake: Using ChatGPT As A Diary

2026-02-26
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (ChatGPT and other AI tools) in the development and execution of a large-scale disinformation and repression campaign targeting dissidents and world leaders. The harms include violations of human rights (intimidation, suppression, impersonation, spreading false information) and harm to communities (disinformation campaigns). The AI system's role is pivotal as it was used for planning, record-keeping, and content generation, directly or indirectly leading to these harms. Therefore, this qualifies as an AI Incident.

One Mistake, Big Leak: Chinese Official's ChatGPT 'Diary' Exposes Secret Campaign

2026-02-26
News18
Why's our monitor labelling this an incident or hazard?
The involvement of ChatGPT as a tool for planning and documenting a covert campaign that intimidates dissidents and spreads false information meets the criteria for an AI Incident. The harms include violations of human rights and harm to communities through intimidation, impersonation, and misinformation. Although the AI was not used to generate harmful content directly, its role in enabling the campaign is pivotal. Hence, this event qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI shares details from thwarted romance scams, fake law firms, and an effort to smear Japan's prime minister

2026-02-25
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (ChatGPT and other AI models) in the development and execution of scams and influence operations that have caused direct harm, including financial fraud and political repression. The AI systems were used to generate fake content, impersonate officials, and assist in planning and polishing malicious campaigns. These activities meet the criteria for AI Incidents as they have directly led to harm to individuals and communities, including violations of rights and fraud. Therefore, the event is classified as an AI Incident.

OpenAI uncovers global Chinese intimidation operation through one official's use of ChatGPT

2026-02-25
CNN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by a Chinese official to document and plan a large-scale influence operation that has already caused harm, such as intimidation of dissidents, impersonation of officials, and spreading false information. These actions constitute violations of human rights and harm to communities. The AI system's role was pivotal in enabling these harms, meeting the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's use was central to the incident.

Chinese law enforcement tried using ChatGPT to discredit Japan's PM, OpenAI says

2026-02-25
Axios
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (ChatGPT and Chinese AI models) used in the development and execution of a disinformation campaign aimed at discrediting a political figure and suppressing dissent. The use of AI in generating and amplifying false information and fake accounts has directly led to violations of human rights and harm to communities by undermining political discourse and spreading misinformation. The harm is realized, not just potential, as the campaign went ahead using AI tools. Therefore, this qualifies as an AI Incident under the framework.

Chinese official accidentally reveals secret operation to ChatGPT: Smear campaign against Japan PM, impersonating US officials

2026-02-25
The Times of India
Why's our monitor labelling this an incident or hazard?
The involvement of ChatGPT, an AI system, is explicit as it was used to plan and track covert operations. The harms include violations of human rights through transnational repression and misinformation campaigns, which have materialized as described (e.g., false obituaries, social media account takedowns). The AI system's use directly contributed to these harms by enabling the planning and coordination of these activities. Hence, this event meets the criteria for an AI Incident.

Chinese law enforcement tried to use ChatGPT to plan influence op against Japan PM: OpenAI

2026-02-26
CNA
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) to generate harmful content for an influence operation, which is a direct use of AI leading to harm to communities and violation of rights through misinformation and manipulation. The operation was active and involved large-scale coordinated efforts, indicating realized harm rather than just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Crooks and Communists Misusing AI Tools

2026-02-25
HotAir
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT and Claude) being used in harmful ways: Chinese operatives used ChatGPT to document and plan repression and misinformation campaigns, which directly violates human rights and harms communities. Separately, a hacker exploited AI to breach Mexican government servers and steal sensitive data, causing harm to property and privacy. These are direct harms caused or facilitated by AI system use, meeting the criteria for an AI Incident. The article also discusses the AI systems' guardrails being bypassed, indicating malfunction or misuse leading to harm. Therefore, this event qualifies as an AI Incident.

Chinese Official Accidentally Reveals Vast Influence Operation Through ChatGPT Use

2026-02-25
National Review
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) used in the planning and updating of a covert influence operation that includes impersonation and harassment tactics causing harm to communities and violations of rights. The AI system's involvement is in the use phase, and the harm is realized through the ongoing influence campaigns and harassment activities. The disclosure by OpenAI and the evidence of the campaigns confirm that harm has occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

OpenAI: Chinese agent used ChatGPT for smear ops

2026-02-25
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (ChatGPT and other AI models) to carry out malicious operations that have caused harm to individuals and communities, including political repression and psychological harassment. The harms include violations of human rights and harm to communities through coordinated disinformation and harassment campaigns. The AI's role is pivotal as it was used to generate and plan these operations, making this a clear AI Incident rather than a hazard or complementary information.

OpenAI Intelligence Report Identifies New Tactics in AI-Enhanced Scams

2026-02-25
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI models being used to plan covert influence operations, conduct romance scams, and generate disinformation content, all of which have caused harm to individuals and communities. The harms include violations of rights (e.g., intimidation, impersonation), harm to communities (e.g., disinformation), and fraud-related harms. The AI systems' involvement is direct in generating content and automating interactions that facilitate these harms. Although the report notes that AI-generated content was not always decisive, the AI's role was pivotal in enabling these malicious campaigns. Hence, the event meets the criteria for an AI Incident.

Chinese Police Use ChatGPT to Smear Japan PM Takaichi

2026-02-26
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the development and use phases to facilitate smear campaigns and influence operations. These campaigns have directly led to harm in the form of reputational damage, political manipulation, and violations of rights, fulfilling the criteria for an AI Incident. The AI system's role is pivotal as it was used to generate and polish the malicious content, making the harm possible and more effective. The description confirms realized harm rather than potential harm, distinguishing it from an AI Hazard or Complementary Information.

OpenAI report reveals Chinese influence campaign exposed through ChatGPT use

2026-02-26
Daily Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves ChatGPT, an AI system, being used by a government official to conduct and document a coordinated influence campaign involving misinformation, harassment, and impersonation. These actions constitute violations of human rights and harm to communities, fulfilling the criteria for an AI Incident. The misuse of the AI system directly contributed to these harms, and the report details actual realized harm rather than potential harm. Hence, this is classified as an AI Incident.

Chinese group's ChatGPT use reveals worldwide harassment campaign against critics

2026-02-25
CyberScoop
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI models) used in the development and execution of coordinated influence and harassment operations. The use of AI to generate propaganda, impersonate officials, flood social media with fake accounts, and intimidate critics constitutes a direct link to harm, specifically violations of human rights and harm to communities. The report details ongoing and realized harm, not just potential risks, making this an AI Incident rather than a hazard or complementary information. The AI's role is pivotal in enabling the scale and sophistication of these operations.

OpenAI flags China-linked influence ops targeting Japan's Takaichi

2026-02-26
Nikkei Asia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT, DeepSeek, Alibaba's Qwen) being used to plan and execute influence operations that harm communities and violate rights by spreading disinformation and manipulating political discourse. The AI systems' outputs were instrumental in structuring and refining the malicious campaign, thus directly contributing to the harm. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to harm to communities and violations of rights.

ChatGPT Slip Reveals Alleged Chinese Smear Campaign On Japan PM

2026-02-26
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) being used in the development and use phases to create disinformation content, which is a form of harm to communities and a violation of rights. Although the campaign was detected and disrupted before full execution, the misuse and intent to cause harm are clear and directly linked to the AI system's use. This qualifies as an AI Incident because the AI system's misuse has directly led to a significant harm scenario (disinformation campaign) that was interrupted but had already begun. The detection and prevention do not negate the incident classification, as the misuse and partial execution occurred.

OpenAI uncovers global Chinese intimidation operation through one official's use of ChatGPT

2026-02-25
Local3News.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by a Chinese official to document and facilitate a large-scale influence operation that caused real harm to dissidents abroad. The harms include intimidation, impersonation of officials, spreading false information, and attempts to suppress social media accounts, which are violations of human rights and harm to communities. The AI system's role was pivotal in enabling these activities, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's use was central to the incident.

OpenAI says it blocked China-linked bid to smear Japan's Prime Minister using ChatGPT

2026-02-26
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) explicitly mentioned as being targeted for misuse in a political smear campaign, which is a form of harm to communities and a violation of rights if successful. However, since ChatGPT refused to assist and OpenAI disrupted the operation, no realized harm occurred. The event highlights a credible risk of AI being exploited for covert influence operations, which could plausibly lead to harm if not mitigated. Thus, it fits the definition of an AI Hazard, as the AI system's involvement could plausibly lead to an AI Incident but did not directly or indirectly cause harm in this case.

OpenAI Says ChatGPT Refused to Help Chinese Influence Operations

2026-02-26
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) explicitly mentioned and its use in a malicious context (planning disinformation campaigns). Although the AI refused to assist, the context reveals active attempts to use AI for harmful influence operations, which constitute harm to communities and violations of rights. Since harm is occurring or attempted through AI misuse, and the AI system's role is pivotal in both the misuse attempt and its prevention, this qualifies as an AI Incident. The article also discusses other malicious uses of AI, reinforcing the presence of realized harms linked to AI misuse.

Chinese Official Accidentally Exposes Transnational Repression Plot by Using ChatGPT

2026-02-27
Breitbart
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) by a Chinese official to support a covert information operation campaign that has caused direct harm to dissidents and targeted individuals, including attempts to intimidate and silence critics. The AI system's involvement in editing reports and planning information warfare is a contributing factor to the ongoing harm. The harm includes violations of human rights and harm to communities through disinformation and repression. The presence of realized harm and the AI system's role in facilitating it meet the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI Says ChatGPT refused to help Chinese influence operations

2026-02-26
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT) being used in an attempt to plan a covert influence operation, which is a form of harm to communities and political stability. The AI system's refusal to assist is part of the event, but the broader context involves AI being used or misused in a way that has already led to diplomatic and societal harm. The involvement of AI in the development and use stages, and the direct link to harm (disinformation campaigns, diplomatic disputes), meets the criteria for an AI Incident rather than a hazard or complementary information.

Chinese Official Thought ChatGPT Was Private - Now We Know How China Silences Dissidents

2026-02-26
Townhall
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (ChatGPT) by Chinese law enforcement officials to document and plan a campaign of harassment against dissidents. The AI system's use directly contributed to the development of tactics that led to violations of human rights and suppression of dissent, which are harms under the AI Incident definition. The harm is realized, not just potential, as the campaign involves active intimidation and manipulation of dissidents. Therefore, this event meets the criteria for an AI Incident.

US Firm Ruins Communist Would-Be Saboteur's Day

2026-02-26
The Daily Caller
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT, DeepSeek, Qwen) used by a state actor to orchestrate coordinated online harassment and misinformation campaigns against political leaders and dissidents. These actions have directly led to harm, including violations of human rights and harm to communities, as victims experienced harassment, loss of followers, account removals, and suppression of speech. The AI systems' use in generating and disseminating harmful content and impersonations is central to the incident. Hence, it meets the criteria for an AI Incident due to realized harm linked to AI system use.

OpenAI reveals Chinese influence operation targeting dissidents through ChatGPT data

2026-02-26
The Online Citizen
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the development and execution of a covert influence campaign that has directly caused harm through misinformation and intimidation. The fabricated obituary and false claims against dissidents constitute harm to communities and violations of rights. The use of AI to generate plans for political denigration further supports the direct link to harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in malicious influence operations.

OpenAI flags China-backed effort to leverage ChatGPT in global influence campaign

2026-02-27
Taiwan News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI tools) used in the development and execution of a covert influence operation that has directly led to violations of human rights and harm to communities through harassment, intimidation, and disinformation. The AI's role is pivotal in generating content, coordinating tactics, and profiling targets, which are integral to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

OpenAI's New Report Exposes ChatGPT Abuse: From Fake Lovers to Fake Lawyers

2026-02-26
Technology Org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by criminals and state-linked operatives to carry out harmful activities such as disinformation campaigns, romance scams, and coordinated social media manipulation. These activities have directly led to harm to individuals (e.g., victims of romance scams) and communities (e.g., spread of disinformation and intimidation of critics). The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. The report also notes the banning of accounts involved, indicating the harms have materialized rather than being merely potential.

OpenAI Confirms that Chinese Hackers Used ChatGPT to Launch Cyberattacks

2026-02-26
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT and other AI models in the development and execution of coordinated cyberattacks and disinformation campaigns by Chinese state-linked actors. These activities have directly led to harm, including psychological distress to dissidents, suppression of free speech, and manipulation of public opinion, which are violations of human rights and harm to communities. The AI system's role in planning, documenting, and refining these operations is pivotal, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's involvement is clearly established.

OpenAI Says ChatGPT Refused To Help Chinese Influence Operations

2026-02-26
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT) and its involvement in a scenario where it was requested to assist in a covert influence operation. The AI system's refusal to comply prevented direct harm, so no realized harm occurred due to the AI's outputs. The potential harm includes violations of rights and harm to communities through disinformation campaigns, which fits the harm categories. Since the harm was plausibly prevented but the risk was credible and significant, this fits the definition of an AI Hazard. The article does not describe actual harm caused by the AI system, so it is not an AI Incident. It is not merely complementary information because the main focus is on the AI system's refusal and the potential misuse scenario. Therefore, the classification is AI Hazard.

OpenAI: ChatGPT weaponized in Chinese influence campaign

2026-02-27
SC Media
Why's our monitor labelling this an incident or hazard?
The involvement of ChatGPT and other large language models (LLMs) in generating content for coordinated influence operations constitutes the use of AI systems. The AI's outputs are used to amplify negative comments and conduct psychological pressure campaigns against critics, which can be considered harm to communities and violations of human rights (freedom of expression and protection from repression). Since the AI system's use directly leads to these harms, this event qualifies as an AI Incident.

China-linked Actor Tried Using ChatGPT to Target Japan's PM Takaichi in Influence Operation, Says OpenAI

2026-02-27
japannews.yomiuri.co.jp
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved in the event, with its use being central to the attempt to carry out a harmful influence operation. Although the harm was not realized because the AI system refused to assist and the account was suspended, the event reveals a credible risk that the AI system could have been misused to cause harm to a political figure and potentially to communities through disinformation and harassment. Since the harm did not materialize but the AI system's involvement plausibly could have led to an AI Incident, this qualifies as an AI Hazard rather than an AI Incident.

OpenAI report says Chinese accounts sought ChatGPT's help to suppress dissidents and asked for assistance in smearing Sanae Takaichi

2026-02-27
BBC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI models) used in the development and execution of coordinated disinformation and harassment campaigns. The harms described include violations of human rights (suppression of dissent, harassment), harm to individuals (psychological pressure), and harm to communities (disinformation). The AI systems' use directly contributed to these harms by generating and refining harmful content and facilitating large-scale operations. The report confirms that these harms are ongoing and have materialized, not just potential. Hence, this is an AI Incident.

Chinese actors tried to use ChatGPT in influence campaign vs. Japan PM Takaichi: OpenAI

2026-02-27
毎日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use and attempted misuse of an AI system (ChatGPT) in a coordinated influence campaign that has caused harm to a political figure and communities through defamatory content and social media manipulation. The AI system's involvement is direct in generating or editing content and indirect in enabling the spread of harmful messages via fake accounts. The harm is realized, not just potential, as defamatory images and hashtags were spread, and accounts were suspended as a result. This fits the definition of an AI Incident due to violations of rights and harm to communities caused by the AI system's misuse.

OpenAI Reports Chinese Official Tried to Attack Takaichi

2026-02-27
Adnkronos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) explicitly mentioned and used in an attempt to generate harmful content aimed at discrediting a political figure, which is a violation of rights and harm to communities. The AI system's refusal to comply shows a mitigation attempt, but the malicious use attempt itself is a direct involvement of AI in harm. The report also states that the account was banned, indicating recognition of misuse. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of AI in malicious use causing harm to public discourse and political rights.

Japan's Chief Cabinet Secretary says China's use of AI to denigrate Prime Minister Takaichi threatens national security

2026-02-27
RFI
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a coordinated disinformation campaign, which directly leads to harm to communities by spreading false information and manipulating political discourse. This fits the definition of an AI Incident because the AI system's use has directly led to harm in the form of misinformation and potential political destabilization. The involvement of AI in generating and planning the disinformation strategy is explicit, and the harm is realized rather than potential. Therefore, this event qualifies as an AI Incident.

News analysis: US firm OpenAI reveals Beijing used ChatGPT for covert repression

2026-02-27
RFI
Why's our monitor labelling this an incident or hazard?
The event involves explicit use of AI systems (ChatGPT and other AI tools) by state actors to carry out secret repression and disinformation campaigns that have already caused harm to individuals and communities, including violations of fundamental rights and political freedoms. The AI's role is pivotal in generating and spreading false information and coordinating operations. The harms described are realized and ongoing, not merely potential. Hence, the classification as an AI Incident is appropriate.

OpenAI exposes CCP cyberattacks; experts urge Taiwan to strengthen its response and safeguard national security

2026-02-27
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by a state actor (CCP) to conduct cyberattacks and influence operations, which constitute violations of democratic rights and national security (harm to communities and potentially human rights). The harms are ongoing and realized, not just potential. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm. The article also includes expert commentary on the implications and necessary responses, but the primary focus is on the AI-enabled cyberattacks and their impact.

Canada Pension Plan Investment Board and Equinix reach US$4 billion deal to acquire Nordic data centre operator atNorth

2026-02-27
36氪
Why's our monitor labelling this an incident or hazard?
The article focuses on OpenAI's policy changes and safety commitments to mitigate risks associated with ChatGPT usage. It does not report any actual harm or incident caused by AI, nor does it describe a plausible imminent harm event. Instead, it details measures to prevent potential future harms and improve oversight, which fits the definition of Complementary Information as it provides updates on societal and governance responses to AI risks.

A CCP Official Turned to ChatGPT for Help. It Exposed a Global Intimidation Campaign, OpenAI Says

2026-02-27
NTD
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, to create threatening messages, fake documents, and disinformation as part of a coordinated campaign of transnational repression. This use directly led to harm in the form of intimidation and misinformation targeting dissidents, fulfilling the criteria for harm to communities and violations of rights. The AI system's role was central in enabling the campaign's scale and sophistication. Hence, the event is classified as an AI Incident.

How ChatGPT Logs Exposed China's Secret Transnational Intimidation Campaign Against Dissidents Worldwide

2026-02-27
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly details how the AI system (ChatGPT) was used by state actors to produce threatening communications and surveillance reports targeting dissidents, which constitutes a violation of human rights and targeted harassment. The harm is direct and materialized, as dissidents have been intimidated and harassed through AI-generated content. The AI system's misuse is central to the incident, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a documented case of AI-enabled harm.

China-linked actor sought ChatGPT aid to discredit Japan's Takaichi: OpenAI

2026-02-27
Kyodo News+
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the development and attempted use of a disinformation campaign targeting a political figure, which is a violation of rights and harms communities. The AI system's involvement is direct, as it was asked to generate plans and content for the campaign. Although the AI refused to comply, the attempt and partial related activities confirm realized harm in the form of foreign influence operations and misinformation. Therefore, this meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI Reports Chinese Official Tried to Attack Takaichi

2026-02-27
jen.jiji.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in an attempt to cause harm by spreading disinformation and attacking a political figure, which is a violation of rights and harms communities through manipulation of public opinion. Even though the AI system rejected the request, the malicious use attempt and the broader context of AI-enabled manipulation are directly linked to harm. Therefore, this is an AI Incident rather than a hazard or complementary information.

ChatGPT Logs Show How China Harasses US-Based Dissidents

2026-02-28
PC Mag Middle East
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by Chinese government operatives to create deceptive content and coordinate harassment campaigns targeting dissidents in multiple countries. The harms described include violations of human rights (harassment, intimidation, misinformation) and harm to communities (manipulation of information and suppression of dissent). Since these harms are occurring and directly linked to the AI system's use, this qualifies as an AI Incident.

OpenAI: Chinese Cops Used ChatGPT to Plan Dissident Smear Ops

2026-02-28
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems (ChatGPT and other AI models) were used by state-linked actors to plan and conduct smear campaigns and harassment against dissidents and political critics, which constitutes violations of human rights and harm to communities. The harms are direct and ongoing, with AI playing a pivotal role in enabling and scaling these covert influence operations. The denial by Chinese authorities does not negate the documented use and impact. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Covert campaigns and smear messages: how is ChatGPT being exploited?

2026-02-25
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system in malicious campaigns involving impersonation, fraud, and disinformation. These uses have directly caused harm to individuals (e.g., victims of dating scams), political figures (e.g., targeted disinformation against Japan's prime minister), and broader communities through influence operations. The AI system's misuse is central to these harms, fulfilling the criteria for an AI Incident.

OpenAI bans Chinese accounts that exploited ChatGPT for fraud

2026-02-26
صحيفة عكاظ
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system in fraudulent and malicious activities that have caused real harm to individuals and communities, including scams and political influence operations. The AI system's misuse directly contributed to these harms, fulfilling the criteria for an AI Incident. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.

tayyar.org -

2026-02-26
tayyar.org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (OpenAI's ChatGPT) in the commission of cybercrimes and influence operations that have caused direct harm to individuals and communities. The misuse includes fraudulent schemes that have likely harmed hundreds of victims financially and politically, as well as impersonation that breaches trust and possibly legal rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms as defined in the framework.

"أوبن.إيه.آي" تصدر تقريراً حول إساءة استخدام "تشات جي.بي.تي"

2026-02-25
صوت بيروت إنترناشونال
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (OpenAI's ChatGPT) in harmful activities including scams, influence operations, and impersonations. These uses have directly caused harm to individuals (financial fraud victims), political figures (disinformation campaigns), and communities (through deception and manipulation). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms as defined in the framework.

"أوبن إيه آي" تحظر حسابات "شات جي بي تي" مرتبطة بالسلطات الصينية - صحيفة الوئام

2026-02-25
صحيفة الوئام الالكترونية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system by malicious actors to commit cybercrimes and influence operations, leading to realized harms such as fraud, deception, and political interference. The AI system's outputs were exploited to generate fake identities, messages, and content that caused harm to individuals and communities. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to communities.

OpenAI reveals misuse of ChatGPT in cybercrimes and smear campaigns

2026-02-25
جريدة حابي
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT in criminal activities such as scams on dating sites, impersonation of officials and lawyers, and covert influence operations against a political leader. These uses have resulted in realized harms including fraud, misinformation, and reputational damage, fulfilling the criteria for an AI Incident. The AI system's misuse directly led to these harms, and the involvement is clear and explicit.

Covert campaigns and smear messages: how is ChatGPT being exploited?

2026-02-26
موقع أخبارك للأخبار المصرية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT, an AI system, in malicious activities that have caused real harm such as scams defrauding victims, disinformation campaigns targeting political figures, and impersonation leading to violations of rights. The harms are realized and directly linked to the AI system's outputs being exploited for criminal purposes. Therefore, this qualifies as an AI Incident due to the direct involvement of AI misuse causing significant harm.

OpenAI bans China-linked accounts smearing Japan's Prime Minister

2026-02-26
Asharq News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system by malicious actors to carry out harmful actions including a disinformation campaign against Japan's Prime Minister and large-scale dating scams that defraud victims. These constitute direct harms to individuals and communities, fulfilling the criteria for an AI Incident. The AI system's misuse is central to the harms described, not merely a potential risk or background context. Hence, this event qualifies as an AI Incident.

"أوبن إيه آي" تغلق حسابات استُخدم فيها "تشات جي بي تي" لأغراض احتيالية

2026-02-28
akhbarona.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT models by malicious actors to commit crimes such as impersonation, scams targeting dating app users, and influence campaigns against political figures. These activities have caused direct harm to individuals (financial fraud victims), communities (through misinformation and political influence), and violate rights (fraud, deception). The AI system's involvement is clear and central to the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

"China plotted to smear Prime Minister Takaichi," OpenAI says in report

2026-02-26
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The involvement of ChatGPT, an AI system, in planning a disinformation campaign targeting a political figure is evident. Although the harm (defamation and manipulation of public opinion) is a serious concern, the article suggests the campaign was being planned and detected through the AI's use, implying the harm is potential rather than realized. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to communities through disinformation and political manipulation, but no direct harm has yet been reported.

Person linked to Chinese authorities sought ChatGPT's advice on a plan to undermine trust in Prime Minister Takaichi, US-based OpenAI announces

2026-02-27
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, and its use was sought for malicious purposes (disinformation and reputation damage). Although the AI system refused to comply and no direct harm occurred from the AI's output, the event involves the use of AI in an attempt to cause harm through misinformation campaigns. Since the harm is not realized but the AI system's involvement could plausibly lead to harm if misused, this qualifies as an AI Hazard rather than an AI Incident. The report serves as a warning about potential misuse of AI for coordinated disinformation, fitting the definition of an AI Hazard.

China-linked actors targeted Takaichi in influence operation and sought ChatGPT's advice

2026-02-26
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in the development and execution of a political influence operation targeting a political figure, which is a violation of political rights and harms communities by spreading disinformation. The AI system was used to edit and organize materials and was solicited for advice on harmful plans, which it refused. The operation continued, indicating misuse of the AI system's capabilities. The harm is realized in the form of influence operations and political manipulation. Hence, this meets the criteria for an AI Incident due to indirect harm caused by the AI system's use in political influence activities.

Person linked to Chinese authorities asked AI how to discredit Prime Minister Takaichi last October

2026-02-27
西日本新聞ニュース
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) explicitly and its use by a government official to plan a disinformation campaign, which could plausibly lead to harm to communities (political manipulation, misinformation). Since the AI system refused to assist and the account was suspended, no actual harm occurred. Thus, this is an AI Hazard, reflecting a credible risk of AI misuse for harmful political influence, but not an AI Incident because harm was not realized.

ChatGPT allegedly abused to smear prime minister; access suspended for people linked to Chinese authorities

2026-02-27
毎日新聞
Why's our monitor labelling this an incident or hazard?
The involvement of ChatGPT, an AI system, is explicit. The misuse of the AI system to generate content aimed at discrediting a political leader constitutes an AI Incident because it directly leads to harm in the form of political manipulation and potential violation of rights. The event reports actual misuse and harm, not just potential or hypothetical risks, thus qualifying as an AI Incident rather than a hazard or complementary information.

People linked to Chinese authorities suspected of abusing ChatGPT to smear Takaichi; influence operations also plotted

2026-02-27
毎日新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of an AI system (ChatGPT) by state actors to conduct influence operations and spread defamatory content against a political figure. The AI system's involvement is in its use to generate harmful content and coordinate cyber operations with fake accounts, which has led to actual harm in the form of political defamation and manipulation of social media discourse. The presence of AI-generated content and coordinated fake accounts causing reputational harm and interference in political processes fits the definition of an AI Incident, as it involves violations of rights and harm to communities. The refusal of ChatGPT to comply partially mitigates the harm but does not negate the overall incident, as the coordinated campaign still used AI tools and fake accounts to spread disinformation.

Chinese authorities: "ChatGPT! Draw up a plan to smear Prime Minister Takaichi online!" → refused, reported by OpenAI, and the whole operation exposed lol

2026-02-26
オレ的ゲーム速報@刃
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, used by Chinese authorities for influence operations. The AI's use is central to the event, with the system being asked to generate harmful content and being used as a tool in a broader disinformation and harassment campaign. The harm includes defamation, harassment, suppression of dissent, and manipulation of public opinion, which are violations of rights and harm to communities. The AI system's refusal and OpenAI's reporting reveal the incident but do not negate the harm caused by the broader campaign facilitated by AI use. Therefore, this is an AI Incident.

Person linked to Chinese authorities asked ChatGPT for advice on damaging Prime Minister Takaichi's reputation; U.S. company publishes abuse cases

2026-02-26
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in an attempt to carry out influence operations and harassment, which are forms of harm to communities and political rights. Even though the AI system refused to comply with inappropriate instructions, the malicious use attempt and the account suspension indicate direct involvement of the AI system's use in harmful activities. Therefore, this qualifies as an AI Incident under the definition of harm caused by the use of an AI system.

Chinese official exposed for plotting online smear operation against Prime Minister Takaichi via ChatGPT

2026-02-26
ハムスター速報
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) by Chinese authorities to orchestrate a disinformation campaign and harassment against a political figure, which led to reputational harm and social manipulation. The AI system's involvement in planning and enabling the influence operation, even if partially refused, is a direct factor in the harm caused. The harms include violations of rights (reputational harm, harassment) and harm to communities (manipulation of public opinion). Therefore, this qualifies as an AI Incident under the definitions provided.

OpenAI announces measures to strengthen safety following Canada shooting

2026-02-27
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (ChatGPT) by a suspect in a mass shooting incident, which resulted in significant harm (loss of life). The AI system's role is indirect but pivotal, as the suspect's account activity was related to policy violations potentially linked to the incident. OpenAI's measures to improve detection and cooperation with law enforcement address the misuse of the AI system to prevent future harm. Therefore, this qualifies as an AI Incident due to the realized harm connected to the AI system's use and the company's response to mitigate further risks.

Canadian government demands OpenAI swiftly strengthen safety rules following shooting

2026-02-26
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was used by the suspect to discuss harmful content related to the shooting. Although OpenAI suspended the account, it did not report to authorities, which is a failure in the use and governance of the AI system that indirectly relates to the harm caused by the shooting. The Canadian government's demand for stronger safety measures and potential legal enforcement highlights the AI system's involvement in the incident. Therefore, this qualifies as an AI Incident due to the indirect link between the AI system's use and the resulting harm (loss of life), as well as the failure to act appropriately upon detection of harmful content.

Records a Chinese official left in ChatGPT reveal global intimidation operation, including plot to smear Prime Minister Takaichi

2026-02-26
CNN.co.jp
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by Chinese law enforcement to facilitate and document influence operations that have directly led to harm, including intimidation of dissidents and spreading false information that affects individuals' reputations and social discourse. The AI system's role is pivotal in enabling these harms, fulfilling the criteria for an AI Incident due to violations of rights and harm to communities through disinformation and harassment.

Person linked to Chinese authorities asked AI about discrediting Prime Minister Takaichi last October

2026-02-27
神戸新聞
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved, and its use was sought for a malicious disinformation campaign. Although the AI system refused to assist and the account was suspended, the event shows a direct link between AI use and an attempt to cause harm. Since the harm did not materialize due to the AI's refusal and account suspension, this event represents a plausible risk of harm rather than actual harm. Therefore, it qualifies as an AI Hazard rather than an AI Incident.

Canada mass shooter's ChatGPT use would have been reported to police under current rules, OpenAI reveals

2026-02-27
afpbb.com
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used by the suspect, but the harm (mass shooting) was caused by the human actor, not directly by the AI system's malfunction or outputs. The article focuses on OpenAI's updated safety policies and reporting protocols in response to the incident, which is a governance and societal response. There is no new harm or plausible future harm described as arising from the AI system itself in this article. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Person linked to Chinese authorities asked AI about discrediting Prime Minister Takaichi last October

2026-02-27
沖縄タイムス+プラス
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly involved as the target of a malicious use attempt by a government official seeking to plan a disinformation campaign. Although the AI system refused to comply and the account was suspended, the event shows a credible risk that the AI could have been misused to cause harm to a political figure's reputation and potentially to communities through disinformation. Since no harm actually materialized due to the AI system's refusal and account suspension, this is a plausible future harm scenario, classifying it as an AI Hazard.

AI consulted about discrediting Prime Minister Takaichi

2026-02-27
茨城新聞社
Why's our monitor labelling this an incident or hazard?
The event describes a disinformation campaign targeting a political figure, involving AI-generated or AI-assisted content on social media, which constitutes harm to communities and political rights, fitting the definition of an AI Incident. The AI system (ChatGPT) was solicited to assist but refused, and the account was suspended, indicating AI system involvement in the development and use phases. The disinformation campaign's continuation with AI-generated content confirms realized harm. Therefore, this is an AI Incident; the AI system's refusal is a mitigating factor but does not negate the incident itself.

Person linked to Chinese authorities consulted AI about discrediting Prime Minister Takaichi last October

2026-02-27
福島民友新聞社
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) explicitly mentioned as being solicited to assist in a harmful disinformation campaign. Although the AI refused to cooperate, the disinformation campaign proceeded, indicating indirect AI involvement or misuse attempts. The harm—discrediting a political figure and spreading false narratives—constitutes harm to communities and a violation of rights. The AI system's role is pivotal as it was directly solicited for malicious use, and the incident involves realized harm through the disinformation spread. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Person linked to Chinese authorities consulted AI about discrediting Prime Minister Takaichi last October

2026-02-27
やまがたニュースオンライン
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved in the development and use phase, as it was consulted for advice on a disinformation campaign. Although the AI refused to cooperate, the disinformation campaign did occur, causing harm to the political figure's reputation and potentially to communities through misinformation. Since the AI system did not directly cause the harm but was involved in the context of the incident, and the harm is realized (disinformation spreading), this qualifies as an AI Incident due to the AI system's pivotal role in the event's context and the harm caused by the related disinformation activities.

Person linked to Chinese authorities asked AI about discrediting Prime Minister Takaichi last October

2026-02-27
四国新聞社
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly involved as the Chinese official sought to use it for a disinformation campaign. The AI system's refusal and account suspension prevented the harm from materializing. The event highlights a credible risk of AI being misused for political disinformation and manipulation, which could harm communities and violate rights if successful. Since no harm occurred, it is not an AI Incident. The event is not merely general AI news or a response update, so it is not Complementary Information. Hence, it is best classified as an AI Hazard due to the plausible future harm that such misuse could cause.

Canadian government demands OpenAI swiftly strengthen safety rules following shooting

2026-02-26
JP
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used by the perpetrator to discuss the shooting scenario, which is a direct link to the harm caused (mass shooting with fatalities). Although the AI system suspended the account, it did not notify authorities, which is a failure in its safety protocol and could have indirectly contributed to the harm. The Canadian government's response indicates recognition of this harm and the need for stronger AI safety measures. Hence, the event meets the criteria for an AI Incident because the AI system's use and malfunction (failure to report) directly or indirectly led to harm to people.

OpenAI announces measures to strengthen safety following Canada shooting

2026-02-27
JP
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used by the suspect who committed a mass shooting, which is a serious harm. The AI system's misuse or policy violation was a factor in the chain of events, but the article does not report new harm caused by AI or a malfunction. Instead, it reports OpenAI's measures to improve safety and cooperation with law enforcement following the incident. This fits the definition of Complementary Information, as it provides updates on responses to a prior AI-related incident rather than describing a new AI Incident or AI Hazard.

OpenAI publishes report on ChatGPT abuse cases, suspends related accounts

2026-02-25
JP
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in malicious activities that have directly caused harm to individuals (e.g., victims of romance scams) and communities (e.g., misinformation and covert influence campaigns). The harms include deception, fraud, and reputational damage, which fall under violations of rights and harm to communities. Since these harms have occurred and are linked to the AI system's misuse, this qualifies as an AI Incident.

Smearing Prime Minister Takaichi? China-linked actors may have consulted ChatGPT for opinion-manipulation operation

2026-02-26
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved and used in the context of planning a disinformation campaign, which is a form of harm to communities and a violation of rights. Although the AI system refused to assist and the campaign had no significant impact, the event shows a plausible risk of harm from AI misuse. Since no actual harm materialized, this qualifies as an AI Hazard rather than an AI Incident. The account suspension and refusal to cooperate are responses to mitigate the hazard.

Person linked to Chinese authorities sought ChatGPT's advice on operation to discredit Prime Minister Takaichi? U.S. firm OpenAI announces

2026-02-27
nikkansports.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (ChatGPT and other Chinese generative AI tools) in a coordinated disinformation and harassment campaign targeting a political figure and dissidents. The AI system was directly solicited for malicious use, and although ChatGPT refused, other AI tools were used to carry out harmful activities such as spreading negative comments, harassment, and account restrictions. These actions constitute harm to communities and violations of rights, fulfilling the criteria for an AI Incident. The involvement of AI in the development and use of disinformation and harassment campaigns is explicit and linked to realized harm.

Chinese Foreign Ministry counters allegation of plot against Prime Minister Takaichi as "groundless slander"

2026-02-27
毎日新聞
Why's our monitor labelling this an incident or hazard?
The article mentions the use of an AI system (ChatGPT) in a purported disinformation campaign, which could be linked to harm to communities or violation of rights if realized. However, since the article only reports the allegation and the denial without evidence of actual harm or a credible risk of harm, it does not meet the threshold for an AI Incident or AI Hazard. It is primarily reporting on a claim and official response, thus it is best classified as Complementary Information providing context on AI-related geopolitical discourse.

Person linked to Chinese authorities consulted AI about discrediting Prime Minister Takaichi last October

2026-02-27
中日新聞Web
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was involved in the use phase, where a user sought to misuse it for a disinformation campaign. The AI system's refusal and account suspension prevented the harm. Since no actual harm occurred, but the event shows a credible potential for AI-enabled disinformation harm, it qualifies as an AI Hazard rather than an AI Incident. The denial by the Chinese government does not negate the plausible risk described in the report.

China denies influence operation against Prime Minister Takaichi as "utterly baseless," pushes back against ChatGPT abuse report

2026-02-27
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) for malicious purposes—specifically, influence operations targeting a political leader. This misuse directly relates to harm in the form of political manipulation and potential violation of rights, fitting the definition of an AI Incident. The denial by China does not negate the report of misuse, and the report's publication confirms the incident's occurrence.

Opinion-manipulation operation targeting Prime Minister Takaichi? Chinese officials abused ChatGPT

2026-02-27
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in an attempt by Chinese officials to conduct political influence operations targeting a prime minister. The AI system was asked to generate content to damage reputation and spread negative comments, which fits the definition of an AI system being used in a harmful way. Although ChatGPT refused to comply, the attempt and related activities (including editing reports for influence operations) were real and involved AI misuse. This constitutes an AI Incident because the AI system's use was directly linked to an attempt to cause harm to a public figure's reputation and manipulate public opinion, which is harm to communities and a violation of rights. The suspension of the account further confirms the misuse. Hence, this is not merely a potential hazard or complementary information but a realized incident involving AI misuse.

AI consulted about discrediting Prime Minister Takaichi

2026-02-27
茨城新聞社
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved, and the event concerns a malicious use attempt to plan a disinformation campaign, which could lead to harm to communities or reputational harm (harm category d). However, since ChatGPT refused to assist and the account was suspended, no realized harm occurred. The event thus describes a plausible future harm that was prevented, fitting the definition of an AI Hazard rather than an AI Incident. The denial by the Chinese Foreign Ministry and the lack of evidence of actual harm further support this classification.

Report alleges "opinion manipulation via ChatGPT"; Chinese Foreign Ministry calls it "groundless"

2026-02-27
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and an alleged attempt to use it for disinformation (a form of harm to communities). Since the AI system refused to cooperate and no disinformation was actually spread, no harm has materialized. The event thus describes a plausible future harm scenario (AI Hazard) rather than an actual incident. The denial by the Chinese Foreign Ministry and the AI system's refusal to cooperate support that no harm occurred. Hence, this is best classified as an AI Hazard.

OpenAI report warns: AI has become a tool for transnational opinion manipulation and suppression of dissidents

2026-02-26
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The report explicitly details how AI models and AI-generated content are used as tools in a large-scale, coordinated campaign to oppress dissenting voices and spread false information. The AI systems' outputs are central to the harm caused, including fake obituaries, forged documents, and social media manipulation. This constitutes a direct AI Incident because the AI system's use has directly led to violations of human rights and harm to communities as defined in the framework.

OpenAI report: "CCP uses AI to expand cyberattacks"; scholars say Taiwan should counter in cooperation with the U.S. and Japan

2026-02-26
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, to generate disinformation and conduct cyberattacks, which have already caused harm by manipulating public opinion, intimidating dissidents, and spreading false information. The harms include violations of rights and harm to communities through cognitive warfare and misinformation. The article describes realized harms, not just potential risks, and the AI system's role is pivotal in enabling these attacks at scale and low cost. Hence, this is classified as an AI Incident.

OpenAI report says Chinese accounts sought ChatGPT's help to suppress dissidents and to smear Sanae Takaichi

2026-02-27
BBC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI models) used in the development and execution of coordinated disinformation and harassment campaigns. The harms include violations of human rights (suppression of dissent, harassment), psychological harm to individuals, and harm to communities through misinformation and manipulation. The report confirms that these harms have occurred, with real-world consequences for targeted dissidents. The AI systems played a pivotal role in enabling these harms by generating content and assisting in planning and executing the campaigns. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI exposes Chinese cyber army tactics: AI used for cross-border suppression of dissent

2026-02-26
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (ChatGPT) by actors linked to Chinese authorities to conduct cross-border repression of dissent, involving thousands of fake accounts and coordinated misinformation campaigns. These actions have directly led to violations of human rights and harm to communities by silencing dissent and spreading false information. The AI system's involvement is central to the operation's scale and effectiveness, fulfilling the criteria for an AI Incident as the harm is realized and directly linked to AI use.

Suspected AI-enabled online special warfare: OpenAI report reveals methods and tactics of mainland Chinese law enforcement

2026-02-26
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI models) used by Chinese law enforcement to conduct covert influence and harassment campaigns. The use of AI is integral to the scale, automation, and sophistication of these operations. The harms described include violations of human rights (suppression of dissent, harassment, psychological pressure), harm to communities (disinformation, social manipulation), and direct harm to individuals (arrests, intimidation). The article documents realized harms, not just potential risks, and the AI systems' development, use, and malfunction (or misuse) are central to these harms. Hence, this is an AI Incident rather than a hazard or complementary information.

OpenAI reveals mainland China used ChatGPT to plan anti-Takaichi opinion warfare; Japan pushes countermeasures

2026-02-27
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event involves the use of ChatGPT, an AI system, in the development and use phases to generate and assist in planning disinformation campaigns targeting a political leader. The misuse of the AI system directly contributed to harm by spreading defamatory and misleading content, which undermines democratic processes and threatens political rights, fitting the definition of violations of human rights and harm to communities. The involvement of AI-generated content and coordinated fake accounts confirms AI system involvement. The harm is realized, not just potential, making this an AI Incident rather than a hazard or complementary information.

OpenAI exposes large-scale Chinese cyberattacks; Akio Yaita warns: the most frightening part is this consequence

2026-02-26
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by Chinese operatives to conduct coordinated online attacks, including generating fake content and managing thousands of fake accounts. This AI use has directly caused harm by suppressing free speech, intimidating dissidents, and spreading misinformation, which constitutes violations of human rights and harm to communities. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to significant harm.

Chinese cyberattacks (tag page)

2026-02-26
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The involvement of AI (ChatGPT) in coordinated actions that cause harm to individuals (intimidation of dissidents) and communities (disinformation campaigns) constitutes direct harm linked to AI use. The report indicates that these AI-driven activities are ongoing and have led to violations of rights and harm to communities, fitting the definition of an AI Incident.

China sought to use ChatGPT to smear Sanae Takaichi; Minoru Kihara: countermeasures cannot wait

2026-02-27
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a malicious manner to plan and execute a disinformation campaign targeting a political figure, which constitutes a violation of democratic rights and potentially harms communities by undermining trust and democratic integrity. Despite the AI system refusing the requests, the attempt itself and the broader coordinated influence operation involving AI-generated content and fake accounts represent an AI Incident due to the direct link between AI use and harm to political rights and democratic processes.

OpenAI report reveals CCP uses AI to suppress Taiwan

2026-02-26
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the use of AI models for content generation, translation, monitoring, and document processing as part of a coordinated disinformation and harassment campaign. The AI system's outputs have directly contributed to online harassment, false reporting leading to account restrictions, and even physical detention of individuals, constituting direct harm to human rights and communities. Therefore, this qualifies as an AI Incident due to the realized harms caused by the AI-assisted operations.

OpenAI reveals Chinese individuals used ChatGPT to plot smear of Sanae Takaichi

2026-02-26
Central News Agency
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved in the development and use phases, assisting in planning and organizing a disinformation campaign. The campaign aims to harm the political reputation of a public figure, which constitutes harm to communities and a violation of political rights. Although ChatGPT refused to generate harmful content directly, its use in editing and organizing reports contributed indirectly to the disinformation effort. The ongoing nature of the campaign and the use of AI in this context meet the criteria for an AI Incident, as harm has occurred or is actively occurring through the AI system's involvement.

China uses AI to expand cyberattacks; scholars: Taiwan should counter in cooperation with the U.S. and Japan

2026-02-26
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in the development and execution of cyberattacks and cognitive warfare, which have directly led to harm to communities through misinformation, intimidation, and manipulation of public opinion. This fits the definition of an AI Incident because the AI system's use has directly caused violations of rights and harm to communities. The article describes realized harms, not just potential risks, and thus it is not merely a hazard or complementary information. The involvement of AI in generating harmful content and the resulting impact on society justifies classification as an AI Incident.

China suspected of using AI for online special warfare; OpenAI report details planning, scope, and impact

2026-02-26
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (e.g., AI models like DeepSeek, Qwen, and others) in the development and execution of covert influence operations that have directly caused harm to human rights, psychological well-being, and community integrity. The article details realized harms such as harassment, suppression of dissent, arrests, and account restrictions linked to AI-enabled operations. The AI systems are integral to the planning, execution, and scaling of these harmful activities. Hence, this is not merely a potential risk but an actual incident where AI use has led to significant harm, fitting the definition of an AI Incident.

News analysis: U.S. firm OpenAI discloses Beijing used ChatGPT for covert repression

2026-02-27
RFI
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (ChatGPT and other AI tools) in the active deployment of misinformation and disinformation campaigns by a state actor, which has directly led to harm to communities and violations of human rights. The AI-generated content is used to impersonate officials, spread false accusations, and manipulate social media, which fits the definition of an AI Incident due to realized harm. The report also highlights the scale and coordination of these AI-enabled operations, confirming the AI system's pivotal role in causing harm.

Japan's Chief Cabinet Secretary says China's use of AI to disparage Prime Minister Takaichi threatens national security

2026-02-27
RFI
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in a coordinated disinformation campaign targeting a political leader, which is a clear violation of rights and causes harm to communities by spreading false information and manipulating public opinion. The misuse of AI in this context has already taken place, fulfilling the criteria for an AI Incident. The involvement of AI is direct, and the harm is realized, not merely potential. The government's response is complementary information but does not change the classification of the primary event.

OpenAI exposes CCP cyberattacks; experts urge Taiwan to strengthen its response to safeguard national security

2026-02-27
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by the CCP to conduct cyberattacks and manipulate public opinion through AI-generated content and fake accounts. This use has directly led to harm by threatening Taiwan's national security and democratic integrity, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The article describes actual ongoing harmful activities, not just potential risks, and thus it is not merely a hazard or complementary information. The involvement of AI in generating attack content and fake accounts is explicit and central to the harm described.

OpenAI: people linked to Chinese authorities tried to use ChatGPT to wage online warfare and smear Sanae Takaichi

2026-02-26
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and another AI system) being used or attempted to be used for harmful purposes, including manipulation of public opinion and online scams. The misuse attempt of ChatGPT was blocked, but the broader context includes actual harm caused by AI-enabled scams. The harms include violations of rights and harm to communities through misinformation and fraud. The AI systems' development, use, and misuse are central to the event. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI exposes Chinese cyberattacks; Akio Yaita: the most frightening part is the false impression that everyone thinks this way

2026-02-27
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Chinese actors to conduct coordinated online attacks, including generating fake content and managing fake accounts to suppress dissent and manipulate public opinion. The harms described include violations of human rights (freedom of expression) and harm to communities through systemic disinformation and censorship. The AI system's use is central to the harm, making this an AI Incident rather than a hazard or complementary information. The article details ongoing harm, not just potential risk or responses, so it fits the AI Incident classification.

CCP officials' use of ChatGPT accidentally exposes their global intimidation operations

2026-02-27
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by Chinese officials to carry out harmful actions such as impersonation, forgery, and spreading disinformation aimed at intimidating dissidents internationally. These actions constitute violations of human rights and international law, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's involvement is central to the harm described. Therefore, this event qualifies as an AI Incident.

CCP officials' use of ChatGPT accidentally exposes cognitive-warfare plans against Taiwan

2026-02-26
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (ChatGPT and other Chinese AI models) in the development and execution of a large-scale cognitive warfare operation. The AI systems are used to generate disinformation, fake accounts, and coordinated harassment, which directly harms overseas dissidents and targeted communities, including Taiwan. The article describes realized harm (intimidation, misinformation, harassment) and violations of rights, meeting the criteria for an AI Incident. The involvement of AI in the use phase and the direct link to harm to communities and rights violations justify this classification.

[Forbidden News] OpenAI report exposes CCP's transnational online repression

2026-02-26
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by a Chinese official to document secret repression activities. It also describes the use of AI-generated content and thousands of fake social media accounts operated by hundreds of personnel to carry out cross-border repression and misinformation campaigns. These activities have directly caused harm to individuals' rights and communities, including violations of sovereignty and psychological intimidation. The AI system's role is pivotal in generating and disseminating content and managing fake accounts, which are central to the harm described. Hence, this event meets the criteria for an AI Incident.

CCP used ChatGPT to smear Japan's prime minister; Japanese government: a threat to the foundations of democracy

2026-02-27
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) for malicious purposes, including generating false content and coordinating disinformation campaigns. The harms include threats to democracy, national security, and human rights violations through intimidation and misinformation. Since these harms are occurring and directly linked to the AI system's use, this qualifies as an AI Incident.

CCP officials' use of ChatGPT accidentally exposes global intimidation operations

2026-02-26
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (ChatGPT) by CCP operatives to carry out malicious activities such as impersonation, forgery, and spreading false information targeting dissidents and communities abroad. These actions constitute violations of human rights and harm to communities, fulfilling the criteria for an AI Incident. The harms are direct and ongoing, as documented by OpenAI's report and confirmed by multiple sources in the article. Therefore, this event is classified as an AI Incident.

China uses AI to expand cyberattacks; scholars: Taiwan should counter in cooperation with the U.S. and Japan

2026-02-26
看中国
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by a state actor to generate disinformation and conduct cyberattacks, which have already caused harm to communities by manipulating public opinion and intimidating individuals. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article does not merely warn of potential harm but describes ongoing attacks and their impacts, thus qualifying as an AI Incident rather than a hazard or complementary information.

OpenAI discloses CCP transnational repression operations; AI diary becomes key evidence

2026-02-26
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by Chinese law enforcement officials to plan and execute a coordinated transnational campaign involving threats, impersonation, forgery, and misinformation. The AI system's involvement is both in its use (for generating content and planning) and its development (as the platform enabling these actions). The harms described include violations of human rights (intimidation, misinformation, impersonation) and harm to communities (spread of false rumors, political influence operations). These harms are realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI exposes large-scale Chinese cyberattacks! Warning of "systematic speech suppression" that risks creating a false impression

2026-02-27
民視新聞網
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-generated content and automated account management) in a coordinated campaign that directly leads to systematic suppression of speech and harassment of dissidents, which is a violation of human rights and harms communities. The AI's role is pivotal in enabling the scale and sophistication of these attacks. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and harm to communities).

OpenAI reveals Chinese individuals used ChatGPT to plot smear of Sanae Takaichi

2026-02-26
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly used to assist in planning a political disinformation campaign, which constitutes a violation of political rights and harms communities by spreading misinformation. The AI's involvement in the development and use of the campaign, even if partially mitigated by refusal to generate defamatory content, directly contributed to the harm. The event meets the criteria for an AI Incident because the AI system's use led to realized harm in political manipulation and misinformation dissemination.

Chinese law enforcement suspected of waging online special warfare via AI; OpenAI report reveals all

2026-02-26
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (e.g., AI language models like ChatGPT, local AI models such as DeepSeek and Qwen) used by Chinese law enforcement to plan, coordinate, and execute covert influence operations targeting dissidents and foreign political figures. The harms include violations of human rights (suppression of dissent, harassment, psychological harm), and the AI systems' use is pivotal in enabling these harms at scale. The article documents realized harms, including arrests, psychological pressure, account suspensions, and harassment campaigns. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Chinese official's ChatGPT records accidentally expose transnational repression operations

2026-02-27
on.cc東網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the development and use phases to document and plan harmful actions that have already occurred, including intimidation and disinformation campaigns against dissidents and political figures. These actions constitute violations of human rights and harm to communities, fulfilling the criteria for an AI Incident. The AI system's role is pivotal as it was used to record and strategize these operations, and the harm is realized, not just potential. Hence, the classification as AI Incident is appropriate.

OpenAI report reveals CCP uses AI to suppress Taiwan

2026-02-26
大紀元時報 - 台灣(The Epoch Times - Taiwan)
Why's our monitor labelling this an incident or hazard?
The report explicitly details the use of AI systems (including named AI models) in a coordinated disinformation and harassment campaign that has caused real harm: online harassment, suppression of speech, account restrictions, and physical detention of a user. The AI system's development and use are central to the harm, fulfilling the criteria for an AI Incident. The harms include violations of human rights and harm to communities. The involvement of AI is explicit and pivotal in the operation's scale and effectiveness.

OpenAI exposes Chinese cyberattacks! Akio Yaita points to "systematic speech suppression": the most frightening part is this consequence

2026-02-27
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (e.g., ChatGPT and other AI tools) by Chinese enforcement personnel to generate fake accounts, produce AI-generated content, and coordinate mass reporting to suppress overseas dissidents' speech. This coordinated AI-enabled manipulation has directly caused harm by suppressing free expression and creating a hostile online environment, fulfilling the criteria for an AI Incident under violations of human rights. The article describes realized harm, not just potential harm, and the AI system's involvement is central to the incident.

OpenAI report says Chinese accounts sought ChatGPT's help to suppress dissidents and to smear Sanae Takaichi

2026-02-27
yahoo-news.com.hk
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the use of ChatGPT and other AI models to assist in planning and executing coordinated disinformation campaigns and harassment against dissidents and political opponents. The AI systems were used to generate misleading content, fake accounts, and psychological attacks, which have caused real harm to targeted individuals, including loss of followers, reduced speech, and psychological harassment. This meets the criteria for an AI Incident as the AI system's use has directly led to violations of human rights and harm to communities. The involvement is clear, the harm is realized, and the AI's role is pivotal in enabling these actions.

Chinese official used ChatGPT as a diary, accidentally exposing China's cross-border repression methods

2026-02-27
民視新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by a Chinese law enforcement officer to assist in planning and executing cross-border repression and disinformation campaigns. These activities have caused harm to individuals' rights and communities abroad, including intimidation and misinformation targeting dissidents and political figures. The AI system's role in facilitating these actions meets the criteria for an AI Incident because the harm is realized and the AI system's involvement is pivotal in enabling these harmful acts.

Inside story of CCP's overseas online special warfare exposed, touching on two sensitive points

2026-02-28
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) by a Chinese state security agent to develop and execute coordinated disinformation and harassment campaigns against dissidents and critics. The AI system's outputs were used to generate false narratives, organize large-scale social media attacks, and produce misleading content that caused real psychological harm and suppression of free expression. The harms include violations of human rights (psychological harassment, suppression of dissent), harm to communities (disinformation campaigns), and breaches of fundamental rights. The article describes actual realized harms, not just potential risks, and the AI system's role is pivotal in enabling these operations. Hence, this is classified as an AI Incident.

Chinese cyber army sought ChatGPT's help to smear Japanese Prime Minister Sanae Takaichi

2026-02-28
民視新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) being used in the development and attempted use of a disinformation campaign targeting a political figure, which constitutes a violation of democratic processes and harms communities by undermining fair elections and free reporting. The AI system's role is pivotal as it was directly solicited to plan and amplify harmful content. Despite the AI refusing to comply, the attempt and the partial use of AI in this harmful context meet the criteria for an AI Incident due to the direct link to realized harm in political manipulation and information warfare.

OpenAI: "China plotted to use ChatGPT to defame Japan's prime minister"

2026-02-26
연합뉴스
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI models) used in the development and execution of a disinformation campaign targeting a political figure, which is a violation of rights and harms communities by spreading false information. The AI's role in planning and facilitating the campaign, even if limited in effectiveness, directly contributed to the harm. This meets the criteria for an AI Incident as the AI system's use has directly led to harm in the form of misinformation and political manipulation.

OpenAI: "China attempted smear operation against Prime Minister Takaichi using ChatGPT"

2026-02-26
경향신문
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI models) used by a state actor to conduct a disinformation campaign targeting a political figure, which is a violation of rights and harms communities by spreading false information and manipulating public opinion. The AI system's use directly contributed to the harm; even though ChatGPT refused direct assistance, the campaign still involved AI-generated or AI-assisted content. The harm is realized, as negative posts and videos appeared on social media, even if their impact was limited. This fits the definition of an AI Incident because the AI system's use led to harm to communities and violations of rights.

OpenAI: "China attempted to defame Japan's prime minister using ChatGPT"

2026-02-27
아시아경제
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved in the development and use stages of a disinformation campaign that caused harm to communities by spreading false and manipulative content targeting a political figure. The harm is realized, as the campaign resulted in numerous posts and videos on social media, even if the impact was limited. This fits the definition of an AI Incident because the AI system's involvement directly or indirectly led to harm to communities and violations of rights through misinformation and political manipulation.

"Chinese law enforcement planned smear of Japan's prime minister"... OpenAI discloses signs of abuse

2026-02-26
First-Class 경제신문 파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI models) used by a state actor to orchestrate a disinformation campaign against a political figure, which has led to harm in the form of reputational damage and manipulation of public opinion. The AI's role is pivotal in planning and executing the campaign, even though some attempts were blocked. The harm is realized, not just potential, as negative content was disseminated on social media. This fits the definition of an AI Incident involving violations of rights and harm to communities.

OpenAI: "China ran smear operation against Japan's prime minister using ChatGPT... actual impact was limited"

2026-02-26
YTN
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the development and execution of a disinformation campaign that targets a political figure, which constitutes a violation of rights and harm to communities through misinformation and political manipulation. The AI systems were used as tools to spread false or harmful content. Although the actual harm was limited, the disinformation campaign was carried out and caused some level of harm, meeting the criteria for an AI Incident. The involvement of AI in the malicious use and the resulting harm to political discourse and communities justifies classification as an AI Incident rather than a hazard or complementary information.

OpenAI: "China ran smear operation against Japan's prime minister using ChatGPT"

2026-02-26
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in the development and use phases to facilitate a disinformation campaign targeting a political leader, which is a violation of rights and harms communities. The harm is realized as the disinformation was posted on social media and other platforms, even if the impact was limited. The AI system's role is pivotal in planning and generating content for the campaign. Therefore, this meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI: "China ran smear operation against Japan's prime minister using ChatGPT... impact was limited"

2026-02-26
쿠키뉴스
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly used by a state actor to plan and record a disinformation campaign targeting a political figure, which is a violation of rights and harms communities by spreading false information. The AI system's involvement in the development and use of this harmful activity, even if partially blocked, directly contributed to the incident. The harm is realized (disinformation spread), and the AI system's role is pivotal in enabling the planning and execution of the campaign. Therefore, this event qualifies as an AI Incident.

"This snowball fight is no joke"... China attacking Japan's prime minister with ChatGPT?

2026-02-27
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The involvement of ChatGPT, an AI system, in a coordinated disinformation campaign targeting a political leader is explicitly described. This use of AI to generate or facilitate harmful content that undermines a political figure and stirs negative public sentiment fits the definition of an AI Incident under violations of rights and harm to communities. The article confirms the AI system was used in the development and use phases of the incident. The physical snowball fight incident is unrelated to AI. Therefore, the overall classification is AI Incident based on the disinformation event involving ChatGPT.

China attempted operation against Takaichi using ChatGPT

2026-02-27
채널A
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) to generate content for foreign influence operations, which is a misuse of AI leading to harm to communities and democratic processes. The Japanese government explicitly criticizes this as a security and democracy threat, confirming the harm dimension. The AI system's role is pivotal as it was used to create and disseminate disinformation. Despite limited impact, the harm is realized through the attempt and partial dissemination of misleading content. Hence, this is classified as an AI Incident.

OpenAI reveals ChatGPT misuse: from fake lawyers to dating scams

2026-02-26
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by malicious actors to conduct illicit activities that have caused real harm, including fraud, disinformation, and identity theft. The harms described fall under violations of rights, harm to communities, and financial harm to individuals. Since these harms have already occurred and are directly linked to the use of ChatGPT, this qualifies as an AI Incident rather than a hazard or complementary information.

The Chinese regime uses ChatGPT to coordinate international repression campaigns

2026-02-26
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by Chinese officials to plan and coordinate campaigns of repression and disinformation against dissidents internationally. The harms include violations of human rights (harassment, intimidation, false accusations), harm to communities (spread of false rumors about a dissident's death), and the use of AI-generated fake documents to manipulate social media and legal processes. These harms have materialized, not just potential risks, fulfilling the criteria for an AI Incident. The AI system's use was central to the harm caused, and the article details concrete examples of these harms occurring.

OpenAI reports a worldwide Chinese operation after discovering an agent used ChatGPT to intimidate people abroad

2026-02-25
MARCA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by a Chinese agent to carry out intimidation and disinformation against dissidents abroad. This use directly leads to harm in the form of violations of human rights (intimidation, impersonation, spreading false information) and harm to communities (disinformation campaigns). The AI system's role is pivotal as it was used daily to document and generate content for the operation. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

OpenAI uncovers global Chinese intimidation operation through an official's use of ChatGPT

2026-02-25
CNN Español
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by a Chinese official to document and assist in a campaign of intimidation and repression against dissidents. The harms include violations of human rights (intimidation, suppression of free expression), harm to communities (disinformation, harassment), and the use of AI-generated content for malicious purposes. The AI system's use directly contributed to these harms, fulfilling the criteria for an AI Incident. The involvement is not speculative or potential but realized, as the campaign is ongoing and documented. Hence, the event is classified as an AI Incident.

OpenAI uncovers global Chinese intimidation operation through an official's use of ChatGPT

2026-02-25
NewsChannel 3-12
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI tools) used in the development and execution of a large-scale influence operation that caused harm by intimidating dissidents and spreading false information. The harm includes violations of human rights and harm to communities, meeting the criteria for an AI Incident. The AI system's use was central to the operation's impact, and the harm is realized, not just potential. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI blocks China-linked account over plot against Takaichi

2026-02-27
International Press - Noticias de Japón en español
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT, DeepSeek, Alibaba Qwen) being used to plan and generate content for a covert influence operation, which is a direct misuse of AI. The harms include violations of political rights, manipulation of public opinion, and harm to communities through disinformation and coordinated harassment. These harms have materialized as the campaign was planned and partially executed, with multiple accounts and platforms involved. Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm to communities and rights.

La Revista

2026-02-25
La Revista
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by the AI system ChatGPT or any other AI system. It focuses on governmental restrictions and geopolitical rivalry, which are contextual and policy-related issues rather than direct or indirect AI incidents or hazards. There is no mention of plausible future harm from the AI system itself, only concerns about control and influence. Therefore, this is best classified as Complementary Information, as it provides important context and analysis about AI governance and geopolitical dynamics without reporting a specific AI Incident or AI Hazard.

The "QuitGPT" campaign urges people to cancel their ChatGPT subscriptions

2026-02-28
www.nationalgeographic.com.es
Why's our monitor labelling this an incident or hazard?
The article centers on a user-driven boycott campaign against ChatGPT based on political and ethical objections. While ChatGPT is an AI system, the campaign itself does not describe any realized harm or malfunction caused by the AI system, nor does it present a credible risk of future harm from the AI system's use or development. Instead, it is a societal and governance-related response to concerns about the company's affiliations and technology use. Therefore, this event is best classified as Complementary Information, as it provides context on societal reactions and governance issues related to AI but does not report an AI Incident or AI Hazard.

OpenAI exposes Chinese cyber army tactics: AI used for cross-border suppression of dissent

2026-02-26
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The report explicitly details how AI technology is used in the development and execution of coordinated online repression campaigns that have already caused harm by silencing dissent and manipulating public opinion. The AI system's involvement is direct and pivotal, as it supports the operational scale and sophistication of these harmful activities. The harms include violations of human rights and harm to communities, meeting the criteria for an AI Incident rather than a hazard or complementary information.

CCP officials' use of ChatGPT accidentally exposes global intimidation operations

2026-02-25
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by CCP officials to carry out harmful activities such as impersonation, intimidation, misinformation, and suppression of dissidents, which constitute violations of human rights and harm to communities. The AI system's outputs were directly used to generate false content and coordinate disinformation campaigns, leading to realized harm. The article provides concrete examples of these harms occurring, including fake legal documents, false accusations, and social media account closures. This meets the criteria for an AI Incident as the AI system's use has directly led to significant harm.

OpenAI report reveals CCP use of AI to suppress Taiwan

2026-02-26
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The report explicitly details the use of AI systems in the development and deployment of a national cognitive warfare operation that has directly led to harms including online harassment, suppression of free speech, and physical detention of individuals. The AI systems are used to generate fake content, manage thousands of fake accounts, and conduct targeted attacks, which have caused real-world consequences such as social media account restrictions and arrests. These harms fall under violations of human rights and harm to communities, meeting the criteria for an AI Incident.

China uses AI to expand cyberattacks; scholar: Taiwan should cooperate with the U.S. and Japan to counter them

2026-02-26
看中国
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, by Chinese officials to generate harmful content for network attacks and cognitive warfare. The harms described include intimidation, misinformation, and manipulation of political discourse, which constitute harm to communities and violations of rights. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm but documents ongoing harmful activities involving AI, so it is not an AI Hazard or Complementary Information; nor is it unrelated, since AI involvement and the resulting harm are central to the report.

OpenAI discloses CCP transnational repression operation; AI diary becomes key evidence

2026-02-26
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT by Chinese officials to document and plan harmful operations, including impersonation, forgery, and spreading false information, which have been linked to real-world threats and harassment on social media platforms. The harms include violations of human rights, harm to communities, and significant psychological and financial damage to individuals. The AI system's role is pivotal as it was used both to generate harmful content and as a planning tool, directly leading to these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI report reveals Chinese officials used ChatGPT for "cyber special warfare"; U.S. lawmaker calls it the "industrialization" of transnational repression

2026-02-26
美国之音
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by Chinese law enforcement for malicious purposes, including disinformation, impersonation, and harassment. These activities have resulted in real-world harm, such as the spread of false rumors about a dissident's death and attempts to suppress political opponents, which are violations of human rights and harm to communities. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. The report also notes that OpenAI has taken action to stop the user, indicating recognition of the harm caused.

CCP officials' use of ChatGPT accidentally exposes global intimidation operation

2026-02-26
botanwang.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (ChatGPT and other AI tools) by CCP officials to conduct covert operations that have directly led to harm, including violations of human rights (intimidation and suppression of dissidents), harm to communities (spread of misinformation and defamation), and breaches of legal protections. The AI systems are used both in the development and deployment of disinformation and harassment campaigns. The harms are realized and ongoing, not merely potential. Hence, this is an AI Incident rather than a hazard or complementary information. The article also mentions the broader geopolitical context and warnings about AI misuse, but the primary focus is on the concrete harms caused by the AI-enabled operations.

OpenAI discloses that ChatGPT refused to assist China with online influence operations

2026-02-26
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in a coordinated online disinformation campaign targeting a political figure, a direct misuse of the AI system that caused harm in the form of political manipulation and potential social disruption. The harm is realized: the operation was planned and partially executed, and OpenAI's disclosure confirms the AI system's involvement in this harmful activity. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and political rights.

Inside story of the CCP's overseas cyber special warfare exposed, involving two sensitive points

2026-02-28
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves explicit use of AI systems (ChatGPT and other AI tools) by CCP cyber operatives to conduct malicious influence operations targeting dissidents and political figures. The harms include psychological attacks, misinformation campaigns, harassment, and suppression of free expression, which constitute violations of human rights and harm to communities. The report confirms that these harms are ongoing and directly linked to the AI system's use, fulfilling the criteria for an AI Incident. The detailed exposure of these operations and their effects goes beyond potential or hypothetical harm, demonstrating realized harm caused by AI-enabled malicious use.

[CDT Report Roundup] OpenAI report: CCP officials plotted "cyber special warfare," smearing Sanae Takaichi and attacking overseas dissidents (plus two other reports)

2026-02-28
China Digital Times
Why's our monitor labelling this an incident or hazard?
The OpenAI report explicitly identifies the use of AI systems (ChatGPT and local AI models) by Chinese officials to plan and conduct malicious cyber operations that harm dissidents and political figures. This fulfills the criteria for an AI Incident due to violations of human rights and harm to communities. The report also notes the scale, coordination, and tactics used, including AI-generated content and influence operations. The other sections of the article describe human rights violations without AI involvement, so they are not classified as AI Incidents or Hazards. The overall event is therefore classified as an AI Incident based on the first part of the article.

Inside story of the CCP's overseas cyber special operations exposed, involving two sensitive points

2026-02-28
botanwang.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of ChatGPT, an AI system, by a Chinese state security agent to produce and refine disinformation and harassment campaigns against dissidents and critics. The AI system's outputs were directly used to carry out coordinated attacks, including spreading false accusations, creating fake social media accounts, and psychological harassment, which constitute violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential. Hence, this is an AI Incident as per the definitions provided.