僑務電子報

Moda Is Committed to Promoting Trustworthy AI by Organizing Deliberative Democratic Activities and Inviting International AI Vendors for Exchange

2024-05-12
Ministry of Digital Affairs

The Ministry of Digital Affairs (moda) stated that artificial intelligence (AI) has developed rapidly in recent years. Moda continues to build a trustworthy AI development environment that balances technological innovation with risk governance, promoting policies and initiatives such as establishing an AI assessment center, issuing AI assessment guidelines, and becoming a partner of the international non-governmental organization (NGO) "Collective Intelligence Project." This year, it further organized the "Deliberative Democracy Activities on Using AI to Promote Information Integrity" and invited leading international AI vendors for exchanges. Companies including Meta, Google, Microsoft, and OpenAI all agreed that AI must possess key characteristics such as trustworthiness, accuracy, and security. Moda will continue to engage in dialogue with various sectors, striving to promote trustworthy AI and collaborating to create the AI application services society needs.

Moda stated that, to ensure AI applications align with the public interest, it officially became a partner of the international NGO "Collective Intelligence Project" in May last year and participated in alignment assemblies to help Taiwan gather public consensus on the demands for and risks of artificial intelligence in the global public domain, collectively addressing the "AI alignment problem." Committed to promoting trustworthy AI and a trusted digital environment, and to ensure the reliability of AI applications and prevent the spread of erroneous, false, and forged messages and images generated by generative AI, moda has drawn on international policies, standards, and industry requirements to establish an AI assessment center, release AI assessment guidelines, and conduct assessments of large language models (LLMs). It is gradually establishing verification institutions and laboratories to promote AI assessment and verification and to foster international exchange and cooperation.

To explore the societal impacts of AI development from individual, stakeholder-community, and national perspectives, moda organized two "AI Democratization" deliberative workshops last year, inviting relevant stakeholders to discuss how to respond to the development of generative AI. Participants' discussions and feedback made clear that, from data collection and processing through to data use, the diverse societal expectations regarding AI ethics must be addressed.

To further respond to the expectations of various sectors for the proper application of AI in public governance, moda collaborated with the Institute of Science, Technology and Society at National Yang Ming Chiao Tung University, the Industrial Technology Research Institute, and the Deliberative Democracy Center at Stanford University to organize the "Utilizing AI to Promote Information Integrity" deliberative democracy activity on March 23 of this year. Using the government's dedicated "111" short-code SMS platform, text messages were sent to randomly selected citizens inviting them to participate. A total of 450 experts, scholars, citizens, community members, and digital professionals joined online discussions on topics such as using AI to identify and analyze the integrity of information. The event drew widespread attention, indicating high public expectations for the use of AI in promoting information integrity.

On April 17, moda convened leading international AI vendors to discuss topics such as strengthening platform analysis and identification mechanisms and submitting large language models (LLMs) to the AI assessment center. Participating vendors included Meta, Google, Microsoft, and OpenAI, all of whom acknowledged the importance of AI possessing trustworthiness, accuracy, and security. They also noted that during language model development, their teams continuously test and train the models to meet the expectations for reliable AI systems, and they responded positively to providing their models under development for evaluation by the AI Product and System Evaluation Center (AIEC). Meta and OpenAI indicated preliminary agreement during the meeting.

Furthermore, regarding the analysis and identification of generative AI content, international vendors presented established or upcoming practices during the meeting. These include SynthID technology (used to embed watermarks in AI-generated images, or to detect such watermarks, making AI-generated content easier to identify), AI-generated content detection technology, traceability labeling and signatures, user reporting mechanisms, and alerts informing users that content is AI-generated.

Based on the feedback gathered from both the public and international AI vendors during this event, moda will continue to urge online platform operators to promote information identification and analysis mechanisms. Additionally, when users post or broadcast content on online platforms that includes personal images generated using deepfake or other AI technology, such content should be clearly indicated or labeled to preserve information integrity.

Moda emphasizes that addressing the challenges to information integrity posed by AI technology requires collaborative efforts from platforms, governments, and the public. By establishing transparent regulatory mechanisms and enhancing the public's ability to recognize AI-generated content, the promotion of information authenticity and integrity can be facilitated. Democratic deliberation will also strengthen societal resilience. In the future, moda will continue to establish a trustworthy AI development environment that balances technological innovation and risk governance. This includes optimizing and retaining talent, prioritizing AI ethics and legislation, promoting data governance and circulation, and constructing a human-centered and robust digital ecosystem.
