The rapid progress of artificial intelligence (AI) has significantly impacted many industries, particularly finance and financial services, delivering remarkable advances in efficiency, decision-making and service delivery. But AI adoption has also introduced ethical, legal and societal challenges, including concerns over privacy, bias and accountability. Despite – or perhaps because of – the speed of these developments, regulation has often struggled to keep pace with what is happening on the ground.
The current scope of global AI legislation is shown in Figure 1, which underscores the worldwide nature of AI governance efforts as governments craft approaches to regulate and guide AI development.

Figure 1. The current scope of global AI legislation. Source: DC Academy.
Figure 2 plots various countries and regions along two dimensions of AI governance: (1) reliance on formal legislation and regulation (the vertical axis, increasing upward) and (2) use of non-binding guidelines or frameworks (the horizontal axis, increasing to the right). Each yellow dot represents one country or region, positioned according to how heavily it relies on binding laws versus soft guidelines. For example, the European Union (EU) sits in the top-right corner, indicating both extensive AI regulation and comprehensive guidelines, whereas Singapore sits at the bottom-right, relying less on formal laws and more on voluntary or advisory frameworks.

Figure 2. Reliance on formal regulation versus non-binding guidelines across jurisdictions. Source: Freshfields.
Based on Figure 2, the EU’s approach to AI legislation stands out amongst current efforts. The EU’s landmark Artificial Intelligence Act (AI Act) is the world’s first comprehensive AI regulation – a risk-based framework that bans certain ‘unacceptable’ AI uses and tightly controls high-risk systems. EU policymakers hope it will set a global standard, harnessing the so-called ‘Brussels Effect’, whereby the EU’s strict rules influence laws and practices far beyond its borders.
The EU Artificial Intelligence Act: The First of Its Kind
In December 2023, EU legislators reached political agreement on the world’s first comprehensive AI law – a regulatory framework that could shape the development and deployment of AI systems for decades, both within the EU and globally – and the European Parliament formally adopted it in March 2024. The AI Act was introduced as a key part of the broader EU Digital Strategy to address challenges arising from digitisation and AI. The Act took effect on 1 August 2024, with its obligations phased in over time. The result is an ambitious framework with the potential to serve as a blueprint for AI regulation worldwide.
The AI Act is widely recognised as the first comprehensive AI regulation in the world. It prohibits AI applications that pose unacceptable risks while enforcing stringent governance, transparency and risk management standards for high-risk ones. Additionally, the Act introduces specific regulatory obligations for general-purpose AI (GPAI) models to ensure compliance with data governance and security standards.1 The Act’s significance is hard to overstate: much like the EU’s General Data Protection Regulation (GDPR), which, as the first of its kind in 2016, revolutionised data protection standards globally, the AI Act can be expected to influence AI governance beyond the EU, potentially shaping global regulatory standards.
This blog introduces the AI Act; assesses its key features, major provisions and potential impact both within the EU and globally; discusses the trailblazing nature of the Act; and then surveys and evaluates its relevance to, and possible effects on, central banks (and monetary authorities) in Southeast Asia, with a special focus on SEACEN member central banks and monetary authorities.
Some Key Features of the AI Act
The AI Act is a legal framework that regulates the development and use of AI systems within the EU. It follows a risk-based approach, categorising AI applications based on their potential risks and imposing appropriate regulatory requirements accordingly. Overall, there are four risk levels (Figure 3).

Figure 3. The four risk levels under the AI Act.

Based on these risk levels, the Act imposes proportionate regulatory requirements:
1. Unacceptable Risk AI (Banned AI Systems)
AI applications that have the potential for severe harm or that pose serious threats to human rights and democratic values are prohibited under the Act. These include:
- AI-powered social scoring systems that categorise individuals based on their behaviour.
- Real-time biometric surveillance in public spaces (except for specific law enforcement purposes).
- AI systems that manipulate human behaviour using subliminal techniques.2
2. High-Risk AI (Strictly Regulated)
AI applications with a significant impact on individuals’ rights and safety, as well as critical sectors, are highly regulated. Examples include:
- AI used in credit scoring and banking for loan approvals.
- AI used in employment screening, such as automated resume/curriculum vitae evaluations.
- AI used in healthcare diagnostics and treatment recommendations.
Compliance Requirements for High-Risk AI:
- Data transparency and quality assurance to prevent biases.
- Human oversight and accountability in AI decision-making.
- Rigorous risk assessments and regulatory documentation.
3. Limited Risk AI (Transparency Obligations)
AI applications like chatbots, deepfakes and generative AI (e.g., ChatGPT, DALL·E) must comply with transparency rules:
- Users must be informed when interacting with AI.
- AI-generated content (e.g., deepfakes) must be clearly labelled.
4. Minimal Risk AI (Freely Allowed)
AI applications that pose minimal or no risks, such as spam filters, video games and AI-powered customer services, are unregulated, allowing free-market innovation.
This classification matters in practice: AI systems posing ‘unacceptable risk’ (e.g., social scoring or biometric categorisation) are banned from February 2025. Compliance obligations increase gradually thereafter, with full enforcement for high-risk AI systems by 2027. Non-compliance may result in fines of “up to EUR35 million or 7 per cent of global turnover”, underscoring the need for early preparation.3 Additionally, those working with GPAI must develop codes of practice by 2025 and adhere to specific provisions for GPAI models and systems.
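To make the tiered logic concrete, here is a minimal Python sketch of the four risk levels and the headline penalty ceiling quoted above. The example use-case mapping and the `max_fine_eur` helper are illustrative assumptions for this sketch, not an official taxonomy or a legal calculation.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act, from most to least regulated."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict conformity obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping of example use cases to tiers (an assumption for this
# sketch; real classification requires legal analysis of the Act's annexes).
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring for loan approvals": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def max_fine_eur(global_turnover_eur: float) -> float:
    """Headline penalty ceiling quoted in the text: the greater of
    EUR 35 million or 7 per cent of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A firm with EUR 2 billion turnover faces a ceiling of EUR 140 million,
# because 7 per cent of turnover exceeds the EUR 35 million floor.
assert max_fine_eur(2_000_000_000) == 140_000_000.0
```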
Financial Services and AI: Managing the Risks, Responsibilities and Challenges
The financial sector is among the industries most directly affected by the AI Act, given that many AI applications in finance and financial services fall into the high-risk category. The rapid adoption of AI in financial services presents both transformative opportunities and significant challenges, requiring a balanced approach to risk management, responsibilities and regulation. Overall, banks, fintech companies and financial institutions will need to adapt to new compliance requirements in key areas.
Several noteworthy features of the AI Act in this regard are summarised in Table 1.
Area | Key Implications for Financial Institutions |
Credit scoring and loan approvals | AI-driven credit risk assessments must be explainable and auditable. Banks must document AI decision-making to prevent biases in lending. |
Fraud detection and anti-money laundering (AML) | AI used in fraud detection must meet strict data governance standards. Regulators will scrutinise false positives and negatives in AI-driven fraud detection.4 |
Algorithmic trading and market risk | High-frequency and algorithmic trading must comply with transparency and risk management obligations. New stress-testing requirements may be imposed on AI-based trading strategies. |
AI-powered chatbots and virtual assistants | Financial institutions must clearly disclose when AI, rather than humans, interacts with customers. AI-driven financial advice tools must be accurate, fair and non-manipulative. |
AI governance and compliance costs | Banks and fintech firms must establish AI governance teams and ensure human oversight of AI-driven decisions. Regulatory compliance costs will rise as institutions adapt to new risk management frameworks. |
Sanctions and compliance obligations | Companies violating the Act could face fines of up to EUR35 million or 7 per cent of global turnover. AI providers and financial institutions must maintain compliance records and audit trails. |
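To illustrate the documentation duties in the first and last rows of Table 1, the sketch below records each AI-assisted credit decision in an auditable log. The `CreditDecisionRecord` structure and its field names are assumptions chosen for the example; the Act prescribes outcomes (explainability, documentation, human oversight), not this particular schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class CreditDecisionRecord:
    """One auditable entry per AI-assisted lending decision. Field names
    are illustrative assumptions, not a schema prescribed by the AI Act."""
    applicant_id: str
    model_version: str
    input_features: dict   # the data the model actually saw
    score: float           # raw model output
    decision: str          # "approve" / "refer" / "decline"
    top_factors: list      # human-readable drivers of the score
    human_reviewer: str    # who exercised oversight over the final call

    def to_log_line(self) -> str:
        """Serialise the record, with a timestamp, for an append-only log."""
        entry = asdict(self)
        entry["timestamp"] = datetime.now(timezone.utc).isoformat()
        return json.dumps(entry, sort_keys=True)

record = CreditDecisionRecord(
    applicant_id="A-10293",
    model_version="credit-risk-2.4.1",
    input_features={"income": 52_000, "debt_ratio": 0.31},
    score=0.82,
    decision="approve",
    top_factors=["stable income", "low debt ratio"],
    human_reviewer="analyst-07",
)
print(record.to_log_line())  # append to a tamper-evident audit store
```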
Territorial Scope of the AI Act: Applicability Beyond Borders
Similar to the earlier GDPR, the AI Act has an extraterritorial scope, meaning it applies not only to businesses within the EU, but also to companies and organisations outside the EU under certain conditions.
The Act applies to the parties highlighted in Table 2.
Aspect | Details |
Providers and deployers in the EU | AI systems and GPAI models placed on the EU market. AI systems deployed for use in the EU, even if developed elsewhere. |
Non-EU companies with an EU nexus | AI systems whose output (e.g., predictions, recommendations, decisions) is used within the EU. Companies outside the EU that sell, deploy or integrate AI systems into products marketed in the EU. |
Compliance requirements for non-EU companies | Non-EU providers of high-risk AI systems must appoint an authorised representative in the EU, responsible for compliance. Importers and distributors in the EU must ensure AI systems conform to the Act’s standards, including documentation, risk assessments and transparency requirements. |
Exemptions | AI systems developed and used exclusively for military, defence or national security purposes. AI systems used for scientific research and testing before being placed on the market. AI used by international organisations or third-country governments under specific international agreements. |
The AI Act is expected to impact companies worldwide that interact with EU users or operate in the EU market. Businesses outside the EU must assess whether their AI systems have an EU nexus and, if so, implement compliance strategies to avoid regulatory penalties. To effectively navigate compliance under the new AI Act, companies must conduct a thorough evaluation of their AI systems, particularly those classified as high-risk. This requires a structured approach to ensure that every aspect of AI deployment adheres to stringent legal standards.
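The nexus assessment described above can be expressed as a simple screening rule. The three boolean inputs below are distilled from Table 2 and are an illustrative simplification; an actual applicability determination requires legal analysis.

```python
def has_eu_nexus(placed_on_eu_market: bool,
                 deployed_in_eu: bool,
                 output_used_in_eu: bool) -> bool:
    """Rough applicability screen distilled from Table 2: the Act can reach
    a system placed on the EU market, deployed for use in the EU, or whose
    output is used within the EU. Illustrative only, not legal advice."""
    return placed_on_eu_market or deployed_in_eu or output_used_in_eu

# A non-EU provider whose model's predictions are consumed by EU clients:
print(has_eu_nexus(False, False, True))  # True -> assess compliance duties
```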
While AI enhances efficiency, fraud detection and predictive analytics, it also introduces higher compliance costs, as financial institutions must invest in AI risk management to meet evolving regulatory standards. Additionally, regulatory complexity poses a significant challenge, as institutions must ensure compliance across various AI applications, which could hinder innovation if regulations become overly restrictive. Key compliance steps for high-risk AI systems are outlined in Figure 4.

Figure 4. Key compliance steps for high-risk AI systems. Source: GDPR Local.
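As a rough planning aid, the sketch below turns the obligations named in this post (risk assessment, documentation, data governance, transparency, human oversight and an EU authorised representative for non-EU providers) into a gap checklist. The item wording is an assumption for illustration, not the Act’s official conformity-assessment procedure.

```python
# Obligations named in this post, distilled into a gap checklist
# (illustrative wording; not the Act's official conformity procedure).
HIGH_RISK_CHECKLIST = [
    "Classify the system against the Act's risk tiers",
    "Run and document a risk assessment",
    "Verify training-data governance and bias controls",
    "Prepare technical documentation and logging",
    "Define human-oversight procedures",
    "Disclose AI interaction to end users where required",
    "Appoint an EU authorised representative (non-EU providers)",
]

def compliance_gaps(completed: set) -> list:
    """Return the checklist items not yet evidenced as complete."""
    return [item for item in HIGH_RISK_CHECKLIST if item not in completed]

done = {
    "Classify the system against the Act's risk tiers",
    "Run and document a risk assessment",
}
for gap in compliance_gaps(done):
    print("OUTSTANDING:", gap)
```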
But AI also presents opportunities for ethical and responsible growth in the financial services sector. Transparency and accountability in AI-driven finance can enhance consumer trust, leading to greater adoption and confidence in AI-powered solutions. Institutions that embrace responsible AI practices can gain a competitive advantage, strengthening their reputation and positioning themselves as leaders in ethical AI adoption.
Moreover, addressing AI biases and ensuring fair and transparent decision-making contributes to stronger market integrity, reducing the risk of unfair financial outcomes. Striking the right balance between regulation and innovation will be key to leveraging AI’s full potential while maintaining financial stability and trust.
The AI Act: What Are the Ripple Effects Beyond Europe?
A key question is whether the AI Act will trigger the aforementioned ‘Brussels Effect’, whereby EU regulations set de facto global standards. The precedent of the EU’s 2016 GDPR privacy law looms large – within a few years of it taking effect in 2018, tech giants applied GDPR rules worldwide and many countries copied its data protection model. EU leaders similarly hope that the AI Act will export European AI governance norms abroad, particularly in terms of transparency and fundamental rights. There are several channels through which the AI Act could influence Southeast Asia and beyond.
Market Access and Extraterritorial Reach. The AI Act’s provisions apply not only to EU-based companies but also to any organisation selling or operating AI systems in the EU. For instance, a Malaysian AI startup whose service is used in the EU must comply with EU requirements or risk losing access to that market. This extraterritorial scope effectively requires global companies to meet EU standards if they wish to do business in the lucrative European market. As Southeast Asian firms expand their digital operations, many will adjust their practices, including data management and AI model validation, to align with the AI Act and thereby remain able to serve European clients. Over time, these EU-aligned practices could become standard business norms locally as well.
Corporate Self-Regulation. Even beyond legal mandates, large multinationals may decide it is more efficient to adopt a single global compliance regime. Rather than building one AI system for the EU (with all the safety features) and a different, less-regulated version for Asia, companies might implement AI Act-grade safeguards across all markets for consistency. We saw this with the GDPR, where firms extended data rights and consent forms to all users globally. A similar dynamic could make the AI Act an international benchmark by default, especially for high-risk AI systems that major cloud and enterprise vendors provide worldwide. Southeast Asian consumers may indirectly benefit from higher standards, such as more transparent AI or opt-out options, even if their governments have not yet imposed such rules.
Raising the Global Bar on AI Ethics. The EU’s move has already put AI governance on the agenda of policymakers everywhere. By flagging issues such as AI-driven bias, surveillance and manipulation as urgent risks, the AI Act is elevating the global discourse on AI ethics. Governments in ASEAN are closely watching the EU debate, drawing lessons on how to address the societal impacts of AI. Even if they do not replicate the exact regulations, the AI Act pushes AI ethics from an abstract ideal into concrete requirements that others can reference. For example, the EU’s ban on social scoring and curbs on facial recognition send a signal that these uses are problematic – giving Asian civil society groups leverage to question similar projects at home.
That said, the Brussels Effect on AI may be limited. Unlike privacy, where the GDPR filled a void, AI already has competing regulatory models – notably the U.S. approach, which so far favours voluntary frameworks over hard law, and China’s approach, which combines strict state control of AI in some areas with permissiveness in others. Moreover, AI systems are diverse and often deeply localised (e.g., trained on local languages or data). An application tailored to Southeast Asian markets may be largely untouched by EU rules unless it is exported to the EU. The outcome will also depend on how other major players and regions respond.
A Stocktake of Developments in Asian Countries
On a regional level, ASEAN released the Guide on AI Governance and Ethics in February 2024, promoting principles of transparency, fairness, security and accountability. More generally, Asia is taking a balanced yet proactive approach to AI regulation: while no country in the region has implemented a comprehensive law like the EU’s AI Act, many are actively developing national AI strategies.
Table 3 provides an overview of selected Asian jurisdictions and their current AI legislative or strategic developments, illustrating the diversity of approaches across the region.
Jurisdiction | Key AI Legislative / Policy Developments |
Hong Kong | Ethical AI Framework (July 2024) focuses on non-discrimination and explainability in AI decision-making. Privacy Commissioner for Personal Data (PCPD) guidelines on AI procurement and compliance. |
South Korea | AI Act in progress to ensure AI accessibility with reliability requirements. Copyright standards for AI-generated content. National AI Strategy on AI research and development (R&D), data industry and semiconductors. Personal Information Protection Commission (PIPC) AI privacy committee recommendation (July 2024). |
Japan | National AI Strategy (2022) for an agile governance approach. Chair of Hiroshima AI Process (G7) for global governance standards. Draft legislation (May 2024) mandating developer disclosures and human rights protections. |
Vietnam | National AI Strategy (2021–2030) under Prime Minister’s Decision No. 127/QD-TTg. |
Cambodia | AI Ethics Initiative (November 2024) in collaboration with UNESCO and others, led by the Cambodia Academy of Digital Technology and the Ministry of Post and Telecommunications. |
India | Digital India Act (DIA) regulates high-risk AI. National ‘AI for All’ Strategy for inclusive growth. Task Force on ethical and legal issues and to set up AI regulator. G20 and UNESCO AI Principles adopted. |
Indonesia | National AI Strategy (2020) supports Indonesia’s Vision 2045. Five AI priority areas: health, bureaucracy, education, food and smart cities. Non-binding Circular on AI Ethics provides groundwork for future regulation. |
Chinese Taipei | (Draft) AI Basic Act proposes risk-based regulatory approach. Focus on innovation, data protection, consumer rights and transparency. AI Action Plan and Action Plan 2.0 issued. |
This stocktake reveals that while some countries, such as China, have rapidly proposed binding AI legislation, others, including Singapore and Malaysia, are prioritising agile and collaborative governance mechanisms. A closer look at these representative examples follows.
To start with, China has been proactive in legislating on the use of AI, with several national laws already in place; for now, these laws target specific AI use cases.5 A draft Artificial Intelligence Law of the People’s Republic of China, proposed in May 2024, would go further, establishing requirements for developers and deployers of AI in general as well as for high-risk or critical AI-based systems. China is also a party to the G20 AI Principles, which draw on the OECD’s AI Principles.
Singapore, through its Personal Data Protection Commission (PDPC) and the AI Verify Foundation, has developed voluntary governance frameworks and initiatives aimed at promoting the ethical deployment of AI, effective data management and sector-specific implementation.6 In May 2024, the Infocomm Media Development Authority (IMDA) released the Model AI Governance Framework for Generative AI, promoting digital watermarking and cryptographic provenance to authenticate AI-generated content.
Finally, Malaysia’s Ministry of Science, Technology and Innovation (MOSTI) released National Guidelines on AI Governance and Ethics in September 2024. The Malaysian government supports the National AI Roadmap (2021–2025) to position Malaysia as a high-tech, AI-driven economy. Overall, Malaysia’s AI policies both establish a framework for ethical AI development and deployment and prioritise public interest, safety and fairness in AI adoption.
A Defining Moment for AI Governance: A Central Bank Perspective
The AI Act holds significant implications not only for governments, businesses and civil society, but also for central banks (and monetary authorities) in Southeast Asia, many of which are increasingly integrating AI into core functions such as monetary policy, financial supervision, payment systems and macroprudential oversight. As stewards of financial stability and key public institutions, central banks operate at the intersection of technological innovation, data governance and systemic risk, making the regulatory shifts in the EU especially relevant to their evolving mandates.
By way of conclusion, five issues merit consideration.
1. Catalyst for Internal AI Governance Frameworks
The AI Act’s risk-based, human-centric approach provides a reference point for developing internal AI governance standards within central banks. While most Southeast Asian central banks are at an exploratory or pilot stage with AI (e.g., deploying machine learning for fraud detection or nowcasting GDP), the AI Act underscores the need for structured oversight mechanisms – including internal audit trails, explainability standards and human-in-the-loop protocols. These governance practices are particularly relevant as central banks begin to deploy AI in sensitive domains, such as regulatory enforcement, credit risk analysis and stress testing. Drawing on the AI Act, central banks may seek to classify their AI use cases by risk level, ensuring that high-impact applications – such as AI-driven supervisory analytics – undergo rigorous validation, documentation and performance monitoring.
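One way a central bank might operationalise such a risk register is sketched below: each use case records its impact level, whether a human is accountable for the final decision and whether the model has passed independent validation. The fields, the example entries and the escalation rule are assumptions for illustration, not an established supervisory standard.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """An entry in a hypothetical central-bank AI risk register.
    Field names and the escalation rule are assumptions for illustration."""
    name: str
    impact: str              # "high" if outputs feed supervision or policy
    human_in_the_loop: bool  # is a person accountable for the final call?
    validated: bool          # has the model passed independent validation?

def needs_escalation(uc: UseCase) -> bool:
    """High-impact use cases require both human oversight and independent
    validation before deployment, mirroring the Act's high-risk treatment."""
    return uc.impact == "high" and not (uc.human_in_the_loop and uc.validated)

register = [
    UseCase("GDP nowcasting", "low", human_in_the_loop=True, validated=True),
    UseCase("Supervisory anomaly detection", "high",
            human_in_the_loop=True, validated=False),
]
for uc in register:
    if needs_escalation(uc):
        print(f"Escalate before deployment: {uc.name}")
```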
2. Implications for AI Adoption in Financial Supervision
For central banks engaged in financial supervision and regulatory technology, the AI Act raises important considerations about second-order compliance risks. If regulated financial institutions deploy AI models – particularly those with general-purpose or high-risk applications – the burden of oversight could increase. Supervisory authorities will need to understand how these models function, assess whether they meet fairness and transparency standards and determine if additional safeguards are required. Although the AI Act is a European regulation, its global influence may prompt multinational banks and financial firms operating in ASEAN to adopt EU-compliant standards across their jurisdictions. This may, in turn, shift supervisory expectations in Southeast Asia and encourage regional central banks to build up technical capacity for AI auditing, develop AI-specific supervisory guidance, and coordinate with financial institutions on best practices for explainable and fair algorithmic decision-making.
3. Systemic Risk and Macroprudential Oversight
A key motivation of the AI Act is to prevent systemic harm from the unchecked deployment of AI. Central banks, as macroprudential authorities, must now consider how large-scale, cross-sectoral deployment of AI – particularly GPAI models – could amplify systemic vulnerabilities. For instance, if financial institutions adopt similar foundational models (e.g., Llama 3 or GPT-4 derivatives) for risk modelling or trading strategies, this could introduce correlated behaviours or model bias. The AI Act’s emphasis on systemic risk assessments for powerful GPAI models may prompt Southeast Asian central banks to incorporate AI stress-testing scenarios into their macroprudential toolkits. Moreover, collaboration with data protection authorities and AI regulators may become necessary to monitor concentration risks in AI infrastructure, particularly when AI services are outsourced to a limited number of providers.
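To show how a supervisor might begin to quantify this concentration concern, the sketch below applies a Herfindahl-style index to a hypothetical survey of which foundation model each institution uses for a given function. The survey data, model names and alert threshold are all invented for illustration.

```python
from collections import Counter

# Hypothetical survey: which foundation model each supervised institution
# uses for credit-risk modelling (names and counts are invented).
reported_models = ["model-A", "model-A", "model-A", "model-B",
                   "model-A", "model-C", "model-A", "model-B"]

def herfindahl(shares: list) -> float:
    """Herfindahl-Hirschman concentration index over market shares in [0, 1]."""
    return sum(s * s for s in shares)

counts = Counter(reported_models)
total = sum(counts.values())
hhi = herfindahl([n / total for n in counts.values()])
print(f"Foundation-model concentration (HHI): {hhi:.2f}")  # 1.0 = monoculture
if hhi > 0.25:  # illustrative threshold, loosely analogous to antitrust use
    print("High concentration: correlated model behaviour warrants testing")
```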
4. Cross-Border Regulatory Coordination and Data Governance
The Act also promotes interpretability and responsible data practices, aligning with central banks’ interests in cross-border financial data governance and the facilitation of digital trade. As AI systems increasingly rely on real-time, cross-jurisdictional data flows, central banks will need to engage in regional regulatory dialogues – within ASEAN and beyond – on harmonising standards for data privacy, algorithmic transparency and model validation. This is particularly relevant in the context of digital payment systems, open banking and cross-border settlement platforms, where AI-enabled tools are becoming increasingly prevalent. The AI Act may serve as a reference point in shaping regional principles for AI in financial infrastructure, thereby reinforcing trust in emerging technologies such as central bank digital currencies (CBDCs) and real-time gross settlement systems.
5. Capacity Building and Strategic Foresight
Ultimately, the EU’s regulatory push underscores the need for central banks to invest in AI talent, ethics training and horizon-scanning capabilities. As the AI Act sets a benchmark for the responsible deployment of AI, it also raises expectations for public institutions to lead by example. Central banks may need to formalise AI ethics committees, partner with academic institutions for policy experimentation, and participate in regional knowledge-sharing platforms to keep pace with the evolving AI regulatory landscape. Forward-looking central banks in Southeast Asia could also take the lead in shaping regional AI supervisory norms, potentially influencing ASEAN-wide financial governance frameworks. The lessons drawn from the AI Act can inform adaptive, proportionate and context-sensitive regulation – one that protects financial stability without stifling innovation.
1. The AI Act imposes lighter obligations on open-source general-purpose AI (GPAI) models compared to proprietary ones. But if an open-source model is deemed capable of posing systemic risks – such as widespread misuse or significant societal impact – it may trigger stricter oversight, notably under Article 52, which outlines GPAI-specific requirements. For example, IBM’s Granite and Meta’s Llama 3, both versatile and open-source foundational models, could fall within the scope of these rules due to their broad applicability across tasks. The extent of regulatory scrutiny will depend on several factors, including a model’s technical capabilities, its deployment in high-risk contexts and its potential systemic influence within the AI ecosystem.
2. Technologies that influence individuals’ decisions or actions without their conscious awareness. These systems exploit psychological or neurological vulnerabilities, often through hidden cues or stimuli, to steer behaviour in a predetermined direction. Due to ethical concerns and potential risks to autonomy and decision-making, such AI applications are prohibited under the AI Act.
3. As outlined here, Annex III – paragraph 1(5)(b) of the AI Act provides a carve-out for fraud prevention AI systems from the high-risk systems list: “AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score [are high risk], with the exception of AI systems used for the purpose of detecting financial fraud”.
4. In AI-driven fraud detection, false positives occur when legitimate transactions are incorrectly flagged as fraudulent, while false negatives occur when actual fraudulent transactions go undetected. Regulators will closely examine these errors to ensure that AI systems are accurate and fair and do not unduly affect consumers or businesses.
5. Examples of use case-specific laws in China can be found here, here, here, here and here.
6. Some references on current initiatives can be found here, here, here, here and here.