
AI Support Agents in Financial Services: What APAC Regulators Actually Require


Financial services is the sector where AI customer support deployments fail most expensively. The technical failure modes are the same as in any other industry — intent misclassification, hallucinated information, broken integrations. The regulatory failure modes are unique: unauthorised financial advice, privacy violations that trigger mandatory reporting, and audit trail gaps that make compliance reviews impossible.

This article covers what the three major APAC financial regulators — Singapore's MAS, Indonesia's OJK, and Thailand's BOT — actually require for AI-assisted customer interactions. It is not legal advice. It's an account of what we've encountered in deployments and how the requirements translate into engineering decisions.

What MAS Requires in Singapore

The Monetary Authority of Singapore's framework for AI in financial services is anchored in the FEAT Principles (Fairness, Ethics, Accountability, Transparency), published in 2018 and updated through subsequent guidance documents including the 2022 MAS Model Risk Management guidance.

For AI support agents specifically, three requirements carry the most operational weight:

Explainability of automated decisions. When an AI agent makes a decision that materially affects a customer — declining to process a transaction, flagging an account, or providing product information that could influence a financial decision — the reasoning must be explainable and auditable. This doesn't mean you need to expose your model weights. It means the agent's conversation log must contain enough context to reconstruct why a given response was generated. In practice, this requires logging not just the output but the data inputs the agent accessed (account status, product eligibility flags, previous interaction history) at the moment of each response.
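One way to approach this in practice is to snapshot the inputs the agent consulted alongside each response. A minimal sketch — the field names and structure here are illustrative, not a MAS-mandated schema:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ResponseAuditRecord:
    """One log entry per agent response, capturing enough
    context to reconstruct why the response was generated."""
    conversation_id: str
    response_text: str
    # Snapshot of the data the agent accessed at response time.
    inputs_accessed: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

record = ResponseAuditRecord(
    conversation_id="c-1042",
    response_text="You are eligible for the standard savings account.",
    inputs_accessed={
        "account_status": "active",
        "product_eligibility": {"standard_savings": True},
        "prior_interactions": 3,
    },
)
log_line = record.to_json()  # append to write-once audit storage
```

The point is that the record carries the account state and eligibility flags as they were at the moment of the response, so a later review does not have to reconstruct them from mutable systems.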

Human oversight for material decisions. MAS expects a human to be in the loop for decisions above defined materiality thresholds. For a bank deploying an AI support agent, this typically means the AI can answer questions about products and handle routine service requests (address changes, statement downloads), but transactions above a threshold value and any activity flagged as potentially suspicious must route to a human immediately. The specific thresholds are negotiated in your MAS Technology Risk Management notification, not mandated at a fixed dollar value.

Customer disclosure. Customers must be informed when they're interacting with an automated system, not a human. This is not a grey area. MAS Notice FAA-N21 is clear that customers must be notified when automated advice or decision-making tools are being used. Hiding the fact that a customer is talking to an AI is a breach, regardless of how natural the conversation feels.

What OJK Requires in Indonesia

Indonesia's Otoritas Jasa Keuangan (OJK) issued POJK 11/POJK.03/2022 (the Digital Banking Regulation) and subsequent circulars on technology risk management that apply to AI systems used by licensed financial institutions.

OJK's focus differs from MAS in a significant way: it places greater weight on data residency and vendor accountability than on algorithmic explainability. The two most operationally significant requirements for AI support deployments:

Data residency. Customer data processed by AI systems must be stored on servers located in Indonesia. For cloud deployments, this means using Indonesian data centres (AWS ap-southeast-3 in Jakarta is the dominant option) and ensuring that any model inference also occurs in-country for sensitive customer data. A Singapore-hosted AI agent processing data from Indonesian customers raises immediate OJK compliance questions. The practical architecture for OJK compliance is either a separate in-country deployment or a data residency guarantee from a cloud provider with Indonesian region availability.
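As a sketch of how the residency constraint can be enforced mechanically, a deployment pipeline can refuse to start if any component is pinned outside the approved region. The region identifier is real AWS naming; the config structure and component names are hypothetical:

```python
# Jakarta region, per the in-country requirement discussed above.
APPROVED_REGIONS = {"ap-southeast-3"}

# Illustrative deployment manifest: every component that touches
# customer data declares the region it runs in.
deployment_config = {
    "conversation_store": {"region": "ap-southeast-3"},
    "model_inference": {"region": "ap-southeast-3"},
    "analytics_export": {"region": "ap-southeast-3"},
}

violations = [
    name for name, cfg in deployment_config.items()
    if cfg["region"] not in APPROVED_REGIONS
]
# A CI gate would fail the deployment here if violations is non-empty.
```

A check like this belongs in the deployment pipeline rather than in documentation, because a single misconfigured component (an analytics export to a Singapore region, say) is enough to create the compliance question.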

Third-party vendor accountability. OJK requires that financial institutions maintain documented oversight of third-party technology providers. This includes contractual provisions requiring the vendor to notify the institution of material system changes, audit rights, and incident response commitments. For Level3 AI as a vendor, this means our deployment contracts for Indonesian financial institutions include explicit OJK notification clauses, incident response SLAs (we commit to notifying within 4 hours of a material AI system failure), and the right to audit our data handling practices.

What BOT Requires in Thailand

The Bank of Thailand's AI framework is the most recently formalised of the three. The BOT Responsible AI Principles for Financial Institutions (published 2023) are broadly aligned with MAS FEAT but with stronger emphasis on bias monitoring and consumer protection.

The most distinctive BOT requirement is bias testing. Specifically, BOT expects financial institutions to test AI systems for differential treatment across demographic groups — including gender, age, geography, and income level — and to document the results. For a support AI, this means monitoring whether the agent provides materially different service quality or response accuracy across customer segments. A bank deploying an AI that performs significantly worse for customers in rural provinces than urban centres has a BOT compliance exposure.

BOT also requires a complaints mechanism that specifically covers AI-related grievances. If a customer believes an AI made an error that affected them, they must have a clear path to raise a complaint with a human reviewer. The AI can't be the endpoint of the complaints process.

Engineering Requirements That Come From Regulation

Translating these regulatory requirements into engineering decisions:

Audit logs are non-negotiable. Every AI-customer interaction must be logged with sufficient fidelity to reconstruct what happened and why. This means logging conversation transcripts, the data the agent accessed, the actions it took, and any escalation triggers. Logs must be retained for the regulatory minimum period — 5 years in Singapore, 3 years under OJK, 5 years under BOT. Build your data retention architecture before deployment, not after.
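The retention periods above can be encoded once and enforced by the log lifecycle job, rather than scattered through the codebase. A sketch, using the periods cited in this article:

```python
from datetime import date, timedelta

# Regulatory minimum retention in years, as described above.
RETENTION_YEARS = {"MAS": 5, "OJK": 3, "BOT": 5}

def earliest_deletable_date(logged_on: date, regulator: str) -> date:
    """Date before which a log record must NOT be deleted."""
    # Approximates a year as 365 days; a production job should use
    # calendar-aware arithmetic and add a safety margin on top of
    # the regulatory minimum.
    return logged_on + timedelta(days=365 * RETENTION_YEARS[regulator])

cutoff = earliest_deletable_date(date(2024, 1, 15), "OJK")
```

Centralising the mapping means that when a regulator revises its retention guidance, the change is one line rather than an audit of every deletion path.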

Disclosure UX must be designed deliberately. "You are chatting with an automated assistant" is the minimum. Ideally, the disclosure is clear enough that customers understand what the AI can and cannot do. We've found that customers who understand the AI's scope upfront have lower escalation rates — they ask questions the AI can actually answer rather than immediately demanding a human.

Threshold-based escalation must be configurable without code changes. Regulators update their materiality guidance periodically. The escalation thresholds in your AI system must be configurable through your platform's administration interface, not hard-coded. Financial institutions do not want to raise a deployment request every time they need to adjust the transaction limit above which the AI routes to a human.
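The shape of this is simple: the escalation logic reads thresholds from a runtime config store at request time, so compliance can adjust them through the admin interface without a release. A sketch — the config keys and intent names are illustrative:

```python
# Stands in for a database or admin API backing the platform's
# administration interface.
config_store = {
    "escalation.max_transaction_sgd": 5000,
    "escalation.always_escalate_intents": ["fraud_report", "hardship"],
}

def must_escalate(intent: str, transaction_value: float) -> bool:
    """Read thresholds at request time, never at import time,
    so config changes take effect without a redeploy."""
    threshold = config_store["escalation.max_transaction_sgd"]
    forced = config_store["escalation.always_escalate_intents"]
    return intent in forced or transaction_value > threshold

# Compliance raises the limit via the admin UI; no code change needed.
config_store["escalation.max_transaction_sgd"] = 10000
```

The detail that matters is reading the threshold inside the function rather than caching it at startup — otherwise a config change still requires a restart, which defeats the purpose.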

Bias monitoring dashboards are increasingly required. Not just for BOT deployments — MAS and OJK are moving in this direction as well. Build or integrate monitoring that can segment AI performance metrics (resolution rate, CSAT, escalation rate) by customer segment. If performance drops for a specific demographic, you need to detect it before a regulator does.
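The core of such a dashboard is a segmented metric with a tolerance check. A minimal sketch — the segments, data, and tolerance value are illustrative, and the tolerance should be agreed with your compliance team:

```python
from collections import defaultdict

# Per-conversation outcomes tagged with a customer segment; in
# production these rows come from the analytics pipeline.
conversations = [
    {"segment": "urban", "resolved": True},
    {"segment": "urban", "resolved": True},
    {"segment": "urban", "resolved": False},
    {"segment": "rural", "resolved": True},
    {"segment": "rural", "resolved": False},
    {"segment": "rural", "resolved": False},
]

def resolution_rate_by_segment(rows):
    totals, resolved = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["segment"]] += 1
        resolved[row["segment"]] += row["resolved"]
    return {seg: resolved[seg] / totals[seg] for seg in totals}

rates = resolution_rate_by_segment(conversations)

# Flag any segment trailing the best-performing segment by more
# than the agreed tolerance.
TOLERANCE = 0.15
best = max(rates.values())
flagged = [seg for seg, rate in rates.items() if best - rate > TOLERANCE]
```

The same pattern extends to CSAT and escalation rate; the key design decision is that the alert compares segments against each other, not against a fixed absolute target, because a uniform performance drop is an operations problem while a differential one is a compliance exposure.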

Common Mistakes in Financial Services Deployments

Four mistakes that reliably surface in financial services AI deployments, each expensive to fix in production:

Treating financial product questions as support questions. An AI that answers "What are the eligibility criteria for your personal loan?" is providing financial product information. This is different from "Where can I find my account number?" Depending on your institution's licence type and the specifics of the response, product information answers may require the same disclosure obligations as robo-advisory. Flag product-related intent categories explicitly and review the regulatory treatment before deploying those responses.
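Flagging can be as simple as a reviewed attribute on each intent in the catalogue, with unknown intents failing safe. A sketch — the intent names and attribute are illustrative, not a regulator-defined taxonomy:

```python
# Each intent records whether its responses may constitute regulated
# product information, as determined by compliance review.
INTENT_CATALOGUE = {
    "loan_eligibility": {"regulated_product_info": True},
    "account_number_lookup": {"regulated_product_info": False},
    "card_fee_schedule": {"regulated_product_info": True},
}

def needs_compliance_review(intent: str) -> bool:
    """Unknown intents default to True: fail safe, not open."""
    entry = INTENT_CATALOGUE.get(intent, {"regulated_product_info": True})
    return entry["regulated_product_info"]
```

The fail-safe default is the important part: a newly added intent that nobody has classified yet should be routed through review, not silently treated as routine support.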

Not testing the AI's response to sensitive financial situations. Customers in financial distress — behind on loan repayments, facing account restrictions — contact support frequently. AI agents trained primarily on routine support queries often respond poorly in these conversations: too transactional, missing the emotional context, or providing incorrect information about hardship programmes. Red-team your AI specifically on sensitive financial scenarios before going live.

Shared model infrastructure across institutions. Some AI vendors run shared model infrastructure across multiple financial institution clients. If your customer interaction data is processed on shared infrastructure, there is a risk that conversation patterns from one institution could theoretically influence model behaviour for another. This is both a privacy concern and a potential competitive intelligence exposure. Verify your vendor's isolation architecture before deployment in any regulated financial services context.

Ignoring the human reviewer's workflow. The AI is only part of the system. The human agents who receive escalations from the AI need to understand what they're receiving. An escalation that arrives with no context — just a transfer from the AI with no summary of the conversation — is nearly as bad as no AI at all. Build the human reviewer workflow as carefully as you build the AI's conversation flows.
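Concretely, the escalation should arrive as a structured payload that gives the reviewer immediate context. A sketch of what that handoff might carry — field names and the naive sentiment heuristic are illustrative:

```python
def build_escalation_payload(conversation_id, transcript, reason):
    """Package an AI-to-human handoff so the reviewer sees why the
    conversation escalated, not just that it did."""
    return {
        "conversation_id": conversation_id,
        "escalation_reason": reason,
        # The last few turns give the reviewer immediate context;
        # the full transcript stays retrievable via the audit log.
        "recent_turns": transcript[-3:],
        # Placeholder heuristic; a real system would use a proper
        # sentiment signal.
        "customer_sentiment": "frustrated" if any(
            "complaint" in turn.lower() for turn in transcript
        ) else "neutral",
    }

payload = build_escalation_payload(
    "c-2291",
    ["Hi, I need help.",
     "I want to file a complaint about a charge.",
     "This is the third time I've asked."],
    reason="intent: dispute, above AI scope",
)
```

Even this small amount of structure — reason, recent turns, a sentiment hint — changes the reviewer's first thirty seconds from "what is this?" to "here's what happened."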

A Practical Compliance Checklist

Before deploying an AI support agent in a financial services context in Singapore, Indonesia, or Thailand:

Confirm your data residency architecture. Where are conversation logs stored? Where does model inference occur? Get written confirmation from your vendor.

Document your escalation thresholds. What categories of query does the AI hand off to humans? What transaction values trigger mandatory escalation? Get these in writing and get them reviewed by your compliance team before go-live.

Write and test your disclosure language. Test it with real customers, not just compliance officers. Disclosure that satisfies a regulator but confuses customers creates a different kind of problem.

Build your audit log retrieval process. A regulator will ask you to produce conversation logs within a specific timeframe. Practice retrieving specific conversations from your logging system before you're asked to do it under pressure.

Review vendor contracts for accountability provisions. Notification obligations, audit rights, and incident response SLAs should all be explicitly stated. Generic SaaS terms are not adequate for regulated financial services deployments.

Deploying AI support in financial services?

Level3 AI deploys in MAS-regulated, OJK-regulated, and BOT-regulated financial institutions across APAC. Our platform includes audit logging, configurable escalation thresholds, and data residency options for Singapore and Indonesia. Talk to us about your specific regulatory context.

Schedule a Compliance Discussion