
PDPA, GDPR, and UU PDP: What Your AI Support Vendor Must Comply With

[Image: Legal compliance documentation and secure data handling in a Singapore tech company office]

Customer support conversations contain some of the most sensitive personal data your company holds: names, addresses, payment history, account numbers, complaint details, and sometimes medical or legal information, depending on your industry. When you deploy an AI support agent, you're giving a third-party vendor access to all of that data in real time. The compliance questions this raises are not theoretical — they have real consequences for Singapore-based enterprises operating across APAC markets where data protection laws are tightening rapidly.

Singapore PDPA: What Applies to AI Support Systems

Singapore's Personal Data Protection Act covers the collection, use, and disclosure of personal data by organizations operating in Singapore. For AI support systems, the key provisions are the notification obligation, the purpose limitation principle, and the data transfer restriction. The notification obligation requires that customers are informed their data may be processed by AI systems — a terms of service update and a chat window disclosure that the support is AI-assisted are generally sufficient, but should be reviewed by legal counsel for your specific customer base.

The purpose limitation principle is more operationally significant. Under PDPA, personal data collected for one purpose — in this case, resolving a customer support query — cannot be used for a different purpose without fresh consent. This principle directly impacts whether your AI support vendor can use your customer conversations to train their shared model. If your vendor uses your customer data to improve a shared model that serves other companies, that constitutes a secondary use that requires explicit consent under PDPA. Most enterprise customers have not given such consent, and most vendors who use customer data for training have not obtained it.

The Training Data Question Every Vendor Needs to Answer

Ask your AI support vendor this question directly: is our customer conversation data used to train or improve your models, including models used by other customers? The answer needs to be yes or no. "We anonymize data before training" is not an acceptable answer — anonymization is not a PDPA exemption for secondary use without consent, and the PDPC has guidance confirming this. "We only use data in aggregate" is similarly insufficient if the aggregate data feeds into model weights that improve performance for other paying customers.

Level3 AI's position: customer conversation data is never used to train shared models. Each customer's model is trained exclusively on their own historical data, and that data is not accessible to our team except during onboarding and debugging sessions explicitly authorized by the customer. This is documented in our DPA (Data Processing Agreement), which we provide as standard with all contracts above the Growth plan tier.

Cross-Border Data Transfer: The Singapore-Indonesia Issue

PDPA imposes restrictions on transferring personal data outside Singapore. Under the transfer limitation obligation (Section 26 of the PDPA), transfers to recipient countries are permitted if the recipient is bound to provide a standard of protection comparable to Singapore's, for example through a contractual framework such as standard contractual clauses between the transferring organization and the recipient. For most cloud infrastructure in APAC, this is manageable — AWS, Azure, and GCP all have established data protection frameworks.

The issue arises when your vendor's model inference infrastructure is located in a different region from where the data originates. A conversation involving a Singapore customer's account data that routes to model inference servers in the US or Europe technically involves a cross-border transfer. Under PDPA's extraterritorial application, Singapore-based organizations remain responsible for data they transfer to overseas processors. Your vendor's sub-processor locations matter, and you need to know them.
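One way to operationalize this is to treat sub-processor regions as an explicit, testable constraint rather than a contract footnote. The sketch below is purely illustrative — the region codes and sub-processor names are hypothetical, not any vendor's real configuration:

```python
# Hypothetical sketch: flag sub-processors whose region falls outside
# the list of regions your transfer framework actually covers.
ALLOWED_REGIONS = {"ap-southeast-1", "ap-southeast-3"}  # e.g. Singapore, Jakarta

SUB_PROCESSORS = [
    {"name": "inference-cluster", "region": "ap-southeast-1"},
    {"name": "log-archive", "region": "us-east-1"},  # would need transfer review
]

def transfer_violations(sub_processors, allowed_regions):
    """Return sub-processors located outside the approved regions."""
    return [sp for sp in sub_processors if sp["region"] not in allowed_regions]

for sp in transfer_violations(SUB_PROCESSORS, ALLOWED_REGIONS):
    print(f"cross-border transfer review needed: {sp['name']} in {sp['region']}")
```

A check like this belongs in deployment review: it turns "where are your sub-processors?" from a one-time questionnaire answer into something you can re-verify whenever the vendor updates its sub-processor list.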

Indonesia UU PDP: The New Regulatory Landscape

Indonesia's Personal Data Protection Law (UU PDP), enacted in October 2022 and fully enforceable since October 2024 when its two-year transition period ended, introduced a comprehensive data protection framework applicable to all processing of Indonesian personal data, regardless of where the processing organization is located. For enterprises serving Indonesian customers through AI support systems, UU PDP imposes data localization requirements for strategic sector data, consent requirements for automated decision-making, and data subject rights including access, correction, and deletion.

The automated decision-making provisions are particularly relevant for AI support. UU PDP Article 24 requires that data subjects be informed when decisions affecting their rights or interests are made using automated processing, and provides a right to contest those decisions. For AI support agents that make automated decisions about refund eligibility, service escalation, or account access, this creates disclosure and contestation obligations. Systems deployed before October 2024 that haven't been updated for UU PDP compliance are now operating outside the law.
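In practice, the disclosure and contestation obligations can be met by attaching that information to every automated decision the system emits. The sketch below is a hypothetical illustration — the field names and the contest channel are assumptions, not a prescribed UU PDP format:

```python
# Hypothetical sketch: wrap each automated decision with the disclosure
# and contest path that automated-decision provisions call for.
def wrap_decision(decision: str, reason: str) -> dict:
    """Package an automated decision with disclosure metadata."""
    return {
        "decision": decision,
        "reason": reason,              # plain-language basis for the decision
        "automated": True,             # disclosed to the data subject
        "contest_channel": "human-review-queue",  # hypothetical contest path
    }

result = wrap_decision("refund_denied", "item outside return window")
```

The point is structural: if disclosure metadata travels with the decision object itself, the customer-facing layer cannot silently drop it.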

GDPR Applicability for APAC Enterprises

GDPR applies to any processing of personal data belonging to EU residents, regardless of where the processing organization is located. APAC enterprises with European customers, European employees, or European subsidiaries are subject to GDPR for the data of those individuals. GDPR's AI-specific provisions under Article 22 prohibit fully automated decisions that have legal or similarly significant effects, unless the data subject has explicitly consented, the decision is necessary for a contract, or a Union or Member State law authorizes it.

For customer support AI, "significantly affects" is a judgment call that depends on what the AI can do. An AI agent that answers product questions does not trigger Article 22. An AI agent that makes automated decisions about refund eligibility, account suspension, or fraud flagging likely does, at least for EU-resident customers. The safeguard is generally to ensure human review capability for adverse automated decisions affecting EU customers — which maps well to the escalation tier design described in Level3 AI's action API permission framework.
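That safeguard can be expressed as a simple routing rule. The sketch below is a minimal illustration of the idea, assuming a hypothetical action taxonomy rather than any vendor's actual permission framework:

```python
# Hypothetical sketch of an Article 22-style escalation rule: adverse
# automated decisions affecting EU residents are queued for a human
# reviewer instead of executing automatically.
ADVERSE_ACTIONS = {"refund_denied", "account_suspended", "fraud_flagged"}

def route_decision(action: str, customer_region: str) -> str:
    """Return 'auto' to execute immediately or 'human_review' to queue."""
    if action in ADVERSE_ACTIONS and customer_region == "EU":
        return "human_review"
    return "auto"

assert route_decision("refund_denied", "EU") == "human_review"
assert route_decision("answer_question", "EU") == "auto"
```

Note the asymmetry: favorable decisions (answering a question, approving a refund) flow through automatically, while only adverse ones pick up the human-review cost — which keeps the compliance safeguard from destroying the economics of automation.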

The Pre-Signing Compliance Checklist

Before signing with any AI support vendor for deployment in APAC markets, run through the following questions and require written answers:

1. Where is customer conversation data stored, and in which geographic regions does processing occur?
2. Is customer data used to train or improve models used by other customers?
3. Does the vendor have a DPA that covers PDPA, UU PDP, and GDPR obligations as applicable?
4. What are the vendor's sub-processor relationships, and what data protection frameworks govern each?
5. How does the vendor handle data subject access, correction, and deletion requests? Specifically, can they execute a deletion that removes customer data from model weights as well as from logs?

That fifth question — deletion from model weights — is the hardest for vendors to answer, and most cannot answer it accurately without checking with their ML team. Model unlearning is a technically complex operation that few production AI systems have implemented. The practical alternative is data minimization during training: only use data that the customer has explicitly consented to include in training, and don't train on data that might later be subject to deletion requests. This is the approach Level3 AI takes — the customer approves the training dataset at project kickoff, and data added after deployment is not included in model training without a separate authorization step.
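A consent-scoped cutoff like this is straightforward to enforce in the data pipeline. The sketch below is illustrative — the record schema and cutoff date are assumptions, not Level3 AI's actual implementation:

```python
from datetime import date

# Hypothetical sketch: only records the customer explicitly approved,
# created before the kickoff cutoff, ever enter the training set.
KICKOFF_CUTOFF = date(2024, 3, 1)  # illustrative project-kickoff date

records = [
    {"id": 1, "created": date(2024, 1, 15), "training_consent": True},
    {"id": 2, "created": date(2024, 2, 20), "training_consent": False},
    {"id": 3, "created": date(2024, 6, 5), "training_consent": True},  # post-deployment
]

def training_set(records, cutoff):
    """Keep only consented records created before the approved cutoff."""
    return [r for r in records if r["training_consent"] and r["created"] < cutoff]

print([r["id"] for r in training_set(records, KICKOFF_CUTOFF)])  # → [1]
```

Because post-deployment data never enters the training set by default, a later deletion request only ever touches logs and stores that support deletion — not model weights that don't.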

What Good Vendor Compliance Documentation Looks Like

A vendor with a credible compliance posture should be able to provide, without significant delay: a Data Processing Agreement that references specific legal frameworks applicable to your jurisdiction, a sub-processor list with locations and data protection frameworks, a data retention and deletion policy with specific timelines, and a description of their security controls (encryption at rest and in transit, access controls, audit logging). SOC 2 Type II certification or equivalent third-party audit is the baseline standard for enterprise security claims. A vendor that responds to compliance questions with marketing language rather than specific documentation has not done the compliance work.

Data protection compliance in AI support is not a one-time checklist — it's an ongoing operational requirement. As the regulatory landscape in APAC continues to develop, the companies that have built compliance into their vendor selection process from the start will be in a significantly better position than those who discover the gaps during a regulatory review. The time to ask these questions is before deployment, when your options include changing vendors. Not after, when changing vendors means disrupting a live production system.