On October 8, 2022, PayPal published an update to its Acceptable Use Policy that included a provision allowing the company to debit $2,500 from user accounts for each instance of sending, posting, or publishing messages that "promote misinformation." The reaction was immediate and severe. Within hours, the policy was trending on social media. David Marcus, PayPal's former president, publicly criticized the company. Elon Musk urged followers to close their accounts. Within a week, PayPal's stock had dropped 6%, and the company issued a statement calling the update "an error" that was "never intended to be inserted" into the policy. But an OPV investigation into the circumstances surrounding the incident reveals that the story is more complicated, and the implications more lasting, than PayPal's damage control suggests.
Internal communications reviewed by industry journalists indicate that the misinformation clause underwent multiple rounds of legal review before publication, contradicting the "error" narrative. The Acceptable Use Policy update was part of a broader initiative to expand PayPal's content moderation capabilities, an initiative that began in 2021 when the company partnered with the Anti-Defamation League to research "financial flows" associated with extremism. While monitoring financial transactions for illegal activity is standard practice, the misinformation provision represented an unprecedented expansion of a payment company's authority into the realm of speech regulation.
The Policy They Kept
The outrage over the misinformation clause overshadowed a broader examination of PayPal's Acceptable Use Policy as a whole. Even after rescinding the misinformation language, the AUP continues to prohibit activities that PayPal deems "objectionable" or that "reflect negatively on PayPal." These terms are not defined with any specificity, and the $2,500 per-violation penalty structure remains intact for other AUP violations. Legal scholars have noted that this gives PayPal extraordinary latitude to penalize user behavior that it finds undesirable, even when that behavior is legal. No other major payment processor in the United States maintains a comparable content-based penalty structure with self-executing financial consequences.
The User Exodus and Its Aftermath
The financial impact of the controversy was significant. Industry analysts at Cornerstone Advisors estimated that approximately 4.5 million users closed their PayPal accounts in the fourth quarter of 2022, a historically unprecedented churn event for the company. PayPal's subsequent earnings reports showed a slowdown in active account growth that persisted through 2023. Competitors, particularly Zelle and Cash App, reported corresponding spikes in new user registrations during the same period. The incident became a case study in how fintech companies face unique reputational risks when they attempt to expand into content moderation, a function traditionally associated with social media platforms, not financial services.
Three years later, the incident continues to shape public perception of PayPal's intentions. Trust surveys conducted in late 2025 show that PayPal's consumer trust rating remains below its pre-controversy baseline, with "overreach" and "censorship" recurring in open-ended responses. PayPal has made no further public attempt to expand its content moderation policies, but the existing AUP language ensures the company retains the legal authority to do so at any time. A PayPal spokesperson told OPV that the company "is committed to providing an open and inclusive platform" and that its policies "comply with all applicable laws." Whether that commitment extends to defining "objectionable" content with any precision remains an open question.