Facebook-Cambridge Analytica Data Scandal: 87 Million Profiles, $5.7B in Penalties, and the API Permissions Failure That Redefined Privacy Regulation
Breach Summary
The Facebook-Cambridge Analytica incident reset the global regulatory treatment of consumer data exposure in three irreversible ways: the FTC began enforcing consent failures with penalties orders of magnitude larger than any prior consumer-protection action, the European Union accelerated the General Data Protection Regulation's adoption and enforcement posture, and U.S. state legislators introduced and passed the California Consumer Privacy Act and its successors. The $5 billion FTC settlement in July 2019 was the largest consumer-protection penalty in U.S. history at the time of imposition and remains the structural reference point for every subsequent privacy enforcement action against major U.S. platforms. The case established that consent-based data exposure — data collected through legitimate API access that was nonetheless used outside the scope users would have understood — is a regulatory category equivalent to technical breach, and that platforms enabling that exposure are accountable for the downstream uses regardless of whether they directly executed them. The total disclosed cost across regulatory penalties, class action settlements, and securities enforcement exceeded $5.7 billion.
What Happened
In March 2018, The Guardian, The New York Times, and Channel 4 News published the account of former Cambridge Analytica employee Christopher Wylie, who disclosed that Cambridge Analytica had obtained personal data and inferred psychographic profiles for approximately 87 million Facebook users. The data had been collected by researcher Aleksandr Kogan via a personality-quiz Facebook app called "This Is Your Digital Life," operating through Kogan's company Global Science Research (GSR). The app used Facebook's then-permitted Friends API to collect data not just from quiz takers — approximately 270,000 individuals — but from their entire friend networks. Kogan transferred the data to Cambridge Analytica in 2014 in violation of Facebook's platform terms.
The data and its uses
The exposed data included names, locations, "likes," and other Facebook profile attributes. The novelty was not in the data itself but in what Cambridge Analytica did with it: the company combined the Facebook data with commercial consumer datasets and with Cambridge University Psychometrics Centre research methods to build what it characterized as psychographic profiles. These profiles predicted each user's openness, conscientiousness, extraversion, agreeableness, and neuroticism (the "OCEAN" or "Big Five" personality dimensions) and, from those trait scores, which political and consumer messages would be most persuasive to that user.
Cambridge Analytica's psychographic targeting was deployed in U.S. political campaigns including Ted Cruz's 2016 Republican primary campaign and the 2016 Trump general election campaign. The company's parent SCL Group worked with the official Vote Leave campaign and other Brexit-aligned organizations during the 2016 U.K. referendum, though the precise scope of those engagements was subsequently disputed. The combined disclosures — psychographic targeting of voters in two of the most consequential political events of the decade, executed using data obtained without the consent of the users — created the most severe consumer-data-related public reaction since the dawn of social media.
The Kogan-SCL transfer
Aleksandr Kogan was a Moldovan-born researcher at the University of Cambridge Department of Psychology. The Psychometrics Centre at Cambridge had published research on inferring personality traits from Facebook likes since 2012. Kogan, who also held a part-time appointment at St. Petersburg State University in Russia, was approached by SCL Group (Cambridge Analytica's parent) in 2014 to provide datasets for political-campaign psychographic modeling. He developed "This Is Your Digital Life" as a vehicle for collecting the necessary data. Facebook approved the app for the Friends API access that the collection required. Kogan paid users $1-$2 each to take the personality quiz and consent to data collection. Approximately 270,000 individuals participated; through the Friends API, the app collected data on approximately 87 million people.
Kogan transferred the dataset to Cambridge Analytica in mid-2014 in exchange for payment of approximately $800,000. The transfer violated Facebook's platform terms, which permitted apps to collect user data for the app's own purposes but prohibited transfer to third parties for monetization or other uses outside the user's understanding. Cambridge Analytica had been founded in 2013 by SCL Group with funding from Robert Mercer and strategic direction from Steve Bannon, and was at that time scaling its political-campaign psychographic targeting capability for U.S. operations.
Facebook's response timeline
Facebook learned of the Kogan-Cambridge Analytica data transfer in December 2015 through reporting in The Guardian. Facebook demanded that Cambridge Analytica certify deletion of the data and that Kogan certify deletion of his copy. Facebook did not effectively verify the deletions, did not notify the approximately 87 million affected users, and did not disclose the incident publicly. Wylie's March 2018 disclosures revealed that Cambridge Analytica had retained at least some of the data and had used it operationally in the intervening years.
The public disclosures triggered Mark Zuckerberg's testimony before the U.S. Senate Commerce and Judiciary Committees and the House Energy and Commerce Committee in April 2018. The European Parliament held parallel hearings. The U.K. Parliament Digital, Culture, Media and Sport Committee published an exhaustive report on disinformation and fake news with Cambridge Analytica as a central case study. Cambridge Analytica entered administration in the U.K. and filed for Chapter 7 bankruptcy in the U.S. in May 2018. The FTC opened the investigation that culminated in the July 2019 $5 billion settlement.
Attack Vector Detail
The technical mechanism was not a vulnerability or breach in the traditional sense. Facebook's Friends API — formally part of the Open Graph API v1 — operated from 2007 through 2015 and allowed installed apps to collect data not just about the user who installed the app but about all of that user's Facebook friends. Names, locations, "likes," birthday, education, work history, religious views, political views, relationship status, and other profile data were available depending on what permissions the installing user granted and what the user's friends had set as visible to friends.
The Open Graph v1 permissions architecture
The Open Graph v1 permissions model was a foundational architectural decision with consequences that took years to fully manifest. Apps could request granular permissions at install time: user_likes, user_friends, friends_likes, friends_education_history, friends_relationships, friends_religion_politics, read_stream (which allowed reading the user's News Feed), and read_mailbox (which allowed reading the user's Facebook messages), among many others. The friends_* permissions were the most consequential: they allowed apps to collect data on people who had never installed the app and had no opportunity to consent.
The privacy-economic logic of this architecture was that users' privacy settings determined what their friends' apps could see. A user who had set "likes" to "Friends Only" would have those likes shared with apps installed by any of their friends. The architecture treated app installation as transitive consent: by accepting an app's permissions, the installing user implicitly consented to expose their friends' data to that app, on the theory that the installing user was acting as a delegate for what their friends would consent to. The architecture had no mechanism for friends to be notified, no mechanism for them to opt out, and no mechanism for them to learn which apps had collected their data.
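The transitive-consent rule can be sketched as a small decision function. This is an illustrative model only, with hypothetical field and permission names patterned on the friends_* scopes listed above, not Facebook's actual implementation:

```python
# Illustrative model (not Facebook's real code) of the Open Graph v1
# "transitive consent" rule: an app installed by one user could read a
# friend's field whenever (a) the installer granted the matching friends_*
# permission and (b) the friend's privacy setting exposed that field to
# friends. The friend was never consulted.

FIELD_TO_PERMISSION = {
    "likes": "friends_likes",
    "education": "friends_education_history",
    "relationship_status": "friends_relationships",
}

def fields_exposed_to_app(app_permissions, friend_privacy):
    """Return which of a friend's fields a v1-era app could collect.

    app_permissions: set of friends_* permissions the *installer* granted.
    friend_privacy:  dict mapping field -> visibility ("friends" or "only_me").
    Visibility-to-friends is treated as visibility-to-friends'-apps,
    which is the architectural flaw at issue.
    """
    return {
        field
        for field, perm in FIELD_TO_PERMISSION.items()
        if perm in app_permissions and friend_privacy.get(field) == "friends"
    }

# A friend who shared "likes" with friends only and never installed the app:
exposed = fields_exposed_to_app(
    {"friends_likes", "friends_relationships"},
    {"likes": "friends", "education": "friends", "relationship_status": "only_me"},
)
print(exposed)  # {'likes'}
```

The sketch makes the missing control concrete: no input to the function represents the friend's consent to this app, only the friend's generic friends-visibility setting.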
The v1-to-v2 migration
Facebook introduced Open Graph API v2 at the f8 developer conference on April 30, 2014. The v2 release removed the friends_* permissions and restricted apps to data about users who had installed the app. Apps using v1 had a one-year migration window — through April 30, 2015 — during which they could continue to use the friends_* permissions. Kogan's "This Is Your Digital Life" was approved and active during this window, which is why the collection was technically compliant with Facebook's platform terms at the moment of data extraction. The platform-terms violation was the subsequent transfer to Cambridge Analytica.
The one-year migration window is a critical operational detail. Facebook had recognized by 2014 that the friends_* permissions created systemic privacy risk; it nonetheless permitted continued collection during the migration. The data collected during that window persisted indefinitely in the apps' control unless the apps voluntarily deleted it. Facebook's December 2015 demand that Kogan certify deletion was an attempt to address this gap through contractual mechanisms; the failure to verify the deletion is the operational failure on which the FTC's enforcement action ultimately rested.
The Six4Three discovery cache
The Six4Three v. Facebook litigation, a California state-court dispute ongoing from 2015 over a wholly different developer-platform issue, produced a discovery cache of internal Facebook communications. When the U.K. Parliament's DCMS Committee disclosed a selection of those documents in December 2018, they substantially deepened public understanding of Facebook's internal awareness of Friends API risks before 2014. The disclosed documents indicated that Facebook executives had internally discussed the data-exposure risks of the Friends API and had considered restricting it earlier than 2014, and that the timing of the v1-to-v2 migration reflected competitive and commercial considerations as well as privacy concerns. The Six4Three documents are part of the documentary record on which subsequent regulatory and academic analysis of the case has relied.
The psychographic-profiling layer
Cambridge Analytica's contribution to the operation was not the data collection — that was Kogan's — but the analytical layer built on top of the data. The Psychometrics Centre at the University of Cambridge had published research starting in 2012 demonstrating that Facebook "likes" could be used to predict personality traits along the OCEAN dimensions with accuracy approaching that of human acquaintances and, in some dimensions, spouses. Cambridge Analytica licensed this approach (through Kogan, who had collaborated with the Psychometrics Centre) and operationalized it: the firm took the Facebook data, computed OCEAN predictions for each profile, joined those predictions to commercial consumer data (purchase history, voter registration, demographic data), and produced targeted persuasion content tailored to each predicted personality profile.
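The analytical core of the published likes-to-traits approach, regression from a binary user-by-likes matrix to a trait score, can be sketched in a few lines. The data here is synthetic and the model deliberately minimal (the published research used SVD dimensionality reduction and far larger matrices), so this is a sketch of the technique rather than a reconstruction of Cambridge Analytica's pipeline:

```python
# Sketch: fit a ridge regression from a binary "who liked what" matrix to a
# self-reported trait score, then predict the trait for a profile that never
# answered the questionnaire. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_likes = 200, 50
X = rng.integers(0, 2, size=(n_users, n_likes)).astype(float)   # likes matrix
true_w = rng.normal(size=n_likes)
y = X @ true_w + rng.normal(scale=0.1, size=n_users)            # e.g. "openness"

# Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_likes), X.T @ y)

# Predict a trait score for a new profile from its likes alone
new_profile = rng.integers(0, 2, size=n_likes).astype(float)
predicted_openness = new_profile @ w
```

The operational point is in the last two lines: once the weights are fit on the minority who took the quiz, a trait prediction requires nothing but a likes vector, which the Friends API supplied for the other 86-plus million profiles.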
The empirical effectiveness of the psychographic-targeting layer remains disputed in the academic literature. Multiple studies have concluded that the persuasion effects of psychographic targeting are smaller than Cambridge Analytica's own marketing materials suggested. The regulatory and political significance of the case nonetheless does not turn on the effectiveness of the targeting; it turns on the fact that the targeting was attempted, that users had no opportunity to consent to or refuse it, and that Facebook's platform had enabled the collection that made it possible.
The detection and notification failure
The detection failure is the operational core of the FTC's enforcement action. Facebook's December 2015 awareness of the Kogan-Cambridge Analytica transfer did not lead to (1) public disclosure to affected users, (2) public regulatory notification, (3) verification of the certified deletion through technical means, or (4) audit of other developers who had accessed the Friends API for similar policy violations. The FTC's 2019 enforcement action treated each of these failures as a continuing violation of the 2011 FTC consent order requiring Facebook to obtain user consent for privacy-impacting changes and to submit to ongoing privacy assessments. The four-year gap between Facebook's December 2015 awareness and Wylie's March 2018 public disclosures is the period during which the violations accumulated.
Breach Pattern Timeline
2007
Facebook launches its developer Platform with the Friends API, allowing apps to collect data from users and their friends. Privacy advocates raise concerns; Facebook continues with the architecture.
2009
Facebook implements default-public settings for previously default-private profile fields. Public reaction is significant. The change becomes the predicate for the FTC's 2011 enforcement action.
November 2011
FTC and Facebook announce settlement requiring Facebook to obtain user consent for privacy-impacting changes and submit to 20 years of independent privacy assessments. The settlement is intended to prevent exactly the kind of incident that Cambridge Analytica later becomes. The 20-year duration of the consent order is critical: it remains in effect throughout the Cambridge Analytica period and is the basis for the FTC's 2019 enforcement action.
2012
The Cambridge University Psychometrics Centre publishes early research demonstrating that Facebook "likes" can be used to predict personality traits along the OCEAN dimensions. The research establishes the analytical foundation that Cambridge Analytica will subsequently operationalize.
2013
SCL Group founds Cambridge Analytica with funding from Robert Mercer and strategic direction from Steve Bannon. The firm is designed to operationalize psychographic targeting for U.S. political campaigns.
2013-2014
Researcher Aleksandr Kogan develops "This Is Your Digital Life" personality quiz Facebook app through his company Global Science Research (GSR). Approximately 270,000 users install it; via the Friends API, the app collects data on approximately 87 million people. Kogan pays participants $1-$2 each to take the quiz.
April 30, 2014
Facebook announces Open Graph API v2 at the f8 developer conference. The v2 release removes the friends_* permissions and restricts apps to data about users who have installed the app. Existing apps using v1 have a one-year migration window through April 30, 2015. Kogan's app collects data under v1 during this window.
Mid-2014
Kogan transfers the collected dataset to Cambridge Analytica in exchange for approximately $800,000. The transfer violates Facebook's platform terms.
2015
Cambridge Analytica deploys psychographic targeting using the data in U.S. political campaigns, including the Ted Cruz Republican primary campaign.
April 30, 2015
End of the Open Graph v1 migration window. Friends API permissions removed entirely for new apps. Data previously collected under v1 remains in apps' control.
December 2015
The Guardian publishes the first reporting on the existence of the Cambridge Analytica dataset and its use in U.S. political campaigns. Facebook demands certification that the data has been deleted but does not effectively verify the deletion or notify affected users.
2016
Cambridge Analytica's data and methods deployed in the 2016 U.S. presidential election (Trump campaign) and the U.K. Brexit referendum.
March 17, 2018
The Guardian, The New York Times, and Channel 4 News publish whistleblower Christopher Wylie's account of Cambridge Analytica's operations. Public reaction is immediate and severe. The #DeleteFacebook movement begins.
March-April 2018
Facebook's market capitalization declines by approximately $100 billion over the weeks following the disclosures. Multiple senior Facebook executives depart in subsequent months.
April 10-11, 2018
Mark Zuckerberg testifies before the U.S. Senate Commerce and Judiciary Committees and the House Energy and Commerce Committee. European Parliament holds parallel hearings.
May 2, 2018
Cambridge Analytica enters administration in the U.K. and files for Chapter 7 bankruptcy in the U.S. Operations cease.
May 25, 2018
General Data Protection Regulation (GDPR) takes effect in the European Union. Data Protection Authorities and other regulators cite Cambridge Analytica extensively as the defining example of the consent-failure category that GDPR is intended to address.
June 28, 2018
California enacts the California Consumer Privacy Act (CCPA). The act is the first comprehensive state privacy law in the United States and a direct legislative response to the attention the Cambridge Analytica disclosures focused on platform data practices.
October 25, 2018
U.K. Information Commissioner's Office imposes £500,000 fine on Facebook — the maximum under pre-GDPR law and widely cited as an example of why GDPR's penalty regime needed to be more substantial than its predecessor.
December 5, 2018
U.K. Parliament Digital, Culture, Media and Sport Committee publishes its final Disinformation and Fake News report. The report includes substantial internal Facebook documents disclosed through the Six4Three v. Facebook litigation.
July 24, 2019
FTC announces $5 billion settlement with Facebook — the largest consumer protection penalty in U.S. history at the time. Settlement requires the establishment of an Independent Privacy Committee of the Board of Directors, individual quarterly certifications by Mark Zuckerberg and Facebook's Chief Privacy Officer, and other governance changes.
July 24, 2019
SEC concurrently announces $100 million settlement with Facebook for misleading disclosures to investors about the misuse of user data. The SEC action establishes that consent-failure incidents are also material securities-law disclosures.
February 2020
Facebook agrees to $550 million class action settlement (subsequently increased to $650 million) in Illinois Biometric Information Privacy Act litigation over facial recognition photo tagging. The settlement is a parallel state-law privacy enforcement track.
December 22, 2022
Federal court in California approves $725 million consumer class action settlement covering U.S. Facebook users affected by Cambridge Analytica and related data exposures.
July 30, 2024
Texas Attorney General Ken Paxton announces $1.4 billion settlement with Meta over Texas biometric privacy law violations — the largest state-level privacy enforcement action in U.S. history. The settlement extends the post-Cambridge Analytica enforcement trajectory to state AG actions.
The forward trajectory
The unresolved next steps in the post-Cambridge Analytica regulatory arc are federal comprehensive privacy legislation in the United States (which has been repeatedly proposed but not enacted), federal regulation of AI-based personalization and targeting (where the substantive concerns of Cambridge Analytica are most directly extended), and the harmonization or pre-emption of the state-law patchwork. Each of these developments would alter the operational compliance landscape for consumer platforms in ways that consumer-platform PE investments must anticipate.
Total impact: 87 million users affected, $5 billion FTC penalty (largest consumer protection penalty in U.S. history at the time), $725 million class action settlement, $100 million SEC penalty, £500,000 U.K. ICO fine, foundational precedent for consent-based data exposure as regulatory category equivalent to technical breach, accelerator of GDPR enforcement and the state privacy law cascade.
Executive Lessons
The Facebook-Cambridge Analytica case established several governance precedents that now define how consumer platforms, developer ecosystems, and any business with user-data APIs are regulated and operated. Read together, they are the operating manual for every CEO, CFO, CISO, Chief Privacy Officer, and audit committee member at a company that exposes user data through APIs or third-party integrations.
The consent-failure-as-breach precedent
The $5 billion FTC settlement established that user-consent failures and platform-policy enforcement failures are material privacy violations producing penalties equivalent to — or larger than — those imposed for technical breaches. The framework distinguishes between three categories of incident: technical compromise (unauthorized access despite the platform's intended controls), consent failure (data exposure within the platform's intended controls but outside what users would understand themselves to have consented to), and enforcement failure (the platform's failure to monitor and act on violations of its own policies). Cambridge Analytica involved categories two and three. The $5 billion penalty established that the regulatory cost of categories two and three is in the same range as the regulatory cost of category one and may exceed it for major platforms with FTC consent-order history.
For executives, the operational implication is that platform-policy enforcement is now a regulated function with material liability exposure. Companies operating developer platforms, APIs, or third-party integrations that expose user data should implement active monitoring for policy violations, audit logs for third-party data access, periodic independent reviews of policy compliance, and breach-response procedures that include consent-failure scenarios alongside traditional technical-breach scenarios.
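One form such monitoring can take is an anomaly check over third-party access logs: an app whose collected-profile count vastly exceeds its installer count exhibits the Kogan pattern (roughly 270,000 installers yielding 87 million profiles). The log schema and threshold below are hypothetical, a minimal sketch of the idea rather than a production detector:

```python
# Hypothetical sketch of third-party access monitoring: flag apps whose
# distinct collected-profile count far exceeds their distinct installer
# count. Log schema and threshold are invented for illustration.
from collections import defaultdict

def flag_overreach(access_log, ratio_threshold=100):
    """access_log: iterable of (app_id, installer_id, subject_id) events,
    where subject_id is the profile whose data was actually read."""
    installers = defaultdict(set)
    subjects = defaultdict(set)
    for app_id, installer_id, subject_id in access_log:
        installers[app_id].add(installer_id)
        subjects[app_id].add(subject_id)
    return {
        app_id
        for app_id in subjects
        if len(subjects[app_id]) > ratio_threshold * len(installers[app_id])
    }

# One installer pulling 500 distinct friend profiles trips the check;
# an app reading only its own installers' data does not.
log = [("quiz_app", "u1", f"friend{i}") for i in range(500)]
log += [("benign_app", "u1", "u1"), ("benign_app", "u2", "u2")]
print(flag_overreach(log))  # {'quiz_app'}
```

A check of this shape presumes the audit trail exists in the first place, which is precisely the logging obligation the paragraph above describes.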
The 2011 consent order and the recidivism premium
The $5 billion penalty's size was not driven solely by the severity of the Cambridge Analytica incident itself. It was driven by Facebook's status as a repeat offender under the 2011 FTC consent order, which had been imposed in connection with Facebook's earlier privacy-setting changes and which required Facebook to obtain user consent for privacy-impacting changes and submit to 20 years of independent privacy assessments. The FTC's 2019 action treated the Cambridge Analytica incident as a violation of that prior consent order. The lesson for companies with existing FTC consent orders, GDPR DPA decisions, state attorney general settlements, or sector-specific regulatory consent decrees is that the next material privacy incident will be priced against the prior order's expectations and against the cumulative compliance history — not as an isolated incident. Recidivism is now an enforcement multiplier.
The named-officer accountability precedent
The 2019 FTC settlement required Mark Zuckerberg and Facebook's Chief Privacy Officer to provide individual quarterly certifications that Facebook was in compliance with the order. False certifications would expose them to personal civil and criminal liability. The named-officer accountability framework had appeared in prior FTC settlements (LabMD, Equifax, others) but the Facebook settlement extended it to a CEO of a Fortune 500 company at a scale that altered subsequent regulatory practice. Boards of consumer platforms should expect that significant future privacy enforcement actions will include named-officer certification requirements as standard.
The board-level privacy oversight precedent
The 2019 settlement required Facebook to establish an Independent Privacy Committee of the Board of Directors with specific governance requirements: the committee's members had to be independent under heightened standards, the committee had to meet at a defined cadence with documented agendas, and the committee had to engage outside privacy expertise. The committee structure was modeled loosely on Sarbanes-Oxley audit committee requirements. The implication for boards of consumer platforms and other data-rich companies is that privacy oversight at the board level is now a substantive governance function comparable to financial audit oversight, not an aspect of general risk oversight delegated to a standing risk committee.
The GDPR parallel and post-CA regulatory acceleration
The Facebook-Cambridge Analytica disclosures came in March 2018, two months before the General Data Protection Regulation took effect on May 25, 2018. The temporal coincidence shaped GDPR's early enforcement posture: Data Protection Authorities across the European Union treated the case as a defining example of the consent-failure category and aligned their early enforcement on platforms accordingly. The U.K. Information Commissioner's Office's £500,000 fine — the maximum permitted under pre-GDPR law and a fraction of what GDPR would have permitted — became a frequently cited example of why GDPR's penalty regime needed to be more substantial than its predecessor. Subsequent GDPR enforcement actions against Facebook and other major platforms have routinely cited Cambridge Analytica as the precedent for the analytical framework distinguishing technical breach from consent failure.
The state-level privacy law cascade
California enacted the California Consumer Privacy Act (CCPA) in June 2018 — three months after Wylie's disclosures and approximately one year before the FTC settlement. The CCPA established consumer rights to know what personal information is collected, to delete personal information, and to opt out of the sale of personal information. The act has been amended and extended by the California Privacy Rights Act (CPRA), and similar legislation has been enacted in Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Texas, Oregon, Montana, and additional states. The state-by-state cascade is directly downstream of the Cambridge Analytica disclosures' effect on public and legislative attention to platform data practices. Companies operating consumer platforms now contend with a patchwork of state privacy laws whose substantive requirements and penalty regimes vary, and whose enforcement is increasingly active.
The 2024 state AG action
Texas Attorney General Ken Paxton announced in July 2024 a $1.4 billion settlement with Meta in connection with Texas's enforcement of its biometric privacy law against Facebook's facial-recognition photo tagging feature. The settlement is the largest state-level privacy enforcement action in U.S. history. The Texas action and Illinois's prior BIPA litigation against Facebook (settled for $650 million in 2020) demonstrate that state AGs are now substantive privacy enforcement actors with penalty capabilities approaching FTC scale. For companies operating in privacy-sensitive sectors, the operational implication is that compliance must be designed for the highest-enforcement state, not for federal-level expectations.
The post-CA platform-governance evolution
The Cambridge Analytica case accelerated several platform-governance trends that were nascent before 2018: app-store privacy labels (Apple iOS 14, December 2020), the deprecation of third-party cookies in major browsers (Safari Intelligent Tracking Prevention, Firefox Enhanced Tracking Protection, Chrome's Privacy Sandbox), the App Tracking Transparency framework (Apple iOS 14.5, April 2021), and the Digital Services Act and Digital Markets Act in the European Union (effective 2023-2024). Each of these regulatory and product changes responds in substantive ways to the architectural critique that Cambridge Analytica crystallized: that the original Open Graph and adjacent advertising-technology architectures permitted data exposure at a scale that users could not meaningfully understand or control.
Related Reading
- What is Data Loss Prevention (DLP)?
- What is API Security?
- What is a Cyber Risk Assessment?
- What is Cyber Due Diligence?
- Yahoo Data Breach — the SEC-enforcement precedent for disclosure failures
Private Equity Implications
The Cambridge Analytica case is foundational for private equity diligence on consumer platforms, social applications, marketing technology, advertising technology, data brokers, and any business with developer ecosystems or third-party data integrations. The case surfaced diligence dimensions — platform policy enforcement, API access governance, third-party data flow monitoring, user-consent architecture, and regulatory consent-order history — that are now standard considerations for technology and consumer-facing targets in privacy-sensitive jurisdictions.
API security and third-party integration diligence
Pre-Cambridge Analytica, API security in PE diligence was treated primarily as an application security concern: vulnerability scanning, authentication review, rate-limiting review. Post-Cambridge Analytica, API security diligence has expanded to include the governance dimension: which third parties have API access, what data they can access, what their contractual restrictions are, how their compliance with those restrictions is monitored, and what audit trails exist for actual access. For targets with developer ecosystems or B2B API products, the diligence question is no longer just "is the API secure" but "how does the target verify that the parties accessing the API are doing so in accordance with the terms they agreed to."
For targets that operate as data processors or data brokers — companies whose business model includes selling, sharing, or facilitating third-party access to user data — the diligence scope extends to the entire downstream chain of recipients. The Kogan-to-Cambridge Analytica transfer is the prototype: the target's direct customer may comply with terms, but a customer of that customer may not, and the target's regulatory exposure under FTC and GDPR enforcement standards extends down the chain. Standard practice for sophisticated technology M&A now includes review of downstream data-flow contracts and, where possible, audit rights for the buyer to verify post-closing compliance.
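The downstream-chain review described above amounts to reachability analysis over recorded or contracted data transfers. A minimal sketch, with hypothetical party names modeled on the platform-to-Kogan-to-Cambridge Analytica chain:

```python
# Illustrative sketch of downstream data-flow diligence: given a map of
# known transfers, enumerate every party that can end up holding the data,
# not just direct customers. Party names are hypothetical.
from collections import deque

def downstream_recipients(transfers, source):
    """transfers: dict mapping holder -> list of parties it shares data with.
    Returns every party reachable from `source`, excluding source itself."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in transfers.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    seen.discard(source)
    return seen

transfers = {
    "platform": ["app_gsr"],             # contracted API access
    "app_gsr": ["cambridge_analytica"],  # uncontracted onward transfer
}
print(sorted(downstream_recipients(transfers, "platform")))
# ['app_gsr', 'cambridge_analytica']
```

The diligence gap the sketch exposes is the second hop: the target's contracts typically document only the first edge, while regulatory exposure follows the full reachable set.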
The FTC consent-order workstream
For acquirers of consumer platforms, social applications, or any company with FTC enforcement history, diligence on existing FTC consent orders is now a discrete workstream. The diligence questions are: does the target have an active FTC consent order; what does the order require; is the target in compliance; what is the cost of compliance; what is the residual exposure of any pending FTC inquiry; and what is the effect of the proposed transaction on the order (assignment, modification, termination, or new obligations).
The Facebook 2011 consent order's 20-year duration is a significant operational detail. Companies with FTC consent orders may carry those orders for time horizons that exceed typical PE hold periods. Acquirers may inherit the obligation and the enforcement risk; the obligation does not extinguish with the change of ownership. Standard practice now includes specific FTC consent-order representations in M&A agreements, indemnification structures sized against the consent order's penalty exposure, and post-closing audit committee oversight of consent-order compliance.
GDPR and DPA enforcement exposure
For targets operating in or marketing to the European Union, the United Kingdom, or other GDPR-adjacent jurisdictions, diligence must include review of the target's GDPR compliance posture, prior DPA inquiries and enforcement actions, the adequacy of the target's data protection impact assessments, the existence and competency of the Data Protection Officer function, and the structure of cross-border data transfer mechanisms (Standard Contractual Clauses, adequacy decisions, Binding Corporate Rules). GDPR Article 83 permits administrative fines up to 4% of global annual turnover or €20 million, whichever is greater, for serious violations. For consumer platforms with substantial EU operations, this exposure category alone can rival or exceed the deal value in extreme cases.
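The Article 83 cap is simple enough to state as a worked calculation, which is useful for sizing this exposure category in a deal model. A minimal sketch of the serious-violation tier only (the statute's lower tier and its mitigating factors are omitted):

```python
# GDPR Article 83(5) cap for serious violations, as described above:
# the greater of 4% of global annual turnover or EUR 20 million.
def gdpr_max_fine_eur(global_annual_turnover_eur):
    return max(0.04 * global_annual_turnover_eur, 20_000_000)

# A platform with EUR 10B turnover faces a EUR 400M cap;
# a firm with EUR 100M turnover still faces the EUR 20M floor.
print(gdpr_max_fine_eur(10_000_000_000))  # 400000000.0
print(gdpr_max_fine_eur(100_000_000))     # 20000000
```

The turnover-linked structure is the point: the cap scales with the target's top line, so for large consumer platforms the theoretical exposure grows with the business itself.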
State privacy law patchwork and the compliance-by-jurisdiction model
The state-by-state privacy law cascade that followed Cambridge Analytica — CCPA, CPRA, Virginia CDPA, Colorado CPA, Connecticut CTDPA, Utah UCPA, and successors across nearly twenty additional states — creates a compliance environment in which targets must operate to the highest-enforcement standard rather than to federal-level expectations. Diligence must therefore include review of the target's jurisdictional analysis: which state laws apply, how the target's compliance program addresses each, and what residual non-compliance risk exists. The diligence is materially complicated by the fact that the laws are not harmonized and that state Attorneys General have begun active enforcement that varies substantially in posture.
The July 2024 Texas Attorney General settlement with Meta — $1.4 billion for biometric privacy violations under Texas state law — established state AGs as substantive privacy enforcement actors. For targets with biometric data processing, geolocation data, or other categories that state laws specifically address, diligence must include review of state-specific exposure that may not appear in standard federal-level privacy reviews.
The board-level privacy oversight inheritance
For acquirers planning to maintain target operations under existing FTC consent orders or settlements that include board-level privacy oversight requirements (the Facebook Independent Privacy Committee being the prototype), diligence must confirm that the target's board governance structures meet the order's requirements and that the post-closing governance structure will continue to meet them. PE acquirers that intend to take controlled positions with majority board representation must specifically evaluate whether their planned board composition satisfies the independent-director standards the consent order may impose. This structural question can constrain the deal's governance plan in ways that bear directly on sponsor return profiles.
Cyber insurance, privacy-specific coverage, and post-CA underwriting
The cyber and privacy insurance markets evolved substantially after Cambridge Analytica to address consent-failure scenarios distinctly from technical-breach scenarios. Standard cyber policies pre-2018 typically excluded consent-failure scenarios or left their coverage ambiguous. Post-CA underwriting has produced policy structures that explicitly address regulatory penalties under privacy regimes (FTC, FCC, state AGs, GDPR DPAs), class action settlements for privacy violations, and the costs of regulatory consent-order compliance. For targets in privacy-sensitive sectors, diligence must include review of policy adequacy under post-CA underwriting standards and the adequacy of sublimits for regulatory penalty exposure.
The IPO-readiness implications
For PE sponsors planning IPO exits of consumer platforms, marketing technology, advertising technology, or data-related businesses, the post-CA regulatory environment substantially affects S-1 risk-factor disclosure, the substance of the company's privacy program at the time of registration, and the post-IPO compliance infrastructure. The SEC's 2023 Cybersecurity Disclosure Rules formalized expectations for cybersecurity disclosure; the parallel evolution in privacy-disclosure expectations has been less formal but no less real. Sophisticated underwriters and prospective public-market investors now ask about FTC consent-order history, GDPR DPA enforcement posture, state AG settlements, and the company's privacy-program maturity in ways that the post-CA environment specifically informs. Building this infrastructure during the S-1 process is too late; the audit-committee-level privacy oversight, the independent privacy assessments, and the documented compliance procedures must be in place well before the IPO process begins.
How Cloudskope Can Help
Cloudskope's Cyber Risk Assessment evaluates platform governance, API access controls, and third-party data flow monitoring for consumer-facing platforms. Our M&A Cyber Due Diligence specifically examines developer policy enforcement and user-consent architecture for technology targets where these dimensions represent material risk under FTC and GDPR enforcement standards.
Frequently Asked Questions
What was the Facebook-Cambridge Analytica scandal?
The Facebook-Cambridge Analytica incident exposed personal data and inferred psychographic profiles for approximately 87 million Facebook users without their knowledge or consent. The data was collected in 2014 by researcher Aleksandr Kogan via a Facebook-permitted personality-quiz app called "This Is Your Digital Life," then transferred to Cambridge Analytica in violation of Facebook's platform terms. Cambridge Analytica used the data in political microtargeting operations associated with the 2016 U.S. presidential election and the U.K. Brexit referendum.
How many people were affected?
Approximately 87 million Facebook user profiles were harvested. The collection mechanism is the key detail: Kogan's app was installed by only about 270,000 users, but Facebook's then-permitted Friends API allowed installed apps to collect data from those users' entire friend networks — names, locations, "likes," and other profile data — without explicit consent from the friends themselves.
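The amplification mechanics of that Friends API design can be sketched with a toy model. The data structures and function below are hypothetical, not the actual Graph API v1.0; the point is only to show how one installer's consent exposed every profile in that installer's friend network:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    friends: list = field(default_factory=list)  # list of User objects

def harvest(installers, friend_permission=True):
    """Toy model of pre-2015 friend-data amplification: with a
    friends-data permission granted by the installer alone, an app
    could read basic profile data for every friend of that installer."""
    exposed = set()
    for user in installers:
        exposed.add(user.name)  # the installer explicitly consented
        if friend_permission:
            # the friends never installed the app and never consented
            exposed.update(f.name for f in user.friends)
    return exposed

# One installer with three friends exposes four profiles:
alice = User("alice", friends=[User("bob"), User("carol"), User("dan")])
print(len(harvest([alice])))  # 4
```

At scale, this is how roughly 270,000 installs fanned out to approximately 87 million exposed profiles: the exposure multiplier was the average friend-network size, not the install count.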
How much did Facebook pay in fines and settlements?
Total direct cost exceeded $5.7 billion. The Federal Trade Commission imposed a $5 billion settlement in July 2019 — the largest consumer-protection penalty in U.S. history at the time of imposition. A class action settlement in December 2022 added $725 million for affected users. The U.K. Information Commissioner's Office imposed a £500,000 fine — the maximum permitted under pre-GDPR law. The SEC imposed a $100 million penalty for misleading investors about the risks. Facebook also agreed to ongoing privacy assessments and a board-level privacy oversight committee under the FTC settlement.
Was Facebook itself hacked?
No — and this is the foundational lesson of the case. Facebook's platform was not technically compromised; the data was collected through legitimate API access that Facebook had explicitly enabled and documented. The $5 billion FTC penalty established that consent-based data exposure can produce regulatory consequences equivalent to technical breach. Companies operating developer platforms or APIs that expose user data are now expected to actively monitor for policy violations, not rely on terms-of-service compliance as a control.
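One concrete control implied by that monitoring expectation is to flag third-party apps whose volume of distinct profile reads vastly exceeds their install base — precisely the asymmetry (about 270,000 installs versus roughly 87 million profiles) at issue in this case. The sketch below is a hypothetical platform-side check, not any vendor's actual tooling; the data and threshold are illustrative:

```python
def flag_over_collection(apps, ratio_threshold=10.0):
    """Hypothetical platform-governance control: flag apps whose count
    of distinct profiles read far exceeds their install count.
    `apps` maps app_id -> (installs, profiles_read)."""
    flagged = []
    for app_id, (installs, profiles_read) in apps.items():
        if installs > 0 and profiles_read / installs > ratio_threshold:
            flagged.append(app_id)
    return flagged

apps = {
    "weather_widget": (50_000, 52_000),          # reads roughly track installs
    "personality_quiz": (270_000, 87_000_000),   # the CA-style asymmetry
}
print(flag_over_collection(apps))  # ['personality_quiz']
```

A check like this treats terms-of-service compliance as a claim to be verified against observed data flows, rather than as the control itself — the shift in posture the FTC settlement demands.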
How did the data get to Cambridge Analytica?
Researcher Aleksandr Kogan transferred the dataset he had collected through "This Is Your Digital Life" to Cambridge Analytica in 2014, in violation of Facebook's platform terms, which prohibited such transfers. Facebook learned of the transfer in December 2015, requested certifications of deletion, and did not effectively verify the deletion or notify affected users. Whistleblower Christopher Wylie's March 2018 disclosures to The Guardian, The New York Times, and Channel 4 News revealed the operation publicly.
What did Cambridge Analytica do with the data?
Cambridge Analytica constructed psychographic profiles intended to predict political preferences and persuasion susceptibility. The profiles were used in political microtargeting operations associated with the 2016 Ted Cruz primary campaign, the 2016 Trump general election campaign, and the U.K. Brexit referendum. Cambridge Analytica entered administration in the U.K. and filed for Chapter 7 bankruptcy in the U.S. in May 2018, ceasing operations.
What does the Cambridge Analytica case mean for technology investments?
The case is foundational for PE diligence on consumer platforms, social applications, and any business with developer ecosystems. Diligence dimensions it surfaced — platform policy enforcement, API access governance, third-party data flow monitoring, and user-consent architecture — are now standard considerations for technology targets in privacy-sensitive jurisdictions. Targets that cannot demonstrate active enforcement of their platform policies carry structural privacy risk that may not appear in standard compliance audits but materializes in regulatory action under FTC, GDPR, and state privacy law.