At the end of July, New York Governor Andrew Cuomo signed into law the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act). The SHIELD Act amends and expands New York’s current data breach notification law and may affect persons or companies that do not even conduct business in New York. Here’s what you need to know ahead of the March 21, 2020, effective date of the law’s data security requirements (the amended breach notification provisions take effect earlier, on October 23, 2019):
Who must comply?
Any person or business that owns or licenses computerized data that includes private information of New York residents must comply with the SHIELD Act, regardless of whether that person or business even conducts business in New York.
An expanded definition of "private information."
New York’s data breach notification law has long differed from similar laws in other states in that it defines both “personal information” and “private information.” Under the SHIELD Act, “personal information” remains “any information concerning a natural person which, because of name, number, personal mark, or other identifier, can be used to identify such natural person.” “Private information” captures the information that, if breached, could trigger a notification requirement. The SHIELD Act expands “private information” to include:
Personal information consisting of any information in combination with one or more of the following data elements, when either the data element or the combination of personal information plus the data element is not encrypted, or is encrypted with an encryption key that has also been accessed or acquired:
Social security number,
Driver’s license number or non-driver identification card number,
Financial account numbers with required security codes or access codes,
Biometric information used to authenticate or ascertain an individual’s identity, such as a fingerprint, voice print, or retina or iris image, or
A user name or e-mail address in combination with a password or security question and answer that would permit access to an online account.
Access alone constitutes a "breach of the security of the system."
The SHIELD Act broadens the phrase “breach of the security of the system,” which consequently broadens the circumstances under which notification is required. Notably, the SHIELD Act includes in the definition of “breach of the security of the system” incidents that involve “access” to private information, regardless of whether the access led to “acquisition” of the information. Under the original New York data breach notification law, data must have been acquired to constitute a breach. The SHIELD Act keeps intact certain exceptions to the definition of “breach” including the “good faith employee” exception and provides factors for determining whether there has been unauthorized access to private information.
Notably, companies that are already subject to data breach notification requirements under certain applicable state or federal laws, including HIPAA, the GLBA, and the NYS DFS Cybersecurity Regulation (23 NYCRR 500), are not required to further notify affected individuals. However, notifications to the New York Attorney General, the New York Department of State’s Division of Consumer Protection, and the New York State Police are still required.
A risk assessment is now permitted.
The SHIELD Act does not require notification of the breach if “exposure of private information” was an “inadvertent disclosure and the individual or business reasonably determines such exposure will not likely result in misuse of such information, or financial harm to the affected persons or emotional harm in the case of unknown disclosure of online credentials.” This risk assessment should be memorialized in writing.
Reasonable data security requirements are imposed.
The SHIELD Act also imposes data security requirements on any person or business that owns or licenses computerized data that includes private information of New York residents. These security requirements must be designed to protect the security, confidentiality, and integrity of the private information. The SHIELD Act provides examples of practices that are considered reasonable, including: (i) risk assessments, (ii) employee training, (iii) due diligence for vendor selection, and (iv) data retention and disposal policies.
Companies subject to HIPAA and the GLBA are already deemed to be in compliance with these requirements. While the requirement applies to businesses of all sizes, a small business may implement and maintain safeguards that are “appropriate for the size and complexity of the small business, the nature and scope of the small business’s activities, and the sensitivity of the personal information the small business collects from or about consumers.” For purposes of the SHIELD Act, a small business is any business with fewer than 50 employees, less than $3 million in gross annual revenue in each of the last three fiscal years, or less than $5 million in year-end total assets.
There are potential penalties.
While the SHIELD Act does not provide for a private right of action, the attorney general may bring an action to enjoin violations of the law and obtain civil penalties. For data breach notification violations that are not reckless or knowing, the court may award damages for actual costs or losses incurred by a person entitled to notice. For data breach notification violations that are knowing or reckless, the court may impose penalties of the greater of $5,000 or up to $20 per instance of failed notification, with a cap of $250,000. For violations of the reasonable security requirements, the court may impose penalties of not more than $5,000 per violation.
If you have further questions about the SHIELD Act and how it may impact your business, employees, or consumers, please contact a member of our team.
At this point, over 15 states have proposed privacy legislation similar to the California Consumer Privacy Act (CCPA). Most recently, however, New York has drafted a proposed law that, in some instances, goes well beyond the protections afforded by the CCPA. Sponsored by State Senator Kevin Thomas, the New York Privacy Act would give New York residents more control over their data than residents of any other state and would fund a new office of privacy and data protection.
Like the CCPA, the NY Privacy Act would allow residents to see what data companies are collecting about them and where it is being shared, request that it be corrected or deleted, and opt out of having their data shared with or sold to third parties. But as currently drafted, the bill offers residents more protections than the CCPA. Here are some of the key distinctions:
Private Right of Action. As written, the bill would give individuals the right to sue companies directly for violations of the law. An attempt to include a comparable right of action in the CCPA failed, leaving enforcement responsibility largely with the state’s attorney general.
Expanded Coverage. The CCPA applies only to businesses that meet certain thresholds, such as more than $25 million in annual gross revenue. The New York bill applies to companies of any size.
Data Fiduciaries. In the biggest departure from other state privacy laws, the bill would require businesses to act as “data fiduciaries.” Much as professionals such as attorneys are required to safeguard client information and not share it without a legitimate purpose, companies would be barred from using data in a way that causes residents financial or physical harm or in a way that would be “unexpected and highly offensive to a reasonable consumer.” The bill also states that this duty supersedes companies’ fiduciary duties to shareholders.
As expected, this bill isn’t sitting well with many companies. While the requirements under this bill are more restrictive than those of other states, the sheer burden of having to track and comply with varying state laws is motivating many companies to push for a federal law.
The full text of the bill can be read here.
As we’ve reported, the California Consumer Privacy Act of 2018 (the “CCPA”) was facing an amendment that would have seriously strengthened its enforcement power. The amendment, introduced on February 22, 2019, by California State Senator Hannah-Beth Jackson, sought to expand the CCPA’s private right of action and remove the thirty-day cure period required for enforcement actions brought by the state’s attorney general. However, the amendment did not receive a vote in the Senate Appropriations Committee, effectively blocking the bill.
Specifically, the bill sought to allow consumers whose rights were violated under the CCPA to bring a private right of action. As the CCPA currently stands, the private right of action is limited to circumstances where a consumer’s non-encrypted or non-redacted personal information is part of a data breach that occurs as a result of a business’s failure to maintain reasonable security measures. Enforcement actions for other violations can only be brought by the Attorney General’s Office.
While SB 561 is blocked and no longer threatens to expand consumers’ private right of action, penalties under the CCPA will still be powerful. Statutory damages for violations of the Act range from $100 to $750 per consumer per incident, or actual damages, whichever is greater. Remedies also can include injunctive or declaratory relief. For actions seeking statutory damages, a consumer must provide a business with thirty days’ written notice and an opportunity to cure the violation. If the business cures, the consumer cannot bring an action for statutory damages. For actual damages, a consumer is not required to provide thirty days’ notice and an opportunity to cure.
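As a rough sketch of the damages comparison described above, the hypothetical helper below takes a court-selected per-consumer statutory amount (which must fall between $100 and $750) and returns the greater of statutory or actual damages. The function name and inputs are illustrative assumptions, not anything prescribed by the CCPA:

```python
def ccpa_recovery(consumers: int, per_consumer: int, actual_damages: int) -> int:
    """Illustrative comparison only: CCPA statutory damages run
    $100-$750 per consumer per incident, or actual damages,
    whichever is greater."""
    if not 100 <= per_consumer <= 750:
        raise ValueError("per-consumer statutory damages must be $100-$750")
    return max(consumers * per_consumer, actual_damages)

print(ccpa_recovery(1_000, 100, 50_000))  # statutory damages exceed actual damages
print(ccpa_recovery(10, 750, 50_000))     # actual damages control
```

Even at the statutory floor, a breach affecting many consumers can quickly dwarf provable actual losses, which is why the private right of action’s scope was so contested.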
Canada’s comprehensive privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA), has permitted companies in receipt of individuals’ personal information to transfer such data outside Canada for processing or storage without the express consent of the individuals. That may change, however.
This potential change arises from the 2017 Equifax data breach. In its wake, Canada’s Office of the Privacy Commissioner (OPC) determined that the personal information of over 19,000 Canadians had been compromised. They had provided their personal information to Equifax Canada, which had transferred their information for processing and storage to its U.S.-based affiliate, Equifax, Inc., the subject of the subsequent breach. Because the cross-border transfer for processing was consistent with the purpose for which the individuals originally provided their data, their express consent to that transfer was not required, pursuant to OPC guidance in place since 2009.
As a direct result of the compromise of the Canadians’ personal information, last month the OPC issued a proposal that would require Canadians’ consent to similar cross-border transfers in the future. It would accomplish this by reclassifying such transfers from “uses” to “disclosures.” A “use” of personal information by a recipient is something consistent with the original purpose for which it was given – e.g., processing or storage – whereas “disclosure” is for a different purpose altogether – e.g., sending it to marketing research or advertising agencies. The former does not require express consent of individuals, whereas the latter does. Thus under the OPC’s proposal, even transfer of data to a U.S.-based affiliate or vendor for storage would require the individual’s express consent. Obtaining express consent would include providing individuals with alternatives to the transfer of their information outside Canada.
U.S. companies that receive personal data of Canadians should be aware that the proposed changes could increase the cost and complexity of cross-border transfers. Their Canadian affiliates may demand more burdensome arrangements and compliance procedures for handling such information.
It remains to be seen whether this proposal will take effect. A comment period on the OPC’s proposal remains open until June 28, 2019.
In March 2018, shortly after it had been revealed that Facebook had allowed Cambridge Analytica to collect data from millions of users without their knowledge, the Federal Trade Commission (“FTC”) announced that it planned to investigate Facebook’s data privacy practices. A year later, the social media giant is preparing for the FTC to impose a series of fines that could reach up to $5 billion, which would be the largest penalty the FTC has ever imposed on a technology company. Facebook had annual revenue of approximately $56 billion last year and, as such, many believe the upcoming penalty to be relatively lenient given the gravity of the charges levied against Facebook. This is especially true in light of the fact that Facebook breached a settlement that it had reached with the FTC seven years earlier. As part of the earlier settlement, Facebook was required to obtain permission from users before distributing data beyond the privacy settings set by each user.
Although relatively limited in its enforcement power with respect to consent decrees, the FTC has been able to leverage the support of the public in its investigation of Facebook. Indeed, lawmakers have been calling for increased scrutiny of tech companies, an area in which the United States is decidedly behind its European counterparts. Despite the record-setting fine set to be imposed, though, many lawmakers believe the penalty to amount to nothing more than a slap on the wrist given Facebook’s financial power. Many lawmakers and other political activists believe that regulators should impose reforms aimed at the ability of technology companies to share data with business partners from the outset, which would have more of a lasting impact on consumer privacy practices in the technology industry.
We’ve previously discussed the ambiguities throughout California’s landmark privacy legislation, the California Consumer Privacy Act (the “CCPA”). The CCPA, passed in June 2018, creates several privacy rights for Californians. However, as the January 1, 2020 effective date looms ahead, many hoped that the California Assembly Privacy and Consumer Protection Committee (the “Committee”) would clarify several compliance provisions. Fortunately, this past Tuesday, April 23, the Committee did just that. Significantly, the Committee clarified the following:
Employees are not “consumers” for purposes of the CCPA, as long as the personal data is collected and used only in the employment context. In the case of contractors, a written agreement must be in place.
Personal Information does not include all “information that is … capable of being associated” with a particular individual or household, but just information that is “reasonably capable” of being associated.
Information found in the public record is exempted from the definition of “personal information.”
De-identified data means data that does not identify and is not reasonably linkable, directly or indirectly, to a specific consumer, so long as the business makes no attempt to re-identify the data and takes reasonable measures to: (1) ensure that the data remains de-identified; (2) publicly commit to maintain and use the data in its de-identified form; and (3) require by contract that any recipients of the data maintain the de-identified form. This clarification will likely motivate businesses to maintain data in a de-identified form to limit liability under the CCPA.
Loyalty programs are exempt from the CCPA’s “non-discrimination” restrictions.
These bills now must be considered by the Senate Judiciary Committee before they become law and are incorporated into the CCPA.
In the days following the 2016 United States presidential election, many were left wondering how the country had become so divided. Never before had voters on either side of the aisle come to the polls with not only different opinions, but different facts upon which those opinions were based. This realization led to the ongoing period of reflection that still envelops the country. News sources have not been impervious to such reflection and have begun to examine the vetting process they employ with respect to the “news” they publish. Aside from obvious ethical and professional standards, the 2016 presidential election provided perhaps the most jarring example of the effect of inadequately vetted, partisan news.
One such news source that has received scrutiny is Facebook, Inc. The social media giant had to reconcile the fact that it had become a primary source of news for millions of individuals, a role for which it was decidedly ill-equipped. As a potential solution, Facebook launched a “global fact-checking initiative” in December 2016. This initiative involves, in part, employing groups of fact-checkers to review news published on the site. When an article has been deemed to be uncorroborated or misleading, the fact-checkers are tasked with publishing an explanatory article, notifying the user that posted the misleading article and ensuring the misleading article is then shown less prominently on the site.
Facebook currently partners with 43 fact-checking organizations across the world, covering news in 24 different languages. However, the fact-checkers themselves are uncertain as to whether they are having a material impact. Facebook requires fact-checkers to sign non-disclosure agreements, but this has not stopped many from anonymously speaking up about Facebook’s lackluster fact-checking process. Editors have reported feeling underutilized and have admitted that fact-checking, despite outward appearances, is not a priority for the Facebook brass. In fact, editors have noted that certain fact-checking groups cease operations when nearing the payment cap, a cap on the number of fact-checks for which Facebook has agreed to pay in a given month. This cap on explanatory articles results in a backlog from month to month, and the current cap is not nearly enough to allow a thorough fact-check of the many articles posted to Facebook each month. Indeed, one fact-checker noted that its firm had nearly 500 articles in queue to be checked at the end of a certain month.
The current initiative will have to undergo an overhaul, especially when the number of articles to be reviewed is combined with articles posted to Facebook’s other networks, such as WhatsApp and Instagram. Facebook is acutely aware of the shortcomings of the current process, but as Mark Zuckerberg and other executives begin to explore alternatives, it appears a solution to this Herculean task remains far off.
On March 15, 2019, the Federal Trade Commission (FTC) released its Privacy & Data Security Update: 2018, a report summarizing its work on these topics in calendar year 2018.
The FTC, as the body charged with enhancing competition and protecting consumers, detailed its efforts over the past year to attempt to stop privacy and security violations and to require companies to remediate any unlawful practices.
The report highlights the FTC’s notable enforcement actions last year, including a settlement with PayPal, Inc. addressing allegedly deceptive privacy settings in its Venmo service line, as well as a judgment exceeding $700,000 against Alliance Law Group for alleged collection of fake debts by individuals posing as attorneys. It also summarizes settlements obtained with VTech Electronics Limited and Explore Talent for alleged violations of the Children’s Online Privacy Protection Act (COPPA).
In addition to enforcement actions, the Update discusses the FTC’s outreach efforts last year, including the various guidance and educational materials promulgated by the FTC in 2018 on topics such as cybersecurity tips for small businesses and items for consumers to consider before using Virtual Private Network (VPN) apps. It also mentions hearings hosted by the FTC on data security; competition and consumer protection issues surrounding the use of artificial intelligence, algorithms, and predictive analytics; and privacy and competition issues related to big data.
The Update also discusses reports issued by the FTC in 2018, including one addressing the complex nature of patching mobile operating systems and one highlighting key points from the FTC and National Highway Traffic Safety Administration’s workshop on connected cars.
On an international level, the Update details the FTC’s engagement with international organizations, privacy authorities in other countries and global privacy authority networks on mutual enforcement of privacy and security requirements, as well as investigation cooperation.
The Update illustrates that 2018 was an active year for the FTC. Given enforcement actions and FTC community outreach thus far in 2019, we do not expect that trend to slow this year. Businesses should ensure that their privacy and security practices remain compliant with the FTC Act and any other applicable laws and regulations governing their industry. In particular, entities should review their privacy policies to ensure that the terms of these documents remain in line with their privacy practices and are not misleading to consumers.
The FTC Privacy & Data Security Update: 2018 can be found here.
The Fair and Accurate Credit Transactions Act of 2003 (FACTA) prohibits anyone who accepts credit or debit cards as payment from printing more than the last five digits of a customer’s credit card number on a receipt. A plaintiff, Ahmed Kamal, sued several J. Crew entities after receiving three receipts that included both the first six and last four digits of his credit card number. The United States District Court for the District of New Jersey dismissed the lawsuit for lack of standing based upon its determination that Kamal did not suffer a concrete injury from the alleged violation. On appeal, the United States Court of Appeals for the Third Circuit affirmed the determination that Kamal lacked standing to litigate his FACTA claims. Kamal v. J. Crew Group, Inc., et al., Nos. 17-2345 and 17-2453 (3d Cir. Mar. 8, 2019).
Kamal pled a technical violation of FACTA’s ban on printing more than the last five digits of a consumer’s credit card number, but the Third Circuit addressed whether the alleged resulting harm is sufficiently concrete to create case or controversy under Article III of the United States Constitution. The United States Supreme Court held in Spokeo, Inc. v. Robins, 136 S. Ct. 1540, 1549 (2016), that “Article III standing requires concrete injury even in the context of a statutory violation.” A procedural violation must yield or risk actual harm to meet the requirements of Article III. Interpreting Spokeo, the Third Circuit held that an alleged procedural violation manifests a concrete injury if the violation actually harms or presents a material risk of harm to the underlying concrete interest.
Kamal pled two alleged concrete injuries: the printing of the prohibited receipts and the increased risk of identity theft resulting from that printing. Kamal failed to allege the actual disclosure of his information to a third party. The Third Circuit held that Kamal failed to plausibly allege how J. Crew’s printing of the six digits presented a material risk of concrete, particularized harm. Absent a sufficient degree of risk, J. Crew’s alleged violation of FACTA was a “bare procedural violation” that is insufficient to confer Article III standing. The Third Circuit noted that its analysis would have differed if Kamal had alleged that the receipt included all sixteen digits of his credit card number, making the potential for fraud significantly less conjectural. The appellate court also rejected Kamal’s contention that his alleged injuries were sufficiently concrete because they are similar to common law privacy torts or breach of confidence actions that have been recognized by courts, concluding that those common law causes of action require that the actionable harm occurs when a third party gains unauthorized access to a plaintiff’s personal information, which Kamal had not shown.
Overall, the Third Circuit concluded that Kamal’s speculative chain of alleged potential events does not satisfy the requisite showing of a material risk of harm. It noted that its holding is consistent with decisions of sister federal circuit courts of appeals that have addressed similar FACTA issues.
As we have discussed on our blog, federal courts interpreting Spokeo have reached differing results that often turn on factual nuances. We will continue to analyze and report on how federal courts interpret the requisite showing of an Article III case or controversy in light of Spokeo.
On February 25, 2019, the California Attorney General Xavier Becerra and Senator Hannah-Beth Jackson introduced proposed amendments (SB 561) to the California Consumer Privacy Act (CCPA), which was enacted in June 2018.
We previously discussed the breadth and novelty of the CCPA. SB 561 proposes to expand and strengthen the CCPA. Specifically, SB 561 would:
- Expand the consumer’s right to bring a private cause of action if their rights under the CCPA are violated. As written currently, the CCPA only gives a consumer a private right of action if their non-encrypted or non-redacted personal information is subject to “unauthorized access and exfiltration, theft or disclosure as a result of the business’ violation of the duty to implement and maintain reasonable security procedures...."
- Remove language allowing businesses 30 days to cure an alleged violation.
- Remove language permitting a business or other third party to seek the opinion of the attorney general for guidance on how to comply with the CCPA. Rather, the proposed amendment specifies that the attorney general may publish materials that provide general guidance on compliance.
If enacted, this would be the second amendment to the CCPA, which is set to become effective on January 1, 2020.