Special thanks to Vincent Tennant for his contributions to this post.
The Electronic Frontier Foundation (“EFF”) and the American Civil Liberties Union (“ACLU”) have ended six years of litigation with the Los Angeles County Sheriff’s Department and the Los Angeles Police Department over the automated collection of license plate data. On October 3, 2019, the parties reached a settlement under which the EFF and ACLU will receive a limited amount of the de-identified data for the purposes of reviewing how this data could be used by the government and educating the public.
Throughout the city and county of Los Angeles, automated license plate reader (ALPR) systems have been implemented with the capacity to collect the images of up to 1,800 license plates per minute. California’s ALPR systems include fixed cameras as well as cameras mounted on police vehicles. The cameras scan every license plate that crosses their field of view.
Most recently, prior to the settlement, the EFF and ACLU won at the California Supreme Court, which ruled that the ALPR data are not “records of law enforcement investigations” and therefore not protected against disclosure requests under the California Public Records Act.
Throughout the litigation, the ACLU and EFF had requested one week’s worth of de-identified data from the ALPR system “so that the legal and policy implications of the government’s use of ALPRs to collect vast amounts of information on almost exclusively law-abiding [citizens of Los Angeles] may be fully and fairly debated.” The EFF reports that it will receive exactly the requested amount in the settlement.
Government agencies are not covered entities under the California Consumer Privacy Act (“CCPA”), which comes into effect on January 1, 2020, so they remain subject to the privacy and transparency regimes currently in effect. The EFF hails its victory at the California Supreme Court and the subsequent settlement as an important precedent for future challenges to broad-based data collection and surveillance by government agencies, since the CCPA will impose privacy regulations only on private actors.
Special thanks to James Ingram for his contributions to this post.
Is the use of automated “data-scraping” bots to collect information from public LinkedIn profiles fair game under the Computer Fraud and Abuse Act (CFAA)? According to the Ninth Circuit’s recent ruling in hiQ Labs, Inc. v. LinkedIn Corporation, No. 17-16783, 2019 WL 4251889 (9th Cir. Sept. 9, 2019), the answer is likely “yes.”
In hiQ Labs, LinkedIn sent data analytics company hiQ a cease-and-desist letter demanding that hiQ stop scraping data from LinkedIn users’ public profiles and asserting that continuation of the practice would constitute a violation of the CFAA. hiQ, in turn, sought a preliminary injunction to enjoin LinkedIn from invoking the CFAA against it.
The CFAA, codified at 18 U.S.C. § 1030, prohibits the intentional accessing of a protected computer “without authorization” in order to obtain information from it. The Ninth Circuit considered the meaning of the phrase “without authorization” and determined that its use in the statute is meant to protect against the digital equivalent of “breaking and entering.” As such, simply collecting publicly available data from a website like LinkedIn does not give rise to a CFAA violation. The court rather indicated that the CFAA is violated only “when a person circumvents a computer’s generally applicable rules regarding access permissions, such as username and password requirements, to gain access to a computer.”
Applying this framework, the court found that there is a serious question as to whether hiQ’s data-scraping practices violate the CFAA, and granted hiQ’s motion for a preliminary injunction. It noted that LinkedIn does not claim to own the information that its users share on their public profiles and that such information is available without a username or password to anyone with access to a web browser. The court also rejected LinkedIn’s argument that an injunction would threaten the privacy of its members, finding “little evidence that LinkedIn users who choose to make their profiles public actually maintain an expectation of privacy with respect to the information that they post publicly . . .”
The court’s decision at this stage of litigation is certainly encouraging for hiQ and others engaged in similar data collection practices. The NP Privacy Partner team will continue to monitor developments in this case, but in the meantime: (i) companies seeking to protect user data should ensure that protective measures, such as required usernames and passwords, are in place to create a clear barrier between public data and that which is accessed without authorization, and (ii) LinkedIn users should be aware that information posted to their public profiles may very well end up in the hands of third-party data collectors.
On September 16, 2019, we reported on a number of bills passed by the California Legislature in the final days of the session, amending the California Consumer Privacy Act. On October 13, 2019, Governor Gavin Newsom signed those bills into law. To recap briefly, they are:
AB 25: Exempts from the scope of the Act information collected in an employment context, i.e., information collected in a job application, or from employees, directors, business owners, medical staff, or contractors. However, the private right of action for negligently allowing the disclosure of such information in Civil Code 1798.150 still applies.
AB 874: Simplifies the definition of “publicly available information,” which does not count as “personal information” under the Act. Eliminates the restriction that information obtained from a public source is only exempt from the definition of personal information if it is used for the same purpose that it was gathered by the public entity.
AB 1146: Exempts information maintained or exchanged between an auto dealer and a manufacturer for warranty or recall purposes from certain obligations under the Act. Such information cannot be the subject of a request to delete, and sharing of the information between a dealer and manufacturer does not trigger an obligation to disclose it as a “sale” of such information.
AB 1202: Adds new sections Civil Code 1798.99.80-82. Requires all data brokers to register with the attorney general. A data broker is any business that knowingly collects and sells (broadly defined) personal information regarding persons with whom it has no direct relationship.
AB 1355: Exempts deidentified and aggregate information from the definition of “personal information” in the Act; also clarifies the interrelationship of the Act and the Fair Credit Reporting Act.
AB 1564: Streamlines the methods businesses must make available to consumers for submitting requests to disclose their personal information. A business that operates exclusively online and has a direct relationship with the consumer need only make a single online method available for such requests. However, a business that maintains a website must include the website as one of the methods for receiving such requests.
In addition, the governor signed AB 1130, which amends the state’s data breach notification law. It revises the definition of personal information for breach notification purposes to add specified unique biometric data and tax identification numbers, passport numbers, military identification numbers, and unique identification numbers issued on a government document in addition to the existing categories that already include driver’s licenses and California identification cards. Upon a breach of biometric data, the breach notice now must include instructions on how the consumer can notify entities who may be relying on such data for identification purposes to let them know that it is no longer secure.
A California Court of Appeal recently affirmed a lower court ruling in favor of Williams-Sonoma in a case under the Song-Beverly Credit Card Act of 1971 (the “Act”) challenging the store’s practice of soliciting consumer personal information at checkout. Williams-Sonoma Song-Beverly Act Cases, 2019 DJDAR 9435 (Ct. App., 1st Dist., Sept. 30, 2019).
The Act makes it illegal, in a credit card transaction, to “request, or require as a condition to accepting the credit card as payment …, the cardholder to provide personal information which the [merchant] causes to be written, or otherwise records, upon the credit card transaction form or otherwise.” Civil Code § 1747.08(a)(2). Plaintiffs brought a class action alleging that Williams-Sonoma broke the law by asking customers for their ZIP code and other personal information in the middle of processing their credit card transactions at checkout.
Williams-Sonoma countered that store employees were not uniform in how they asked for the information at checkout, that providing the information was voluntary, and that signs were prominently posted at checkout advising customers that they did not have to provide the information as a condition of making a purchase.
Following a long line of cases under the Act, the court affirmed the lower court’s determination that the applicable standard was whether a reasonable person would believe he or she was compelled to provide the information as a condition to completing the transaction based on all the circumstances. It declined to adopt plaintiffs’ proffered rule that asking for the information in the middle of processing the transaction was a per se violation. The court also affirmed the lower court’s order decertifying the class, based on plaintiffs’ failure to establish that the circumstances at checkout were sufficiently uniform so as to constitute a common issue.
A California merchant asking for personal information at checkout for marketing purposes may want to review the policies and procedures Williams-Sonoma put in place, as described in the opinion, including employee training, which allowed the company to prevail in this case.
Special thanks to Tevin Hopkins for his contributions to this post.
Over the past several months, the 2020 Census has been a growing concern for many—from the Trump administration’s efforts to include a citizenship question to concerns that the process of counting every single person living in the country may not receive the funding it needs. However, there is another issue that should be just as alarming. Recently, the U.S. Census Bureau conducted an experiment with previously acquired census data to determine whether the information people provide to the Bureau could threaten their privacy. Combining this information with other publicly available records, the agency discovered that it was able to infer the identities of 52 million Americans. To combat this privacy issue, the Bureau will use a technique called “differential privacy,” which alters certain numbers in the published statistics to protect identities while retaining the survey’s primary findings. How effective this strategy will be remains to be seen. If the published results are altered too much, redistricting could be affected and minority voting power diluted, possibly violating the Voting Rights Act.
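The core idea behind differential privacy can be sketched with the classic Laplace mechanism: calibrated random noise is added to each published count so that any one person’s presence or absence barely changes the output. The sketch below is only a minimal illustration of that concept, not the Census Bureau’s actual (far more elaborate) algorithm; the function name and parameter choices are hypothetical.

```python
import math
import random

def dp_count(true_count, epsilon):
    """Return an epsilon-differentially private version of a count.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so adding Laplace noise with
    scale 1/epsilon satisfies epsilon-differential privacy.
    """
    # Sample Laplace(0, 1/epsilon) noise via inverse transform sampling.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Example: publish a noisy version of a block-level population count.
random.seed(0)  # seeded only so this sketch is reproducible
noisy = dp_count(120, epsilon=0.5)
```

A smaller epsilon means stronger privacy but noisier statistics—exactly the tension the Bureau faces: too little noise risks re-identification, too much dilutes the counts that redistricting depends on.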
To most people, however, their primary concern will be with their own identity and who will be able to access it with the public information released by the 2020 Census. With people putting more and more of their information on the web via social media or signing up for various other online accounts, it only gets easier for cyber predators to combine all this information, learn identities and other personal information about people, and use it to their detriment.
While skipping the 2020 Census may not be an option, there are a few simple steps you can take to protect your identity, and they mainly involve your online profile: keep your online accounts to a minimum; only sign up for accounts that you will actually use and that benefit you; never provide information solicited via a suspicious email or website; and keep close track of the online accounts that use or save your credit card information.
A “badvert” is a false advertisement that has been coded to redirect the user to malicious content. Known as malvertising in the information security community, badverts generate revenue for the attacker by redirecting the user to a page that delivers genuine advertisements that the coders behind the original, legitimate advertisement never intended the user to see. It is also quite common for the page to which the user is redirected to contain malicious software (malware), a term that broadly covers computer viruses and software that covertly harvests information about another person’s computer activities and transmits it from the victim’s hard drive.
One particularly prolific badvertising attacker is eGobbler, which has undertaken several wildly successful badvertising campaigns. The first truly newsworthy badvertising campaign by eGobbler resulted in roughly 500 million legitimate advertisements being compromised on the iPhone in only ten days in April 2019. The attacker, or more likely attackers, found a vulnerability in the Google Chrome application for iOS that allowed them to bypass pop-up blockers and redirect unsuspecting users to the badvert sites. Security researchers later concluded that eGobbler had been behind a campaign that resulted in the corruption of over 1.1 billion advertisements. Security researchers believe that eGobbler may be an organized criminal venture, as the attacker has been able to locate software vulnerabilities specific only to certain applications on certain devices and quickly exploit those vulnerabilities with expert efficiency. Researchers are attempting to run test environments on various devices to spot eGobbler campaigns in the early stages. This is an increasingly difficult task as the attackers have begun exploiting software loopholes that render “sandboxing” measures useless as a defense against badvert campaigns.
How can you protect yourself?
Security research teams constantly monitor applications and devices for potential malvertising threats. Once a vulnerability is discovered, these teams report it to in-house security teams at companies such as Google and Apple, which then develop fixes and release them in patches. Therefore, you should ensure that your operating systems and browsers are fully up to date with the latest patches released by the development teams. For example, the eGobbler loophole discussed above was corrected in the iOS 13 release on September 19.
A recent case before the United States Court of Federal Claims provides a good reminder to keep track of the images on your website and webpages, because copyright infringement claims may be lurking. We address this issue in detail in an Alert available here.
On September 12 and 13, 2019, the California Legislature passed a number of amendments to the 2018 California Consumer Privacy Act. The amendments leave the overall scope of the Act unchanged, but are generally favorable to business. They make the Act clearer, carve out certain conduct from its coverage, and, in some cases, make compliance easier. No further legislative clarification of the Act is expected in the near future. As a result, businesses should start preparing now to be in compliance by the Act’s effective date of January 1, 2020, based on these new amendments. Our latest Privacy Alert analyzes the impacts, which can be viewed on our website.
At the end of July, New York Governor Andrew Cuomo signed into law the Stop Hack and Improve Electronic Data Security Act (SHIELD Act). The SHIELD Act amends and expands New York’s current data breach notification law, which may affect persons or companies that do not even conduct business in New York. Here’s what you need to know ahead of the March 21, 2020, effective date:
Who must comply?
Any person or business that owns or licenses computerized data that includes private information of New York residents must comply with the SHIELD Act, regardless of whether that person or business conducts business in New York.
An expanded definition of "private information."
New York’s data breach notification law has always varied from similar laws in other states in that it includes definitions for both “personal information” and “private information.” Under the SHIELD Act, “personal information” remains “any information concerning a natural person which, because of name, number, personal mark, or other identifier, can be used to identify such natural person.” “Private information” captures the information that, if breached, could trigger a notification requirement. The SHIELD Act expands “private information” to include:
Personal information consisting of any information in combination with one or more of the following data elements, when either the data element or the combination of personal information plus the data element is not encrypted, or is encrypted with an encryption key that has also been accessed or acquired:
Social security number,
Driver’s license number or non-driver identification card number,
Financial account numbers with required security codes or access codes, or
A user name or e-mail address in combination with a password or security question and answer that would permit access to an online account.
Access alone constitutes a "breach of the security of the system."
The SHIELD Act broadens the phrase “breach of the security of the system,” which consequently broadens the circumstances under which notification is required. Notably, the SHIELD Act includes in the definition of “breach of the security of the system” incidents that involve “access” to private information, regardless of whether the access led to “acquisition” of the information. Under the original New York data breach notification law, data must have been acquired to constitute a breach. The SHIELD Act keeps intact certain exceptions to the definition of “breach” including the “good faith employee” exception and provides factors for determining whether there has been unauthorized access to private information.
Notably, companies that are already subject to the data breach notification requirements under certain applicable state or federal laws, including HIPAA, GLBA, and the NYS DFS Regulation 500, are not required to further notify affected individuals. However, notifications to the New York Attorney General, the New York State Department of Consumer Protection, and the New York State Police are still required.
A risk assessment is now permitted.
The SHIELD Act does not require notification of the breach if “exposure of private information” was an “inadvertent disclosure and the individual or business reasonably determines such exposure will not likely result in misuse of such information, or financial harm to the affected persons or emotional harm in the case of unknown disclosure of online credentials.” This risk assessment should be memorialized in writing.
Reasonable data security requirements are imposed.
The SHIELD Act also imposes data security requirements on any person or business that owns or licenses computerized data that includes private information of New York residents. These security requirements must be designed to protect the security, confidentiality, and integrity of the private information. The SHIELD Act provides examples of practices that are considered reasonable, including: (i) risk assessments, (ii) employee training, (iii) due diligence for vendor selection, and (iv) data retention and disposal policies.
Companies subject to HIPAA and the GLBA are already deemed to be in compliance with these requirements. While the requirement applies to businesses of all sizes, a small business may implement and maintain safeguards that are “appropriate for the size and complexity of the small business, the nature and scope of the small business’s activities, and the sensitivity of the personal information the small business collects from or about consumers.” For purposes of the SHIELD Act, a small business is any business with fewer than 50 employees, less than $3 million in gross annual revenue in each of the last three years, or less than $5 million in year-end total assets.
There are potential penalties.
While the SHIELD Act does not provide for a private right of action, the attorney general may bring an action to enjoin violations of the law and obtain civil penalties. For data breach notification violations that are not reckless or knowing, the court may award damages for actual costs or losses incurred by a person entitled to notice. For data breach notification violations that are knowing and reckless, the court may impose penalties of the greater of $5,000 or up to $20 per instance with a cap of $250,000. For violations of the reasonable security measures, the court may impose penalties of not more than $5,000 per violation.
If you have further questions about the SHIELD Act and how it may impact your business, employees, or consumers, please contact a member of our team.
At this point, over 15 states have proposed privacy legislation similar to the California Consumer Privacy Act (CCPA). Most recently, however, New York has drafted a proposed law that, in some respects, goes well beyond the protections afforded by the CCPA. Sponsored by State Senator Kevin Thomas, the NY Privacy Act would give New York residents more control over their data than any other state’s law and would fund a new office of privacy and data protection.
Like the CCPA, the NY Privacy Act would allow residents to see what data companies are collecting on them, learn where it is being shared, request that it be corrected or deleted, and opt out of having their data shared with or sold to third parties. But as currently drafted, the bill offers residents more protections than the CCPA. Here are some of the key distinctions:
Private Right of Action. As written, the bill would give individuals the right to sue companies directly for violations of the law. An attempt to include such a right of action in the CCPA failed, leaving enforcement to the state’s attorney general.
Expanded Coverage. The CCPA applies to businesses with more than $25 million in annual gross revenue. The New York bill would apply to companies of any size.
Data Fiduciaries. In the biggest departure from other state privacy laws, the bill would require businesses to act as “data fiduciaries.” Much as professionals such as attorneys are required to hold client information in confidence and not share it without a proper purpose, companies would be barred from using data in a way that causes residents financial or physical harm, or in a way that would be “unexpected and highly offensive to a reasonable consumer.” The bill also states that this duty supersedes companies’ fiduciary duties to shareholders.
As expected, this bill isn’t sitting well with many companies. While its requirements are more restrictive than those of other states, the sheer burden of having to track and comply with varying state laws is motivating many companies to push for a federal law.
The full text of the bill can be read here.