Special thanks to James Ingram for his contributions to this post.
Is the use of automated “data-scraping” bots to collect information from public LinkedIn profiles fair game under the Computer Fraud and Abuse Act (CFAA)? According to the Ninth Circuit’s recent ruling in hiQ Labs, Inc. v. LinkedIn Corporation, No. 17-16783, 2019 WL 4251889 (9th Cir. Sept. 9, 2019), the answer is likely “yes.”
In hiQ Labs, LinkedIn sent data analytics company hiQ a cease-and-desist letter demanding that hiQ stop scraping data from LinkedIn users’ public profiles and asserting that continuation of the practice would constitute a violation of the CFAA. hiQ, in turn, sought a preliminary injunction to enjoin LinkedIn from invoking the CFAA against it.
The CFAA, codified at 18 U.S.C. § 1030, prohibits the intentional accessing of a protected computer “without authorization” in order to obtain information from it. The Ninth Circuit considered the meaning of the phrase “without authorization” and determined that its use in the statute is meant to protect against the digital equivalent of “breaking and entering.” As such, simply collecting publicly available data from a website like LinkedIn does not give rise to a CFAA violation. Rather, the court indicated that the CFAA is violated only “when a person circumvents a computer’s generally applicable rules regarding access permissions, such as username and password requirements, to gain access to a computer.”
Applying this framework, the court found that there is a serious question as to whether hiQ’s data-scraping practices violate the CFAA, and affirmed the grant of a preliminary injunction in hiQ’s favor. It noted that LinkedIn does not claim to own the information that its users share on their public profiles and that such information is available without a username or password to anyone with access to a web browser. The court also rejected LinkedIn’s argument that an injunction would threaten the privacy of its members, finding “little evidence that LinkedIn users who choose to make their profiles public actually maintain an expectation of privacy with respect to the information that they post publicly . . .”
The court’s decision at this stage of litigation is certainly encouraging for hiQ and others engaged in similar data collection practices. The NP Privacy Partner team will continue to monitor developments in this case, but in the meantime: (i) companies seeking to protect user data should ensure that protective measures, such as required usernames and passwords, are in place to create a clear barrier between public data and that which is accessed without authorization, and (ii) LinkedIn users should be aware that information posted to their public profiles may very well end up in the hands of third-party data collectors.
A California Court of Appeal recently affirmed a lower court ruling in favor of Williams-Sonoma in a case under the Song-Beverly Credit Card Act of 1971 (the “Act”) challenging the store’s practice of soliciting consumers’ personal information at checkout. Williams-Sonoma Song-Beverly Act Cases, 2019 DJDAR 9435 (Ct. App., 1st Dist. September 30, 2019).
The Act makes it illegal, in a credit card transaction, to “request, or require as a condition to accepting the credit card as payment …, the cardholder to provide personal information which the [merchant] causes to be written, or otherwise records, upon the credit card transaction form or otherwise.” Civil Code § 1747.08(a)(2). Plaintiffs brought a class action alleging that Williams-Sonoma violated the Act by asking customers for their zip code and other personal information in the middle of processing their credit card transaction at checkout.
Williams-Sonoma countered that store employees’ practice of asking for the information at checkout was not uniform, that providing the information was voluntary, and that signs were prominently posted at checkout advising customers that they did not have to provide the information as a condition of making a purchase.
Following a long line of cases under the Act, the court affirmed the lower court’s determination that the applicable standard was whether a reasonable person would believe he or she was compelled to provide the information as a condition to completing the transaction based on all the circumstances. It declined to adopt plaintiffs’ proffered rule that asking for the information in the middle of processing the transaction was a per se violation. The court also affirmed the lower court’s order decertifying the class, based on plaintiffs’ failure to establish that the circumstances at checkout were sufficiently uniform so as to constitute a common issue.
A California merchant asking for personal information at checkout for marketing purposes may want to review the policies and procedures Williams-Sonoma put in place, as described in the opinion, including employee training, which allowed the company to prevail in this case.
On March 15, 2019, the Federal Trade Commission (FTC) released its Privacy & Data Security Update: 2018, a report summarizing its work on these topics in calendar year 2018.
The FTC, as the body charged with enhancing competition and protecting consumers, detailed its efforts over the past year to stop privacy and security violations and to require companies to remediate unlawful practices.
The report highlights the FTC’s notable enforcement actions last year, including a settlement with PayPal, Inc. addressing allegedly deceptive privacy settings in its Venmo service line, as well as a judgment exceeding $700,000 against Alliance Law Group for alleged collection of fake debts by individuals posing as attorneys. It also summarizes settlements obtained with VTech Electronics Limited and Explore Talent for alleged violations of the Children’s Online Privacy Protection Act (COPPA).
In addition to enforcement actions, the Update discusses the FTC’s outreach efforts last year, including the various types of guidance and educational materials promulgated by the FTC in 2018, addressing topics such as cybersecurity tips for small businesses and items for consumers to consider prior to using Virtual Private Network (VPN) apps. It also mentions hearings hosted by the FTC on data security; competition and consumer protection issues surrounding the use of artificial intelligence, algorithms and predictive analytics; and privacy and competition issues related to big data.
The Update also discusses reports issued by the FTC in 2018, including one addressing the complex nature of patching mobile operating systems and one highlighting key points from the FTC and National Highway Traffic Safety Administration’s workshop on connected cars.
On an international level, the Update details the FTC’s engagement with international organizations, privacy authorities in other countries and global privacy authority networks on mutual enforcement of privacy and security requirements, as well as investigation cooperation.
The Update illustrates that 2018 was an active year for the FTC. Given enforcement action and FTC community outreach thus far in 2019, we do not expect that trend to decrease this year. Businesses should ensure that their privacy and security practices remain compliant with the FTC Act and any other applicable laws and regulations governing their industry. In particular, entities should review their privacy policies to ensure that the terms of these documents remain in line with their privacy practices and are not misleading to consumers.
The FTC Privacy & Data Security Update: 2018 can be found here.
Following a report of a breach of protected health information, on August 29, 2018, the New York Attorney General announced a settlement with Arc of Erie County, a social services agency that serves persons with developmental disabilities and their families. Arc of Erie County received a $200,000 financial penalty plus a Corrective Action Plan, which requires Arc of Erie County to conduct a HIPAA-required security risk assessment and submit a report of that assessment to the attorney general’s office.
Under HIPAA, Arc of Erie County and other covered entity health care providers are required to implement appropriate physical, technical and administrative safeguards to protect clients’ protected health information. In March 2018, Arc of Erie County notified impacted clients and the attorney general of a breach of client health information involving a website designed for internal staff access that was visible online, with information from that site found through search engines as well. The data that was available to the public included full names, social security numbers, addresses and dates of birth. A forensic investigation demonstrated that individuals outside the United States accessed the links to the sensitive data many times. The data breach impacted 3,751 New York residents.
In addition to the Department of Health and Human Services, Office for Civil Rights, the HIPAA regulations provide state attorneys general with HIPAA enforcement authority. The New York Attorney General’s office concluded that Arc of Erie County failed to implement appropriate physical, technical and administrative safeguards to protect its clients’ health information, as required by HIPAA. The attorney general’s office determined that this failure resulted in an impermissible disclosure of electronic protected health information.
This enforcement action emphasizes the need for all organizations, even not-for-profit, community-based providers, to conduct enterprise-wide security risk assessments. Data gleaned from such assessments should be the basis for the organization’s risk management plan, which is also a HIPAA requirement. These items are fundamental parts of a covered entity or business associate’s HIPAA compliance program and elements that will be requested in any governmental audit or investigation of HIPAA compliance.
The California Supreme Court has ruled that colleges and universities have a legal duty to protect their students from, or warn them of, foreseeable violence in the classroom or during “curricular activities.” Recognizing that courts traditionally have not found a “special relationship” between colleges and their adult students warranting the imposition of a duty to protect, the court distinguished cases involving alcohol-related injuries, off-campus behavior and social activities unrelated to school, in which colleges have little control over student behavior. But the court held that such a special relationship exists when students “are engaged in activities that are part of the school’s curriculum or closely related to its delivery of educational services.” In these settings, the court reasoned: “[s]tudents are comparatively vulnerable and dependent on their colleges for a safe environment. Colleges have a superior ability to provide that safety with respect to activities they sponsor or facilities they control. Moreover, this relationship is bounded by the student’s enrollment status. Colleges do not have a special relationship with the world at large, but only with their enrolled students. The population is limited, as is the relationship’s duration.”
As to foreseeability, the court stated the operative inquiry was “whether a reasonable university could foresee that its negligent failure to control a potentially violent student, or to warn students who were foreseeable targets of his ire, could result in harm to one of those students.” The court further stated, “[w]hether a university was, or should have been, on notice that a particular student posed a foreseeable risk of violence is a case-specific question, to be examined in light of all the surrounding circumstances.” In this regard, relevant considerations included: 1) prior threats or acts of violence by the student, particularly if targeted at an identifiable victim; 2) opinions of examining mental health professionals; and 3) observations of students, faculty, family members and others in the school community. The court noted, in an appropriate case, a college’s duty to protect its students from foreseeable harm “may be fully discharged if adequate warnings are conveyed to the students at risk.”
The court rejected several public policy arguments that were advanced against imposition of a new duty to protect related to mental health treatment of students. It was argued, for example, that colleges might be discouraged from offering comprehensive mental health and crisis management services and, rather than becoming engaged in the treatment of their mentally ill students, would have an incentive to expel anyone who might pose even a remote threat to others. The court acknowledged that colleges would now be forced “to balance competing goals and make sometimes difficult decisions,” and the duty might “give some schools a marginal incentive to suspend or expel students who display a potential for violence.” The court further allowed that its duty to protect “might make schools reluctant to admit certain students, or to offer mental health treatment.” But, pointing to laws such as the Americans with Disabilities Act (42 U.S.C. 12101 et seq.), the court said colleges were restricted in this area and suggested schools might “have options short of expelling or denying admission to deal with potentially violent students.” The court did not address federal privacy laws, which prevent the disclosure of students’ medical and mental health history, or how colleges could operate within the confines of those laws to “warn” students of potential threats.
The court also discounted the concern that legal recognition of a duty might deter students from seeking mental health treatment, or being candid with treatment providers, for fear that their confidences would be disclosed. The court pointed to the long-standing duty in California of psychotherapists to warn about patient threats, the initial fears the special duty would deter patients from seeking treatment and being open with therapists, and subsequent empirical studies that showed no evidence patients had been discouraged from going to therapy or discouraged from speaking freely once there.
The court was careful to clarify that the duty to protect it had articulated did not automatically create liability for a college and its holding was not to be interpreted “to create an impossible requirement that colleges prevent violence on their campuses.” The court stated: “[c]olleges are not the ultimate insurers of all student safety. We simply hold that they have a duty to act with reasonable care when aware of a foreseeable threat of violence in a curricular setting. Reasonable care will vary under the circumstances of each case. Moreover, some assaults may be unavoidable despite a college’s best efforts to prevent them. Courts and juries should be cautioned to avoid judging liability based on hindsight.”
A concurring justice wrote that the majority opinion was “likely to create confusion” as it offered “no guidance as to which non-classroom activities qualify as either ‘curricular’ or ‘closely related to the delivery of educational services’ or what factors were relevant to that determination.”
The full opinion may be found here.
Last year we reported on CareFirst, Inc.’s win at the district court level—the court found that the class action plaintiffs lacked standing to sue because they inadequately alleged actual injury stemming from a June 2014 data breach. Earlier this month, however, the D.C. Circuit reversed that decision on appeal, concluding that “the district court gave the complaint an unduly narrow reading. Plaintiffs have cleared the low bar to establish their standing at the pleading stage.” Attias, et al. v. CareFirst, Inc., et al., No. 16-7108 (D.C. Cir. Aug. 1, 2017).
The circuit courts are split on the issue of whether class action plaintiffs in data breach suits have standing to sue simply by virtue of the breach and the nature of the data that was potentially exposed. With this decision, the D.C. Circuit joins others that have answered this question in the affirmative, such as the Third, Sixth, Seventh and Eleventh Circuits. In other words, class action plaintiffs claiming injury stemming from a data breach have standing to sue in those circuits based merely on the imminent threat of identity fraud, rather than needing to allege actual harm.
This decision once again highlights the need for companies to remain vigilant about their data security policies and practices in order to reduce the risk of a breach. Such safeguards should also be tailored to minimize the risk of negligence claims in the event a breach does occur.
A plastic surgeon is being sued in Cook County for allegedly disclosing protected health information. After performing a breast augmentation surgery on the plaintiff, the surgeon allegedly posted before and after pictures of the plaintiff’s breasts on the doctor’s website. The doctor did not include the plaintiff’s name or face.
The plaintiff claims that she has a distinctive freckle pattern on her chest that allows her to be identified. She asserts a fear that friends or family could find the photos and recognize her. The surgeon took the photos down immediately.
The plaintiff had been given two forms. The first was to allow the surgeon to take photos for medical use. The other was a release allowing the photos to be used for promotional and marketing purposes. She claims she only signed the first release. The complaint alleges the photos were up for nearly a year before the plaintiff discovered them.
The complaint seeks more than $50,000 in damages for violating Illinois’ privacy law and for negligent and intentional infliction of emotional distress.
It is well established in both state and federal law that a patient’s private health information must be vigorously protected. However, it seems that as of now, what constitutes a patient’s private health information is just as limitless as, say, freckles.
On April 18, 2017, named plaintiff Joan Richards brought a class action suit in the Southern District of Florida against defendant MDLive, Inc., a Florida-based company that provides virtual access to doctors and therapists through the use of an application that can be accessed using a computer or mobile phone. Richards alleged that, unbeknownst to app users, MDLive designed the app to capture screenshots throughout its first fifteen minutes of use, which is when users enter their personal health information into the app. Richards claimed that the captured screenshots, which contained users’ sensitive medical information, were sent by MDLive to TestFairy, an Israeli-based mobile application testing company. The complaint alleged that MDLive’s failure to restrict third-party access to patient health information unreasonably intruded upon patient privacy and placed the confidentiality of protected health information at risk.
MDLive filed a motion to dismiss the complaint on May 2, 2017. It argued that its terms-of-service contract specifically alerts users to its practice of data sharing. MDLive maintained that this explicit notice was sufficient to defeat Richards’ claims for breach of contract, intrusion upon seclusion, fraud and unjust enrichment, as well as Richards’ state-based claims.
In response to the motion to dismiss, Richards notified the court of her intent to move to amend her complaint. However, on June 2, 2017, she instead provided the court with a Notice of Voluntary Dismissal. The court granted dismissal with prejudice on June 5, 2017. Following dismissal, MDLive released a statement on its website that “[n]o settlement payment or any other consideration was paid by, or on behalf of, MDLIVE or its management in connection with the lawsuit’s dismissal.”
Special thanks to David Kaye for his contributions.
Christopher Porco was convicted in 2006 of murdering his father and severely injuring his mother with a fireman’s ax as they slept in their home in Bethlehem, New York. After Porco’s crime grabbed headlines across the country, the Lifetime network set out to produce a fictionalized made-for-TV movie about his arrest and conviction.
After learning of the film, Porco filed suit under New York’s Right of Privacy statute, N.Y. Civ. Rights §§ 50, et seq., against Lifetime Entertainment Services LLC to prevent it from broadcasting the movie. Porco v. Lifetime Entm’t Servs., LLC, No. 522707, 2017 WL 703034 (N.Y. App. Div. 3d Dep’t Feb. 23, 2017). Section 51 of the Right of Privacy statute allows a person whose “name, portrait, picture or voice” is used “for advertising purposes or for the purposes of trade” without their consent to obtain an injunction restraining the use of their name, portrait, picture or voice, as well as to sue for damages. N.Y. Civ. Rights § 51.
After initiating suit, Porco successfully obtained a temporary restraining order preventing the airing of the film pending a decision in his motion for a preliminary injunction. However, Porco’s victory was short lived, as the Appellate Division, Third Judicial Department, vacated the order. Under the tagline, “the movie Chris Porco doesn’t want you to see,” the movie aired nationally on March 23, 2013. Thereafter, the trial court granted Lifetime’s motion to dismiss Porco’s complaint for failure to state a cause of action.
On appeal, the Third Department was asked to decide whether Porco had adequately alleged that Lifetime’s movie was sufficiently fictionalized so as to take it out of the newsworthiness exception to liability under § 51. Reviewing New York jurisprudence on the issue, the Third Department explained that liability may attach to a “materially and substantially fictitious biography where a knowing fictionalization amounts to an all-pervasive set of imaginary incidents . . . .” Porco, 2017 WL 703034, at *2 (internal quotation marks and citations omitted). In holding that Porco had met this burden, the Third Department noted that one of the film’s producers had written a letter to Porco’s mother explaining that she was involved in the production of a documentary intended to accompany the film and to “provide the platform for the family to state their position in a non-fictional program” after the film aired. Id. The court concluded this was strong evidence that even Lifetime considered the film to be a fictitious program.
Because the Third Department was only considering a motion to dismiss, the film was not before the court and it made no ruling on whether the film was actually sufficiently fictionalized so as to avoid the newsworthiness exception. Therefore, Lifetime may still succeed at trial if it can show, its letter notwithstanding, that its movie was an accurate portrayal of Porco and his crime. Thus, for the time being, Christopher Porco’s case against Lifetime will proceed in the Supreme Court, where it will be decided whether New York’s right of privacy protects against the unauthorized telling of a convicted murderer’s story.
We have been following NFL player Jason Pierre-Paul’s lawsuit regarding the posting of his medical chart on social media after he suffered a hand injury in a fireworks accident. In August 2016, Judge Cook ruled that the invasion of privacy claims would not be dismissed. Now, in early 2017, the parties have decided to put an end to the litigation and settle the case. The settlement amount and terms have not been disclosed. Up until the settlement, ESPN had vigorously defended its position in the case.