NP Privacy Partner
Massachusetts District Court Halts “Suspicionless” Searches of Electronic Devices at U.S. Border

Special thanks to James Ingram for his contributions to this blog post.

Last week, the U.S. District Court in Massachusetts put an end to “suspicionless” searches of international travelers’ smartphones and laptops at the U.S. border. With its decision in Alasaad v. Duke, No. 1:17-cv-11730, the court held that border officials must have at least a reasonable suspicion that an international traveler is carrying some sort of contraband on a smartphone or laptop before searching such devices.

The case was brought by eleven plaintiffs, each of whom alleged that their phones were taken and searched without cause at U.S. ports of entry. The searches had revealed private information about the plaintiffs, including social media postings, photos, and in one instance, attorney-client communications.

The government pushed back against the plaintiffs’ claim that the searches violated their constitutional privacy rights, arguing that it had authority to conduct such searches under the “border search exception” to the Fourth Amendment. The court disagreed, however, stating that although the border search exception recognizes the government’s compelling interest in border security, it does not allow unfettered discretion in conducting searches at the border, especially with respect to smartphones and laptops.

Leaning on Supreme Court precedent set forth in Riley v. California (2014), the court noted that searching electronic devices fundamentally differs from searching other items, due to the former’s capacity to store vast amounts of personal information. As such, requiring border officials to have a particularized suspicion prior to searching electronic devices is a necessary measure to protect privacy rights, despite the government’s heightened interest in the area of border security.

Although the court stopped short of requiring border officials to obtain a warrant supported by probable cause prior to searching smartphones and laptops, its decision is being hailed as a major victory by privacy rights advocates. The ruling will serve as an important check on the rising number of electronic device searches conducted at the border each year.

 

Ubisoft sues Dutch 17-year-old over video game cheats

Special thanks to Vincent Tennant for his contributions to this post.

In a complaint filed in the Central District of California, video game publisher Ubisoft, Inc. (Ubisoft) asserts claims for Digital Millennium Copyright Act (DMCA) violations, intentional interference with contractual relations, and unfair competition against a Dutch 17-year-old and his mother. The complaint alleges that the defendants produced and distributed “cheat” software for Ubisoft’s online multiplayer game Rainbow Six: Siege. The lawsuit raises issues for copyright, contracts, and the growing esports industry.

The Cheat

The defendants are alleged to have sold a single product, titled “Budget Edition Rainbow Six: Siege Cheat” (Cheat). Rainbow Six: Siege is a “first-person shooter” game with both casual and competitive online multiplayer matches. The Cheat altered the game to the Cheat user’s advantage by making enemies easier to detect and increasing the amount of damage Cheat users inflicted. Ubisoft claims that fair gameplay is vital to the success of its game and that “thousands of hours” are spent to detect and thwart cheaters.

The defendants are alleged to have sold and distributed the software through the family’s web design business in the Netherlands. 

Trafficking in circumvention devices

The audiovisual elements and the computer program that run a video game are copyrightable. Ubisoft alleges that the defendants’ conduct is unlawful under a DMCA provision concerning bypassing technological safeguards to access a copyrighted work, 17 U.S.C. § 1201(a)(2). Anyone found to be distributing a service or tool that is “primarily designed for the purpose of circumventing a technological measure that effectively controls access to a work” is liable under this provision. Ubisoft will argue that there are numerous technological safeguards in the code of the game to protect the copyrighted elements from being accessed and that the Cheat circumvents them.

Intentional interference with contractual relations

Ubisoft also claims that the defendants intentionally interfered with the contractual relationship between Ubisoft and the other players of the game. Each player must agree to Terms of Use, which include a prohibition on modification of the game. Ubisoft argues that it has suffered damages due to the defendants’ inducement of others to violate the game’s Terms of Use.

Unfair competition

Finally, Ubisoft alleges that the defendants’ conduct violates California’s unfair competition law. “Unfair” is not precisely defined under the statute, giving courts broad discretion in applying it. Ubisoft will likely point to aspects of the Cheat that “trick” the game into avoiding detection in order to satisfy its unfair competition claim.

 

 

Ninth Circuit Indicates that Information on Public LinkedIn Profiles is Fair Game for Automated Data Scraping Bots Under the CFAA

Special thanks to James Ingram for his contributions to this post.

Is the use of automated “data-scraping” bots to collect information from public LinkedIn profiles fair game under the Computer Fraud and Abuse Act (CFAA)? According to the Ninth Circuit’s recent ruling in hiQ Labs, Inc. v. LinkedIn Corporation, No. 17-16783, 2019 WL 4251889 (9th Cir. Sept. 9, 2019), the answer is likely “yes.”

In hiQ Labs, LinkedIn sent data analytics company hiQ a cease-and-desist letter demanding that hiQ stop scraping data from LinkedIn users’ public profiles and asserting that continuation of the practice would constitute a violation of the CFAA. hiQ, in turn, sought a preliminary injunction to enjoin LinkedIn from invoking the CFAA against it.

The CFAA, codified at 18 U.S.C. § 1030, prohibits the intentional accessing of a protected computer “without authorization” in order to obtain information from it. The Ninth Circuit considered the meaning of the phrase “without authorization” and determined that its use in the statute is meant to protect against the digital equivalent of “breaking and entering.” As such, simply collecting publicly available data from a website like LinkedIn does not give rise to a CFAA violation. The court rather indicated that the CFAA is violated only “when a person circumvents a computer’s generally applicable rules regarding access permissions, such as username and password requirements, to gain access to a computer.”

Applying this framework, the court found that there is a serious question as to whether hiQ’s data-scraping practices violate the CFAA, and granted hiQ’s motion for a preliminary injunction. It noted that LinkedIn does not claim to own the information that its users share on their public profiles and that such information is available without a username or password to anyone with access to a web browser. The court also rejected LinkedIn’s argument that an injunction would threaten the privacy of its members, finding “little evidence that LinkedIn users who choose to make their profiles public actually maintain an expectation of privacy with respect to the information that they post publicly . . .”

The court’s decision at this stage of litigation is certainly encouraging for hiQ and others engaged in similar data collection practices. The NP Privacy Partner team will continue to monitor developments in this case, but in the meantime: (i) companies seeking to protect user data should ensure that protective measures, such as required usernames and passwords, are in place to create a clear barrier between public data and that which is accessed without authorization, and (ii) LinkedIn users should be aware that information posted to their public profiles may very well end up in the hands of third-party data collectors.

California court affirms win for Williams-Sonoma regarding gathering personal data at checkout

A California Court of Appeal recently affirmed a lower court ruling in favor of Williams-Sonoma in a case under the Song-Beverly Credit Card Act of 1971 (the “Act”) challenging the store’s practice of soliciting consumer personal information at checkout. Williams-Sonoma Song-Beverly Act Cases, 2019 DJDAR 9435 (Ct. App., 1st Dist. September 30, 2019).

The Act makes it illegal, in a credit card transaction, to “request, or require as a condition to accepting the credit card as payment …, the cardholder to provide personal information which the [merchant] causes to be written, or otherwise records, upon the credit card transaction form or otherwise.” Civil Code § 1747.08(a)(2). Plaintiffs brought a class action alleging that Williams-Sonoma violated the Act by asking customers for their zip code and other personal information in the middle of processing their credit cards at checkout.

Williams-Sonoma countered that store employees’ practice of asking for the information at checkout was not uniform, that providing the information was voluntary, and that signs were prominently posted at checkout advising customers that they did not have to provide the information as a condition of making a purchase.

Following a long line of cases under the Act, the court affirmed the lower court’s determination that the applicable standard was whether a reasonable person would believe he or she was compelled to provide the information as a condition to completing the transaction based on all the circumstances. It declined to adopt plaintiffs’ proffered rule that asking for the information in the middle of processing the transaction was a per se violation. The court also affirmed the lower court’s order decertifying the class, based on plaintiffs’ failure to establish that the circumstances at checkout were sufficiently uniform so as to constitute a common issue.

A California merchant asking for personal information at checkout for marketing purposes may want to review the policies and procedures Williams-Sonoma put in place, as described in the opinion, including employee training, which allowed the company to prevail in this case.

FTC releases summary of 2018 privacy and data security enforcement and outreach

On March 15, 2019, the Federal Trade Commission (FTC) released its Privacy & Data Security Update: 2018, a report summarizing its work on these topics in calendar year 2018.

The FTC, as the body charged with enhancing competition and protecting consumers, detailed its efforts over the past year to attempt to stop privacy and security violations and to require companies to remediate any unlawful practices.

The report highlights the FTC’s notable enforcement actions last year, including a settlement with PayPal, Inc. addressing allegedly deceptive privacy settings in its Venmo service line, as well as a judgment exceeding $700,000 against Alliance Law Group for alleged collection of fake debts by individuals posing as attorneys. It also summarizes settlements obtained with VTech Electronics Limited and Explore Talent for alleged violations of the Children’s Online Privacy Protection Act (COPPA).

In addition to enforcement actions, the Update discusses the FTC’s outreach efforts last year, including the various types of guidance and educational materials promulgated by the FTC in 2018, addressing topics such as cybersecurity tips for small businesses and items for consumers to consider prior to using Virtual Private Network (VPN) apps. It also mentions hearings hosted by the FTC on data security, competition and consumer protection issues surrounding the use of artificial intelligence, algorithms and predictive analytics and privacy and competition issues related to big data.

The Update also discusses reports issued by the FTC in 2018, including one addressing the complex nature of patching mobile operating systems and one highlighting key points from the FTC and National Highway Traffic Safety Administration’s workshop on connected cars.

On an international level, the Update details the FTC’s engagement with international organizations, privacy authorities in other countries and global privacy authority networks on mutual enforcement of privacy and security requirements, as well as investigation cooperation.

The Update illustrates that 2018 was an active year for the FTC. Given enforcement actions and FTC community outreach thus far in 2019, we expect that trend to continue this year. Businesses should ensure that their privacy and security practices remain compliant with the FTC Act and any other applicable laws and regulations governing their industry. In particular, entities should review their privacy policies to ensure that the terms of these documents remain in line with their privacy practices and are not misleading to consumers.

The FTC Privacy & Data Security Update: 2018 can be found here.

New York Attorney General settles with nonprofit social services agency over HIPAA violation

Following a report of a breach of protected health information, on August 29, 2018, the New York Attorney General announced a settlement with Arc of Erie County, a social services agency that serves persons with developmental disabilities and their families. Arc of Erie County received a $200,000 financial penalty plus a Corrective Action Plan, which requires Arc of Erie County to conduct a HIPAA-required security risk assessment and submit a report of that assessment to the attorney general’s office.

Under HIPAA, Arc of Erie County and other covered entity health care providers are required to implement appropriate physical, technical and administrative safeguards to protect clients’ protected health information. In March 2018, Arc of Erie County notified impacted clients and the attorney general of a breach of client health information involving a website designed for internal staff access that was visible online, with information from that site found through search engines as well. The data that was available to the public included full names, social security numbers, addresses and dates of birth. A forensic investigation demonstrated that individuals outside the United States accessed the links to the sensitive data many times. The data breach impacted 3,751 New York residents.

In addition to the Department of Health and Human Services, Office for Civil Rights, the HIPAA regulations provide state attorneys general with HIPAA enforcement authority. The New York Attorney General’s office concluded that Arc of Erie County failed to implement appropriate physical, technical and administrative safeguards to protect its clients’ health information, as required by HIPAA. The attorney general’s office determined that this failure resulted in an impermissible disclosure of electronic protected health information.

This enforcement action emphasizes the need for all organizations, even not-for-profit, community-based providers, to conduct enterprise-wide security risk assessments. Data gleaned from such assessments should be the basis for the organization’s risk management plan, which is also a HIPAA requirement. These items are fundamental parts of a covered entity or business associate’s HIPAA compliance program and elements that will be requested in any governmental audit or investigation of HIPAA compliance.

Landmark case imposes duty on colleges to protect or warn students against threats of violence despite student privacy concerns

The California Supreme Court has ruled that colleges and universities have a legal duty to protect or warn their students from foreseeable violence in the classroom or during “curricular activities.” Recognizing that courts traditionally have not found a “special relationship” between colleges and their adult students warranting the imposition of a duty to protect, the court distinguished cases involving alcohol-related injuries, off-campus behavior and social activities unrelated to school, in which colleges have little control over student behavior. But, the court held such a special relationship existed when students “are engaged in activities that are part of the school’s curriculum or closely related to its delivery of educational services.” In these settings, the court reasoned: “[s]tudents are comparatively vulnerable and dependent on their colleges for a safe environment. Colleges have a superior ability to provide that safety with respect to activities they sponsor or facilities they control. Moreover, this relationship is bounded by the student’s enrollment status. Colleges do not have a special relationship with the world at large, but only with their enrolled students. The population is limited, as is the relationship’s duration.”

As to foreseeability, the court stated the operative inquiry was “whether a reasonable university could foresee that its negligent failure to control a potentially violent student, or to warn students who were foreseeable targets of his ire, could result in harm to one of those students.” The court further stated, “[w]hether a university was, or should have been, on notice that a particular student posed a foreseeable risk of violence is a case-specific question, to be examined in light of all the surrounding circumstances.” In this regard, relevant considerations included: 1) prior threats or acts of violence by the student, particularly if targeted at an identifiable victim; 2) opinions of examining mental health professionals; and 3) observations of students, faculty, family members and others in the school community. The court noted, in an appropriate case, a college’s duty to protect its students from foreseeable harm “may be fully discharged if adequate warnings are conveyed to the students at risk.”

The court rejected several public policy arguments that were advanced against imposition of a new duty to protect related to mental health treatment of students: for example, that colleges may now be discouraged from offering comprehensive mental health and crisis management services and, rather than engage in the treatment of their mentally ill students, may have an incentive to expel anyone who might pose even a remote threat to others. The court acknowledged that colleges would now be forced “to balance competing goals and make sometimes difficult decisions,” and the duty might “give some schools a marginal incentive to suspend or expel students who display a potential for violence.” The court further allowed that its duty to protect “might make schools reluctant to admit certain students, or to offer mental health treatment.” But, pointing to laws such as the Americans with Disabilities Act (42 U.S.C. 12101 et seq.), the court said colleges were restricted in this area and suggested schools might “have options short of expelling or denying admission to deal with potentially violent students.” The court did not address federal privacy laws, which prevent the disclosure of students’ medical and mental health history, or how colleges could operate within the confines of those laws to “warn” students of potential threats.

The court also discounted the concern that legal recognition of a duty might deter students from seeking mental health treatment, or being candid with treatment providers, for fear that their confidences would be disclosed. The court pointed to the long-standing duty in California of psychotherapists to warn about patient threats, the initial fears the special duty would deter patients from seeking treatment and being open with therapists, and subsequent empirical studies that showed no evidence patients had been discouraged from going to therapy or discouraged from speaking freely once there.

The court was careful to clarify that the duty to protect it had articulated did not automatically create liability for a college and its holding was not to be interpreted “to create an impossible requirement that colleges prevent violence on their campuses.” The court stated: “[c]olleges are not the ultimate insurers of all student safety. We simply hold that they have a duty to act with reasonable care when aware of a foreseeable threat of violence in a curricular setting. Reasonable care will vary under the circumstances of each case. Moreover, some assaults may be unavoidable despite a college’s best efforts to prevent them. Courts and juries should be cautioned to avoid judging liability based on hindsight.”

A concurring justice wrote the majority opinion was “likely to create confusion” as it offered “no guidance as to which non-classroom activities qualify as either ‘curricular’ or ‘closely related to the delivery of educational services’ or what factors were relevant to that determination.”

The full opinion may be found here.

D.C. Circuit finds standing to sue based on risk of future harm in data breach suit

Last year we reported on CareFirst, Inc.’s win at the district court level, where the court found that the class action plaintiffs lacked standing to sue because they inadequately alleged actual injury stemming from a June 2014 data breach. Earlier this month, however, the D.C. Circuit reversed that decision on appeal, concluding that “the district court gave the complaint an unduly narrow reading. Plaintiffs have cleared the low bar to establish their standing at the pleading stage.” Attias, et al. v. CareFirst, Inc., et al., No. 16-7108 (D.C. Cir. Aug. 1, 2017).

The circuit courts are split on the issue of whether class action plaintiffs in data breach suits have standing to sue simply by virtue of the breach and the nature of the data that was potentially exposed. With this decision, the D.C. Circuit joins others that have answered this question in the affirmative, such as the Third, Sixth, Seventh and Eleventh Circuits. In other words, class action plaintiffs claiming injury stemming from a data breach have standing to sue in those circuits based merely on the imminent threat of identity fraud, rather than needing to allege actual harm.

This decision once again highlights the need for companies to remain vigilant about their data security policies and practices in the hope of avoiding a breach altogether. Such safeguards should also be tailored to minimize the risk of any negligence claims in the event of a breach.

Freckles: mere sunspots or protected health information?

A plastic surgeon is being sued in Cook County for allegedly disclosing protected health information. After performing a breast augmentation surgery on the plaintiff, the surgeon allegedly posted before and after pictures of the plaintiff’s breasts on the doctor’s website. The doctor did not include the plaintiff’s name or face.

The plaintiff claims that she has a distinctive freckle pattern on her chest allowing her to be identified. She asserts that she has a fear that friends or family would find these photos and know it was her. The surgeon took the photos down immediately.

The plaintiff had been given two forms. The first was to allow the surgeon to take photos for medical use. The other was a release allowing the photos to be used for promotional and marketing purposes. She claims she only signed the first release. The complaint alleges the photos were up for nearly a year before the plaintiff discovered them.

The complaint seeks more than $50,000 in damages for violating Illinois’ privacy law and for negligent and intentional infliction of emotional distress.

It is well established in both state and federal law that a patient’s private health information must be vigorously protected. However, it seems that as of now, what constitutes a patient’s private health information is just as limitless as, say, freckles.

 

MDLive class action suit voluntarily dismissed with prejudice

On April 18, 2017, named plaintiff Joan Richards brought a class action suit in the Southern District of Florida against defendant MDLive, Inc., a Florida-based company that provides virtual access to doctors and therapists through the use of an application that can be accessed using a computer or mobile phone. Richards alleged that, unbeknownst to app users, MDLive designed the app to capture screenshots throughout its first fifteen minutes of use, which is when users enter their personal health information into the app. Richards claimed that the captured screenshots, which contained users’ sensitive medical information, were sent by MDLive to TestFairy, an Israeli-based mobile application testing company. The complaint alleged that MDLive’s failure to restrict third-party access to patient health information unreasonably intruded upon patient privacy and placed the confidentiality of protected health information at risk.

MDLive filed a motion to dismiss the complaint on May 2, 2017. It argued that its terms-of-service contract specifically alerts users to its practice of data sharing. MDLive maintained that this explicit notice was sufficient to defeat Richards’ claims for breach of contract, intrusion upon seclusion, fraud and unjust enrichment, as well as Richards’ state-based claims.

In response to the motion to dismiss, Richards notified the court of her intent to move to amend her complaint. However, on June 2, 2017, she instead provided the court with a Notice of Voluntary Dismissal. The court granted dismissal with prejudice on June 5, 2017. Following dismissal, MDLive released a statement on its website that “[n]o settlement payment or any other consideration was paid by, or on behalf of, MDLIVE or its management in connection with the lawsuit’s dismissal.”

Special thanks to David Kaye for his contributions.


This website contains attorney advertising. Prior results do not guarantee a similar outcome. © 2018 Nixon Peabody LLP