Special thanks to Christian Albano for his contributions to this post.
Today, smart devices are increasingly making their way into our everyday lives. While the smartphone may be the first example that comes to mind for many, so-called “smart” technology has made its way into our cars, watches, and even our homes. Amazon, one of the world’s largest technology companies, has a “smart home” store on its website where it offers many devices, including light bulbs, microwaves, printers, televisions, and speakers.
Another smart device finding a place in today’s homes is the Wi-Fi powered “smart doorbell.” Ring, a company recently bought by Amazon, provides its own version of the smart doorbell on Amazon’s online marketplace. The Ring Video Doorbell is a device that monitors homes with HD video and uses sensors to send alerts to homeowners when motion is detected. The device even provides on-demand video, allowing homeowners to check in on their homes even when the sensors have not been activated.
Although the features of the Ring Video Doorbell have provided its users with a greater sense of security, they have also recently raised concerns relating to privacy. Ring has agreed with police forces throughout the United States to provide access to homeowners’ video footage, allowing law enforcement to request footage from specific times and places. Ring users have the option to deny police requests to access footage captured by their devices. However, this partnership between law enforcement and technology exposes visitors and even innocent passersby to increased government surveillance, and poses a potential threat to civil liberties.
In September, Senator Edward J. Markey (D-MA) sent a letter to Amazon CEO and president Jeff Bezos posing questions about the privacy concerns surrounding smart doorbell technology. In response to a question regarding facial recognition, Amazon indicated that adding this feature to Ring products has been contemplated and said that privacy would be considered should it be implemented in the future. In deciding whether to incorporate facial recognition into smart doorbell devices going forward, Amazon and Ring will have to strike a balance between security and privacy.
Special thanks to James Ingram for his contributions to this blog post.
Last week, the U.S. District Court in Massachusetts put an end to “suspicionless” searches of international travelers’ smartphones and laptops at the U.S. border. With its decision in Alasaad v. Duke, No. 1:17-cv-11730, the court held that border officials must have at least a reasonable suspicion that an international traveler is carrying some sort of contraband on a smartphone or laptop before searching such devices.
The case was brought by eleven plaintiffs, each of whom alleged that their phones were taken and searched without cause at U.S. ports of entry. The searches had revealed private information about the plaintiffs, including social media postings, photos, and in one instance, attorney-client communications.
The government pushed back against the plaintiffs’ claim that the searches violated their constitutional privacy rights, arguing that it had authority to conduct such searches under the “border search exception” to the Fourth Amendment. The court disagreed, however, stating that although the border search exception recognizes the government’s compelling interest in border security, it does not allow unfettered discretion in conducting searches at the border, especially with respect to smartphones and laptops.
Leaning on Supreme Court precedent set forth in Riley v. California (2014), the court noted that searching electronic devices fundamentally differs from searching other items, due to the former’s capacity to store vast amounts of personal information. As such, requiring border officials to have a particularized suspicion prior to searching electronic devices is a necessary measure to protect privacy rights, despite the government’s heightened interest in the area of border security.
Although the court stopped short of requiring border officials to obtain a warrant supported by probable cause prior to searching smartphones and laptops, its decision is being hailed as a major victory by privacy rights advocates. The ruling will serve as an important check on the rising number of electronic device searches conducted at the border each year.
A class action complaint, filed on November 7, 2019, alleges that Juul Labs, Inc. violated the Illinois Biometric Information Privacy Act (the “BIPA”) by failing to provide consumers with the necessary disclaimers related to the use of facial recognition technology in Juul’s age-verification process.
To purchase Juul products through the company’s website, customers must follow certain procedures to verify their age. One option allows customers to upload a “real-time” photograph, which is uploaded into Juul’s facial recognition database. The photograph is scanned for facial geometry and compared to the photo on the customer’s government-issued identification.
However, Juul allegedly failed to inform customers how their biometric facial data would be used or stored, or that it would be shared with third-party data processors. The plaintiff in this suit, Michelle Flores, alleges that Juul’s failure to obtain consumer consent constitutes a violation of the BIPA. Flores also accuses Juul of improperly disclosing its customers’ biometric data to third parties, including Jumio Corp., an identity verification company. Flores seeks to represent a class of Illinois consumers whose biometric information has been scanned, stored, used, and disclosed by Juul.
When Illinois passed the BIPA in 2008, it was the first state to enact legislation regulating the collection of biometric information. The BIPA is unique in that it created a private right of action, allowing consumers to bring suit directly against companies that violate the privacy law. In Rosenbach v. Six Flags Ent. Corp., No. 123186, 2019 IL 123186, ¶ 33 (Jan. 25, 2019), the Illinois Supreme Court considered whether a plaintiff must demonstrate actual harm to have standing to pursue a claim under the BIPA’s private right of action, or whether the statutory loss of privacy alone was sufficient to meet the statute’s “aggrieved person” standing requirement. The Illinois Supreme Court determined that the BIPA does not require proof of actual damages, finding that the violation of a plaintiff’s “right to privacy in and control over their biometric identifiers and biometric information” was sufficient to establish standing.
Special thanks to Tevin Hopkins for his contributions to this post.
Have you ever wondered how much information and personal data companies have about you? The data could range from your email address to your social security number. Beginning on January 1, 2020, when the California Consumer Privacy Act (the “CCPA”) goes into effect, it may become easier for consumers to discover this information. The CCPA, which includes various protections against the collection and disclosure of consumers’ personal information, was signed into law in June 2018.
The CCPA will require many businesses to allow California consumers to direct the company to delete all information collected about them or to prohibit the company from selling their personal information to third parties. The law also allows individuals to ask companies exactly what kind of information has been collected and why their data is being collected and sold, to learn about the types of third-party companies buying and using the data, and to find out about the financial incentives the company receives for selling the data. For a company subject to this law, the fines can add up quickly. Under the statute, penalties for noncompliance levied by the government can reach up to $7,500 for each intentional violation, or $2,500 per violation without the requisite intent. Consumers themselves can also collect between $100 and $750 for each violation under the private right of action established in the CCPA.
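To illustrate how quickly those statutory amounts can scale, the figures above can be plugged into a bit of arithmetic. This is a sketch only: the dollar amounts come from the text, but the violation counts are hypothetical, and how “violations” are actually counted is a legal question well beyond this exercise.

```python
# Illustrative arithmetic using the CCPA penalty figures cited above.
# Violation counts below are hypothetical examples, not real cases.

GOVT_INTENTIONAL = 7_500   # max penalty per intentional violation
GOVT_OTHER = 2_500         # max penalty per violation without the requisite intent
CONSUMER_MIN, CONSUMER_MAX = 100, 750  # statutory damages per violation

def max_government_penalty(intentional: int, unintentional: int) -> int:
    """Upper bound on government-levied penalties for the given violation counts."""
    return intentional * GOVT_INTENTIONAL + unintentional * GOVT_OTHER

def consumer_damages_range(violations: int) -> tuple:
    """Statutory damages range under the CCPA's private right of action."""
    return violations * CONSUMER_MIN, violations * CONSUMER_MAX

# A hypothetical incident: 10 intentional and 1,000 unintentional violations.
print(max_government_penalty(10, 1_000))   # 2575000
print(consumer_damages_range(1_000))       # (100000, 750000)
```

Even a modest number of affected consumers, each counted as a separate violation, pushes the exposure into seven figures, which is why compliance planning matters.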
While the law is on the books in California, its impact is not limited to companies based in California. The CCPA directly applies to many out-of-state companies that do business in California. A company must comply with the CCPA if it meets at least one of three requirements: (1) it has annual gross revenue of $25 million or more; (2) it buys, sells, or shares the personal data of 50,000 or more Californians; or (3) it derives 50% or more of its revenue from selling personal data.
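The three-prong threshold above can be expressed as a simple check. The function below is a sketch only; the statutory definitions of revenue, “sale,” and record counting are considerably more nuanced than these simplified parameters suggest.

```python
def ccpa_applies(gross_revenue: float,
                 ca_consumer_records: int,
                 revenue_share_from_selling_data: float) -> bool:
    """Return True if a business meets at least one CCPA threshold.

    Simplified stand-ins for the statutory tests described above:
    - gross_revenue: annual gross revenue in dollars
    - ca_consumer_records: Californians whose personal data is bought, sold, or shared
    - revenue_share_from_selling_data: fraction of revenue from selling personal data
    """
    return (gross_revenue >= 25_000_000
            or ca_consumer_records >= 50_000
            or revenue_share_from_selling_data >= 0.5)

# An out-of-state retailer with modest revenue but a large California audience:
print(ccpa_applies(5_000_000, 80_000, 0.0))  # True (meets the 50,000-record prong)
```

Note that the prongs are disjunctive: satisfying any single one is enough, which is how a small company with a large California user base can fall within the law.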
The CCPA applies a broad definition of “personal data,” covering any information that identifies, relates to, describes, or could reasonably be linked, directly or indirectly, with a particular consumer or household. This includes data such as IP addresses, browsing history, records of purchases, biometrics, geolocation, and employment- or education-related information. As a result, many out-of-state companies may be subject to the CCPA because they buy, sell, or share the personal data of over 50,000 residents of California.
Even if a company is not currently subject to the CCPA, it is anticipated that other states may follow California in enacting similar legislation. The cost of compliance could be substantial depending on the size of the company and how much consumer data it possesses. Working toward compliance before a company’s home state enacts similar legislation could streamline and potentially reduce the costs of compliance.
Special thanks to Martha Medina for her contributions to this post.
On October 22, 2019, the Federal Trade Commission (“FTC”) settled its case against a Florida company, Retina-X Studios, LLC and its owner, James N. Johns, Jr. (“Johns”). The company sold “stalkerware” that allowed people to tap into others’ phones and track their calls, texts, photos, physical movements, and browser history.
According to the FTC’s complaint, Retina-X failed to ensure that its three applications (“apps”) were being properly used by those who purchased them. The three apps – MobileSpy, PhoneSheriff, and TeenShield – were all marketed as apps that would allow the “purchaser to monitor, often surreptitiously, another person’s activities on that person’s mobile device or computer.” For example, TeenShield was marketed as an app that would help parents monitor their children’s activities.
The apps allowed the purchaser to delete the apps’ icons from the phone’s home screen, allowing them to run in the background and preventing the phone’s owner from knowing that his or her movements were being monitored. Additionally, installing the app software often required the purchaser to “jailbreak” or “root” the phone – an action that circumvents the operating system’s security features and would likely invalidate the manufacturer’s warranty. Once an app was installed, a purchaser could remotely monitor the owner’s phone activity without having physical access to the device.
All three apps claimed to keep their users’ private information confidential. In reality, however, Retina-X failed to secure users’ personal information and exposed it to disclosure and improper use. In fact, in 2017 and in 2018, hackers were able to access unencrypted credentials on the TeenShield and PhoneSheriff apps. The hackers collected photos and other sensitive consumer data, including passwords, text messages, and GPS locations. According to the FTC, Retina-X’s failure to properly secure this information while claiming to protect users’ personal information constituted an unfair or deceptive act in violation of the FTC Act, as well as the Children’s Online Privacy Protection Rule.
Pursuant to the settlement agreement, Retina-X is now banned from selling monitoring products that require purchasers to bypass security protections on their devices. Retina-X and Johns must also require purchasers to state that they will only use the app to monitor a child or an employee, or another adult who has provided written consent. Additionally, the icon with the name of the app cannot be removed unless it is done by a parent or legal guardian who has installed the app on their minor child’s phone.
Retina-X and Johns will be required to destroy all data that has already been collected from their monitoring services. The settlement also required Retina-X and Johns to establish and maintain a comprehensive information security program that protects the information they collect and addresses the security issues identified in the FTC’s complaint.
Special thanks to Vincent Tennant for his contributions to this post.
The Electronic Frontier Foundation (“EFF”) and the American Civil Liberties Union (“ACLU”) have ended six years of litigation with the Los Angeles County Sheriff’s Department and the Los Angeles Police Department over the automated collection of license plate data. On October 3, 2019, the parties reached a settlement under which the EFF and ACLU will receive a limited amount of the de-identified data for the purposes of reviewing how the data could be used by the government and educating the public.
Throughout the city and county of Los Angeles, automated license plate reader (ALPR) systems have been implemented with the capacity to collect the images of up to 1,800 license plates per minute. California’s ALPR systems include fixed cameras as well as cameras mounted on police vehicles. The cameras scan every license plate that crosses their field of view.
Prior to the settlement, the EFF and ACLU prevailed at the California Supreme Court, which ruled that ALPR data are not “records of law enforcement investigations” and are therefore not exempt from disclosure requests under the California Public Records Act.
Throughout the litigation, the ACLU and EFF had requested one week’s worth of de-identified data from the ALPR system “so that the legal and policy implications of the government’s use of ALPRs to collect vast amounts of information on almost exclusively law-abiding [citizens of Los Angeles] may be fully and fairly debated.” The EFF reports that it will receive exactly the requested amount in the settlement.
Government agencies are not covered entities under the California Consumer Privacy Act (“CCPA”), which comes into effect on January 1, 2020; they remain subject to the privacy and transparency regimes currently in effect. The EFF hails its victory at the California Supreme Court and the subsequent settlement as an important precedent for future challenges to broad-based data collection and surveillance by government agencies, even as the CCPA begins imposing privacy regulations on private actors.
Special thanks to James Ingram for his contributions to this post.
Is the use of automated “data-scraping” bots to collect information from public LinkedIn profiles fair game under the Computer Fraud and Abuse Act (CFAA)? According to the Ninth Circuit’s recent ruling in hiQ Labs, Inc. v. LinkedIn Corporation, No. 17-16783, 2019 WL 4251889 (9th Cir. Sept. 9, 2019), the answer is likely “yes.”
In hiQ Labs, LinkedIn sent data analytics company hiQ a cease-and-desist letter demanding that hiQ stop scraping data from LinkedIn users’ public profiles and asserting that continuation of the practice would constitute a violation of the CFAA. hiQ, in turn, sought a preliminary injunction to enjoin LinkedIn from invoking the CFAA against it.
The CFAA, codified at 18 U.S.C. § 1030, prohibits the intentional accessing of a protected computer “without authorization” in order to obtain information from it. The Ninth Circuit considered the meaning of the phrase “without authorization” and determined that its use in the statute is meant to protect against the digital equivalent of “breaking and entering.” As such, simply collecting publicly available data from a website like LinkedIn does not give rise to a CFAA violation. The court rather indicated that the CFAA is violated only “when a person circumvents a computer’s generally applicable rules regarding access permissions, such as username and password requirements, to gain access to a computer.”
Applying this framework, the court found that there is a serious question as to whether hiQ’s data-scraping practices violate the CFAA, and affirmed the preliminary injunction entered in hiQ’s favor. It noted that LinkedIn does not claim to own the information that its users share on their public profiles and that such information is available without a username or password to anyone with access to a web browser. The court also rejected LinkedIn’s argument that an injunction would threaten the privacy of its members, finding “little evidence that LinkedIn users who choose to make their profiles public actually maintain an expectation of privacy with respect to the information that they post publicly . . .”
The court’s decision at this stage of litigation is certainly encouraging for hiQ and others engaged in similar data collection practices. The NP Privacy Partner team will continue to monitor developments in this case, but in the meantime: (i) companies seeking to protect user data should ensure that protective measures, such as required usernames and passwords, are in place to create a clear barrier between public data and that which is accessed without authorization, and (ii) LinkedIn users should be aware that information posted to their public profiles may very well end up in the hands of third-party data collectors.
On September 16, 2019, we reported on a number of bills passed by the California Legislature in the final days of the session, amending the California Consumer Privacy Act. On October 13, 2019, Governor Gavin Newsom signed those bills into law. To recap briefly, they are:
AB 25: Exempts from the scope of the Act information collected in an employment context, i.e., information collected in a job application, or from employees, directors, business owners, medical staff, or contractors. However, the private right of action for negligently allowing the disclosure of such information in Civil Code 1798.150 still applies.
AB 874: Simplifies the definition of “publicly available information,” which does not count as “personal information” under the Act. Eliminates the restriction that information obtained from a public source is only exempt from the definition of personal information if it is used for the same purpose that it was gathered by the public entity.
AB 1146: Exempts information maintained or exchanged between an auto dealer and a manufacturer for warranty or recall purposes from certain obligations under the Act. Such information cannot be the subject of a request to delete, and sharing of the information between a dealer and manufacturer does not trigger an obligation to disclose it as a “sale” of such information.
AB 1202: Adds new sections Civil Code 1798.99.80-82. Requires all data brokers to register with the attorney general. A data broker is any business that knowingly collects and sells (broadly defined) personal information regarding persons with which it has no direct relationship.
AB 1355: Exempts deidentified and aggregate information from the definition of “consumer information” in the Act; also clarifies the interrelationship of the Act and the Fair Credit Reporting Act.
AB 1564: Streamlines the methods businesses must make available to consumers to make requests to disclose their personal information. A business that operates exclusively online and has a relationship with the consumer is only required to make a single online method available for such requests. However, a business that maintains a website must include the website as one of the methods to receive such requests.
In addition, the governor signed AB 1130, which amends the state’s data breach notification law. It revises the definition of personal information for breach notification purposes to add specified unique biometric data, tax identification numbers, passport numbers, military identification numbers, and unique identification numbers issued on a government document, in addition to the existing categories, which already include driver’s licenses and California identification cards. Upon a breach of biometric data, the breach notice now must include instructions on how the consumer can notify entities that may be relying on such data for identification purposes to let them know that it is no longer secure.
A California Court of Appeal recently affirmed a lower court ruling in favor of Williams-Sonoma in a case under the Song-Beverly Credit Card Act of 1971 (the “Act”) challenging the store’s practice of soliciting consumer personal information at checkout. Williams-Sonoma Song-Beverly Act Cases, 2019 DJDAR 9435 (Ct. App., 1st Dist., September 30, 2019).
The Act makes it illegal, in a credit card transaction, to “request, or require as a condition to accepting the credit card as payment …, the cardholder to provide personal information which the [merchant] causes to be written, or otherwise records, upon the credit card transaction form or otherwise.” Civil Code § 1747.08(a)(2). Plaintiffs brought a class action alleging that Williams-Sonoma violated the Act by asking customers for their zip code and other personal information in the middle of processing their credit card transactions at checkout.
Williams-Sonoma countered that store employees’ practice of asking for the information at checkout was not uniform, that providing the information was voluntary, and that signs prominently posted at checkout advised customers that they did not have to provide the information as a condition of making a purchase.
Following a long line of cases under the Act, the court affirmed the lower court’s determination that the applicable standard was whether a reasonable person would believe he or she was compelled to provide the information as a condition to completing the transaction based on all the circumstances. It declined to adopt plaintiffs’ proffered rule that asking for the information in the middle of processing the transaction was a per se violation. The court also affirmed the lower court’s order decertifying the class, based on plaintiffs’ failure to establish that the circumstances at checkout were sufficiently uniform so as to constitute a common issue.
A California merchant asking for personal information at checkout for marketing purposes may want to review the policies and procedures Williams-Sonoma put in place, as described in the opinion, including employee training, which allowed the company to prevail in this case.
Special thanks to Tevin Hopkins for his contributions to this post.
Over the past several months, the 2020 Census has been a growing concern for many—from the Trump administration’s efforts to include a citizenship question to concerns that the process of counting every single person living in the country may not receive the proper funding it needs. However, there is another issue that should be just as alarming. Recently, the U.S. Census Bureau conducted an experiment with previously acquired census data to determine if the information people provide to the Bureau could threaten their privacy. The agency used this information, along with other publicly available records, and found that it was able to infer the identities of 52 million Americans. To combat this privacy issue, the Bureau is going to use a technique called “differential privacy,” which changes certain numbers in the statistics to protect identities but retains the survey’s primary findings. How effective this strategy will be remains to be seen. If the published Census results are altered too heavily, it could create problems with redistricting and dilute minority voting power, possibly violating the Voting Rights Act.
To most people, however, the primary concern will be their own identity and who will be able to access it through the public information released by the 2020 Census. With people putting more and more of their information on the web via social media and various other online accounts, it only gets easier for cyber predators to combine all this information, learn people’s identities and other personal details, and use them to their detriment.
While bypassing the 2020 Census may not be an option, there are a few simple steps you can take to protect your identity, most of which involve your online profile: keep your online accounts to a minimum; sign up only for accounts that you will actually use and that benefit you; never provide information solicited via a suspicious email or website; and keep close track of the online accounts that use or save your credit card information.