On October 7, 2019, New York Governor Andrew M. Cuomo signed into law a bill that prohibits New York ambulance service providers and advanced life support first response service providers from selling, disclosing, transferring, or otherwise using identifiable patient information for marketing purposes. “Marketing” is defined as advertising, promotion, or any other activity that is intended to influence business sales or market share, including evaluating the effectiveness of marketing personnel or practices.
Although the legislation limits marketing-related uses and disclosures, it continues to permit ambulance providers and other first responders to share identifiable patient data with the patient and those authorized to make health care decisions for the patient, with health care providers treating the patient, and with the patient’s insurer, as well as with third parties that have a legal right to the information, such as those authorized by a court order, a government entity, or law enforcement personnel. With patient consent, identifiable information can also be used for training, promotion, or staff recognition and recruitment.
All types of entities—for-profit, nonprofit, and governmental—are subject to these data restrictions, although nonprofit and governmental entities may use a patient’s name and address to solicit donations.
The legislation takes effect 180 days from its October 7, 2019, enactment date.
On Saturday, January 27, 2018, the New York Times published an exposé of Devumi, “an obscure American company” that “has collected millions of dollars in a shadowy global market for social media fraud.” According to the Times, Devumi has “an estimated stock of at least 3.5 million automated accounts, each sold many times over.” Here’s the big problem: 55,000 of those accounts “use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors.”
Within hours of the article’s publication, New York State Attorney General Eric T. Schneiderman opened an investigation into Devumi. He tweeted: “Impersonation and deception are illegal under New York law. We’re opening an investigation into Devumi and its apparent sale of bots using stolen identities.”
The Times article specifically named a number of prominent actors, models, athletes, reality television stars, journalists, politicians, and — yes — business executives who were customers of Devumi. It does not appear that any of Devumi’s customers are being investigated by the Attorney General at this time, but it is not hard to imagine that criminal and, more likely, civil claims may be forthcoming against companies that knowingly purchase social media followers that are impersonations of real people. Beyond legal liability, purchasers of social media followers risk deactivation of their social media accounts. Twitter’s Rules, for example, expressly prohibit the purchase of followers, retweets, and likes. At the very least, being exposed as a purchaser of social media followers is a public embarrassment that most companies would like to avoid.
How can your business protect itself going forward? Most of the Devumi customers who commented in response to the Times article claimed they had no knowledge that followers had been purchased, saying the transactions were authorized by a rogue employee, family member, or friend. So, one thing your company can do right now to protect itself is adopt written policies prohibiting employees and affiliates from buying social media followers, likes, retweets/reposts, comments, or anything else that artificially inflates your social media presence.
If your organization outsources its marketing and public relations functions, then your contracts should expressly prohibit third parties from purchasing social media followers on your behalf. Those contracts should also include an indemnification provision that requires your marketing or public relations firm to defend and indemnify you against any investigation or claim related to the unauthorized purchase of social media followers.
The Federal Communications Commission ("FCC") has released details on the commission's revised internet provider privacy rules, reworked from its initial set of rules released earlier this year (which NP Privacy Partner reported on here).
Under the proposed rules, internet service providers (or "ISPs") would have to obtain affirmative, opt-in consent from their users to target advertisements based on certain sensitive information gleaned from their web browsing or application-use history. As currently drafted, "sensitive data" would include information on a person's web browsing history, health information, financial data, the substance of communications sent over the internet, data regarding a person's location, and data related to children or collected from using applications. Other data such as an individual's name, home address, and internet protocol addresses would be subject to opt-out consent only.
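The opt-in/opt-out split at the heart of the proposal amounts to a simple gating rule, sketched below. The category names and function are illustrative shorthand, not the FCC's actual taxonomy:

```python
# Sketch of the consent gate the proposed rules would require of an ISP.
# Category labels here are hypothetical, chosen to mirror the rule's examples.
SENSITIVE = {"browsing_history", "health", "financial",
             "message_content", "location", "child_data", "app_usage"}

def may_target_ads(category: str, opted_in: bool = False,
                   opted_out: bool = False) -> bool:
    """Sensitive categories require affirmative opt-in consent;
    other data may be used unless the user has opted out."""
    if category in SENSITIVE:
        return opted_in
    return not opted_out
```

Under this scheme, browsing history sits behind an opt-in wall, while a name or home address can be used until the subscriber objects.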
The FCC's internet provider rules would apply only to the providers of the internet infrastructure itself, as they fall within the ambit of the FCC's authority over telecommunications carriers. Companies that host services or applications on the web would not fall within the purview of the rules, as their conduct is regulated by the Federal Trade Commission ("FTC").
Unsurprisingly, the proposed rules have been met with much criticism from ISPs, who increasingly rely on advertising as a steady revenue stream, from other large powerhouse tech companies looking to move into the ISP space, and from advertising industry groups. Opponents of the rules claim that the categorization of certain data as "sensitive" is far too stringent and will require an excessive number of opt-in consumer consents. The Association of National Advertisers went so far as to assert that the rules, if approved, will create "severe negative impacts for the on-line and mobile experience, resulting in harm to consumers and threatening the financial underpinnings of the Internet ecosystem." The FTC, on the other hand, has endorsed the FCC's action in the name of consumer privacy.
The rules will be considered at an open meeting on October 27th, and the rules can be modified during the FCC's consideration period. NP Privacy Partner will provide updates on the impending regulations as developments arise.
On January 5, 2016, the Federal Trade Commission (“FTC”) announced that it has reached a settlement of administrative charges filed against Henry Schein Practice Solutions, Inc. (“Schein”), a leading provider of office management software for dental practices. The settlement relates to allegations that Schein falsely advertised the level of encryption that it provided to protect patient data.
The FTC alleged that Schein marketed Dentrix G5 software to dental practices nationally, claiming that it provided industry-standard encryption of sensitive patient information and met the requirements of the Health Insurance Portability and Accountability Act (HIPAA). The FTC contended that, as early as November 2010, Schein was aware that Dentrix G5 used a form of data protection less secure and more vulnerable than widely used, industry-standard encryption algorithms such as the Advanced Encryption Standard (“AES”). Schein allegedly knew that its software did not meet the National Institute of Standards and Technology’s (“NIST”) recommended standard to achieve HIPAA compliance. The FTC charged that Schein’s marketing improperly touted the software’s encryption capabilities for protecting patient information and meeting data protection regulations.
Under the terms of the proposed consent order, Schein will pay $250,000 to the FTC. Schein will also be required to notify consumers that the FTC claimed that the software provider deceptively advertised, from early 2012 to January 2014, that Dentrix G5 encrypts patient data and helps dentists meet HIPAA’s security requirements. In an agreed-upon form of written notice, Schein acknowledged that its “software uses a less complex method that doesn’t meet the AES encryption standard recommended by HHS and NIST,” such that dental practices relying on Dentrix G5 software alone would not qualify for the safe harbor under HHS’s Breach Notification Rule. Since January 2014, Schein’s marketing materials have stated more accurately that its software “masks” data, but does not encrypt it.
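The distinction the FTC drew, between reversible “masking” and keyed encryption, can be shown with a toy sketch. This is not Dentrix’s actual algorithm (which was never publicly detailed); the mask here is a fixed character substitution, and the keyed encryption is a stdlib one-time-pad XOR standing in for AES:

```python
import secrets

DIGITS = "0123456789"
SHIFTED = "5678901234"  # fixed substitution table: a "mask" with no secret key

def mask(ssn: str) -> str:
    """Data masking: obscures digits, but anyone who knows the scheme
    can reverse it. No key is required."""
    return ssn.translate(str.maketrans(DIGITS, SHIFTED))

def unmask(masked: str) -> str:
    """The mask is trivially invertible by applying the table backwards."""
    return masked.translate(str.maketrans(SHIFTED, DIGITS))

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Keyed encryption (one-time pad): without the random key,
    the ciphertext reveals nothing about the plaintext."""
    return bytes(b ^ k for b, k in zip(data, key))

ssn = "123-45-6789"
masked = mask(ssn)                    # recoverable by anyone who sees the scheme
key = secrets.token_bytes(len(ssn))   # secret key held by the data owner
ciphertext = xor_crypt(ssn.encode(), key)  # useless without `key`
```

The practical difference is exactly the one HHS’s Breach Notification Rule turns on: masked data that leaks is still a reportable breach, while properly encrypted data may qualify for the safe harbor.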
The FTC has issued the consent order for public comment through February 4. When the FTC issues a consent order on a final basis, it carries the force of law with respect to future actions for twenty years. This regulatory action is significant because it shows the expansive reach of the FTC’s oversight authority in consumer protection. As the action demonstrates, the FTC will scrutinize not only the use and protection of data, but also the marketing and public depiction of data protection services and products.
A Singapore-based marketing firm, Adnear, with offices all over the world, has been employing drones in the San Fernando Valley in Los Angeles to collect users’ wireless data—yes, they are watching us from above. However, this drone data collection does not involve the collection of personally identifiable information or the contents of our conversations. Instead, the drones use signal strength, cell tower triangulation, and other geolocation indicators to determine where a mobile device is located. This information is then used to map users’ travel patterns. The example Adnear provides is the following:
Say someone is walking near a coffee shop with their mobile device. The coffee shop may want to offer an advertisement or a discount coupon to users who walk by but do not actually enter, and it may also want to offer specials to frequent customers when they are near the shop.
While Adnear is currently using drones to test the ability to collect location-mapping data, it is not yet using the information to send advertisements to users. But Adnear does have over 530 million user profiles from its Asian market. Could this massive database expand to the U.S.?
Before using drones, Adnear collected this information from mobile signals on bikes, in cars, on trains, and sometimes even on stairs. But with drones, Adnear can offer better coverage than these “obsolete” ground-based data collection methods. All Adnear needs is for a mobile device user to have an app open that is transmitting cellular or Wi-Fi signals—the device does not need to be sending geolocation coordinates. While much of this information is “anonymous,” each user is identified by a code. Adnear assures us that no name, telephone number, router ID, or other personally identifiable information is collected, and no photos or videos are taken either.
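The kind of geolocation described, inferring position from signal strength and tower triangulation, can be sketched with a standard log-distance path-loss model plus trilateration. The calibration constants below (a −59 dBm reference reading at 1 meter, path-loss exponent 2) are illustrative assumptions, not Adnear’s actual parameters:

```python
import math

def rssi_to_distance(rssi_dbm: float, ref_dbm: float = -59.0,
                     path_loss_exp: float = 2.0) -> float:
    """Log-distance path-loss model: estimate meters from signal strength.
    ref_dbm is the expected reading at 1 m (a calibration assumption)."""
    return 10 ** ((ref_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Locate a device from three known (x, y) anchor positions (towers or
    drones) and the estimated distance to each, by subtracting the first
    circle equation from the other two to get a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    # 2(xi - x1)x + 2(yi - y1)y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero when anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With three receivers and clean measurements the system is exact; in practice, noisy RSSI readings make the distance estimates fuzzy, which is why aggregated travel patterns, rather than pinpoint locations, are the commercial product.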
While Adnear may not be interested in collecting photos, videos or identifiable information, without proper regulations of this evolving technology, we can’t be sure that others won’t start collecting and using such information from the skies above.
Marketing company Main Street Power Mail, Inc. (Main Street) has agreed to pay Vermont Attorney General William H. Sorrell $90,000 for sending direct mailings to Vermonters, “many of them elderly,” asking for personal information “without explaining how the information would be used.” The AG alleged that the activity violated both state and federal law.
Main Street sent over 30,000 direct mailings to Vermonters between March 2012 and March 2013 to generate leads for insurance agents. The correspondence requested the ages of the consumer and the consumer’s spouse so insurance agents could market life insurance products to the consumers. Over 980 consumers responded to the mailing, which was headed “NEW BENEFIT UPDATE…FOR VERMONT CITIZENS ONLY” and stated that recipients could “now apply for a NEW state-regulated life insurance program to pay Final Expenses for just pennies a day…Return this card today and you will receive the latest information…”
The AG alleged that the mailing implied that the consumers would receive information, when in fact, its “real purpose was to persuade consumers to respond with personal data for use in creating leads for insurance agents.”
The settlement requires Main Street to “comply strictly with all provisions of Vermont and federal law;” and “refrain from contacting any Vermont consumer, by mail or other means, for the purpose of generating business leads without clearly and conspicuously disclosing the fact that if the consumer responds to the contact, he or she may be solicited to purchase a described product or service.”
2014 was a banner year for class action litigation involving data privacy and security, and there is every indication that 2015 will bring an explosion of additional class action litigation. The hottest areas, we predict, will be data breaches and claims around compliance with the Telephone Consumer Protection Act (TCPA).
Even so, as the litigation over the big data breaches of 2013 and 2014 winds through the legal system, the case law around class action litigation in the data breach arena should continue to be developed and, hopefully, harmonized. This includes the Target class action breach litigation and litigation against other retailers that suffered data breaches in the last 18 months. The litigation we will be watching closely includes efforts by retailers and other companies to dismiss class action litigation involving credit and debit cards for lack of standing, as well as litigation pursued by banks against retailers for reimbursement of costs associated with reissuing credit and debit cards. 2015 should be an interesting year for precedent-setting decisions.
The TCPA is a lucrative business for plaintiffs’ attorneys, with statutory damages of $500 per violation and potential treble damages for willful or knowing violations. Fines and penalties can also be assessed at up to $16,000 per violation. TCPA class action litigation was therefore very active in 2014, as sophisticated plaintiffs pounced on unwary companies for violations of the TCPA, and multiple class actions resulted in multimillion-dollar settlements. We predict that this activity will continue and increase as businesses try to navigate the new rules around the TCPA and come into compliance, particularly with the texting and robodialing requirements. This is also an area that health care providers need to watch closely.
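The economics that make the TCPA so attractive to plaintiffs’ attorneys are simple arithmetic: $500 in statutory damages per violating call or text, trebled for willful or knowing violations, multiplied across a class. A back-of-envelope sketch:

```python
STATUTORY_DAMAGES = 500  # dollars per violating call or text under the TCPA
TREBLE = 3               # multiplier available for willful or knowing violations

def tcpa_exposure(violations: int, willful: bool = False) -> int:
    """Potential statutory exposure, in dollars, for a calling campaign."""
    return violations * STATUTORY_DAMAGES * (TREBLE if willful else 1)

# A hypothetical 100,000-call autodialed campaign made without prior
# express written consent:
base = tcpa_exposure(100_000)                  # $50,000,000
worst_case = tcpa_exposure(100_000, willful=True)  # $150,000,000
```

At that scale, even a modest campaign produces eight- or nine-figure theoretical exposure, which explains both the size of the 2014 settlements and the incentive for plaintiffs’ counsel to find more defendants.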
The TCPA continues to be a minefield for class action litigation. Last week, Education Corporation of America (ECA) and Virginia College, LLC were sued in a proposed class action lawsuit alleging that they and marketing company One on One Marketing LLC used an auto-dialer to make calls on behalf of Virginia College, a for-profit college operated by ECA. The named plaintiff alleges that the calls violated the TCPA because he did not give express written consent for calls to be made to his cell phone. The suit seeks to gather a class of consumers who have received unwanted calls from ECA and Virginia College on their cell phones in the past four years, and it seeks statutory and punitive damages and injunctive relief. The defendants claim that the case has no merit.
On December 15, 2014, plaintiff Paul Story filed a proposed class action against the San Diego Chargers (the Chargers) in California state court for the Chargers’ alleged violations of the Telephone Consumer Protection Act (TCPA). Story’s complaint alleges that the Chargers made unsolicited telemarketing calls related to ticket sales. Story alleges that the first telephone call to his mobile telephone number came back in October 2013 from a telephone number associated with the Chargers’ website, and that he thereafter received several other calls, which he claims could only have been dialed by an autodialer. Story says that he never gave express written consent to the Chargers; under the TCPA, a caller must obtain prior express written consent from the consumer before making prerecorded or autodialed telemarketing calls. Story’s complaint says, “Defendants placed thousands of similar calls, all for advertising and telemarketing purposes, to the cellular telephone numbers of members of the general public.” While Story’s complaint does not specify the estimated size of the class, he boldly claims that autodialed calls were made to “well over the 40 individuals required for numerosity purposes.” We will watch the California court’s decision regarding class certification as the litigation proceeds. This would not be the first professional sports franchise accused of violating the TCPA.
Apple Stores, Macy’s, Kohl’s, and American Eagle are just some of the big-name retailers using in-store beacon technology for marketing purposes. Industry insider Erik McMillan, CEO of beacon technology firm Shelfbucks, estimates that the number of in-store beacons will jump from around 50,000 today to between five million and 10 million by the end of next year, according to the Associated Press.
Beacon technology allows retailers to send individually tailored offers, discounts, product information, and other promotions to customers’ phones via apps downloaded by the customer. The technology tracks customer habits and locations to determine the offers available to each customer. The AP explains how beacon technology works: “As you enter a store, your smartphone might light up with a sale alert. Stand in the dress section for a while and a coupon might pop up for something on a nearby hanger.”
Ultimately, consumers will determine the success or failure of in-store beacon technology through the use of their smartphones and wallets. However, recent data hints at a promising future for the technology: Swirl, a marketing company, reported that 30% of customers who received advertisements via an in-store beacon app used the offer to make a purchase. To continue its growth, in-store beacon technology providers will need to create value for customers and retailers while managing the privacy rights and expectations of both parties. Customers interested in using beacon technology to enhance their shopping experience should research the applications’ privacy policies and how the companies and stores will use personal information collected by the apps. Businesses need to understand the privacy issues raised by beacon technology and take steps to ensure compliance with applicable legal requirements before implementing in-store beacon plans.