Special thanks to Vincent Tennant for his contributions to this post.
Artificial intelligence software mimicked the voice of a CEO so convincingly that it fooled an executive into transferring $243,000 to the criminals’ bank account. The victim, the CEO of a U.K.-based energy firm, believed he was speaking to the CEO of his firm’s parent company, the Wall Street Journal reports. He told police that he recognized the “slight German accent and the melody” of his boss’s voice.
The funds were quickly swept from a Hungarian bank account to Mexico and other locations. The criminals attempted second and third transfer requests, which raised the executive’s suspicions and were not completed. No suspects have been identified.
So-called “deepfakes” are created using advanced machine learning or artificial intelligence software with the goal of fooling the human senses. While deepfakes are not new, this is reported as the first cybercrime in which criminals clearly used artificial intelligence to execute the scheme.
In February 2019, a nonprofit research organization dedicated to “safe artificial intelligence” declined to release a text-generation artificial intelligence model because it performed too well at creating deepfake news stories. In July 2019, Virginia became the first state to impose criminal penalties for nonconsensual sexual imagery created using technology like a “deepfake.”
This incident, although currently unusual, highlights the need to rework internal safeguards and policies as technology evolves and maintain a response plan for when a breach occurs. Recognizing someone’s voice may no longer be sufficient to adequately verify identity for a business transaction.
Special thanks to Christian Albano for his contributions to this post.
Earlier this year, Mark Zuckerberg announced in a written note on Facebook’s website that the company would be shifting its platform’s focus toward a privacy-focused messaging and social networking service. As a part of this shift, Facebook is working to implement end-to-end encryption into its messaging platforms, which includes Facebook Messenger and Instagram Direct. Recently, however, law enforcement officials in the U.S. and U.K. governments have urged Facebook against putting end-to-end encryption into effect, arguing that it will interfere with their ability to investigate criminal activities.
End-to-end encryption prevents third parties from accessing messages sent between sender and recipient through online messaging services by storing the encryption key only on the participants’ devices. Under this system, even Facebook itself could not access the content of conversations that take place on its own messaging platforms because these conversations would not be stored on its servers.
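The mechanics described above can be illustrated with a toy sketch. The code below is for illustration only and is not real cryptography (production messengers use protocols such as Signal’s, built on X25519 and authenticated ciphers); the parameters, the `Device` class, and the XOR keystream are all simplified assumptions. The point it demonstrates is the one in the paragraph above: each device derives the conversation key from a secret that never leaves it, so a relay server that sees only public values and ciphertext cannot recover the messages.

```python
import hashlib

# Toy Diffie-Hellman parameters for illustration only -- NOT secure.
P = 4294967291  # a small prime; real systems use far larger groups or elliptic curves
G = 5


class Device:
    """Models a participant's phone: the secret never leaves the device."""

    def __init__(self, secret):
        self._secret = secret                 # stored only on this device
        self.public = pow(G, secret, P)       # safe to relay through the server

    def derive_key(self, peer_public):
        # Both devices compute G^(a*b) mod P; the server never can,
        # because it sees only the public values, not either secret.
        shared = pow(peer_public, self._secret, P)
        return hashlib.sha256(str(shared).encode()).digest()


def xor_stream(key, data):
    """Toy keystream cipher: expand the key with SHA-256 and XOR (illustrative only)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))


# Two participants; a relay server would carry only .public values and ciphertext.
alice, bob = Device(secret=123456789), Device(secret=987654321)

key_a = alice.derive_key(bob.public)
key_b = bob.derive_key(alice.public)
assert key_a == key_b  # both devices derive the same key without ever sending it

ciphertext = xor_stream(key_a, b"meet at noon")   # all the server ever stores
plaintext = xor_stream(key_b, ciphertext)         # only the recipient's device recovers this
assert plaintext == b"meet at noon"
```

Because the conversation key exists only on the two devices, a subpoena served on the relay operator can yield nothing more than the ciphertext, which is the dynamic the following paragraphs describe.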
Generally, when an encryption key is stored on a messaging service provider’s own servers, law enforcement officials can subpoena the provider to access the messages. However, with end-to-end encryption in place, companies like Facebook cannot provide law enforcement with access to the encrypted messages. This system is particularly challenging for both the U.S. and U.K., as these countries recently signed a data sharing agreement, the CLOUD Act Agreement, which aims to significantly decrease the amount of time needed to investigate a criminal’s online activities. Under this Agreement, both the U.S. and U.K., with proper authorization, can demand electronic data regarding serious crimes directly from technology companies based in either country. With Facebook making the transition to end-to-end encryption, however, law enforcement agencies in both countries will be unable to access encrypted messages.
In his announcement about the company’s privacy-focused vision, Zuckerberg addressed the inherent challenges end-to-end encryption would cause law enforcement. He noted, however, that Facebook is working to improve its ability to identify and stop bad actors by examining data relating to areas other than the messages’ substance, such as patterns of activity. Zuckerberg also pointed to the importance of private online messaging for those who live under oppressive regimes to freely express themselves.
The National Institute of Standards and Technology (NIST), working in collaboration with private and public stakeholders, has issued a preliminary draft of its voluntary NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management (Privacy Framework). This document strives to drive better privacy engineering and aid organizations in the protection of individuals’ privacy. Among its goals, the Privacy Framework seeks to build customer trust through product and service design or deployment that optimizes beneficial uses of data. It also seeks to build organizational communication channels about privacy practices with customers, assessors, and regulators. NIST provides the Privacy Framework to assist organizations by building “better privacy foundations by bringing privacy risk into parity with their broader enterprise risk portfolio.”
The Privacy Framework applies to organizations of all sizes and is “agnostic to any particular technology, sector, law, or jurisdiction.” Through its recommended protocols, diverse sectors of an organization’s workforce—executives, legal, and IT—will be responsible for different outcomes and activities. Cross-organization collaboration is essential to the identification of privacy protections and cybersecurity risks. The Privacy Framework focuses on all organizations and entities regardless of their role in “the data processing ecosystem—the complex and interconnected relationships among entities involved in creating or deploying systems, products, or services.”
The Privacy Framework is composed of three parts: Core, Profiles, and Implementation Tiers, each of which reinforces privacy risk management through the connection between business/mission drivers and privacy protection activities. The Core delineates best practices for communicating prioritized privacy protection activities and outcomes across all sectors of an organization, from the C-suite to the implementation and operation levels. The Profiles direct organizations to identify the business and mission drivers in their data processing and privacy protections. Profiles can enable continual privacy enhancement by evolving current practices into targeted best practices. The Implementation Tiers provide a point of reference on how an organization views privacy risks and how it approaches agile management of such risks.
All organizations should take the time to read and evaluate the recommendations of the Privacy Framework. NIST will accept public comments on the preliminary draft through October 24.
Special thanks to Courtney Way (Summer Associate) for her contributions to this post.
When we imagine cyberattacks, we often picture hackers breaking into websites and stealing credit card or social security information. We think of companies full of financial or personal information falling victim to these attacks. What we don’t often think of is a construction company’s information being held hostage, its checks for services being redirected to unknown accounts, or construction equipment being hijacked. Unfortunately, it is exactly because we aren’t expecting these attacks that construction companies are exposed.
Hackers are learning that the construction industry is a vulnerable target. These companies constantly manage complex projects while handling data exchanges among many parties including partners, subcontractors, regulators, and suppliers. Daily communications between these parties occur over e-mail, providing hackers a perfect opportunity to strike.
Typically, hackers will use a fake e-mail account or even mirror a familiar account in order to ask the company to send funds to a “new” or “different” bank account. Since the communication appears to come from a person that the company deals with on a routine basis, the company assumes that the new bank account is legitimate. Yet, theft of funds is not the only type of cyberattack construction companies may face; hackers also use information to lock data or destroy or control hardware and equipment.
Given the sophistication of today’s cybercriminals, construction companies must recognize their risk as targets and begin implementing protective measures. The most important steps for companies to take include: (1) conducting security assessments or routine vulnerability scanning; (2) updating software, including advanced e-mail filtering; (3) enforcing password policies; (4) restricting approval rights and administration privileges; and (5) obtaining cyber liability insurance policies.
However, general liability policies typically do not cover harm suffered from a cyberattack. About a decade ago, policyholders fought, largely unsuccessfully, to have their general liability policies cover losses resulting from a data breach. Today, commercial general liability policies generally explicitly exclude electronic data from their definition of “property damage.”
Given the need for a policy that would cover the loss of data resulting from a cyberattack, insurance companies began offering separate cyber liability insurance policies. First-party cyber liability insurance typically covers the cost of network business interruptions, forensic investigation and restoration, legal fees, credit monitoring, and cyber threat extortion expenses. Third-party cyber liability insurance typically covers wrongful disclosure, content liability risks, and security or privacy breach regulatory proceedings.
Companies must be well educated and represented when obtaining cyber liability insurance. Unfortunately, many companies that offer these policies seek to limit their liability and, in turn, exclude many incidents. For example, one policy in 2017 attempted to exclude costs associated with a fraudulent funds transfer that occurred when employees initiated the transfer after receiving a forged e-mail from a hacker. In 2018, another policy attempted to limit its coverage by arguing that the losses incurred by a company were not directly caused by computer fraud, but rather were incidental. Now, insurers are attempting to invoke an “act of war” exception, arguing that large attacks from foreign hackers are in fact “acts of war” and therefore not covered by the policy.
Although companies are encouraged to obtain cyber liability insurance to combat the enormous expense that follows a cybersecurity breach, these policies are not a simple catch-all and are certainly no substitute for keeping employee training current, frequently updating software, and conducting regular security assessments.
While construction companies may not appear to be the most profitable targets for hackers, they are the perfect combination of numerous moving parts, people, and complex projects. Add to this their lax cybersecurity measures, and hackers have found an opportune target.
In order to combat the recent uptick in hackers attacking construction companies, we recommend that companies (1) train employees about cybersecurity; (2) frequently update software; (3) conduct regular security assessments; and (4) look into obtaining cyber liability insurance. A cyberattack could cost millions of dollars and your reputation. In a world where three out of four construction companies have reported a breach in the last year, cybersecurity is not to be taken lightly.
In March 2018, shortly after it had been revealed that Facebook had allowed Cambridge Analytica to collect data from millions of users without their knowledge, the Federal Trade Commission (“FTC”) announced that it planned to investigate Facebook’s data privacy practices. A year later, the social media giant is preparing for the FTC to impose a series of fines that could reach up to $5 billion, which would be the largest penalty the FTC has ever imposed on a technology company. Facebook had annual revenue of approximately $56 billion last year and, as such, many believe the upcoming penalty to be relatively lenient given the gravity of the charges levied against Facebook. This is especially true in light of the fact that Facebook breached a settlement that it had reached with the FTC seven years earlier. As part of the earlier settlement, Facebook was required to obtain permission from users before distributing data beyond the privacy settings set by each user.
Although relatively limited in its enforcement power with respect to consent decrees, the FTC has been able to leverage the support of the public in its investigation of Facebook. Indeed, lawmakers have been calling for increased scrutiny of tech companies, an area in which the United States is decidedly behind its European counterparts. Despite the record-setting fine set to be imposed, though, many lawmakers believe the penalty to amount to nothing more than a slap on the wrist given Facebook’s financial power. Many lawmakers and other political activists believe that regulators should impose reforms aimed at the ability of technology companies to share data with business partners from the outset, which would have more of a lasting impact on consumer privacy practices in the technology industry.
In March 2019, the Department of Health and Human Services (HHS) Office of Inspector General (OIG) released its summary report of penetration testing of certain HHS Operating Division networks. The purpose of the audits was to determine whether the Operating Divisions’ existing security controls were effective to prevent cyberattacks, the level of sophistication that an attacker would need to compromise the Divisions’ systems or data, and the Operating Divisions’ ability to detect and respond to cyberattacks.
The OIG conducted penetration testing at eight HHS Operating Divisions in fiscal years 2016 and 2017. Following this testing, the OIG concluded that the existing security controls at the audited HHS Operating Divisions needed to be improved to better detect and protect against cyberattacks. The OIG informed HHS of a number of vulnerabilities, including issues with access control, data input controls, configuration management and software patching.
Following the audits, the OIG provided HHS with four recommendations to implement across its operations to address the identified vulnerabilities. The OIG summary report noted that HHS management agreed with the OIG’s recommendations and that HHS and the eight Operating Divisions audited have or are working to implement the recommendations.
After the initial audit findings, the OIG summary report details how the OIG is working on new audits, reviewing for active threats on HHS networks, as well as past breaches by threat actors.
The OIG’s audits of the HHS Operating Divisions serve as a reminder to health care entities to review their own cybersecurity processes and controls and to take steps to address and mitigate any identified issues.
Copyright © 2019, American Health Lawyers Association, Washington, DC. Reprint permission granted.
On March 8, 2019, JAMA published a study analyzing the effects of simulated phishing emails at U.S. health care organizations. Concluding that the click rates for the simulated phishing emails present a significant cybersecurity risk for health care organizations, the study provides helpful insight into how to prepare an organization’s workforce to detect harmful emails.
Phishing emails are deceptive communications intended to trick recipients into disclosing their security credentials or otherwise sharing sensitive information. Oftentimes, a sender’s identity is spoofed, tricking the recipient into thinking that the email originated from within their organization or that it was sent by a colleague or superior. Hospitals and other health care organizations are attractive targets of cyberattacks, as they have high-value personal and health data.
The study analyzed six health care organizations across the United States that ran simulated phishing campaigns between August 1, 2011 and April 10, 2018. The phishing emails fell into three categories: office-related, personal, and information technology-related. The emails were sent to employees in all types of roles. In total, approximately 2.9 million simulated phishing emails were sent, and recipients clicked on approximately 422,000 of them (approximately 14%). This means that employees at the studied health care organizations clicked on an average of almost one in seven of the simulated phishing emails.
The study showed that the median click rates were higher for the information technology-related simulated phishing emails (18.6%) than the office-related emails (12.2%).
The study noted that repeated phishing simulations decreased the odds of an individual clicking on a simulated phishing email, which highlights the importance of the phishing simulation process and other forms of personnel training on these types of attacks.
As hospitals and other health care organizations face financial and care-related consequences from cyberattacks, this study emphasizes the need for health care organizations to train their workforces on cybersecurity best practices, including through simulated phishing emails. As the study noted, it only takes one successful phishing incident to paralyze a system that is critical to the patient care provided by a health care organization. The study cited several factors that may make a health care organization more vulnerable to a cyberattack, including a continuous stream of new employees, the use of a large number of information technology systems, and devices and systems that are highly interdependent. It also discussed other techniques that health care facilities can use to prevent or limit personnel from clicking on phishing emails, including using technology to filter suspicious emails and to flag emails sent by a person outside of the organization.
Copyright © 2019, American Health Lawyers Association, Washington, DC. Reprint permission granted.
While there has not been any concrete movement on a federal data privacy law, there has been some progress on the state and local level.
Washington State Senator Reuven Carlyle’s privacy bill, introduced back in mid-January, cleared the State Senate earlier this month and is under consideration in the House. The bill covers companies that control personal data of 100,000 or more Washington residents, as well as data brokers with information on at least 25,000 Washington state residents.
Some of the obligations imposed on these covered entities echo the CCPA and the GDPR. For instance, companies must specify how they use consumers’ personal information and for what purposes. They must also comply with consumer requests to delete personal data, so long as the requisite conditions are met (e.g., if a company can no longer identify a business reason for keeping that information). Finally, companies must perform risk assessments of their data processing activities and take stock of any potential harm to consumers’ personal data.
But, other obligations are unique: this bill expressly addresses facial recognition technology. In the bill’s current form, any company that uses facial recognition in a public space must give notice to visitors that the technology is in use. Moreover, companies that sell facial recognition software must make their software available for third-party testing to monitor bias. Finally, the bill expressly bars public agencies from tracking individuals using facial recognition without a warrant.
Last week, Washington, D.C., Attorney General Karl A. Racine introduced an amendment to D.C.’s current data breach notification law. Racine’s bill expands the definition of personal information to include passport numbers, taxpayer identification numbers, military ID numbers, health information, biometric data, genetic information and DNA profiles and health insurance information. Further, data breach notices to consumers would now have to include (a) categories of information that were, or are believed to have been, involved in the breach; (b) contact information for both the person making the notification and for credit reporting agencies, the FTC and the D.C. Attorney General; and (c) the right under federal law to obtain a security freeze at no cost and how to obtain such a freeze. If the breach includes social security numbers, businesses must also offer two full years of free identity theft protection. Finally, in addition to the requirement to maintain “reasonable safeguards” to protect D.C. residents’ personal information, businesses would also have to contractually impose that obligation on any nonaffiliated third party with which businesses share that personal information.
The Internet of Things (IoT), the growing network of Internet connected devices and sensors, will reach over 20 billion devices by 2020. The devices and their data offer substantial consumer benefits and economies of scale, but the relative insecurity and evolving nature of the technology present significant cybersecurity challenges. For example, IoT devices have been used by hackers to launch Distributed Denial of Service attacks on Internet websites, servers and providers. Bipartisan legislation introduced on March 11 seeks to enhance the cybersecurity of Internet-connected devices.
United States Senators Mark R. Warner (D-VA), Cory Gardner (R-CO), Maggie Hassan (D-NH) and Steve Daines (R-MT) and Representatives Robin Kelly (D-IL) and Will Hurd (R-TX) have introduced companion legislation in Congress titled the “Internet of Things (IoT) Cybersecurity Improvement Act of 2019.” The legislation follows a similar bill that failed during the last congressional session.
The legislation would:
· Require the National Institute of Standards and Technology (NIST) to issue recommendations addressing secure development, identity management, patching and configuration of IoT devices.
· Direct the Office of Management and Budget (OMB) to issue guidelines for governmental agencies that are consistent with the NIST recommendations and charge OMB with reviewing these policies at least every five years.
· Require any Internet-connected devices purchased by the federal government to comply with these recommendations.
· Direct NIST to interact with cybersecurity researchers and industry experts to publish guidelines on coordinated vulnerability disclosure to ensure that vulnerabilities related to agency devices are addressed.
· Require contractors and vendors providing IoT devices to the federal government to adopt coordinated vulnerability disclosure policies.
Several security firms and groups are publicly backing the legislation, including Symantec, Cloudflare and researchers at prominent universities including Harvard and Stanford.
The proposed federal legislation is comparable to California SB 327, the country’s first IoT security law, which passed in September 2018. The California law imposes specific security measures that device makers must meet, such as removing default passwords and requiring users to generate their own passwords before allowing device access.
As IoT devices integrate into our daily business dealings and personal comforts, we must understand collectively and individually the risks that come with the benefits. We will monitor the proposed federal legislation and comparable state laws and report on the evolving legal protective measures on this blog.
In the wake of the seemingly endless stream of data privacy scandals that surfaced over the past year, lawmakers have renewed the push for the nation’s first comprehensive, bipartisan data privacy law. However, at the start of the first hearings on the matter in the current Congress, legislators have encountered a major roadblock, namely, conflicting state regulations that attempt to cover consumer privacy issues.
State legislatures were spurred into action in 2018 as the number of data privacy breaches mounted. In June 2018, California became the first state to pass a consumer privacy law when then-Governor Jerry Brown signed the California Consumer Privacy Act (the “CCPA”) into law. The CCPA, the requirements of which do not go into effect until January 1, 2020, poses hurdles for businesses both inside and outside of California. The CCPA applies to for-profit entities that collect and process the “personal information” of California residents. While an entity must do business in California in order to be subject to the CCPA, physical presence in California is not a requirement. The definition of “personal information” is much broader than typically seen in U.S. privacy laws, and includes “information that identifies, relates to, describes [or] is capable of being associated with . . . a particular consumer or household.” Other states have expanded definitions relating to personal identifying information in privacy-related laws.
With Congress now addressing the first federal data privacy law in U.S. history, many on both sides of the aisle fear that a patchwork of state regulations may, at best, lead to confusion among businesses having to deal with conflicting regulations, and, at worst, may preclude smaller businesses from being able to comply. Preemption is a potential solution to this issue, and legislators have certainly not ruled out the possibility of preemption if the federal bill is able to adequately protect U.S. consumers. The fear among Democrats, however, is that Republican lawmakers seek to pass a federal privacy bill simply as a means to preempt the CCPA, a law that many Republican lawmakers and industry group members oppose. Whether legislators will be able to come together for a bipartisan agreement sufficient to justify preemption remains to be seen.