PrivacySolved Ransomware Cyberattack Solutions

The Ransomware Problem: Board and Leadership Priorities

Briefing

Information security is vital for economic security, innovation and business continuity. Cybersecurity is becoming a high-impact board and senior leadership issue. Digital transformation efforts and cloud service adoption increase the reliance of business-critical functions on digital infrastructures. Malicious actors seek to exploit human and technical vulnerabilities for profit. Increasingly, data breaches and cybersecurity incidents affect all parts of organisations, their value chains and supply chains. The human element, seen in employee errors, phishing and social engineering, is a significant weak point in the fight for information security resilience. Now that boards are increasingly paying attention, their priorities, strategies and actions are crucial for sustainable impact and success. Priorities should be risk-based, context-rich, applied in a multi-disciplinary way across the organisation and based on proactive analysis.

The Increasing Problem of Ransomware

The information security landscape is changing rapidly, but key indicators and trends can be identified and monitored. The Verizon Data Breach Investigations Report 2021 reported that 85% of data breach incidents involved the human element, 36% involved phishing and 10% involved ransomware (the latter double the rate of the previous year). The median breach cost per incident is $21,659 (USD), but organisations can expect costs to rise to around $650,000 (USD) for large incidents. The UK Cyber Security Breaches Survey 2021 found that 39% of businesses and more than a quarter of charities (26%) reported cyber security breaches or attacks in the previous 12 months. Of the organisations that suffered breaches or attacks, around a quarter (27% of these businesses and 23% of these charities) experienced them at least once a week. Phishing is the most common method of cyberattack: among the 39% identifying breaches or attacks, 83% experienced phishing attacks, 27% were impersonated and 13% suffered malware (including ransomware). Of those who suffered breaches or attacks, 21% of businesses and 18% of charities lost money, data or other assets. Of all the organisations surveyed, 43% have cyber insurance cover in place, a rise from 32% in the previous year.

Ransomware is a form of malicious software, or malware, that prevents organisations and computer users from accessing their computer files, systems or networks, with a demand that a financial ransom is paid to restore system access or return the data. Cyber attackers often demand that ransoms are paid in cryptocurrencies, which are hard to trace. Ransomware attacks can cause significant disruption to IT operations and the loss of critical business information and personal data. Ransomware can be introduced to a computer or system when users accidentally download it by opening an email attachment, clicking an advertisement, clicking on a hyperlink or visiting a website that has been deliberately infected with malware.

Ransomware can be introduced to an IT system by phishing or spear phishing emails, which are designed to appear legitimate so that users open them and click on infected hyperlinks. These emails may also enter a system as unwanted spam, in the hope that an unwitting user will click on the link. Highly targeted campaigns, using social engineering, aim at high-profile and senior figures in companies and organisations in order to access the most sensitive information and have the greatest impact, exploiting the high levels of trust that senior users enjoy internally. Ransomware can also be introduced using Remote Desktop Protocol (RDP) vulnerabilities (after gaining user access credentials) and by exploiting software vulnerabilities. Malware and ransomware are pernicious and can ensnare a wide range of individuals. As a result, board awareness, continuous staff training and vigilance are crucial.

Ransomware is at the frontline of global cybercrime. Companies and organisations have been warned that these tactics can be used by rogue states and hackers to avoid international sanctions, and for money laundering, terrorist financing, illegal drug trafficking or modern slavery. The effect of ransomware attacks can also be technically devastating to IT systems and to an organisation’s critical data. Services can be stopped, IT systems destroyed, data disclosed on the dark web, confidential information published freely online and data permanently deleted. Ransomware can be an existential threat to a company’s reputation and to the future commercial viability of businesses and organisations. Several organisations and governments have adopted official policies of not paying ransom demands and not engaging with ransomware gangs. Paying ransoms does not guarantee that stolen data will be returned or that IT systems will be repaired. Of all the persistent cybersecurity threats and risks, it is ransomware that creates the most uncomfortable and unforgiving catch-22.

The Cybersecurity Insurance Puzzle

Cybersecurity insurance is important for good governance, financial resilience and business continuity. However, many businesses and organisations are underinsured against modern cybersecurity threats and risks. Some companies and organisations rely on the information security coverage in their general business insurance policies. These protections are often narrow and can be excluded when claims are made after information security incidents and cyberattacks. Some companies and organisations have specific cybersecurity insurance policies, but these can be poorly underwritten and not future-proofed to cover modern and evolving threats and risks.

When information security claims are made, companies and organisations could find that their claim is rejected, or that the payments received do not meet the true costs of the claim. Boards and senior leaders need to realistically assess their organisations’ standing and take strategic decisions as to the optimal range of insurance coverage. Organisations should learn about the cyber insurance market for their industry and sector and balance this against their business, regulatory and financial needs.  A company’s or organisation’s supply chain should also be regularly audited for information security compliance and adequate insurance cover. 

Increasingly, general insurers and cyber insurers are refusing to pay the ransoms demanded by ransomware attackers. This is because these activities often contradict their corporate values or may be illegal if the ransom is linked to terrorism, money laundering or illegal trafficking, or breaches international sanctions. These insurers also understand that paying ransoms can incentivise criminality and create greater information security risks by encouraging increasingly sophisticated cyberattacks. Paying ransoms is always very risky because it involves dealing with those engaged in illegal or unethical activity. The risk-reward calculations often reveal significant risks.

Board and Leadership Priorities and Solutions

Boards and Senior Leadership should adopt a “whole organisation” and multi-disciplinary approach to resourcing and empowering their internal teams, partners and supply chains to:

i. Improve and extend cybersecurity strategies to include a cybersecurity insurance strategy as part of financial governance arrangements with Chief Financial Officers or the heads of finance in smaller organisations. This work should be done in conjunction with the Chief Information Officer, Chief Information Security (Risk) Officer or Head of Security in smaller organisations. This group of stakeholders should also include the General Counsel, the organisation’s lead lawyer or the compliance lead in smaller organisations. Human Resources leaders and external specialist advisors should also be included or consulted to strengthen internal resources.

ii. Develop internal expertise about emerging cybersecurity threats and risks. Board and leadership teams should receive summaries of specialist reports and then update their strategies to reflect the changes to the cybersecurity landscape, new business models and the cyber insurance market.   This should not be treated as an IT-only issue.

iii. Incorporate insights from work on international sanctions compliance, export controls, international cybercrime trends, anti-money laundering standards, blockchain strategy and cryptocurrency financial controls into the cybersecurity strategy and ransomware policies and procedures. This will apply most to complex global businesses and organisations.

iv. Refine and clarify the personal data breach and personally identifiable information (PII) compromise response procedures to specifically reference the nature of ransomware attacks. This will include legal duties to notify data protection and data privacy regulators, informing individuals affected, liaising with cyber insurance providers, informing enforcement authorities and the police, dealing with ransom groups and establishing a team of first responders. Compliance with the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), other national laws and sectoral laws is also vital. Fines and financial penalties for data breaches could reach 2-4% of global annual turnover, in addition to the financial impacts of the ransomware attack itself.
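By way of illustration only, the sketch below (using a hypothetical turnover figure; the exact tier and amount depend on the infringement and the regulator) shows how percentage-of-turnover fine caps of this kind are typically calculated against the higher of a fixed amount and a share of global annual turnover:

```python
# Illustrative only: rough fine exposure under the GDPR-style percentage-of-turnover caps.
# The turnover figure below is hypothetical, not guidance.

def max_fine(global_annual_turnover: float, fixed_cap: float, pct: float) -> float:
    """Return the higher of the fixed cap and the percentage of global turnover."""
    return max(fixed_cap, global_annual_turnover * pct)

turnover = 900_000_000  # hypothetical global annual turnover in EUR
lower_tier = max_fine(turnover, 10_000_000, 0.02)  # up to EUR 10m or 2%, whichever is higher
upper_tier = max_fine(turnover, 20_000_000, 0.04)  # up to EUR 20m or 4%, whichever is higher
print(f"Lower tier exposure: EUR {lower_tier:,.0f}")  # EUR 18,000,000
print(f"Upper tier exposure: EUR {upper_tier:,.0f}")  # EUR 36,000,000
```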

v. Improve information classification and data management by categorising data according to its value to the company or organisation and establish physical and logical separation of networks and data for different organisational units. For example, high value research and development or business data could be deliberately held on a separate server and network segment from the organisation’s email environment. Virtualised environments could be used to execute operating system environments or specific programmes.
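As a minimal sketch of the classification and separation idea above, the example below maps classification labels to the network segment where the data may be held. The labels and zone names are illustrative assumptions, not a prescribed scheme:

```python
# A minimal sketch of an information classification scheme; labels and zones are illustrative.
from dataclasses import dataclass

CLASSIFICATION_ZONES = {
    "public": "general-network",
    "internal": "general-network",
    "confidential": "segmented-business-network",
    "critical-r&d": "isolated-research-segment",  # kept apart from the email environment
}

@dataclass
class DataAsset:
    name: str
    classification: str

def network_zone(asset: DataAsset) -> str:
    """Map an asset's classification label to the network segment where it may be stored."""
    # Unlabelled or unknown data defaults to the most restrictive handling until classified.
    return CLASSIFICATION_ZONES.get(asset.classification, "isolated-research-segment")

print(network_zone(DataAsset("merger-model.xlsx", "critical-r&d")))  # isolated-research-segment
```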

vi. Improve information security awareness and training for all levels of the company or organisation. Ransomware often targets end users and so employees should be told about the threat of ransomware, how it is delivered, ways to identify it and how to report likely malware. Training should also include key cybersecurity definitions, principles and techniques.

vii. Increase information security hygiene and resilience activities by regularly backing up data and verifying its integrity. This includes ensuring that backups are not connected to the computers and networks that they are backing up; for example, they could be physically stored offline. Backups are vital in ransomware resilience efforts: if computer systems are infected after a ransomware attack, backups may be the best way to recover business-critical data, and they underpin recovery, business continuity and ransomware mitigation.
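A minimal sketch of backup integrity verification is shown below. It assumes backups are plain files in a directory and uses SHA-256 checksums; in practice the checksum manifest should itself be stored offline alongside the backups:

```python
# A minimal sketch of backup integrity verification using SHA-256 checksums.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in chunks, without loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(backup_dir: Path) -> dict[str, str]:
    """Record a checksum for every file in the backup set at backup time."""
    return {str(p): sha256_of(p) for p in backup_dir.rglob("*") if p.is_file()}

def verify_manifest(manifest: dict[str, str]) -> list[str]:
    """Return the files whose current checksum no longer matches the recorded one."""
    return [name for name, recorded in manifest.items()
            if not Path(name).is_file() or sha256_of(Path(name)) != recorded]

# Usage (illustrative path): manifest = build_manifest(Path("/mnt/offline-backup"))
# changed = verify_manifest(manifest)  # a non-empty list signals possible tampering or corruption
```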

viii. Systematically and regularly patch operating systems, software and firmware on all devices. All endpoints should be patched as vulnerabilities are discovered; this can be made easier by using a centralised patch management system. Ensure that anti-virus and anti-malware solutions are set to update automatically and that regular scans take place. Another measure is to disable macro scripts in Office files transmitted via email. For example, Office Viewer software could be used to open Microsoft Office files received by email instead of the full Office Suite applications.
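As a simple illustration of the macro point above, the sketch below flags macro-enabled Office attachments by file extension. The filenames are illustrative, and real email gateways inspect file content rather than names alone:

```python
# A minimal sketch: flag attachments whose extension indicates embedded Office macros.
MACRO_ENABLED_EXTENSIONS = {".docm", ".xlsm", ".pptm", ".dotm", ".xltm"}

def flag_macro_attachments(filenames: list[str]) -> list[str]:
    """Return the attachments that appear to contain Office macros, based on extension."""
    flagged = []
    for name in filenames:
        lowered = name.lower()
        if any(lowered.endswith(ext) for ext in MACRO_ENABLED_EXTENSIONS):
            flagged.append(name)
    return flagged

print(flag_macro_attachments(["invoice.pdf", "Q3-results.xlsm", "notes.docx"]))
# ['Q3-results.xlsm']
```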

ix. Set up application whitelisting so that systems only execute programs that are known and permitted by security policy. It is also useful to implement software restriction policies or other controls to prevent the execution of programs from common ransomware locations, such as temporary folders used by popular internet browsers and by compression and decompression programmes, including the AppData and LocalAppData folders. Other measures include applying best practices for RDP use: auditing networks for systems using RDP, closing unused RDP ports, applying two-factor authentication where possible and logging RDP login attempts.
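The sketch below illustrates the location-based restriction idea with a simple path check. The folder names are illustrative assumptions; in practice these rules are enforced with software restriction policies or application control tooling rather than scripts:

```python
# A minimal sketch: flag programs launching from folders commonly abused by ransomware.
from pathlib import PureWindowsPath

BLOCKED_LOCATIONS = ("appdata\\local\\temp", "appdata\\roaming", "appdata\\local",
                     "windows\\temp")

def is_blocked_location(executable_path: str) -> bool:
    """Return True if the program is launching from a folder on the blocked-location list."""
    normalised = str(PureWindowsPath(executable_path)).lower()
    return any(fragment in normalised for fragment in BLOCKED_LOCATIONS)

print(is_blocked_location(r"C:\Users\jo\AppData\Local\Temp\update.exe"))  # True
print(is_blocked_location(r"C:\Program Files\Approved\app.exe"))          # False
```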

x. Implement least privilege for file, directory and network share permissions. If a user only needs to read specific files, they should not have write access to those files, directories or shares. Least privilege should influence how access controls are configured. Requiring user interaction for end-user applications that communicate with websites uncategorised by the network proxy or firewall is also helpful; for example, requiring users to type information or enter a password when their system communicates with an uncategorised website.
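A minimal sketch of a least-privilege audit on POSIX-style systems is shown below; it lists files that grant write access beyond the owning user. The path is illustrative, and Windows ACLs would need different tooling:

```python
# A minimal sketch: find files writable by group or others (a least-privilege red flag).
import stat
from pathlib import Path

def overly_writable(root: Path) -> list[Path]:
    """Return files under root that grant write access beyond the owning user."""
    findings = []
    for path in root.rglob("*"):
        if path.is_file():
            mode = path.stat().st_mode
            if mode & (stat.S_IWGRP | stat.S_IWOTH):
                findings.append(path)
    return findings

# Usage (illustrative path):
# for p in overly_writable(Path("/srv/shared-finance")):
#     print(p)
```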

xi. Invest in developing zero trust networks, especially in mission critical parts of IT systems. Agile project management could be used to test, review, assess and repeat trials and experiments to find the right balance between confidentiality, availability and integrity. Zero trust practices can then extend across the IT system and into critical supply chains. Introducing blockchain technology can accelerate these processes.

xii. Audit supply chains for cybersecurity risks and increase standards through clear contractual obligations, practical and accessible information security schedules, Key Performance Indicators (KPIs), robust reporting and dynamic analysis.

Board and Leadership Resources

National Cyber Security Centre (UK)

An Garda Síochána (Ireland’s National Police and Security Service)

Federal Bureau of Investigation (FBI – United States)

Interpol (Global)

For assistance with Personal Data Breach Response, Ransomware, Cybersecurity Strategy, Board Awareness or Information Security Training, contact PrivacySolved:

London +44 207 175 9771

Dublin +353 1 960 9370

Email: contact@privacysolved.com

PS052021

PrivacySolved AI and Machine Learning

Regulating Artificial Intelligence (AI): The EU Moves First

Briefing

Artificial Intelligence (AI) and machine learning services are expanding at a rapid rate around the world. These technologies affect all sectors and are predicted to be key drivers of economic growth, innovation, automation, security and knowledge. Such rapid expansion creates unique opportunities but also creates systemic risks. The European Union (EU) seeks first mover advantage in creating a trusted environment for the growth and development of responsible AI. This aim has economic and geopolitical motives, but also holds benefits for economic participation, innovation, social equity, human rights, environmental improvement, research and development. In April 2021, the European Commission published a package of legal and policy measures to promote responsible AI. The key initiative is a draft AI Regulation. There is also a Machinery Regulation to increase product safety and an update to the EU’s Coordinated Plan for AI. The EU institutions will finalise and adopt these measures in the coming months and years.

Artificial Intelligence Systems  

The AI Regulation applies to Artificial Intelligence systems (AI systems). These are broadly defined as software developed with one or more specified techniques and approaches which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with. These techniques and approaches include machine learning (supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning). Logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems, are also covered. The definition also extends to statistical approaches, Bayesian estimation and search and optimisation methods.

The Scope of the EU AI Regulation 

The AI Regulation will apply to both private sector and public sector organisations inside and outside of the EU if the AI system is placed in the EU market or if its use affects people located in the EU. Providers, developers and manufacturers of AI systems and users (buyers) of high-risk AI systems are included in the scope of the AI Regulation. However, private and non-professional users of AI systems are not included.

Unacceptable Risk (Prohibited), High Risk, Limited Risk and Minimal Risk AI

The AI Regulation is based on a four-level, risk-based approach:

Unacceptable risk (Prohibited): This relates to a few very harmful uses of AI that infringe EU values because they violate EU fundamental rights. This includes banning social scoring by governments, exploitation of the vulnerabilities of children and the use of subliminal techniques. Live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes (subject to narrow exceptions) will also be banned.

High Risk: This relates to a limited number of AI systems that create an adverse impact on the safety or the fundamental rights (protected by the EU Charter of Fundamental Rights) of individuals. These include Biometric identification and categorisation of natural persons, Critical infrastructure management and operations, and Education and Vocational training. The category also includes Employment, workers management and access to self-employment, Access to and enjoyment of essential Private Services and Public Services and benefits, and Law Enforcement. Further, Migration, Asylum and Border Control Management systems and Administration of Justice and Democratic process systems are also included. These categories can be reviewed and expanded over time to create effective futureproofing. They also include safety components of products covered by sectoral EU laws; these systems will always be high-risk when covered by third-party conformity assessment under those sectoral laws.

To ensure trust, consistency, effective protection and compliance with EU fundamental rights, mandatory requirements for all high-risk AI systems are proposed. These include the quality of data sets used, technical documentation and record keeping, transparency and the provision of information to users, human oversight, and AI system robustness, accuracy and cybersecurity. Where data breaches or cybersecurity incidents occur, national AI Regulation authorities will have access to the information needed to investigate whether the use of the AI system complied with the law.

Limited Risk: For certain AI systems, specific transparency requirements are imposed where there is a clear risk of manipulation, for example when using chatbots. Users should be made aware that they are interacting with a machine or automated system.

Minimal Risk: All other AI systems can be developed, sold and used subject to existing laws without additional EU legal obligations. Most of the systems used in the EU will fall into this category. Providers of these Minimal Risk systems can voluntarily choose to apply trustworthy AI requirements and adhere to voluntary codes of conduct.

Enforcement and Penalties

Each EU Member State will apply and enforce the AI Regulation by choosing national authorities to implement, apply, supervise, enforce and carry out market surveillance activities. Each national AI authority will also represent each country on the EU-level European Artificial Intelligence Board (EAIB).

The AI Regulation requires EU Member States to put in place effective and proportionate penalties, including administrative fines, for infringements and to inform the European Commission. When AI systems enter the market or are in use and do not meet the requirements of the AI Regulation, EU Member States are required to take enforcement action. The AI Regulation sets out the following penalty thresholds (a short worked example follows the list):

(i) Up to €30m or 6% of the total worldwide annual turnover of the previous financial year (whichever is higher) for infringements of prohibited practices or noncompliance with data requirements;

(ii) Up to €20m or 4% of the total worldwide annual turnover of the previous financial year for non-compliance with any of the other requirements or obligations in the AI Regulation;

(iii) Up to €10m or 2% of the total worldwide annual turnover of the previous financial year for the supply of incorrect, incomplete or misleading information to notified bodies and national authorities in reply to a request.
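By way of illustration only, and using hypothetical worldwide turnover figures, the sketch below shows how the "whichever is higher" rule stated for the top tier operates:

```python
# Illustrative only: the "whichever is higher" rule for the top penalty tier,
# with hypothetical worldwide annual turnover figures.
def top_tier_penalty_cap(worldwide_turnover_eur: float) -> float:
    """Maximum fine for prohibited-practice or data-requirement infringements."""
    return max(30_000_000, 0.06 * worldwide_turnover_eur)

print(top_tier_penalty_cap(400_000_000))    # 30,000,000: the fixed cap exceeds 6% (24m)
print(top_tier_penalty_cap(2_000_000_000))  # 120,000,000: 6% of turnover exceeds the fixed cap
```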

Strategic and Operational Impacts

AI system providers and users (buyers) have been put on notice of this significant regulatory framework that will have cascading impacts around the world and on AI systems. The full impacts will emerge over time, but the following are significant:

  • The definition of Artificial Intelligence systems is deliberately broad and subject to future expansion and regulatory reinforcement. This does not necessarily make the AI Regulation unfocussed or difficult to enforce; this expansive definition is a robust policy position by the EU that all AI systems that interface with the EU will be subject to some sort of regulation, even if these are self-regulation and codes of conduct for Minimal Risk AI systems.  
  • The AI Regulation acknowledges that the GDPR is the accepted baseline protection for personal data and special categories of data. The AI Regulation is seen as a detailed law that overlays and supplements the GDPR for AI systems. As a result, poor or ineffective GDPR compliance will negatively impact a business or organisation’s ability to operationalise the AI Regulation.
  • The AI Regulation is a bold geopolitical, economic, social equity and commercial effort to dictate the future of international AI regulation.  The Regulation’s application outside of the EU is also intentional, and this effect should not be underestimated.
  • The AI Regulation puts forward a radical hybrid of infrastructure, hardware, software, data and cybersecurity protections (the complete IT stack and supply chain) by incorporating elements of product safety, product liability, consumer protection, product conformity and product certification. 
  • AI systems providers, purchasers and users will need to review their digital transformation and new technology lifecycles to ensure that their AI systems are purchased and adopted efficiently within the implementation timeline of the AI Regulation. Old noncompliant AI systems will need to be significantly upgraded or decommissioned over time. Businesses and organisations that are AI-only or are significantly reliant on AI systems will need to establish bespoke AI Regulation projects and allocate significant and flexible financial budgets. Relationships between AI systems providers, businesses and organisational users will become more inter-dependent and mature over time.

For help with Artificial Intelligence systems, Data Ethics compliance, New Technology and Digital Transformation projects, Board awareness and Staff training, contact PrivacySolved:

London +44 207 175 9771

Dublin +353 1 960 9370

Email: contact@privacysolved.com

PS042021

Five Key Things to Know about the California Privacy Rights Act (CPRA)

The California Privacy Rights Act 2020, the CPRA, is a US state privacy law that took effect in December 2020 and comes into force fully on 1 January 2023. The CPRA expands the existing California Consumer Privacy Act (CCPA) to protect the rights of California consumers. CPRA defines and protects sensitive personal information, places a duty on businesses to put in place reasonable information security measures and expands the right to delete personal information.  The right to opt-out of the sale of personal information (called “Do Not Sell”) has been extended to include limits on non-sale data sharing (“Do Not Share”). The law creates a new regulator called the California Privacy Protection Agency, which will inherit the California Attorney General’s rule making and enforcement powers from 1 July 2021.

  1. What types of organisations are covered by CPRA?

The law applies to Businesses, defined in four categories (1.1, 1.2, 1.3 and 1.4); a simple illustrative check of the 1.1 threshold criteria follows the list:

(1.1) A legal entity organised or operated for the profit or financial benefit of its shareholders, that collects consumers’ personal information (or has it collected on its behalf), determines the purposes and means of the processing of consumers’ personal information (alone or jointly with others), does business in the state of California and meets one or more of the following threshold criteria:

(a) As of January 1 of the calendar year, had annual gross revenues of more than $25,000,000 (USD) in the preceding calendar year; or

(b) Alone or in combination, annually buys, sells or shares the personal information of 100,000 or more consumers or households; or

(c) Derives 50% or more of its annual revenue from selling or sharing consumers’ personal information.

(1.2) Any entity that controls or is controlled by a business falling within criteria 1.1 (above) and that shares common branding, and consumers’ personal information, with that business.

(1.3) A Joint Venture or Partnership composed of businesses in which each business has at least a 40 percent interest. Each business in the Joint Venture or Partnership is seen as a separate single business.

(1.4) An organisation doing business in California that is not covered by criteria 1.1, 1.2 or 1.3 above but voluntarily certifies to the California Privacy Protection Agency that it is compliant with the CPRA.
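As flagged above, the sketch below illustrates the category 1.1 threshold test with simplified, hypothetical inputs; the statutory wording governs, and specialist legal advice should be sought:

```python
# A minimal, illustrative sketch of the category 1.1 threshold test (simplified inputs).
def meets_cpra_thresholds(annual_gross_revenue_usd: float,
                          consumers_or_households: int,
                          share_of_revenue_from_selling_or_sharing: float) -> bool:
    """Return True if any one of the three 1.1(a)-(c) criteria is met."""
    return (annual_gross_revenue_usd > 25_000_000
            or consumers_or_households >= 100_000
            or share_of_revenue_from_selling_or_sharing >= 0.50)

print(meets_cpra_thresholds(5_000_000, 120_000, 0.10))   # True: the 100,000-consumer test is met
print(meets_cpra_thresholds(10_000_000, 20_000, 0.05))   # False: no threshold is met
```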

2. What types of data or information are covered by CPRA?

Like the CCPA, the CPRA protects the personal information of California consumers. Personal information includes many different types of data and information including identifiers (name, address, social security number and online identifiers etc), protected characteristics, commercial information, biometric information, internet activity, geolocation data, audio files, visual files, employment information, education information, profiles and inferences taken from data that reveal a consumer’s characteristics, psychology, predispositions, attitudes and intelligence.

The CPRA introduces a new category of sensitive personal information which includes a wide range of personal data such as passport details, driving licence details, specific geolocation information, race or ethnic origin information, genetic data and biometric data. These types of data require greater protection. 

3. What are the main CPRA obligations for businesses?

Businesses must ensure that:

(i) When selling or sharing personal information with third parties, binding contracts are in place to ensure that third parties comply with CPRA requirements and their contractual obligations.

(ii) Service providers and contractors must help businesses to respond to verifiable personal information CPRA requests. Service providers are not required to fulfil requests received directly from consumers.

(iii) They inform consumers about the data categories they collect and whether information will be sold or shared.

(iv) Businesses cannot collect additional categories of personal information in ways that are incompatible with the original purposes, once the businesses inform consumers of these purposes.

(v) Third-parties that control personal information collection must provide the same disclosures on their website, as the business that engages them.

(vi) Have systems that protect availability, authenticity, integrity, and confidentiality of personal information. Detect security incidents, resist malicious, deceptive, fraudulent, or illegal actions and ensure the physical safety of individuals. Reasonable security practices and procedures must be introduced, including robust email address and password protections.

(vii) Ensure that consumers can exercise their right to limit or restrict the use of sensitive personal information and receive full notices about data use, purposes and retention.

(viii) Ensure that consumers can exercise their rights to request deletion and correction of their personal information.

(ix) Put in place clear policy and procedures for children under 16 years old to opt-in to the selling or sharing of their personal information.

(x) Develop clear data retention and deletion policy and retention schedules to ensure that personal information is deleted when legitimate use ends.

4. If businesses comply with the European Union’s General Data Protection Regulation (GDPR) and the CCPA, will they automatically comply with CPRA?

No. GDPR, CCPA and CPRA have different scopes, definitions and compliance requirements. However, there are important similarities. Organisations that are governed by CCPA are very likely to fall within the CPRA’s scope. CPRA is more closely aligned with GDPR than CCPA is. GDPR data mapping and records of processing activity logs can help to identify California consumers’ personal information. Data privacy notices, policies and information security frameworks created for another law can be tailored to meet the requirements of CPRA. Data processing agreements, supply chain contracts and online notices must be specifically updated for CPRA. Do Not Sell and Do Not Share notices and their underlying management systems are unique to CCPA and CPRA and require specific technical solutions.

5. Does the CPRA apply to businesses or organisations in other US states or to foreign companies?

Yes, it can. If a business or organisation falls within the CPRA qualifying criteria and holds personal information about California consumers, then CPRA applies. Businesses based in other US states and companies from outside the United States may also have to comply with the CPRA. All organisations should seek specialist advice, review new CPRA regulations, monitor the development of CPRA enforcement, examine official guidance and watch the regulator, the California Privacy Protection Agency, for interpretation and priorities.

Analysing UK Data Protection Adequacy: The Benefits and The Limits

Briefing

The route to the United Kingdom (UK) gaining data protection adequacy has been set out by the European Commission. UK adequacy is a declaration by the EU that the UK’s laws and systems are essentially equivalent to the EU’s, covering data flows under both the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED). The UK uniquely benefits from many years of alignment with European data protection standards, including ratifying the Council of Europe’s Convention 108. The UK’s pioneering first law was the Data Protection Act 1984. The UK then adopted both the EU Data Protection Directive of 1995 and the GDPR of 2016.

Data protection adequacy creates certainty and trust for data flows to and from the EU and UK. There are numerous benefits to data protection adequacy for business, trade, cooperation, security and law enforcement. However, because the UK has left the EU (Brexit), it now stands apart from EU developments and automatic institutional advancements. Inevitably, over time, there will be degrees of divergence, duplication of compliance activities and an evolving dynamic tension between the EU and UK regimes. Despite this, there will be an enduring, broad and deep commonality between the EU and UK data protection regimes, well into the future.

The Benefits: What UK Data Protection Adequacy Means

UK data protection adequacy creates a new status quo:

  • The UK will join Andorra, Argentina, Canada (commercial organisations), Faroe Islands, Guernsey, Israel, Isle of Man, Japan, Jersey, New Zealand and Uruguay as a country with essentially equivalent data protection standards to the EU, the European Economic Area (EEA) countries and Switzerland.
  • The EU will allow the free flow of personal data from the EU to the UK, and these flows will not be considered restricted international data transfers requiring the complex additional safeguards listed in the GDPR. The UK has already declared adequate the EU, the EEA, Switzerland and the current list of EU-adequate countries, which creates fully reciprocal personal data flows between the UK and EU.
  • Going forward, the UK will be obliged to ensure that domestic developments in data protection law and systems substantially reflect developments in the EU. This will create a degree of certainty and transparency for companies, organisations and governments.
  • In the future, the Information Commissioner’s Office (ICO), the UK’s GDPR regulator, will be more inclined to interpret and enforce the GDPR in line with EU developments. Though, the ICO must also reflect UK-led changes to the legal framework, UK GDPR interpretation and UK court decisions.
  • Companies and organisations that operate both in the UK and EU must now establish two distinct personal data breach reporting arrangements. UK personal data breaches will need to be reported in the UK, to the ICO. EU data breaches must be reported to one or more of the EU’s twenty-seven GDPR regulators. Bureaucratically, personal data breaches affecting individuals based in the UK and EU must be reported in both regions.
  • International companies and organisations can continue to blend their data protection programmes to cover all EU countries and the UK, while specifically allowing for future UK variations. This approach will encourage economies of scale, compliance cost savings, interoperability and more transparent Europe-wide data risk profiles.

Dynamic Controls

UK data protection adequacy includes several dynamic controls that supervise the EU/UK data relationship into the future. Companies and organisations should note that:

  • UK adequacy decisions are subject to review by the European Commission at four-year intervals and can be re-examined periodically in between.
  • The validity of the UK’s adequacy decisions could be challenged in the Court of Justice of the European Union (CJEU). This court has the power to invalidate the adequacy decisions, forcing organisations to stop transferring personal data from the EU to the UK. This happened to the EU-US Safe Harbour adequacy decision in 2015 and the EU-US Privacy Shield adequacy decision in 2020, causing much disruption, uncertainty and cost to businesses and organisations.
  • The European Commission can suspend UK adequacy decisions based on a serious violation or series of serious violations that offend the EU’s  rights-based system. This is unlikely. However, a significant UK/EU disagreement about human rights, EU fundamental rights, national security and large-scale surveillance could increase the risk. A significant breakdown in the UK’s internal checks and balances that safeguard the right to personal data protection could negatively affect the stability of UK adequacy.

The Limits: What UK Data Protection Adequacy does not Mean

UK data protection adequacy does not alter several important issues and so companies and organisations should note that:

  • UK adequacy creates and maintains equivalence for data transfers from the EU to the UK. However, the UK will still need to create new international data transfer mechanisms for UK personal data flows to the rest of the world. These may be different from the EU’s system and may include UK-specific data protection standard contractual clauses. Companies and organisations in the UK and EU must now navigate two systems for international transfers.
  • Companies and organisations that have no presence in the EU but offer goods or services or monitor individuals in the EU will need to appoint an EU Data Protection Representative based in the EU, separate from any UK representative.
  • Companies and organisations that have no presence in the UK but offer goods or services or monitor individuals in the UK will need to appoint a UK Data Protection Representative based in the UK, separate from any EU representative.
  • Post-Brexit, the UK is still part of the European Convention on Human Rights (ECHR), with its well-established right to respect for private and family life, home and correspondence. This right is reflected in the UK’s Human Rights Act 1998. However, there is no longer a fundamental right to personal data protection in UK law as it exists in EU law. The UK is no longer a party to the EU Charter of Fundamental Rights, with its specific additional Article 8 personal data protections. As a result, data protection rights in the UK are now narrower in scope than in the EU.
  • The UK continues to have GDPR embedded into its laws. However, automatic data protection alignment is no longer legally and practically inevitable. Brexit means that the UK is no longer a part of the EU’s governing treaties, democratic institutions, internal single market, digital single market, regulators and courts. Data protection decisions and opinions from the European Commission, European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) no longer have automatic legal force on the UK.

For assistance with GDPR, EU/UK data flows and Brexit, contact PrivacySolved:

London +44 207 175 9771

Dublin +353 1 960 9370

Email: contact@privacysolved.com

PS022021
