Personally identifiable information (PII) is any data that can be used on its own or in combination with other information to identify, contact, or locate an individual. Examples of PII include full name, Social Security number, passport number, driver's license number, and biometric data.
The term "personally identifiable information" gained prominence in the United States in the 1970s, as governments and organizations recognized the risks of unauthorized data use. One of the landmark moments was the 1973 report by the US Department of Health, Education, and Welfare, which outlined principles for fair information practices (FIPPs). This laid the foundation for data privacy regulation and the conceptual framework for what became known as PII.
As technology evolved, the scope of PII expanded:
PII is critically important as it underpins privacy, security and trust in an increasingly connected world. The widespread collection and processing of personal data by businesses, governments and online platforms heighten the risks of identity theft, fraud and data breaches. Strong protection of PII is essential not only for regulatory compliance with laws like GDPR and CCPA but also for maintaining customer trust and safeguarding individual rights in an era of rapid technological advancement.
Information qualifies as PII when it meets either of the following conditions:
Context is critical when assessing whether information is PII:
PII can be broadly classified into two main categories based on the degree of identifiability: direct identifiers and indirect (or quasi-) identifiers.
These are pieces of information that can directly and uniquely identify an individual without the need for additional data. Direct identifiers include:
These pieces of information do not directly identify a person but can do so when combined with other data. Here are some common types of indirect identifiers with examples:
With advances in technology, newer types of information are increasingly considered PII:
PII can be sensitive or non-sensitive.
Sensitive PII is information that, if disclosed or compromised, could cause significant harm, embarrassment, financial loss or legal liability to an individual. Examples include:
Non-sensitive PII is information that is generally publicly available or unlikely to cause serious harm if disclosed. Examples are:
Non-sensitive PII has a low standalone risk but can contribute to risk when combined with other data sets (data aggregation).
Sensitive PII Handling | Non-Sensitive PII Handling |
Strict access controls and encryption (both at rest and in transit); mandatory user authentication and monitoring; data minimization principles (collect and store only what is necessary); regulatory compliance under GDPR, HIPAA, CCPA; data breach notification requirements | Basic privacy measures (e.g., secure storage, access logging); often excluded from breach notification laws if not linked to sensitive data; may still be protected by internal company policies to prevent misuse |
Common examples used in identity verification
Industry-specific instances (e.g., healthcare, finance)
Myths about what is or isn't PII
Not all data is considered PII. Non-PII refers to information that cannot be used on its own to identify a specific individual. However, in some cases, non-PII can become PII when combined with other data points.
The following are typically not PII - unless they are linked to other identifying data:
Non-PII can turn into PII when:
Organizations must manage PII responsibly to protect individuals and comply with regulations like GDPR, CCPA and HIPAA.
Method | Examples |
Online collection | Web forms: contact forms, sign-up pages, checkout processes; Cookies and trackers: browser and behavioral data; Surveys and polls: email address or phone number; Mobile apps: app permissions may grant access to contacts, location, etc.; Social media platforms: profiles, interactions and uploaded content |
Offline (physical) collection | Paper forms: applications, contracts, visitor logs; Face-to-face interactions: interviews, customer service interactions; Surveillance: CCTV footage (may include biometrics) |
Biometric data collection | Facial recognition systems; fingerprint scanners; voice recognition; iris/retina scans |
Most modern organizations store PII in cloud environments (public, private or hybrid), such as AWS, Microsoft Azure and Google Cloud. PII is also often stored in SaaS platforms (CRMs, HRIS, ERPs).
However, some enterprises still use local servers and databases for sensitive workloads, as well as hybrid systems that combine local and cloud storage.
Regardless of location, PII must be encrypted (at rest and in transit), backed up securely, and audited regularly.
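To make "encrypted at rest" concrete, here is a minimal Python sketch that uses the open-source cryptography library's Fernet recipe to encrypt a single PII field before it is stored. The field names and key handling are illustrative assumptions only; in practice, the key would live in a dedicated key management service, not next to the data.

```python
# Minimal sketch: encrypting a PII field at rest with Fernet (symmetric encryption).
# Assumes the "cryptography" package is installed; field names are illustrative only.
from cryptography.fernet import Fernet

# In practice the key comes from a key management service, not from code or the same database.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"customer_id": 1042, "email": "jane.doe@example.com", "ssn": "078-05-1120"}

# Encrypt the sensitive field before persisting the record.
record["ssn"] = fernet.encrypt(record["ssn"].encode()).decode()
print("Stored form:", record)

# Decrypt only when an authorized process actually needs the value.
plain_ssn = fernet.decrypt(record["ssn"].encode()).decode()
print("Decrypted for authorized use:", plain_ssn)
```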
Laws like GDPR and CCPA require explicit, informed consent before collecting personal data. This means:
Organizations must provide:
PII is a prime target for cybercriminals because it can be used for financial fraud, identity theft and corporate espionage. Understanding the common ways PII is compromised helps organizations and individuals better protect it.
Attackers often gain access to PII using:
Data breaches occur when unauthorized individuals gain access to PII. The consequences can be severe both for the organization that was breached and for the individuals whose data was compromised.
Here are a few high-profile incidents:
Stolen PII has immense black-market value and can be exploited in multiple ways. Even a small amount of PII (such as name, date of birth or Social Security number) is often enough to commit these acts.
Stolen PII enables attackers to impersonate victims and perform fraudulent actions such as:
If the stolen PII includes usernames and passwords, attackers can run automated scripts to test these credentials across multiple platforms (banks, email, e-commerce). Due to password reuse, this often results in full account takeovers, granting access to sensitive data and stored funds.
Having detailed PII about an individual enables attackers to target them more effectively in the future. With accurate details about things like the person's employer or recent purchases, they can pretend to be a trusted contact or service provider and trick the victim into revealing more information or making financial transactions.
Stolen health records or insurance information can be used to:
Highly sensitive PII (such as private messages or health history) may be used to threaten public exposure unless a ransom is paid, as well as to target individuals in reputation-damaging campaigns.
Sometimes, attackers sell the PII instead of using it themselves. "Fullz" packages (full identity profiles) can be sold for high prices, and health records, passports and credit card data fetch premium value on dark web marketplaces.
Unlike a stolen credit card, PII cannot always be easily replaced. Victims may face:
Let's have a quick look at the regulations governing PII, covering US laws, international frameworks, and differences in global definitions and protections.
Law | Quick Info |
Privacy Act of 1974 | Scope: Applies to federal government agencies (but not to private sector entities) Key Provisions: Restricts the collection, use and dissemination of personal information by federal agencies; grants individuals the right to access and correct their personal records |
California Consumer Privacy Act (CCPA) | Scope: Applies to for-profit entities that meet certain thresholds (such as revenue, data volume) Key Provisions: Gives California residents the right to know what personal data is collected, request deletion, and opt out of data sales; requires businesses to disclose data practices Expansion: Strengthened by the California Privacy Rights Act (CPRA), effective 2023 |
Health Insurance Portability and Accountability Act (HIPAA) | Scope: Health care providers, insurers and their business associates Key Provisions: Protects medical records and other health-related PII; enforces safeguards for data storage, access, and transmission; includes breach notification requirements |
Gramm-Leach-Bliley Act (GLBA) | Scope: Financial institutions Protects: Financial and personal consumer information Requires: Privacy notices, data security policies |
Family Educational Rights and Privacy Act (FERPA) | Scope: Educational institutions Protects: Student education records and PII |
Law | Quick Info |
General Data Protection Regulation (GDPR) | Scope: Applies to all entities that process the data of EU residents, regardless of the entity's location Key Provisions: Strong emphasis on consent, transparency, and purpose limitation; rights include access, rectification, erasure (right to be forgotten) and data portability; requires Data Protection Officers and Data Processing Agreements; severe penalties for non-compliance (up to €20 million or 4% of global revenue) |
Australian Privacy Act (1988) | Scope: Applies to most Australian businesses with revenue over AUD $3 million, with some exceptions Key Provisions: Includes 13 Australian Privacy Principles (APPs); governs collection, use, storage, and disclosure of personal data; grants rights to access and correct data |
Aspect | United States | European Union | Australia |
Definition of PII | Narrower; often context-specific (e.g., SSN, email) | Broad; any data that can identify a person, directly or indirectly | Similar to GDPR; includes any info about an identified or identifiable individual |
Consent requirement | Often implicit; varies by law | Explicit, informed, freely given consent is critical | Requires consent in many cases, but often less stringent than GDPR |
Data subject rights | Varies widely by sector/law | Extensive and uniform across member states | Includes access, correction, limited deletion rights |
Cross-border transfer restrictions | Sector-specific (e.g., HIPAA data must remain in the US) | Transfers outside EU require adequate safeguards | Transfers allowed with adequate protection or consent |
Enforcement | Sector-specific agencies (FTC, OCR, etc.) | Centralized Data Protection Authorities | Office of the Australian Information Commissioner (OAIC) |
Protecting PII is crucial in today's digital age due to increasing threats from data breaches, identity theft and cyberattacks.
Best Practice | Details |
Use strong, unique passwords. | Pick long passwords and use passphrases when possible. Create complex passwords with a mix of letters, numbers and symbols. Never use the same password for multiple platforms. (A quick generation sketch follows this table.) |
Enable multifactor authentication (MFA). | MFA adds a second layer of security beyond just a password. Common types include SMS codes, authenticator apps and biometrics. |
Encrypt personal data. | Use full-disk encryption on devices (e.g., BitLocker, FileVault). Encrypt sensitive files and communications (e.g., with PGP, end-to-end encrypted messaging apps). |
Be wary of phishing and social engineering. | Do not click unknown links or open suspicious attachments. Verify email sender details and URLs before responding. |
Secure devices. | Keep operating systems, browsers and antivirus software up to date. Use screen locks and auto-timeouts on mobile and desktop devices. Disable Bluetooth and location sharing when not needed. |
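As an illustration of the "strong, unique passwords" guidance in the first row above, the minimal Python sketch below generates a random passphrase and a complex password using only the standard library. The word list and lengths are arbitrary choices for the example, not recommended values.

```python
# Minimal sketch: generating a passphrase and a complex password with Python's standard library.
# The word list and lengths below are illustrative assumptions.
import secrets
import string

WORDS = ["copper", "lantern", "orbit", "meadow", "quartz", "harbor", "pixel", "salmon"]

def random_passphrase(num_words: int = 4) -> str:
    """Pick words at random with a cryptographically secure generator."""
    return "-".join(secrets.choice(WORDS) for _ in range(num_words))

def random_password(length: int = 16) -> str:
    """Mix letters, digits and symbols, as recommended above."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_passphrase())  # e.g. "orbit-quartz-meadow-copper"
print(random_password())    # 16 random characters mixing letters, digits and symbols
```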
Area | Best Practices |
Data classification and access control | Classify data based on sensitivity (e.g., public, internal, confidential, restricted). Grant access based on the principle of least privilege. |
PII data mapping and inventory | Maintain an up-to-date inventory of where PII is stored, processed and transmitted. This helps ensure compliance with regulations like GDPR and CCPA. (A simplified discovery sketch follows this table.) |
Employee training and awareness | Conduct regular training on data privacy, phishing and secure handling of PII. Include role-specific modules for HR, IT and customer service teams. |
Encryption in transit and at rest | Use TLS/SSL for secure data transmission. Encrypt databases and storage systems to protect data at rest. |
Audit trails and monitoring | Log access to sensitive data and monitor for anomalies. Use security information and event management (SIEM) systems for real-time alerts. |
Data minimization and retention policies | Collect only necessary data. Regularly review and delete outdated or unnecessary PII. |
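As a rough illustration of how pattern-based discovery can support the classification and inventory rows above, the simplified Python sketch below scans text for a few common PII patterns (email addresses, US Social Security numbers and phone numbers). The regular expressions are deliberately naive assumptions and will miss some formats; dedicated classification tools use far richer techniques.

```python
# Minimal sketch: flagging a few common PII patterns in free text with regular expressions.
# The patterns are deliberately simplified and are assumptions for illustration only.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_for_pii(text: str) -> dict[str, list[str]]:
    """Return every match for each pattern, keyed by PII type."""
    return {label: pattern.findall(text) for label, pattern in PII_PATTERNS.items()}

sample = "Contact jane.doe@example.com or 555-867-5309; SSN on file: 078-05-1120."
for label, matches in scan_for_pii(sample).items():
    if matches:
        print(f"{label}: {matches}")
```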
Best practice | Details |
Minimize data collection. | Avoid collecting unnecessary PII; assess risk vs. value of each data point. Anonymize or pseudonymize data where possible. (A pseudonymization sketch follows this table.) |
Limit data sharing. | Avoid sharing sensitive data across multiple systems or third parties without proper contracts and encryption. Use tokenization or masked identifiers where practical. |
Patch and update systems. | Keep software, firmware and security tools current to fix known vulnerabilities. |
Use network segmentation. | Separate networks for different business functions (such as finance vs. operations) to limit the spread of a breach. |
Adopt a Zero Trust security model. | Authenticate and authorize every user, device and app regardless of location. Assume breach and continuously verify trust. |
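To make pseudonymization and tokenization more concrete, here is a minimal Python sketch that replaces a direct identifier with a keyed, HMAC-based token before data is shared. The key, field names and truncation length are assumptions for illustration; a real deployment would keep the key in a secrets manager and decide whether reversible tokenization is required.

```python
# Minimal sketch: pseudonymizing a direct identifier with a keyed hash (HMAC-SHA256).
# The key handling, field names and truncation length are illustrative assumptions.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"

def pseudonymize(value: str) -> str:
    """Derive a stable, non-reversible pseudonym; the same input always maps to the same token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane.doe@example.com", "purchase_total": 42.50}

# Share the token instead of the email address when downstream systems only need a stable ID.
shared_record = {
    "customer_token": pseudonymize(record["email"]),
    "purchase_total": record["purchase_total"],
}
print(shared_record)
```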
Mishandling PII can lead to serious consequences for both organizations and individuals.
Organizations that fail to comply with data protection laws may face substantial fines. Here are some examples:
Victims of data breaches often initiate class actions, where settlements can cost millions in payouts, legal fees and operational changes.
Post-breach expenses include incident response, forensics, public relations, customer notification and credit monitoring. According to the IBM Cost of a Data Breach Report 2023, the average cost of a breach is $4.45 million.
Equifax, one of the largest credit reporting agencies in the US, suffered a major cybersecurity incident that exposed the sensitive personal information of some 147.9 million people in the US - nearly half of the country's population. It also affected people in Canada and the United Kingdom. The incident stands out as one of the largest breaches of sensitive data in history.
The breach occurred between May 13 and July 30, 2017. It went undetected for 76 days, allowing attackers to exfiltrate massive amounts of data. Equifax publicly disclosed it on September 7, 2017. The attackers exploited a vulnerability (CVE-2017-5638) in Apache Struts, an open-source web application framework. This vulnerability had been publicly disclosed and patched in March 2017, but Equifax did not apply the patch in time. Later reports indicated that some systems lacked proper encryption and security protocols.
Compromised data included names, Social Security numbers, birth dates, addresses, credit card information (for around 209,000 people), and dispute documents containing personal information (for 182,000 individuals). Overall, the breach posed a significant risk of identity theft and fraud.
Reputational damage aside, Equifax incurred $1.4 billion in costs related to the breach, including legal settlements, customer support and security improvements. The settlement also included free credit monitoring and identity theft protection for affected individuals. Equifax had to invest in overhauling its cybersecurity infrastructure, implementing stronger encryption, multifactor authentication and real-time threat monitoring. On the legal front, Equifax faced several lawsuits and investigations from regulators and private entities.
The Facebook-Cambridge Analytica scandal came to light in early 2018, when it was revealed that the personal data of millions of Facebook users had been harvested without consent and used for political advertising purposes. The core of the issue revolved around a third-party app that collected data through a seemingly innocuous personality quiz. While only about 270,000 users directly interacted with the app, Facebook's API at the time allowed the app to access not just their data but also the data of their friends - ultimately compromising the information of over 87 million users. This data was then passed on to Cambridge Analytica, a political consulting firm that used the profiles to build psychographic models and target users with highly personalized political messages, particularly during the 2016 US presidential election and the Brexit campaign.
Facebook faced intense scrutiny from governments and regulators worldwide, leading to CEO Mark Zuckerberg testifying before the US Congress and the European Parliament. The company's stock value dropped significantly and its reputation suffered substantial damage. Regulatory consequences followed, including a $5 billion fine from the US Federal Trade Commission (FTC) in 2019 - the largest ever imposed for a privacy violation at that time.
Here are the key lessons learned from high-profile data privacy incidents such as the Equifax breach and the Facebook-Cambridge Analytica scandal:
Here's a detailed guide on managing PII in the workplace and in vendor agreements, with emphasis on identifying PII during third-party negotiations, employee responsibilities and vendor risk management.
Clear PII definition | Clearly specify what qualifies as PII in your organization's context (for example, names, emails, national IDs, biometric data). Make a point to reference regulatory definitions (GDPR, CCPA, etc.) when drafting or reviewing contracts. |
Data flow mapping | Document how PII will be collected, processed, transmitted, and stored by third parties. Moreover, identify who the data controller and data processor are, along with their responsibilities. |
Key contractual clauses to include | Make sure you include the following: Data use limitations - ensure the vendor cannot use PII beyond the specified purpose. Data retention and deletion - define how long data is retained and how it should be securely deleted. Breach notification - require vendors to notify your organization within a specific time frame in case of a data breach. Audit rights - allow for periodic privacy and security audits. |
Standard agreements | Use data processing agreements (DPAs) and service level agreements (SLAs) to codify responsibilities. For international transfers, incorporate standard contractual clauses (SCCs) or use vendors certified under a framework like Privacy Shield (historical), UK IDTA or Binding Corporate Rules. |
Perform a pre-contract risk assessment. | Use questionnaires or risk assessment templates to evaluate: security certifications (e.g., ISO 27001, SOC 2); data encryption practices; breach history; sub-processor management |
Perform due diligence. | Check vendor reputation, financial health, and regulatory history. Confirm alignment with frameworks like GDPR, CCPA, HIPAA, or PCI DSS, depending on industry. |
Monitor ongoing compliance. | Require annual compliance reports or certifications. Use third-party risk monitoring services if managing many vendors. |
Include exit and termination provisions. | Contracts should specify data return or destruction procedures when ending the relationship. Ensure vendors do not retain any PII beyond contract termination unless legally required. |
PII and protected health information (PHI) are both foundational concepts in data privacy and security. The following will help you distinguish between the two.
PII is any information that can be used to identify, contact or locate a specific individual, either directly or indirectly, such as full name, Social Security number, email address, phone number, passport number or driver's license number. PII applies broadly across sectors, such as finance, education and retail.
PHI is a subset of PII that specifically relates to an individual's health status, medical care or payment for health services. PHI is defined and regulated under HIPAA in the United States. Examples of PHI include:
PHI is a subset of PII, but not all PII is PHI. For example, a name is PII, but a name combined with a medical diagnosis or insurance number becomes PHI.
PII (general) | Regulated by: Privacy Act of 1974, CCPA, GDPR, FERPA and various state-level laws Breach consequences: Vary depending on sector and jurisdiction Use cases: Customer accounts, marketing, banking, education, etc. |
PHI (under HIPAA) | Regulated by: HIPAA Privacy Rule and HIPAA Security Rule, enforced by the US Department of Health and Human Services (HHS) Applies to: Covered entities (healthcare providers, insurers) and their business associates (e.g., billing companies, cloud providers handling PHI) Requirements include: Safeguards for electronic PHI (ePHI); patient consent for use/disclosure; breach notification rules Penalties: Can reach up to $1.5 million per year per violation type, plus criminal liability in serious cases |
Here are some trends and challenges regarding PII to watch for.
AI and big data analytics are transforming PII management but also amplifying risks. Key concerns include:
The core challenges will be balancing innovation with privacy and ensuring proper oversight of AI-driven decisions involving PII.
Biometric data - facial recognition, iris scans, fingerprints, voice prints, DNA - is increasingly used for authentication, surveillance and personalization. For example, this data is being used in smartphone security, airport clearance, workplace access and healthcare.
Risks include:
A key challenge will be creating legal and technical safeguards for the ethical use and storage of biometrics.
Many countries (as well as individual US states) are developing their own privacy laws, which increases complexity:
Efforts are underway to draft frameworks that address these concerns. They include:
The challenge will be aligning privacy rights and enforcement mechanisms globally while respecting national sovereignty and cultural differences.
Identifying and safeguarding PII is essential to avoid costly breaches and ensure compliance with strict data security regulations like GDPR and HIPAA. However, PII is often dispersed across various data environments, making manual identification both difficult and error-prone.
Netwrix Data Classification empowers organizations to accurately identify, categorize and secure sensitive data, including PII. This process helps them reduce data-related risks, ensure compliance and improve operational efficiency. Key capabilities related to PII include:
PII is data that can reveal an individual's identity. The following definition is widely accepted by privacy frameworks such as the US National Institute of Standards and Technology (NIST) and international regulations like GDPR and CCPA, though the specific terminology may vary slightly:
Any information that can be used to identify, contact, or locate a single individual, either directly or indirectly.
Some common examples of PII are:
See the Types of Personally Identifiable Information section for details.
Regulations that protect PII vary by country and industry, but they all share the goal of ensuring that individuals have control over their personal data, and that organizations handle PII responsibly. Here are some of the most well-known laws.
Country / Region | Laws |
United States | HIPAA (Health Insurance Portability and Accountability Act); GLBA (Gramm-Leach-Bliley Act); FERPA (Family Educational Rights and Privacy Act); CCPA / CPRA (California Consumer Privacy Act / Privacy Rights Act) |
Canada | Personal Information Protection and Electronic Documents Act (PIPEDA) |
European Union | General Data Protection Regulation (GDPR) |
Singapore | Personal Data Protection Act (PDPA) |
See the Regulations Governing Personally Identifiable Information section for details.
Yes, non-sensitive data can become PII when it is combined with other data in a way that allows an individual to be identified, located or contacted. For example, a zip code alone is not PII, but combining it with date of birth and gender could uniquely identify a person.
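As a toy illustration of that point, the short Python sketch below uses made-up records to show how the zip code, date of birth and gender combination can single out one person even though none of those fields identifies anyone on its own.

```python
# Toy illustration: no single field below is identifying on its own, but the combination
# of quasi-identifiers can isolate one record. The data is made up for the example.
from collections import Counter

people = [
    {"name": "A. Smith", "zip": "30301", "dob": "1984-02-14", "gender": "F"},
    {"name": "B. Jones", "zip": "30301", "dob": "1984-02-14", "gender": "F"},
    {"name": "C. Brown", "zip": "30301", "dob": "1990-07-02", "gender": "M"},
]

def quasi_id(person: dict) -> tuple:
    """Combine the three quasi-identifiers into one lookup key."""
    return (person["zip"], person["dob"], person["gender"])

counts = Counter(quasi_id(p) for p in people)

for person in people:
    if counts[quasi_id(person)] == 1:
        # Only C. Brown is singled out here; A. Smith and B. Jones share a combination.
        print("Uniquely identifiable from quasi-identifiers alone:", person["name"])
```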