01/28/2026 | Press release | Distributed by Public on 01/28/2026 14:05
Artificial intelligence is reshaping every facet of modern life, transforming how we work, communicate and make decisions. It is advancing at an incredible pace, with impacts across every industry, including the public sector, where it presents new opportunities for tax administration and tax policy. These rapid technological advances are particularly consequential for tax systems, where governments face large fiscal stakes, persistent compliance challenges and complex policy tradeoffs.
Although federal and state governments collect over $5 trillion annually in tax revenue, the Internal Revenue Service estimates that the "tax gap," the difference between what is owed and what is collected, exceeds $600 billion each year. Budget forecasting errors compound the problem, leading to fiscal miscalculations that affect policy decisions at every level. Meanwhile, taxpayers and businesses spend considerable time navigating regulations, often with inconsistent results. Artificial intelligence has emerged as a potentially transformative solution to these long-standing challenges. From machine learning algorithms that detect sophisticated tax evasion schemes to predictive models that enhance revenue forecasting accuracy, AI technologies are already reshaping how governments approach tax administration and fiscal planning. Federal agencies including the IRS have begun deploying AI-powered systems to identify compliance patterns, while state governments are experimenting with automated tools for sales tax auditing, fraud detection and expenditure optimization.
This brief identifies key trends at the intersection of tax and AI, highlights real-world examples of both opportunities and risks that arise from integrating AI into the tax space, and examines emerging areas and trends. As AI becomes prevalent in the tax landscape, understanding its implications is critical to navigating the ways technology will reshape tax administration.
AI is poised to transform tax administration by reducing errors, shortening preparation time and creating a fairer system overall. As AI tools become integrated into common tax software and state revenue websites, compliance will become easier for taxpayers, who will have access to guidance and expertise previously available only to large enterprises. Today, AI already assists with individual tax returns, but its capabilities are expanding to identify potential deductions and improve filing accuracy. For corporations navigating complex state and federal codes, such as the Internal Revenue Code's 70,000-plus pages, AI can significantly streamline interpretation and application of the law. State tax administrators can also use AI to detect fraud, analyze returns for inaccuracies, and identify patterns of underreporting among retailers and individuals. In short, AI promises to enhance efficiency, accuracy and fairness across the entire tax ecosystem.
Artificial intelligence is fundamentally changing how tax returns are processed, helping agencies handle millions of returns faster and more accurately. AI-powered systems can cross-check information from multiple sources such as W-2 forms, 1099s and banking records, spotting inconsistencies while quickly processing straightforward returns. This automation can significantly reduce the backlogs that have historically overwhelmed tax agencies during busy filing seasons, helping both taxpayers and administrators through faster, more reliable service.
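The cross-checking described above amounts to reconciling a return's reported figures against the sum of its third-party information documents. The sketch below illustrates that idea only; the field names, tolerance and dollar amounts are hypothetical and not drawn from any agency system.

```python
# Hypothetical sketch: reconcile a filed return against third-party
# information documents (W-2s, 1099s). All fields are illustrative.

def reconcile(return_income, info_docs, tolerance=1.0):
    """Flag a return when reported income diverges from the sum of
    third-party documents by more than `tolerance` dollars."""
    documented = sum(doc["amount"] for doc in info_docs)
    gap = documented - return_income
    return {"documented": documented, "gap": gap,
            "flag": abs(gap) > tolerance}

result = reconcile(
    return_income=52_000.0,
    info_docs=[{"source": "W-2", "amount": 48_000.0},
               {"source": "1099-INT", "amount": 5_200.0}],
)
# Returns that reconcile cleanly can be processed automatically, while
# flagged returns are routed for human review.
```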
AI tools tailored to spot fraud can analyze hundreds of factors simultaneously, recognizing subtle signs of schemes ranging from identity theft to complex corporate tax avoidance. By generating risk scores and prioritizing high-risk cases, AI helps tax agencies use their limited audit resources strategically, rather than relying on random selection or simple red flags that sophisticated tax evaders have learned to avoid. The Government Accountability Office reports that AI is being used to select the highest-risk returns for audit and to identify noncompliance. According to the Treasury Department, the IRS' return review program shows promise in preventing billions of dollars in improper payments while reducing false alarms that delay legitimate refunds.
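The risk-scoring and prioritization workflow can be sketched as a weighted combination of indicator features, with cases ranked so limited audit capacity goes to the highest scores first. The features, weights and scores below are illustrative assumptions, not any agency's actual model.

```python
# Illustrative risk-scoring sketch. Real systems learn weights from
# data; these hand-set weights and features are hypothetical.

WEIGHTS = {"income_doc_gap": 0.5,      # mismatch vs. third-party docs
           "deduction_ratio": 0.3,     # deductions unusually high for income
           "prior_adjustments": 0.2}   # history of audit adjustments

def risk_score(features):
    # Each feature is pre-scaled to [0, 1]; higher means riskier.
    return sum(WEIGHTS[name] * value for name, value in features.items())

def prioritize(returns, capacity):
    """Rank returns by score and keep only as many as audit capacity allows."""
    ranked = sorted(returns, key=lambda r: risk_score(r["features"]),
                    reverse=True)
    return [r["id"] for r in ranked[:capacity]]

queue = prioritize(
    [{"id": "A", "features": {"income_doc_gap": 0.9,
                              "deduction_ratio": 0.2,
                              "prior_adjustments": 0.0}},
     {"id": "B", "features": {"income_doc_gap": 0.1,
                              "deduction_ratio": 0.1,
                              "prior_adjustments": 0.1}},
     {"id": "C", "features": {"income_doc_gap": 0.4,
                              "deduction_ratio": 0.8,
                              "prior_adjustments": 0.5}}],
    capacity=2,
)
```

The ranking step is what replaces random selection: low-scoring returns like "B" simply never reach the audit queue.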
Artificial intelligence, including chatbots, already helps with customer service, but this role will grow as virtual assistants are better able to accurately answer a wider range of questions. AI chatbots will soon have the ability to provide support 24/7, making tax help accessible for taxpayers who cannot reach agencies during regular business hours. For example, since January 2022, IRS chatbot technology has helped more than 13 million taxpayers avoid long wait times by resolving tax issues, including setting up roughly $151 million in payment agreements.
At the state level, the California Department of Tax and Fee Administration uses an AI virtual assistant to help its call center manage tax inquiries. Initial analysis found that the AI assistant saves about 1.5% of agents' time on calls, which could result in the department responding to over 10,000 more calls each year. This is particularly valuable during busy periods as chatbot systems can manage routine questions and handle numerous conversations without any drop in quality. This frees up time for human agents to remain available for complex situations requiring judgment and empathy.
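The California figures imply a simple piece of capacity arithmetic: a fixed share of call-handling time freed by the assistant converts directly into additional calls answered. The call volume and handle time below are illustrative assumptions chosen to show how a 1.5% saving can yield on the order of 10,000 extra calls; they are not the department's actual figures.

```python
# Back-of-envelope capacity math. Annual call volume and average handle
# time are hypothetical inputs, not CDTFA data.

def extra_calls(annual_calls, avg_minutes_per_call, time_saved_share):
    """Convert a fractional time saving into additional calls answered.
    (Handle time cancels out; it is kept for clarity.)"""
    minutes_freed = annual_calls * avg_minutes_per_call * time_saved_share
    return round(minutes_freed / avg_minutes_per_call)

extra = extra_calls(annual_calls=700_000, avg_minutes_per_call=12,
                    time_saved_share=0.015)
```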
AI tools help legislative staff and policy analysts who need to review massive amounts of information when evaluating tax reform proposals. AI-powered research tools can quickly scan through decades of legislative history, academic research and policy evaluations to find relevant information, presenting analysts with easy-to-read summaries. For fiscal offices that need to calculate the costs and benefits of legislation, AI modeling tools can analyze how proposed changes would affect different income levels, regions and industries.
Organizations like NCSL are adopting AI-driven platforms to track bills, summarize committee hearings, provide real-time alerts and support collaboration among legislative staff. Internal policies ensure impartiality, factual accuracy and human review of AI-generated content. Globally, similar innovations are emerging. Researchers in Ireland have built AI models to forecast economic reactions to tax incentives, while the U.S. Congressional Research Service uses AI to draft initial bill summaries for analysts to refine. These developments illustrate how AI can enhance legislative evaluation and improve the quality of tax policy decisions.
States are using AI to predict tax revenues and analyze budgets, making complex, time-consuming tasks more efficient. AI tools are especially good at processing numbers and handling repetitive tasks, which makes them particularly useful in the tax space. These systems still need careful oversight to ensure the right information goes in, and humans still need to check the results to make sure error rates stay low.
The International Monetary Fund studied the use of machine learning for revenue forecasting and found that, in some cases, it could reduce prediction errors by 30%. One major advantage the IMF identified was AI's ability to process large amounts of data at once. After seeing these results, the IMF started using AI tools to improve its own forecasts. Minnesota integrated machine learning into its revenue forecasting in 2020. The state's model uses data from employers' weekly tax payments and has proven better at spotting important changes earlier than older models. This allows for better revenue predictions and helps the state plan its budget more efficiently.
Perhaps most valuable is AI's ability to simulate the fiscal impacts of proposed tax policy changes in real time, testing multiple scenarios with different assumptions about how people and businesses might respond. AI-powered predictive models can process vast amounts of economic data, including employment numbers, consumer spending, real estate transactions and market trends, to generate more accurate revenue projections. When legislators consider changing tax rates or reforming the tax structure, AI models can project both the direct revenue effects and how taxpayers might change their behavior in response, such as by shifting income, adjusting investments or changing spending patterns.
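The scenario-testing idea can be shown with a toy revenue projection that varies one behavioral assumption, the elasticity of the taxable base with respect to the net-of-tax rate. Everything here (the base, the rates, the elasticity values) is an illustrative assumption, and real fiscal models are far richer.

```python
# Toy scenario simulation: project revenue from a rate change under
# different assumed behavioral responses. All numbers are hypothetical.

def project_revenue(base, current_rate, new_rate, elasticity):
    """Shrink (or grow) the taxable base in proportion to the change in
    the net-of-tax rate, scaled by an assumed elasticity."""
    pct_change_net_of_tax = ((1 - new_rate) - (1 - current_rate)) / (1 - current_rate)
    behavioral_base = base * (1 + elasticity * pct_change_net_of_tax)
    return behavioral_base * new_rate

# Raise a 5% rate to 6% on a $100B base, under three response assumptions.
scenarios = {e: project_revenue(base=100e9, current_rate=0.05,
                                new_rate=0.06, elasticity=e)
             for e in (0.0, 0.2, 0.4)}
```

Comparing the scenarios shows the point made above: the stronger the assumed behavioral response, the less of the mechanical revenue gain survives.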
While AI tools can improve tax administration and policymaking, they also introduce significant risks that must be addressed to ensure systems remain fair, transparent and accountable. Algorithmic models trained on historical data may replicate or worsen existing biases in enforcement patterns, disproportionately flagging certain demographic groups or small businesses while failing to detect sophisticated tax avoidance among wealthy taxpayers. Expanding analytical capacity also raises substantial privacy concerns. As tax agencies gain access to increasingly detailed financial and behavioral data, the potential for over-collection, misuse or unauthorized access grows. The displacement of experienced tax professionals and the concentration of technical expertise raise questions about workforce transitions and the retention of institutional knowledge. Perhaps most fundamentally, the opacity of many AI systems challenges traditional notions of due process and the right of taxpayers to understand and contest the basis for government actions affecting their financial obligations.
AI systems are only as fair as the data and assumptions that shape them. If models are trained on datasets reflecting historical disparities, such as past audit patterns that skew toward lower-income taxpayers, they may inadvertently replicate those inequities. This is known as algorithmic bias, and it could subject certain taxpayers to additional scrutiny or auditing despite their having no meaningful risk indicators. Such practices can erode confidence in tax agencies by replacing the human judgment and contextual understanding taxpayers expect with opaque, automated decision-making.
Bias concerns are not hypothetical, and early analyses of federal and state systems have uncovered examples of automated tools disproportionately targeting certain groups. The challenge is compounded by the fact that AI systems can identify correlations in data that may reflect societal inequities rather than actual tax compliance risk. Without careful design, validation and ongoing monitoring, these systems risk perpetuating or even amplifying existing disparities in tax enforcement.
In its AI Risk Management Framework, the Commerce Department's National Institute of Standards and Technology recommends that organizations identify and manage AI-associated risks, including those related to bias and fairness in data and models. Specifically, practitioners are encouraged to audit and evaluate training data for representativeness and potential sources of bias, and to use risk-informed fairness metrics to assess system performance across different demographic groups.
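One simple instance of the fairness assessment NIST-style guidance calls for is comparing a model's flag rate across groups and reporting the disparity between the lowest and highest rates. The group labels, data and any threshold an agency might apply are illustrative assumptions; real evaluations use many metrics, not this one alone.

```python
# Minimal fairness-audit sketch: per-group flag rates and their
# disparity ratio. Data and group labels are hypothetical.

def flag_rates(cases):
    """Fraction of cases flagged for review, computed per group."""
    rates = {}
    for group in {c["group"] for c in cases}:
        members = [c for c in cases if c["group"] == group]
        rates[group] = sum(c["flagged"] for c in members) / len(members)
    return rates

def disparity_ratio(rates):
    # Ratio of lowest to highest group flag rate; 1.0 means parity.
    return min(rates.values()) / max(rates.values())

rates = flag_rates(
    [{"group": "A", "flagged": True},  {"group": "A", "flagged": False},
     {"group": "B", "flagged": True},  {"group": "B", "flagged": True}]
)
```

A low ratio does not by itself prove bias (base rates of noncompliance may differ), but it tells reviewers where to look, which is exactly the ongoing monitoring the framework recommends.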
The framework underscores the importance of human oversight and governance structures so that experienced professionals make final decisions in high-impact contexts. It also promotes transparency in AI system design, testing that supports accountability, and safeguards for sensitive processes.
AI-enabled tax systems often rely on large volumes of sensitive personal and financial data. While this can improve accuracy and reduce fraud, it also creates new privacy and cybersecurity risks. Any system that is not fully secured and properly monitored could expose taxpayer data through breaches, unauthorized third-party access or even inadvertent leakage during the model-training process. State and local tax agencies already are frequent targets of cyberattacks; integrating AI expands the range of potential vulnerabilities, from data ingestion pipelines to automated decision engines to cloud-based storage systems.
The use of AI also introduces new privacy considerations beyond traditional cybersecurity. Machine learning models themselves can sometimes inadvertently reveal information about the data they were trained on. If agencies use third-party vendors or cloud services to process tax data, questions arise about data governance, ownership and the potential for unauthorized use. Additionally, as AI systems become more sophisticated at analyzing patterns across multiple data sources, they may be able to infer sensitive information that taxpayers never explicitly provided.
Ensuring robust encryption, strict access controls, comprehensive auditing and clear data-retention limits will be essential to maintaining public trust. Agencies must also carefully vet vendors, ensure contractual protection for data privacy and maintain the ability to audit the use of their data. Regular security assessments, penetration testing and incident response planning are necessary to identify and address vulnerabilities before they can be exploited.
Governments operate under tight fiscal constraints and face constant pressure to demonstrate value to taxpayers. Many tax agencies already struggle with outdated IT systems, limited budgets and staffing shortages. Without careful planning, AI adoption can exacerbate these challenges. Because up-front costs are often high and returns may be delayed, policymakers should consider proactive public outreach to explain the long-term benefits of incorporating AI into tax administration.
Agencies should also establish clear criteria for evaluating potential AI solutions, including vendor capabilities, compliance with legal and privacy requirements, and plans for ongoing maintenance and oversight. Key considerations include data security, algorithmic transparency and the ability to audit system decisions. Integration with existing IT infrastructure can be complex, requiring detailed planning, rigorous testing and continuous monitoring to ensure accuracy, fairness and reliability.
Workforce considerations are equally important. Some routine tasks, such as answering basic taxpayer questions, reviewing simple returns or flagging low-risk cases, may become automated, raising concerns about job displacement. However, many experts emphasize that human judgment remains essential, particularly for complex audits, nuanced compliance decisions and oversight of AI outputs. The IRS continues to rely on auditors and staff to validate AI findings and make final determinations, demonstrating a model where humans work in tandem with technology.
States also are using automated tools to support staff rather than replace them entirely (examples include the California virtual assistant discussed earlier). This requires thoughtful workforce planning, including retraining programs to help employees develop new skills, clear communication about how AI will be used, and efforts to preserve institutional knowledge even as processes change. The most successful implementations occur when AI is used to augment human expertise rather than replace it, freeing experienced professionals from routine tasks so they can focus on complex cases that require judgment, context and interpersonal skills.
As AI becomes more deeply embedded in tax administration, fundamental questions about due process, transparency and legal accountability arise. Taxpayers have a constitutional and statutory right to understand the basis for government actions that affect their financial obligations, to challenge those decisions and to receive fair treatment under the law. However, the opacity of many AI systems, often referred to as the "black box" problem, can make it difficult or impossible to explain why a particular taxpayer was selected for audit, why a deduction was flagged as suspicious or how a risk score was calculated.
The use of AI in tax administration has drawn significant attention from policymakers and the public, but legislative action at the federal level has been limited, with most developments occurring through agency implementation and internal guidance rather than through new laws or regulations.
Federal agencies, particularly the IRS and the Treasury Department, have been implementing AI for tax administration through internal policies rather than formal published regulations. For example, the IRS issued its Interim Policy for AI Governance in May 2024 and updated it in March 2025.
The Treasury Department has used AI and machine learning to recover $4 billion in fraudulent and improper payments, including $1 billion from check fraud. Under the America's AI Action Plan, released in 2025, the department was directed to release guidance clarifying that AI literacy and skill-development programs may qualify as tax-free educational assistance under Section 132 of the Internal Revenue Code, though this guidance has not yet been published.
Several legislative proposals pertaining to the use of AI at the IRS are circulating in Congress, including the Digital Evaluation for Tax Enforcement and Compliance Tracking Act (DETECT) Act (HR 4974), a bipartisan effort to use AI for fraud detection. The proposed No AI Audits Act (HR 7694) goes in the other direction, preventing the use of AI for audits. In the absence of clear legislation, the IRS continues to use its interim policy.
AI's technical opportunities and its political risks are often at odds. The technology has a clear capability to increase audit capacity and fraud detection. At the same time, the risk of overly aggressive revenue collection, along with errors and bias in detecting fraud, is a considerable concern.
State lawmakers have been more active than their federal counterparts in addressing AI, though most state activity has focused on broader regulation of AI rather than taxation specifically. Tax rulings, when issued, have targeted specific AI services, while a few jurisdictions have imposed unique local taxes on AI products.
While no state legislatures have specifically taxed AI, several state revenue departments have issued administrative rulings clarifying how existing sales tax law applies to AI services. In a September 2025 ruling, the Illinois revenue department clarified that companies offering AI tools and services online, such as chatbots or other software hosted in the cloud, do not owe state sales tax on those services, according to H&R Block. The ruling treated AI services like software as a service, or SaaS, which is generally not taxable under Illinois law because it is considered a service rather than tangible personal property.
Similarly, Indiana's revenue department stated in July 2025 that generative AI services accessed through web browsers or application programming interfaces are considered services and not subject to sales/use tax. Indiana's approach mirrored Illinois' treatment, recognizing AI services as distinct from taxable digital products. Both states applied their existing SaaS frameworks to conclude that AI chatbot services accessed remotely without software transfer are not subject to state sales tax.
The most significant taxation of AI at the state or local level occurs in Chicago, which imposes an 11% tax on the use of artificial intelligence platforms. The tax, which began in 2023 at 9%, applies specifically to AI platforms accessed as cloud-based services within Chicago city limits, making the city unique in explicitly taxing AI services at a local level.
The Federation of Tax Administrators' 2024 survey highlights the limited use of AI across state tax agencies. Only 15% of respondents said their agencies were using AI in core functions or were conducting pilots. Nearly half were not planning to use AI at their agencies, and 57% noted that chatbots or virtual assistants were not available for taxpayers in their states. Meanwhile, 34% of respondents cited data security as their No. 1 agency priority, with 33% noting security risks as the top reason for not moving forward with technology advances.
In the absence of clear legislation, tax administrators are often the ones making decisions about AI adoption in their agencies. Data security is a major priority for administrators even apart from the AI conversation, and AI may be perceived as adding new and little-understood data risks.
Given the lack of funding for technology integration and related hiring, the scarcity of laws authorizing or guiding the use of AI for tax administration, and the absence of clear data security guidelines, state tax agencies are moving slowly, or not at all, in expanding AI. Administrators may be missing out on opportunities to incorporate new technology, with the status quo heavily favoring a risk-averse operating posture.
Artificial intelligence is rapidly transforming tax administration at both federal and state levels, offering substantial promise while also raising important challenges that demand careful attention. The opportunities are clear. AI can improve revenue forecasting accuracy, streamline return processing, enhance fraud detection and provide taxpayers with more accessible and personalized guidance. Early examples of implementation at the IRS and in California, Minnesota and New York demonstrate real gains in efficiency and effectiveness, from processing millions of returns more quickly to preventing billions of dollars in improper payments.
Yet these advances come with significant responsibilities. Algorithmic bias threatens to perpetuate historical inequities in tax enforcement if systems are not carefully designed and monitored. Privacy and security risks expand as agencies collect and analyze more sensitive data. Workforce transitions require thoughtful planning to preserve institutional knowledge while helping employees adapt to new roles. Most fundamentally, the opacity of many AI systems raises due process concerns about taxpayers' ability to understand and challenge decisions that affect their financial obligations.
While Colorado and California have enacted AI legislation targeting discrimination and transparency, and states including Illinois and Indiana have clarified that AI services are generally not subject to sales tax, no jurisdiction has developed a comprehensive framework specifically addressing AI in tax administration. Federal action has been limited to internal agency guidance rather than formal regulations, leaving many questions about accountability, oversight and taxpayer rights unresolved.
As AI becomes more deeply embedded in tax systems, policymakers face a critical task: harnessing the technology's potential to create a more efficient and equitable tax system while also establishing guardrails that protect fairness, privacy and due process for all taxpayers.