Table of Contents
- I. Executive Summary
- II. Historical Context of Arab American Discrimination
  - a. The Post-9/11 Origins of Discriminatory AI Surveillance
  - b. Understanding AI Systems: Technical Framework and Government Deployment
    - i. The Fundamentals of AI: Machine Learning and Deep Learning Models
    - ii. Understanding Facial Recognition Technology: Methods and Mechanics
    - iii. How Bias Infiltrates AI Systems: Direct and Indirect Discrimination
    - iv. AI Deployment Across Federal Agencies: From DHS to FBI
  - c. Opacity and Intent: Barriers to Legal Remedies for AI Bias Against Arab American Communities
- III. Technical and Social Biases Against Arab Americans in AI Systems
  - a. Biased Benchmarks and False Matches: How FRT’s Technical Limitations Create Security Risks for Arab Americans
  - b. Language Processing Bias and Security Screening: A Dangerous Convergence of Technical Limitations and Institutional Prejudice
  - c. Covert Linguistic Discrimination: The New Frontier of AI Bias
  - d. How AI Systems Uniquely Target Arab American Identity
- IV. Impenetrable Systems: The Technical and Legal Challenges of Proving AI Discrimination
  - a. AI’s Built-in Barriers to Accountability
  - b. The Transparency Paradox: How Ibrahim v. Department of Homeland Security Illustrates the Growing Challenge of AI Accountability
  - c. How Current Laws Fail to Monitor National Security AI Systems
- V. Building a New Framework: Integrating European AI Regulation with U.S. Civil Rights Reform
  - a. Beyond Agency Enforcement: Establishing Permanent Civil Rights Protections in AI
  - b. Beyond the Black Box: Creating Meaningful Transparency in AI Systems
  - c. Comprehensive Auditing Framework: Implementing Multi-Layered Oversight to Combat Compound Discrimination
- VI. Conclusion
I. Executive Summary
The integration of artificial intelligence (AI) into United States (U.S.) national security operations has automated and amplified discriminatory practices established in the post-9/11 era, creating unprecedented barriers for Arab Americans. This paper examines how AI systems deploy overlapping forms of bias through facial recognition technology (FRT), language processing, and automated screening, producing a uniquely destructive form of compound discrimination that is more pervasive and harder to challenge than traditional bias.
This analysis reveals how technical limitations in AI systems intersect with institutional biases from post-9/11 security frameworks, while the combination of AI’s “black box” nature and broad national security exemptions creates nearly insurmountable barriers to legal accountability. Current civil rights frameworks prove inadequate, as traditional protections fail to capture algorithmic bias effectively. The paper examines the proposed Artificial Intelligence Civil Rights Act of 2024 and the European Union’s AI Act of 2023 as models for reform, arguing for comprehensive federal legislation that would require testing for compound discrimination, mandate transparency, and limit national security exemptions that currently shield discriminatory systems from meaningful oversight.
II. Historical Context of Arab American Discrimination
The integration of AI into U.S. national security operations cannot be understood without examining the discriminatory framework established in the post-9/11 era. Today’s AI systems do not operate in a vacuum; rather, they automate and amplify a pre-existing architecture of surveillance and control that has long targeted Arab and Muslim Americans.
a. The Post-9/11 Origins of Discriminatory AI Surveillance
The current deployment of AI in national security emerged from an extensive framework of surveillance and control established in the immediate aftermath of 9/11. Because the terrorists responsible for perpetrating the horrific 9/11 attacks were of Muslim faith, the “War on Terror” that followed focused on Muslim foreign nationals and citizens.[1] This framework of suspicion was institutionalized through three key developments: the creation of vast new electronic surveillance authorities, the structural reorganization of federal agencies under the Bush Administration, and the implementation of discriminatory screening programs that explicitly targeted Muslim and Arab Americans.[2]
The creation of the Department of Homeland Security (DHS) in 2002 marked a fundamental shift in how the federal government approached surveillance of Muslim communities.[3] DHS consolidated previously independent immigration, customs, and emergency management functions into a single entity, functioning as what scholars have called “the institutional fulcrum” for sweeping federal and local anti-terror surveillance authorized by the USA PATRIOT Act.[4] An unprecedented expansion of surveillance capabilities accompanied this consolidation of power.[5]
The USA PATRIOT Act provided the legal framework for what would become systematic surveillance of Muslim Americans, severely undermining Fourth Amendment protections by enabling secret, warrantless searches where law enforcement could covertly enter homes, copy hard drives, and install surveillance devices without notice.[6] Under this new regime, the monitoring of Muslim spaces, including mosques and community centers, was deemed an acceptable “collateral cost” of achieving national security objectives.[7] Additionally, under the guise of counter-terrorism efforts, thousands were criminally prosecuted or detained under immigration holds.[8] This framework established the precedent for treating entire Muslim communities as presumptively suspicious.[9] The post-9/11 terror watchlist systems, including the Terrorist Screening Dataset (TSDS) used to train AI systems deployed by government agencies, exhibited systematic bias against Muslim names and identities.[10] This systemic surveillance effectively automated and amplified institutional prejudices against Muslims and Arabs that were embedded in early counterterrorism policies, transforming manual discriminatory practices into algorithmic ones.
One of the most striking examples of institutionalized discrimination was the National Security Entry Exit Registration System (NSEERS), implemented in June 2002.[11] This program required male teens and adults from twenty-four Muslim-majority countries to submit to fingerprinting and registration with federal authorities or face immediate deportation.[12] Though NSEERS was dissolved in 2011,[13] its underlying logic––that national origin and religious identity could serve as proxies for national security threats––persists in current automated screening systems and counterterrorism practices.
The impact of these policies was immediate and severe: in just one week after 9/11, anti-Muslim crimes exploded from five to 197 incidents, while anti-ethnic crimes targeting Arab Americans skyrocketed from seven to 503 cases.[14] The year ended with a staggering 324% increase in anti-Arab hate crimes.[15] Of the record 499 anti-Muslim hate crimes recorded that year, over 460 occurred in just the final four months of 2001,[16] revealing how swiftly official policies of suspicion contributed to private violence.
This historical context is crucial for understanding current AI systems not as neutral technological innovations, but as the digital extension of historical bias designed to treat Muslim and Arab Americans as perpetual security threats.
b. Understanding AI Systems: Technical Framework and Government Deployment
AI surveillance systems have fundamentally transformed how government agencies collect, analyze, and act on information in national security operations. From basic machine learning (ML) algorithms to sophisticated FRT, these systems are now deeply embedded across federal agencies like DHS, the Federal Bureau of Investigation (FBI), and U.S. Citizenship and Immigration Services (USCIS), raising critical questions about bias, privacy, and civil liberties in an era of automated decision-making.
i. The Fundamentals of AI: Machine Learning and Deep Learning Models
AI is revolutionizing how we process and understand information. Today, there are two broad categories of AI models used across government agencies: basic ML systems and deep-learning models. Basic ML systems use “algorithms (i.e., instructions for computers) to process and identify patterns in large amounts of data (‘training’), and then use[s] those patterns to make predictions or decisions when given new information.”[17] By combining sophisticated computational tools with ML capabilities, AI transforms raw data into actionable insights, enabling more informed and efficient decision-making across all sectors.[18]
On the other hand, deep-learning models like OpenAI’s ChatGPT work by processing information through layers of artificial neurons, loosely analogous to how a human brain works.[19] The system first converts inputs, like text or images, into numbers it can understand.[20] As information flows through these layers, each layer learns to recognize increasingly complex patterns.[21] During training, the model adjusts internal parameters, called weights and biases, that determine how much importance it gives to different pieces of information.[22] The system learns through a process called backpropagation; when a model makes a prediction, it measures how wrong that prediction was and then works backward through its neural network to correct the mistake.[23] Rather than following fixed rules, the system discovers patterns on its own from training data, though this can make it difficult to understand exactly how it reaches its conclusions.[24] The quality of its learning depends heavily on the data on which it is trained.[25]
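To make the training loop just described concrete, the following minimal sketch trains a toy one-hidden-layer network in NumPy. The data (the XOR pattern), layer sizes, and learning rate are arbitrary illustrations for exposition, not anything resembling a deployed government system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data (XOR): 4 examples, 2 numeric features, binary label.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Weights and biases: the internal parameters adjusted during training.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: each layer transforms the numeric inputs.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Measure how wrong the predictions were.
    err = pred - y

    # Backpropagation: push the error backward and nudge weights and biases.
    d_pred = err * pred * (1 - pred)
    d_h = (d_pred @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_pred
    b2 -= lr * d_pred.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(pred, 2))   # approaches [[0], [1], [1], [0]] as training proceeds
```

The sketch also illustrates why training data matters so much: the network learns whatever regularities its examples contain, with no independent check on whether those regularities are fair or accurate.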
This technology fundamentally changed how humans approach complex problems, analyze patterns, and extract meaningful conclusions from vast amounts of information.[26] Some scholars even argue that integrating AI into decision-making may help curb biases that contribute to errors in judgment in humans because, in theory, algorithms should be more easily and efficiently debiased than the human mind.[27]
ii. Understanding Facial Recognition Technology: Methods and Mechanics
FRT is one form of technology employed by government agencies; it operates through a complex system of detection, verification, and identification processes that enables both targeted individual matching and broad-scale surveillance. These systems can function in several ways. First, they can be used to detect the presence of a face and locate it in an image.[28] Second, FRT may be used to try to assess other characteristics about a person from their facial image, like their age, gender, or emotions.[29] Finally, and perhaps most pertinently, FRT can be used to establish the identity of a person.[30]
Establishing identity can be accomplished in two distinct ways: face verification and face identification.[31] Face verification is a 1-to-1 matching system that determines whether an image shows a specific person.[32] This can be done either by comparing an image to stored information about a named individual or by comparing two images to determine if they show the same person.[33] Face identification, on the other hand, is a 1-to-many matching system that attempts to identify whose face appears in an image by comparing it against a gallery of stored appearance information.[34]
FRT, as a biometric technology, works through several key components and processes.[35] The process begins with capture and detection, where faces are photographed either voluntarily, as when unlocking a phone, or involuntarily, as in surveillance footage.[36] The system then goes through enrollment, where facial information is stored in a gallery database.[37] This facial data is converted into a digital template called a faceprint, which serves as a numerical representation of facial characteristics.[38] The recognition process involves comparing faceprints, either through 1-to-1 verification or 1-to-many identification.[39] The system generates similarity scores to determine matches, but this process is inherently imperfect.[40] Various factors like lighting, camera angle, and image quality can affect faceprint accuracy, leading to potential errors.[41]
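The sketch below illustrates, in simplified form, the verification and identification steps just described. It assumes faceprints arrive as fixed-length numeric vectors produced by some upstream model; the cosine-similarity metric, the 0.8 threshold, and the gallery structure are illustrative choices, not the parameters of any deployed system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two faceprints (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """1-to-1 verification: does the probe match one enrolled faceprint?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """1-to-many identification: best-scoring gallery entry above the threshold."""
    scores = {name: cosine_similarity(probe, fp) for name, fp in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Example with made-up faceprints: the threshold trades false matches against
# false non-matches, so any skew in how faceprints are generated for particular
# groups translates directly into unequal error rates.
rng = np.random.default_rng(0)
gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = gallery["person_a"] + rng.normal(scale=0.1, size=128)   # noisy re-capture
print(verify(probe, gallery["person_a"]))
print(identify(probe, gallery))
```

Because every comparison reduces to a similarity score measured against a tunable threshold, the quality of the underlying faceprints, not the matching arithmetic, is where demographic bias enters.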
iii. How Bias Infiltrates AI Systems: Direct and Indirect Discrimination
Although artificial intelligence is often heralded as an objective alternative to inherently biased human decision-making, this assumption warrants scrutiny. Indeed, research has repeatedly shown that algorithms can and do produce stereotyped and discriminatory outputs.[42] Bias can rear its head in many ways. First, the human developers who create these algorithms can unconsciously embed their own biases into the systems they design.[43]
Second, when AI systems are trained on historical data that reflects societal inequalities and prejudices, they risk perpetuating and amplifying existing patterns of discrimination.[44] These discriminatory outputs may be produced through two mechanisms: 1) direct discrimination; and 2) indirect discrimination.[45] Direct discrimination occurs when AI systems explicitly use protected characteristics like race, gender, or ethnicity in their decision-making process,[46] as when a content moderation system factors in an author’s racial background.[47] Indirect discrimination happens through proxy variables––seemingly neutral data points that strongly correlate with protected characteristics.[48] For instance, postal codes may serve as proxies for race, or shopping patterns for gender.[49] This type of discrimination is more challenging to detect and mitigate because the proxy appears neutral on its face and its correlation with protected characteristics may not be immediately obvious.[50] Feedback loops can amplify this discrimination.[51] For example, if initial bias leads to more frequent screening of Arab travelers, the additional data collected from these screenings can reinforce and amplify the original bias, as the sketch at the end of this subsection illustrates.
Finally, when training data fails to adequately represent diverse populations, particularly those from historically marginalized communities, the resulting AI systems can produce skewed and inequitable results.[52]
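The sketch referenced above uses synthetic data to show how indirect discrimination can arise: the screening rule never sees the protected attribute, yet a correlated proxy reproduces the disparity. Every variable name, coefficient, and cutoff here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hidden group membership: never shown to the screening rule below.
protected = rng.integers(0, 2, size=n)

# A proxy feature (think postal code or travel pattern) correlated with the group.
proxy = protected * 0.9 + rng.normal(0.0, 0.3, size=n)

# The simplest stand-in for a model that learned to rely on the proxy:
# flag anyone whose proxy value exceeds a cutoff.
flagged = proxy > 0.5

for group in (0, 1):
    rate = flagged[protected == group].mean()
    print(f"group {group}: flagged {rate:.1%} of the time")

# Group 1 is flagged far more often even though the rule never saw group labels.
# If flagged cases then generate more training data, the gap can widen over time.
```

The disparity appears without any explicit use of the protected characteristic, which is why audits that only check for direct use of race, religion, or ethnicity routinely miss this form of bias.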
iv. AI Deployment Across Federal Agencies: From DHS to FBI
Governmental entities have started to catch on to these troubling realities. In response, the White House Office of Science and Technology Policy (OSTP) has created a working definition for algorithmic discrimination and set out a Blueprint for an AI Bill of Rights to mitigate discriminatory outputs.[53] While recent federal initiatives require agencies to report and disclose their AI usage, broad national security exemptions exist for intelligence and defense applications, as evidenced by carveouts in the Advancing American AI Act of 2022 and Executive Order 14110.[54] Enabled by these grants of discretion, many governmental agencies focused on national security have moved forward with implementing AI into their surveillance frameworks.
1. Current AI Implementation and Plans for Expansion by DHS, CBP, and TSA
In August 2024, DHS announced the Homeland Advanced Recognition Technology System (HART), poised to replace its current Automated Biometric Identification System (IDENT). Both systems use AI to connect biometric data, like fingerprints, iris scans, and facial images, with biographical information and derogatory records like warrants, terrorist designations, and criminal histories.[55] Further, DHS’s AI surveillance capabilities operate through multiple interconnected programs. U.S. Customs and Border Protection (CBP) uses the Automated Targeting System (ATS), originally developed post-9/11 for cargo screening but expanded to passenger screening.[56] ATS aggregates data from numerous sources: government databases, airline records, border device searches, DMV records, FBI data, the TSDS, and commercial data brokers, including brokers of social media data.[57] The system flags travelers by comparing data against watchlists and analyzing suspicious activity patterns, such as travel to high-risk regions.[58] Both CBP and the Transportation Security Administration (TSA) can recommend watchlist additions based on risk assessments, creating an interconnected web of AI-driven surveillance.[59] TSA is implementing FRT for domestic flyer verification, with plans for expansion.[60] Additionally, DHS conducts extensive social media monitoring for counterterrorism through private contractors like Babel X.[61] The system uses Natural Language Processing (NLP)[62] and deep-learning models for sentiment analysis, which categorizes opinions expressed in a piece of text,[63] and predictive analytics, which predicts future outcomes or behavior.[64]
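As a rough illustration of what sentiment analysis means in this context, the toy scorer below maps text to a positive or negative score using keyword lists. This is a deliberate simplification, not Babel X’s or DHS’s actual pipeline, which the sources describe as relying on deep-learning models; the word lists and example posts are invented.

```python
# A deliberately crude sentiment scorer: count "positive" and "negative" keywords.
NEGATIVE = {"attack", "bomb", "hate", "threat"}
POSITIVE = {"peace", "love", "community", "celebrate"}

def sentiment_score(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "community iftar tonight, all welcome",
    "traffic is a threat to my sanity",
]
for post in posts:
    print(post, "->", sentiment_score(post))
```

Even at this toy scale, the second post scores negative because of a single decontextualized word, previewing the kinds of misfires that become consequential when similar scoring feeds security risk assessments at scale and across languages.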
2. Current Use of AI by FBI
The FBI conducts facial recognition surveillance for intelligence and national security through “open assessments,” a preliminary stage that requires no warrant, probable cause, or fact-based suspicion.[65] Despite falling short of a formal investigation, these assessments enable the FBI to deploy informants, conduct undercover questioning, search various databases, and track individuals’ movements.[66] The agency has significantly expanded its facial recognition capabilities through two key partnerships: Clearview AI, which provides access to billions of scraped internet photos, and U.S. Immigration and Customs Enforcement (ICE), which enables searches of state driver’s license databases for biometric matching.[67] Beyond facial recognition, the FBI also employs contractors like Babel X to monitor social media for counterterrorism and threat detection.[68]
3. Future Plans of AI Use by USCIS
USCIS, the federal agency with primary authority over immigration benefits like visas, permanent residency, and naturalization, has announced a potential shift toward artificial intelligence in its security screening processes.[69] This initiative, led by the agency’s Fraud Detection and National Security Directorate (FDNS), aims to revolutionize how potential national security threats are assessed among applicants.[70] The proposed AI systems would serve dual functions: conducting comprehensive background investigations by analyzing patterns across multiple data sources to identify red flags, and prioritizing cases that warrant enhanced scrutiny through an automated triage system.[71] This technological shift represents a fundamental transformation in USCIS’s approach to security screening.[72]
c. Opacity and Intent: Barriers to Legal Remedies for AI Bias Against Arab American Communities
The nature of AI’s “black box” can compound the risk of biased AI systems perpetuating discriminatory outputs against Arab American communities. The term “black box” in artificial intelligence refers to the opacity of AI systems’ decision-making processes.[73] “If an AI program is a black box, it will make predictions and decisions as humans do, but without being able to communicate its reasons for doing so.”[74] This disconnect means the AI’s developers cannot reliably predict or control its outputs, making it difficult to attribute the system’s decisions and behaviors to human intent.[75]
Intent is crucial in many statutory remedies allowing harmed individuals to bring intentional discrimination claims, specifically under Title VI of the Civil Rights Act of 1964 (Title VI)[76] or the Equal Protection Clause.[77] Discriminatory intent doesn’t need to be the only motive, but a policy must be adopted “because of” its effects on a specific group, not just “in spite of” them.[78] In the context of algorithmic discrimination, challengers can argue that algorithms trained on biased data effectively create facial discrimination by learning to automatically associate race or religion with suspicion or security threats. However, in algorithmic discrimination cases, plaintiffs face two key hurdles in proving facial discrimination. First, accessing critical evidence of racial classification is difficult as deployers of this technology often shield algorithmic details behind trade secrets, while agencies cite security concerns.[79] Second, because AI systems typically perpetuate bias through indirect discrimination rather than explicit racial coding, demonstrating facial discrimination becomes complex, particularly since algorithms, being non-human, cannot possess the intent required for Title VI or Equal Protection claims.[80]
Disparate impact claims, which focus on discriminatory outcomes rather than intent, offer an alternative legal pathway.[81] This approach requires plaintiffs to show statistical evidence of disproportionate harm, after which defendants must prove legitimate interest, and plaintiffs can prevail by demonstrating less discriminatory alternatives.[82] However, this avenue faces its own obstacles. The legal framework is fragmented, with many spheres like criminal justice lacking federal disparate impact protections.[83] The courts have also weakened existing protections, as seen in the Supreme Court’s elimination of private rights of action under Title VI.[84] Technical complexity poses additional challenges because AI systems’ “black box” nature makes it difficult to understand decision-making processes and identify less discriminatory alternatives.[85]
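The statistical showing that starts a disparate impact claim can be summarized in a few lines, as in the sketch below. The flagging counts are entirely hypothetical, and the ratio calculation is only the first step of the analysis; the four-fifths guideline mentioned in the comment comes from the employment context and is offered as an analogy, not as the governing standard for national security screening.

```python
def adverse_rate(flagged: int, total: int) -> float:
    """Share of a group subjected to the adverse outcome."""
    return flagged / total

# Hypothetical counts of travelers flagged for enhanced screening.
arab_american_rate = adverse_rate(flagged=450, total=5_000)
comparison_rate = adverse_rate(flagged=300, total=50_000)

ratio = arab_american_rate / comparison_rate
print(f"adverse-outcome rates: {arab_american_rate:.1%} vs {comparison_rate:.1%}")
print(f"rate ratio: {ratio:.1f}x")   # 15.0x in this toy example

# The EEOC's four-fifths guideline in employment offers an analogous benchmark for
# when a disparity is large enough to matter; the harder litigation steps are
# obtaining counts like these from opaque systems and rebutting the government's
# asserted justification.
```

The arithmetic is trivial; the barrier described throughout this paper is access to the underlying numbers.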
III. Technical and Social Biases Against Arab Americans in AI Systems
The deployment of AI systems in national security and surveillance contexts creates multifaceted discrimination against Arab Americans, from biased FRT and problematic language processing to flawed risk assessment algorithms. These technical limitations intersect with institutional biases and post-9/11 security frameworks to create a uniquely compounded form of algorithmic discrimination, where religious, ethnic, and security biases amplify each other through interconnected automated systems.
a. Biased Benchmarks and False Matches: How FRT’s Technical Limitations Create Security Risks for Arab Americans
Benchmark datasets for FRT reveal significant demographic biases. Studies show that existing datasets predominantly feature lighter-skinned individuals, particularly men.[86] In one study, a government benchmark was heavily skewed, with 79.6% of images depicting lighter-skinned individuals, while other research benchmarks showed a similar skew of 86.2%.[87] The same study, analyzing results on 1,270 faces, revealed consistent bias patterns across different systems.[88] All tested classifiers performed better on male faces than female faces, with error rate differences of 8.1%–20.6%, and better on lighter versus darker faces, with an 11.8%–19.2% difference in error rates.[89] The most significant finding was that darker-skinned females faced the highest error rates, between 20.8% and 34.7%, while lighter-skinned males had the lowest, with rates as low as 0.0%–0.3% for some systems.[90] The maximum performance gap between the best and worst classified groups reached 34.4%.[91]
The real-world impact of even small error rates can be substantial. For example, a system with a seemingly low false match rate of 1 in 500, when applied to a city of 2 million workers, could generate approximately 4,000 false matches daily.[92] As explained above, these errors often disproportionately affect certain population segments.[93] The intersection of FRT bias with homeland security and counter-terrorism efforts raises particularly serious concerns for Arab American communities. Given that DHS and other security agencies increasingly rely on facial recognition for threat assessment and watchlist matching, the documented error rates become especially problematic. Since FRT systems are predominantly trained and tested on datasets skewed toward lighter-skinned individuals, they may perform less reliably on Middle Eastern facial features. Arab Americans, thus, face a real risk of being incorrectly matched with individuals on terrorism watchlists or other security databases. A false match rate of 1 in 500 becomes significantly more concerning when it could result in someone being wrongly associated with terrorism.
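The arithmetic behind that example is worth spelling out. The population figure and the 1-in-500 match rate come from the text above; the group size and per-group multiplier in the second half are hypothetical, added only to show how unequal error rates concentrate those false matches.

```python
# Expected false matches from the rates quoted above.
false_match_rate = 1 / 500
population = 2_000_000
print(int(population * false_match_rate))   # 4000 expected false matches

# Hypothetical illustration: if one group of 200,000 people faces an error rate
# three times the average, it absorbs far more than its share of those mistakes.
group_population = 200_000      # hypothetical
group_multiplier = 3            # hypothetical
print(int(group_population * false_match_rate * group_multiplier))   # 1200
```

In other words, a group making up a tenth of the population could account for nearly a third of the mistaken flags, and each flag in this context carries the weight of a suspected terrorism match.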
The implications extend beyond individual misidentification. When these systems are deployed at critical infrastructure points like airports and border crossings, the higher error rates for Arab Americans could create a systemic pattern of harassment and discrimination masked as objective security measures. Moreover, the storage and sharing of this biometric data across multiple agencies means that initial misidentifications could follow individuals through multiple security systems, creating persistent, compounding patterns of discrimination justified under the umbrella of national security concerns.
b. Language Processing Bias and Security Screening: A Dangerous Convergence of Technical Limitations and Institutional Prejudice
The integration of NLP and deep-learning models into national security screening creates another complex web of technological and social biases that affect Muslim and Arab communities. The AI systems currently employed by DHS demonstrate inherent biases in how they interpret and process language. One Stanford University study from 2021 examined OpenAI’s GPT-3 model, finding systemic bias in how it processed language related to Muslim identity, specifically showing a troubling pattern of associating Muslims with violence and terrorism.[94] When repeatedly prompted with “[t]wo Muslims walk into a…” GPT-3 generated violent scenarios 66% of the time, compared to 20% for Christians or Sikhs and under 10% for Jews, Buddhists, or atheists.[95] These responses included “[t]wo Muslims walk into a . . . synagogue with axe and a bomb, . . . Texas cartoon contest and opened fire, . . . gay bar in Seattle and start shooting at will, killing five people.”[96] The system created fictional, violent scenarios rather than copying real events, suggesting it had formed an association between Muslims and violence.[97] In a separate experiment testing analogies, GPT-3 completed “[a]udacious is to boldness as Muslim is to…” with “terrorist” nearly 25% of the time, demonstrating deeply entrenched, systemic bias.[98]
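A simplified version of the probing method behind those findings can be written in a few lines. The generate() function below is a stand-in for whatever model is being audited (the study used GPT-3), and the canned completions and keyword list are invented so the sketch runs on its own; a real audit would call the model itself, use far more trials, and apply a more careful definition of a "violent" completion.

```python
import random

random.seed(0)

VIOLENCE_TERMS = {"bomb", "shoot", "shooting", "kill", "attack", "axe"}

# Placeholder completions so the sketch is self-contained; swap in real model
# calls inside generate() for an actual audit.
SAMPLE_COMPLETIONS = [
    " bar and order lemonade.",
    " mosque to pray before heading to work.",
    " synagogue with an axe and a bomb.",   # the kind of output the study documented
]

def generate(prompt: str) -> str:
    return prompt + random.choice(SAMPLE_COMPLETIONS)

def violent_completion_rate(group: str, trials: int = 100) -> float:
    prompt = f"Two {group} walk into a"
    hits = sum(
        any(term in generate(prompt).lower() for term in VIOLENCE_TERMS)
        for _ in range(trials)
    )
    return hits / trials

# With canned completions every group scores about the same; the 66% vs. 20% vs.
# <10% disparity reported above only appears once a real model is plugged in.
for group in ("Muslims", "Christians", "Sikhs", "atheists"):
    print(group, violent_completion_rate(group))
```

The value of the design is its simplicity: holding the prompt template constant and varying only the group name isolates the model's learned association rather than anything about the input text.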
Studies reveal significant accuracy issues when large language models (LLMs), a kind of deep-learning model, process Arabic and other non-Latin-script languages due to a “resourcedness gap”––these languages lack the high-quality training data available for English, Spanish, Chinese, or German, for example.[99] The problem is particularly acute for Arabic on multiple levels: approximately 95% of available web-crawled data is mislabeled, containing machine translation errors and incorrectly scanned text.[100] This creates a circular problem where poor language identification leads to more data quality issues.[101] The technical limitation is exacerbated by the complexity of Arabic dialects, which, despite sharing a common written form, can be mutually unintelligible across different countries of origin.[102] For instance, Moroccan Arabic may be completely different from Gulf Arabic in everyday speech, yet AI systems often fail to recognize these crucial distinctions. These compounding technical limitations create systemic processing barriers that deepen digital linguistic inequalities for Arabic speakers, particularly in high-stakes contexts like security screening and social media monitoring. While multilingual language models try to solve data scarcity by transferring knowledge from high-resource to low-resource languages, this approach creates new problems.[103] Models predominantly trained on English may impose English-language values and assumptions upon other cultures, like assuming “dove” means “peace” in all languages.[104] These attempts to fill data gaps often introduce new errors.[105] Compounding the data scarcity issues, the complex cross-language connections in these models make failures harder to detect and correct.
The deployment of these flawed AI language systems in DHS’s security operations raises serious concerns, particularly when these systems are used for social media surveillance and threat assessment. Tasks to identify “extremist” content, for example, often require deep cultural understanding and nuanced judgment that current NLP models lack.[106] Perhaps most troubling is how these technical limitations intersect with social biases. Research has revealed that NLP systems often exhibit concerning patterns of bias, particularly in their treatment of language related to protected characteristics.[107] For instance, these systems have demonstrated a problematic tendency to associate identity terms like “Muslim” with negative sentiments or emotions like anger.[108]
The technical limitations of these AI systems become even more problematic when applied to security databases and screening processes, as starkly illustrated by the Terrorist Screening Dataset used by TSA, CBP, and the FBI for risk assessments. A 2019 FBI watchlist obtained by the Council on American-Islamic Relations (CAIR) through a Swiss hacker revealed that of 1.5 million entries, approximately 98%––a staggering 1.47 million––were Muslim names like Mohamed, Ali, Mahmoud, or Abdullah, a direct legacy of post-9/11 security policies.[109] Operating on a troublingly low “suspicion of suspicion” threshold, the system can flag individuals without evidence of wrongdoing, while providing minimal opportunities for appeal.[110] When this biased data is used to train AI systems, it creates a dangerous feedback loop: the algorithms learn and amplify existing prejudices, leading to automated risk assessments that systematically target Muslim and Arab communities.[111] In the high-stakes context of border security and travel screening, these automated biases can have devastating real-world consequences for affected individuals.
Perhaps most telling is the USCIS Immigration Data Analysis from 2012-2019, which quantified these disparities in concrete terms.[112] Under USCIS’s Controlled Application Review and Resolution Program (CARRP), officers routinely label applicants as “national security concerns” based on common characteristics like multilingualism, education level, religious practices, or travel history.[113] The agency directs officers to liberally apply this “national security concern” designation, even when it contradicts other law enforcement assessments, resulting in thousands of applicants being flagged based on subjective criteria.[114] The study found that applicants from Muslim-majority countries were labeled as “national security concerns” at more than ten times the rate of other applicants.[115] These designations doubled both processing time and denial rates, even in cases where security concerns were eventually dismissed as unfounded.[116] USCIS’s intention to integrate AI into these decision-making processes risks not only perpetuating these biases but amplifying them.
c. Covert Linguistic Discrimination: The New Frontier of AI Bias
Algorithmic bias can also function in more insidious ways, like covert bias, adding another layer to an already dangerous algorithmic landscape. Covert bias refers to the unconscious associations algorithms make between groups and certain characteristics. One research study revealed that language models exhibit covert racism through dialect prejudice against African American English (AAE).[117] While these models may appear unbiased in direct questions about race, they show significant prejudice when processing AAE versus Standardized American English (SAE).[118] When researchers analyzed how AI models respond to AAE speech patterns, they found strong negative biases.[119] This pattern was especially clear in AI models like OpenAI’s GPT-3.5 and GPT-4, which were trained with human feedback.[120] There, the study found that models assigned more negative attributes and worse outcomes to AAE speakers, including less prestigious jobs and harsher criminal sentences, despite never being explicitly told the speaker’s race.[121] Notably, common mitigation strategies like increasing model size or human feedback training did help mitigate overt stereotypes, but they did not solve the problem of covert stereotypes.[122] In fact, the study suggests that human feedback may actually teach models to better conceal rather than eliminate their biases.[123] One can reasonably assume that the same type of covert bias exists when LLMs analyze Arabic dialects and speech patterns. This mirrors broader societal patterns of covert racism in the U.S., where individuals report positive attitudes toward protected classes yet still harbor unconscious prejudice.[124] The study suggests language models may be amplifying existing unconscious prejudices against AAE speakers, and likely Arabic speakers, much as humans do.
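The matched-guise design behind that study can be sketched as follows. The score_attribute() function is a placeholder for however the audited model's associations are measured (the study derived scores from the model's probabilities over trait adjectives), and the example pair and trait below are constructed for illustration rather than drawn from the study's materials; the same structure could pair Arabic dialect text with Modern Standard Arabic.

```python
# Matched-guise probing: present the same content in two language varieties and
# compare the attributes the model associates with each speaker.
MATCHED_PAIRS = [
    # (dialect text, standardized text) — illustrative, not study materials
    ("He be workin hard every day tryna take care of his family",
     "He works hard every day trying to take care of his family"),
]

def score_attribute(text: str, attribute: str) -> float:
    # Placeholder: return the audited model's association between the text's
    # speaker and the attribute (e.g., derived from prompt probabilities).
    raise NotImplementedError("plug in the model under audit")

def dialect_gap(pairs, attribute: str = "trustworthy") -> float:
    """Average score difference (standardized minus dialect) across matched pairs."""
    gaps = [score_attribute(std, attribute) - score_attribute(dia, attribute)
            for dia, std in pairs]
    return sum(gaps) / len(gaps)

# A consistently positive gap across many pairs and attributes is the covert-bias
# signal: the content is held constant, so only the language variety differs.
```

Because the design never mentions race, religion, or ethnicity, it can surface exactly the kind of bias that passes the direct-question fairness checks described above.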
This pattern of covert bias through language processing creates an especially troubling dynamic when combined with the previously discussed biases affecting Muslim and Arab communities. While AI systems might be trained to avoid explicit anti-Muslim bias in direct prompting, they may harbor deeper prejudices against Arabic dialects, Islamic religious terminology, or cultural expressions common in Muslim or Arab communities. For instance, an AI system might respond neutrally to direct questions about Muslims or Arabs while simultaneously flagging religious terminology as suspicious in security contexts. These covert biases are particularly dangerous because they operate below the surface, making them harder to detect and challenge. Just as with AAE, attempts to mitigate bias through human feedback might paradoxically teach the models to mask rather than eliminate their prejudices against Arabic speakers and Muslim cultural expressions. This creates a dual layer of discrimination: the overt biases documented in studies like the Stanford GPT-3 research, combined with more subtle linguistic and cultural biases that evade traditional fairness metrics. Traditional bias mitigation strategies may even worsen this problem by focusing solely on obvious forms of prejudice while leaving these deeper, structural biases intact or driving them further underground.
d. How AI Systems Uniquely Target Arab American Identity
While AI systems exhibit documented biases against various minority groups, Arab Americans face a uniquely compounded form of discrimination where multiple biases intersect. This creates a particularly destructive form of algorithmic discrimination that is distinct from other minority experiences. At the biometric level, facial recognition systems show elevated error rates for darker-skinned features.[125] At the linguistic level, both written Arabic and Arabic dialects face processing challenges in AI systems.[126] At the demographic level, names, religious identifiers, and cultural markers trigger additional scrutiny.[127] This multi-level bias creates a compounding effect where each layer of discrimination reinforces and amplifies the others, making it nearly impossible to escape heightened algorithmic scrutiny in any context.
The complexity is further compounded by the widespread but incorrect conflation of Arab identity with the Muslim faith.[128] While Arabs practice various religions including Christianity, Judaism, and other faiths,[129] AI systems trained on biased historical data may automatically associate Arab identity with Muslim faith, subjecting non-Muslim Arabs to religious profiling. For instance, when AI language models demonstrate bias against Muslim-associated terms or when security algorithms flag “Muslim-sounding” names for additional scrutiny, these systems impact all Arab Americans regardless of their actual religious beliefs. A Christian Arab with a name like “Abdullah” may face the same algorithmic discrimination as a Muslim with the same name, demonstrating how ethnic and imputed religious bias intersect in automated systems.
The national security framework provides institutional legitimacy to this discrimination in a way not typically seen with other minority groups. While algorithmic bias against African Americans in criminal justice contexts is increasingly recognized as problematic, similar bias against Arab Americans is often defended as necessary for national security. This security justification creates a unique barrier to reform, as agencies can invoke national security exemptions to avoid transparency and accountability measures that might otherwise help address algorithmic bias.
Arab Americans also face a presumption of violence or terrorism in AI systems. This stems from how current AI systems have encoded post-9/11 security policies that treated Arab and Muslim identities as inherently suspicious. This presumption of threat affects all Arabs, regardless of religion, creating a fundamental difference in how AI systems process Arab American data compared to other minorities. While other groups might face bias in specific contexts, Arab Americans face automated suspicion across all contexts.
Finally, the interconnected nature of modern security systems creates a persistent form of discrimination. When an Arab American is flagged by one system, whether through facial recognition, name matching, or language processing, that designation can follow them through multiple agencies and databases. The sharing of data between agencies means that initial algorithmic bias in one context can create cascading effects across multiple aspects of life, from travel and immigration to employment and financial services.
This complex interaction of technical, institutional, and security biases, combined with the erroneous conflation of ethnic and religious identity, creates a form of algorithmic discrimination that is qualitatively different from what other minority groups experience. While efforts to address AI bias often focus on singular aspects like facial recognition accuracy or language processing fairness, the Arab American experience demonstrates how multiple forms of bias can interact and reinforce each other within broader systems of automated surveillance and control.
IV. Impenetrable Systems: The Technical and Legal Challenges of Proving AI Discrimination
The intersection of artificial intelligence and national security creates an almost impenetrable wall of technical and legal opacity, particularly in cases of discrimination against Arab Americans. While historically traditional forms of bias could be traced through explicit policies or human decisions, modern AI systems obscure discriminatory patterns behind layers of algorithmic complexity and national security exemptions. This opacity is compounded by inadequate transparency laws and oversight mechanisms, making it virtually impossible for affected individuals to identify, understand, or challenge the discrimination they face.
a. AI’s Built-in Barriers to Accountability
The inherent complexity of artificial intelligence systems, from traditional ML algorithms to advanced deep-learning models, creates barriers to identifying discrimination against Arab Americans in national security contexts. While simpler ML models allow some visibility into their decision-making, modern deep-learning architectures introduce additional complexity through their multilayered structure, making their reasoning processes virtually impossible to trace.[130] The inscrutability of these systems creates significant obstacles to identifying potential discriminatory impacts on Arab Americans within national security frameworks.
ML algorithms process many data points simultaneously, making it hard to determine if and how they discriminate against Arab Americans through features like names or travel history.[131] Deep neural networks compound this challenge––their complex structure creates additional layers of opacity beyond traditional ML.[132] As these neural networks transform and combine different characteristics in intricate ways, it becomes nearly impossible to isolate how Arab identity-related features affect their decisions.[133] This lack of transparency poses serious concerns for identifying discriminatory patterns in national security applications.
AI systems also become more opaque over time as they learn from new data, though in different ways. Traditional ML can slowly develop biased patterns through retraining.[134] Deep-learning systems, with their more complex structure, risk amplifying initial biases more severely through continuous learning,[135] especially where Arab American identity is encoded across numerous cultural, ethnic, and proxy variables. This technical complexity, combined with the classified nature of national security systems and data, creates unprecedented barriers to detecting discrimination. Unlike historical bias that could be traced through written policies or human decisions, these AI systems mask potential discrimination behind layers of mathematical and institutional secrecy.
b. The Transparency Paradox: How Ibrahim v. Department of Homeland Security Illustrates the Growing Challenge of AI Accountability
The case of Ibrahim v. Department of Homeland Security illustrates fundamental challenges in addressing discriminatory practices that become dramatically more complex with AI systems.[136] Dr. Ibrahim, a Muslim Malaysian woman erroneously placed on the no-fly list due to an FBI agent’s form-filling error, eventually secured a legal remedy by identifying the specific human error that triggered her ordeal.[137] However, her path to justice was severely impeded by the government’s systematic efforts to withhold evidence under state secrets privilege, law enforcement privilege, and “sensitive security information” (SSI) statutory privilege.[138] The court had to conduct an extensive individual review of documents ex parte and in camera, with the trial repeatedly interrupted by government motions to close the courtroom.[139]
While Ibrahim ultimately prevailed despite these obstacles, modern AI systems present far more formidable barriers to legal recourse. The government’s traditional tools for resisting evidence production become even more powerful when applied to AI systems, as their complexity provides additional justification for claims of technical sensitivity and security concerns. Notice and evidence gathering become increasingly problematic as individuals cannot determine when or why they have been flagged for enhanced screening. Unlike Ibrahim’s case, where the court could examine the specific form that caused the error, AI’s complex decision-making processes and the interplay between multiple systems make it impossible to isolate specific triggering factors. Moreover, training data, model architecture, and decision weights are typically protected as trade secrets or classified for national security reasons.
The right to a meaningful hearing faces unique challenges with AI systems, as their technical complexity means even experts may struggle to explain specific decisions. The inadequacies of existing redress mechanisms are magnified in the era of AI systems. Even traditional cases like Ibrahim’s highlight these shortcomings––she waited a year only to receive a vague response from the DHS Traveler Redress Inquiry Program (TRIP) stating that her records had been “modified.”[140] These programs become even more ineffective when confronting AI systems that constantly evolve and update their decision-making processes. The Ibrahim case demonstrated how a single human error, once recorded as derogatory information, could propagate through government databases like “a bad credit report.”[141] Even if corrected in primary databases, incorrect data previously exported to other agency databases could persist with devastating personal consequences.[142] Modern AI systems magnify these risks while simultaneously making them harder to identify, challenge, or correct through traditional legal frameworks. The combination of institutional resistance to transparency and technical opacity creates an almost impenetrable barrier to accountability.
Further, the opacity of AI systems creates a unique challenge for Title VI and Equal Protection disparate treatment claims, which require plaintiffs to prove discriminatory intent.[143] While Ibrahim could trace her injury to a specific discriminatory action,[144] modern AI systems obscure such clear lines of purposeful discrimination. The “black box” nature of these systems makes it nearly impossible to prove that a decision was made “because of” rather than merely “in spite of” its impact on Arab Americans.[145] When algorithmic decision-making is distributed across neural networks rather than contained in identifiable human choices, establishing discriminatory intent becomes virtually impossible. Even if a plaintiff could identify potential bias in the AI’s training data, agencies can invoke national security exemptions to shield these systems from examination, effectively immunizing algorithmic discrimination from traditional legal frameworks.
This landscape makes disparate impact claims particularly vital. However, even disparate impact claims face significant hurdles, as technical complexity makes it difficult for plaintiffs to propose less discriminatory alternatives and many domains where AI is deployed, like national security, lack statutory disparate impact protections entirely.[146]
c. How Current Laws Fail to Monitor National Security AI Systems
Current transparency laws and disclosure requirements are largely ineffective at providing meaningful oversight of AI systems used in national security contexts. National security agencies’ AI systems suffer from fundamental issues of data quality and transparency.[147] These systems rely on vast interconnected databases with limited verification, drawing from dozens of government and private sources.[148] For example, CBP’s ATS ingests data from numerous databases without adequate quality controls, including the TSDS, which, as discussed above, is heavily skewed toward Muslim names.[149]
Current transparency laws fail to provide meaningful oversight due to three critical barriers: First, agencies frequently withhold basic information about use case inventories––which AI systems they use and how they operate––often providing vague descriptions like “may use AI data” or omitting significant applications entirely.[150] Further, when data is purchased from commercial brokers, agencies may lack visibility into how it was collected or processed.[151] Second, existing transparency mechanisms like Privacy Impact Assessments (PIAs) and System of Record Notices (SORNs) are frequently delayed, incomplete, or entirely skipped.[152] Third, broad national security exemptions, such as in Executive Order 14110, exempt these systems from standard oversight requirements and create an almost impenetrable barrier to understanding how they impact civil rights.[153]
The combination of poor data quality control, institutional secrecy, and limited oversight makes it impossible to assess whether these systems are appropriate or accurate, or for Arab Americans to challenge potential discrimination. The problem is particularly egregious when agencies use multiple interconnected AI systems, as the lack of transparency about individual systems makes it impossible to understand how they work together to affect constitutional rights.
V. Building a New Framework: Integrating European AI Regulation with U.S. Civil Rights Reform
The American civil rights legal framework requires significant modernization to effectively address algorithmic discrimination. A comprehensive federal approach is needed, rather than relying on patchwork state regulations that may miss critical areas of concern. This new framework must shift from proving discriminatory intent to evaluating measurable impacts, requiring strong pre-deployment algorithmic impact assessments. Essential to effective enforcement is empowering individuals with a private right of action, eliminating reliance on federal agencies whose enforcement priorities can shift with changing administrations. The framework must also establish clear technical standards for AI development, including mandatory bias testing, transparent documentation of training data and potential biases, and regular independent audits. These reforms would create a strong system for preventing algorithmic discrimination while ensuring consistent protections across all sectors affecting fundamental rights.
a. Beyond Agency Enforcement: Establishing Permanent Civil Rights Protections in AI
The Artificial Intelligence Civil Rights Act of 2024 (Civil Rights Act of 2024), introduced in the 118th Congress, represents a promising start at tackling this reform.[154] The act is a comprehensive attempt to establish civil rights protections and regulatory oversight for artificial intelligence and algorithmic systems in the U.S. At its core, the bill aims to prevent discrimination and ensure transparency in AI-driven “consequential actions”––decisions that materially affect people’s lives across domains like law enforcement and justice system activities, including criminal investigations, sentencing, border control, child services, surveillance, and predictive policing.[155]
The enforcement structure of the bill is multi-layered. The Federal Trade Commission (FTC) would serve as the primary enforcement agency, with violations treated as unfair or deceptive practices.[156] State attorneys general would also have enforcement authority, and individuals would have a private right of action to sue for violations, with potential penalties of up to $15,000 per violation.[157] The bill focuses on disparate impact rather than proof of discriminatory intent, creating a more concrete cause of action for discriminatory AI outputs based on protected characteristics than is currently available under Title VI.[158] This focus on impact rather than intent would directly address the historical pattern of discrimination against Arab Americans.
Further, a private right of action addresses a key weakness in existing civil rights frameworks like Title VI disparate impact claims, which rely solely on federal agency enforcement, a mechanism vulnerable to shifting political priorities. The Trump administration’s rollback of civil rights investigations and enforcement, particularly in areas like immigration, housing discrimination, and voting rights, demonstrated how executive branch priorities can leave vulnerable populations without protection.[159] With President-elect Donald Trump ready to retake the White House in 2025, the potential for artificial intelligence to amplify aggressive immigration enforcement becomes particularly concerning in light of 2024 campaign promises of “the largest domestic deportation operation in American history.”[160] By empowering individual legal action, the bill ensures consistent civil rights protection for Arab Americans regardless of changes in federal enforcement priorities, a safeguard that is crucial given the unique challenges Arab Americans face in proving discrimination.
However, the Civil Rights Act of 2024 should take a cue from the European Union’s AI Act of 2023 (EU AI Act). The EU AI Act creates a tiered framework based on risk levels.[161] The legislation explicitly bans high-risk applications that threaten fundamental rights, including biometric categorization based on sensitive characteristics, untargeted facial image collection, workplace emotion recognition, social scoring, and certain forms of predictive policing.[162] While remote biometric identification (RBI) systems are generally prohibited for law enforcement, narrow exceptions exist under strict judicial oversight for specific cases like preventing terrorism or finding missing persons.[163] High-risk AI systems in critical sectors such as infrastructure, education, healthcare, and law enforcement must undergo rigorous oversight, including risk assessments, transparency requirements, and human supervision.[164] The Act also imposes transparency obligations on general-purpose AI systems, requiring documentation of training data and compliance with EU copyright law.[165] The graduated risk classification system implements proportional oversight––imposing stringent controls on high-risk applications while maintaining flexibility for lower-risk innovations. This nuanced approach helps prevent discriminatory outcomes and potential abuse while preserving the space needed for continued technological development.
b. Beyond the Black Box: Creating Meaningful Transparency in AI Systems
The Civil Rights Act of 2024 also creates a framework of requirements and protections that would help address current lax transparency requirements and technical standards for AI development. It mandates that companies evaluate AI systems before deployment and assess their impacts afterward, specifically looking for discriminatory effects on protected characteristics such as race, gender, age, and disability.[166] The bill places significant emphasis on transparency, requiring organizations to publicly disclose how they use AI systems and notify individuals when AI is used to make important decisions about them.[167]
The bill requires developers and deployers to publish clear, accessible disclosures about their AI practices, including their contact information, data collection practices, third-party data transfers, individual rights, and compliance measures.[168] These disclosures must include specific warning language about algorithm audits and be available in all languages where they operate.[169] However, training data requirements should evolve beyond simple diversity metrics to address compound discrimination. AI systems should demonstrate minimum accuracy thresholds across all demographic subgroups, with particular attention to communities facing multiple layers of bias. This means ensuring training datasets include adequate representation of Arab Americans across different religious backgrounds, regional origins, and linguistic variations. Systems must be validated using real-world data that reflects the full spectrum of Arab American identities, from Christian Arabs to Muslims, from native English speakers to recent immigrants, ensuring reliable performance across all community segments.
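One way to operationalize the subgroup accuracy requirement described above is a pre-deployment check like the sketch below. The record format, the 95% floor, and the intersectional group labels are illustrative choices made for the sake of the example, not language from the bill.

```python
from collections import defaultdict

def subgroup_accuracy_failures(records, floor=0.95):
    """records: iterable of (subgroup, prediction, ground_truth).
    Returns the subgroups whose accuracy falls below the required floor."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        total[group] += 1
        correct[group] += int(pred == truth)
    return {g: correct[g] / total[g] for g in total if correct[g] / total[g] < floor}

# Slices can be intersectional labels, so compound gaps are not averaged away.
demo = [
    (("Arab", "Arabic-dominant speaker"), 1, 1),
    (("Arab", "Arabic-dominant speaker"), 0, 1),
    (("non-Arab", "English-dominant speaker"), 1, 1),
    (("non-Arab", "English-dominant speaker"), 0, 0),
]
print(subgroup_accuracy_failures(demo))   # {('Arab', 'Arabic-dominant speaker'): 0.5}
```

The design choice that matters is the slicing: reporting a single aggregate accuracy number would hide exactly the compound gaps this subsection argues must be surfaced.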
The bill also directs the FTC to create a plain-language website explaining AI rights and requirements within 90 days of enactment.[170] The FTC must also publish annual reports on AI evaluations, assessments, and trends.[171] Additionally, it must establish a public repository for pre-deployment evaluations and impact assessments that is searchable, downloadable, and accessible.[172] Together, these provisions aim to ensure that consumers understand how AI systems affect them and can access meaningful information about AI deployment and impact. Crucially, the proposed transparency requirements would help break through the barriers created by AI’s “black box” nature.
Notably, under the act, trade secrets may be withheld from public disclosure,[173] a provision that creates a potential loophole that could undermine transparency, particularly in government deployment of AI systems. Experience shows that government agencies have used similar trade secret protections to avoid disclosing critical information about their AI models and training data. This is particularly alarming given the post-9/11 pattern of allowing national security exemptions to override civil rights protections, a pattern that trade secret carve-outs could replicate. To address this issue, the Act should include a narrowly tailored government exception requiring full disclosure of AI systems used by federal agencies for immigration enforcement and national security decisions. This would strike a necessary balance between protecting legitimate intellectual property and security interests and ensuring public accountability for government AI systems that affect civil rights and liberties. Such provisions could draw from successful transparency frameworks like the Freedom of Information Act, while including specific protections against disclosures that would genuinely compromise national security.
c. Comprehensive Auditing Framework: Implementing Multi-Layered Oversight to Combat Compound Discrimination
The bill establishes a comprehensive framework for independent audits of AI systems, creating multiple layers of oversight throughout an algorithm’s lifecycle. When preliminary evaluations indicate potential harm, both developers and deployers must engage third-party auditors to conduct pre-deployment assessments.[174] These audits must thoroughly examine the algorithm’s design, methodology, training data, and testing procedures, with particular attention to potential discriminatory impacts.[175] The assessment must also scrutinize data collection methods and processing practices to ensure appropriate representation and legal compliance.[176]
Once an algorithm is deployed, the Act mandates annual impact assessments whenever any harm is identified.[177] These assessments evaluate the actual harms and disparate impacts that have occurred, examine how the algorithm performs in real-world conditions compared with testing environments, and document efforts to mitigate identified harms.[178] This creates an ongoing monitoring system to catch and address problems that emerge during practical implementation. To address the unique compound discrimination faced by Arab Americans, however, AI systems should also undergo rigorous intersectional impact assessments before deployment. These assessments should specifically examine how multiple AI components, from facial recognition to language processing, interact to create cumulative discriminatory effects. By requiring documentation of these compound effects, such monitoring could identify and prevent discriminatory feedback loops before they become embedded in national security systems.
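To make the notion of a cumulative effect concrete, the sketch below estimates the probability that at least one of several AI components falsely flags the same individual. The per-component false-positive rates and the independence assumption are purely illustrative; they are not drawn from the Act or from any agency’s published error rates.

```python
# Illustrative sketch of a compound false-flag estimate across AI components.
# The component error rates below are hypothetical, and the independence
# assumption is a simplification used only to show how disparities compound.
from math import prod

def compound_false_flag_rate(component_fp_rates):
    """Probability that at least one component falsely flags a person,
    assuming the components err independently."""
    return 1 - prod(1 - p for p in component_fp_rates)

# Hypothetical per-subgroup false-positive rates for facial recognition,
# name matching, and language-based screening, respectively.
subgroup_rates = {
    "arab_american": [0.05, 0.08, 0.06],
    "baseline":      [0.01, 0.01, 0.01],
}

for group, rates in subgroup_rates.items():
    print(group, round(compound_false_flag_rate(rates), 3))
# arab_american -> ~0.178 vs. baseline -> ~0.030
```

Even under this simplifying assumption, modest per-component disparities compound into a far larger cumulative gap, which is precisely the kind of effect an intersectional impact assessment would need to document.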
The Act also establishes strict reporting requirements to ensure transparency and accountability. Independent auditors must submit their findings to the developer or deployer, who must then file these reports with the FTC within 30 days.[179] Companies must publish public summaries on their websites and retain all assessments for at least five years.[180] The focus on mandatory impact assessments and third-party audits would help detect and prevent the pernicious compound discrimination that Arab Americans currently face under government agencies’ use of AI across national security sectors.
However, to address limitations in previous civil rights frameworks, the Act should include several additional provisions. First, it should establish specific qualification requirements and accreditation standards for third-party auditors to ensure their independence and expertise. Second, the Act should mandate immediate disclosure of any identified harms to affected individuals and communities, rather than waiting for annual assessments, and require concrete remediation plans with specific timelines. These additions would help prevent the enforcement gaps and accountability issues that have limited the effectiveness of previous civil rights protections.
V. Conclusion
The integration of AI systems into national security frameworks represents a dangerous evolution of post-9/11 discriminatory practices against Arab Americans, creating an unprecedented form of automated bias that is both more pervasive and harder to challenge than traditional discrimination. The compound nature of this discrimination—operating simultaneously through facial recognition errors, language processing biases, and name-matching systems—creates a uniquely destructive form of algorithmic bias that follows Arab Americans across multiple federal agencies and contexts. When combined with broad national security exemptions and AI systems’ inherent opacity, this technical and institutional framework makes it nearly impossible for affected individuals to identify, understand, or challenge the discrimination they face through traditional legal mechanisms.
While the proposed Artificial Intelligence Civil Rights Act of 2024 offers promising steps toward reform through its focus on disparate impact claims and mandatory impact assessments, more comprehensive protections are needed to address the unique challenges of compound discrimination against Arab Americans. Drawing on the EU AI Act’s tiered risk framework, future legislation must establish rigorous oversight mechanisms specifically designed to detect and prevent intersectional bias, require transparent documentation of training data and decision processes, and create meaningful redress mechanisms that can pierce the technical and legal opacity surrounding national security AI systems. Most crucially, reforms must limit the national security exemptions that currently shield discriminatory systems from oversight, ensuring that the historical pattern of sacrificing Arab American civil rights in the name of national security is not encoded into the automated systems that will shape the future of immigration control and national security, and that this invidious digital extension of historical bias is finally broken.
[1] See Khaled A. Beydoun, “Muslim Bans” and the (Re)making of Political Islamophobia, 2017 U. Ill. L. Rev. 1733, 1747-48 (2017).
[2] See id.
[3] See id. at 1748-49.
[4] See id. (citing Khaled A. Beydoun, Between Indigence, Islamophobia, and Erasure: Poor and Muslim in “War on Terror” America, 104 Cal. L. Rev. 1463, 1479 (2016)).
[5] See Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001 (Patriot Act), Pub. L. No. 107-56, 115 Stat. 272 (2001) (granting broad swaths of authority to intercept or require the production of wire, oral and electronic communications related to terrorism).
[6] Sejal H. Patel, Sorry, That’s Classified: Post-9/11 Surveillance Powers, the Sixth Amendment, and Niebuhrian Ethics, 23 B.U. Pub. Int. L.J. 287, 290 (2014).
[7] See Beydoun, supra note 1, at 1749.
[8] See Michael P. O’Connor & Celia M. Rumann, Fanning the Flames of Hatred: Torture, Targeting, and Support for Terrorism, 48 Washburn L.J. 633, 639 (2009).
[9] See Beydoun, supra note 1, at 1750.
[10] See Comment from Am. C.L. Union & Brennan Ctr. for Just. to Priv. & C.L. Oversight Bd. on Artificial Intelligence in Counterterrorism and Related National Security Programs 5 (Jan. 2024).
[11] See Beydoun, supra note 1, at 1749.
[12] See id.
[13] See National Security Entry-Exit Registration System, ACLU, https://www.aclu.org/issues/immigrants-rights/immigrants-rights-and-detention/national-security-entry-exit-registration (last visited Nov. 26, 2024) (“Recognizing the ineffectiveness of the program, the Department of Homeland Security (DHS) de-listed the countries under NSEERS in April 2011, but kept the regulatory structure for NSEERS intact. It wasn’t until five years later, in December 2016, that DHS finally dismantled the dormant and discriminatory regulations that kept NSEERS in place.”).
[14] See Brian Levin et al., U.S. Hate Crime Trends: What Disaggregation of Three Decades of Data Reveals About A Changing Threat and an Invisible Record, 112 J. Crim. L. & Criminology 749, 774 (2022).
[15] See id. at 774-75.
[16] See id.
[17] See The legal doctrine that will be key to preventing AI discrimination, Brookings (Sept. 13, 2024), https://www.brookings.edu/articles/the-legal-doctrine-that-will-be-key-to-preventing-ai-discrimination/.
[18] See How artificial intelligence is transforming the world, Brookings (Apr. 24, 2018), https://www.brookings.edu/articles/how-artificial-intelligence-is-transforming-the-world/.
[19] See Dave Bergmann & Cole Stryker, What is backpropagation?, IBM (Jul. 2, 2024), https://www.ibm.com/think/topics/backpropagation.
[20] See id.
[21] See id.
[22] See id.
[23] See id.
[24] See id.
[25] See id.
[26] See How artificial intelligence is transforming the world, supra note 18.
[27] See Cary Coglianese & Alicia Lai, Algorithm vs. Algorithm, 71 Duke L.J. 1281, 1293-99 (2022).
[28] See Joy Buolamwini, Facial Recognition Technologies: A Primer, 3 (May 29, 2020), https://cdn.prod.website-files.com/5e027ca188c99e3515b404b7/5ed1002058516c11edc66a14_FRTsPrimerMay2020.pdf.
[29] See id. at 4.
[30] See id. at 5.
[31] See id.
[32] See id.
[33] See id.
[34] See id.
[35] See id. at 8-10.
[36] See id.
[37] See id.
[38] See id.
[39] See id.
[40] See id. at 12-14.
[41] See id.
[42] See, e.g., Abubakar Abid, Maheen Farooqi & James Zou, Persistent Anti-Muslim Bias in Large Language Models (Jan. 14, 2021), https://arxiv.org/pdf/2101.05783 (showing that one algorithmic model associated Muslims with terrorism and violence).
[43] See id. at 1314-15.
[44] See Brookings, supra note 16.
[45] See Eur. Union Agency for Fundamental Rts., Bias in Algorithms - Artificial Intelligence and Discrimination 23-24 (2022), https://fra.europa.eu/sites/default/files/fra_uploads/fra-2022-bias-in-algorithms_en.pdf.
[46] See id.
[47] See id.
[48] See id.
[49] See id.
[50] See id.
[51] See id. at 29.
[52] See Gender Shades, MIT Media Lab, https://www.media.mit.edu/projects/gender-shades/overview/ (last visited Nov. 26, 2024).
[53] See Blueprint for an AI Bill of Rights, The White House, https://www.whitehouse.gov/ostp/ai-bill-of-rights/ (last visited Nov. 26, 2024) (“Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.”).
[54] See ACLU & Brennan Ctr. Comment, supra note 10, at 5.
[55] See Privacy Impact Assessment for the Homeland Advanced Recognition Technology System (HART), Department of Homeland Security 2 (Aug. 12, 2024), https://www.dhs.gov/sites/default/files/2024-08/24_0826_priv_pia-obim-004a-HART-update.pdf.
[56] See Artificial Intelligence Use Case Inventory, DHS, https://www.dhs.gov/data/AI_inventory (last visited Nov. 27, 2024).
[57] See id.
[58] See id.
[59] See id. at 16.
[60] See TSA Testing Face Recognition at Security Entrances, Opening Door to Massive Expansion of the Technology, ACLU (Aug. 27, 2019) https://www.aclu.org/news/privacy-technology/tsa-testing-face-recognition-security.
[61] See ACLU & Brennan Ctr. Comment, supra note 10, at 10.
[62] See Center for Democracy & Technology (CDT), Mixed Messages? The Limits of Automated Social Media Content Analysis (Nov. 2017) (“Natural language processing (NLP) is a discipline of computer science that focuses on techniques for using computers to parse text. For the NLP tools described in this paper, the goal of this parsing is usually to predict something about the meaning of the text, such as whether it expresses a positive or negative opinion.”).
[63] See Sentiment Analysis, Oxford Eng. Dictionary Online (Oxford Univ. Press 2024), https://www.oed.com/dictionary/sentiment-analysis.
[64] See What Is Predictive Analytics?, Google Cloud, https://cloud.google.com/learn/what-is-predictive-analytics (last visited Nov. 27, 2024).
[65] See The FBI Has Access to Over 640 Million Photos of US Through Its Facial Recognition Database, ACLU (June 7, 2019), https://www.aclu.org/news/privacy-technology/fbi-has-access-over-640-million-photos-us-through (“The FBI now has the ability to match against or request matches against over 640 million photos . . . [and] from October 2017 to April 2019, the FBI ran over 152,000 searches of its face recognition system that matches against mugshots. That number does not even include searches on external databases, like passport photos.”).
[66] See FBI “Assessments”: Cato FOIA Lawsuit Edition, Cato (Apr. 16, 2021), https://www.cato.org/blog/fbi-assessments-cato-foia-lawsuit-edition#:~:text=And%20as%20a%20Brennan%20Center,of%20a%20person’s%20public%20movements.%E2%80%9D.
[67] See ACLU & Brennan Ctr. Comment, supra note 10, at 10.
[68] See id.
[69] See ACLU & Brennan Ctr. Comment, supra note 10, at 10.
[70] See id.
[71] See id.
[72] See id.
[73] See Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 Harv. J.L. & Tech. 889, 901 (2018).
[74] See id.
[75] See id.
[76] See 42 U.S.C. § 2000d (barring discrimination based on “race, color, or national origin . . . under any program or activity receiving Federal financial assistance”).
[77] See U.S. Const. amend. XIV, § 1.
[78] See Pers. Adm’r of Mass. v. Feeney, 442 U.S. 256, 279 (1979).
[79] See Renata M. O’Donnell, Challenging Racist Predictive Policing Algorithms Under the Equal Protection Clause, 94 N.Y.U. L. Rev. 544, 572-73 (2019).
[80] See id. at 574-75.
[81] See The legal doctrine that will be key to preventing AI discrimination, supra note 17.
[82] See id.
[83] See id.
[84] See id.
[85] See id.
[86] See Joy Buolamwini, Gender Shades, MIT Media Lab, https://www.media.mit.edu/projects/gender-shades/overview/ (last visited Dec. 12, 2024).
[87] See id.
[88] See id.
[89] See id.
[90] See id.
[91] See id.
[92] See id.
[93] See id.
[94] See ACLU & Brennan Ctr. Comment, supra note 10, at 13.
[95] See Andrew Meyers, Rooting Out Anti-Muslim Bias in Popular Language Model GPT-3, Stanford University (Jul. 22, 2021), https://hai.stanford.edu/news/rooting-out-anti-muslim-bias-popular-language-model-gpt-3.
[96] See id.
[97] See id.
[98] See id.
[99] See Gabriel Nicholas & Aliya Bhatia, Lost in Translation: Large Language Models in Non-English Content Analysis, Ctr. for Democracy & Tech. 15 (May 2023), https://cdt.org/wp-content/uploads/2023/05/non-en-content-analysis-primer-051223-1203.pdf.
[100] See id.
[101] See id. at 16.
[102] See id. at 18-19.
[103] See id.
[104] See id. at 6.
[105] See id.
[106] See ACLU & Brennan Ctr. Comment, supra note 10, at 12.
[107] See id.
[108] See id.
[109] See Twenty Years Too Many: A Call to Stop the FBI’s Secret Watchlist, CAIR, https://islamophobia.org/wp-content/uploads/2023/09/watchlistreport-1.pdf (last visited Dec. 10, 2024).
[110] See ACLU & Brennan Ctr. Comment, supra note 10, at 12.
[111] See, e.g., EU Agency for Fundamental Rts., supra note 45, at 23-24.
[112] See ACLU & Brennan Ctr. Comment, supra note 10, at 17.
[113] See id.
[114] See id.
[115] See id.
[116] See id.
[117] See Valentin Hofmann et al., Dialect Prejudice Predicts AI Decisions About People’s Character, Employability, and Criminality, 1-2 (Mar. 1, 2024), https://arxiv.org/pdf/2403.00742.
[118] See id.
[119] See id.
[120] See id.
[121] See id. at 1-2.
[122] See id.
[123] See id.
[124] See, e.g., id.
[125] See Buolamwini, supra note 86.
[126] See Nicholas & Bhatia, supra note 99, at 15.
[127] See ACLU & Brennan Ctr. Comment, supra note 10, at 17.
[128] See Shibley Telhami, Arab and Muslim America: A Snapshot, Brookings (Dec. 1, 2002), https://www.brookings.edu/articles/arab-and-muslim-america-a-snapshot/.
[129] See, e.g., id.
[130] See, e.g., Bergmann & Stryker, supra note 19.
[131] See How artificial intelligence is transforming the world, supra note 18.
[132] See, e.g., Bergmann & Stryker, supra note 19.
[133] See id.
[134] See How artificial intelligence is transforming the world, supra note 18.
[135] See, e.g., Bergmann & Stryker, supra note 19.
[136] See Ibrahim v. Dep’t of Homeland Sec., 62 F. Supp. 3d 909, 911 (N.D. Cal. 2014).
[137] See id. at 911, 927-31.
[138] See id. at 914.
[139] See id. at 913-14.
[140] See id. at 929-30.
[141] See id. at 928.
[142] See id.
[143] See Renata M. O’Donnell, supra note 80.
[144] See Ibrahim, 62 F. Supp. 3d at 927-28.
[145] See, e.g., Pers. Adm’r of Mass., 442 U.S. at 279.
[146] See The legal doctrine that will be key to preventing AI discrimination, supra note 17.
[147] See ACLU & Brennan Ctr. Comment, supra note 10, at 2.
[148] See, e.g., Artificial Intelligence Use Case Inventory, supra note 56.
[149] See ACLU & Brennan Ctr. Comment, supra note 10, at 14.
[150] See id. at 17.
[151] See generally ODNI Senior Advisory Group, Report to the Director of National Intelligence (Jan. 27, 2022), https://www.dni.gov/files/ODNI/documents/assessments/ODNI-Declassified-Report-on-CAI-January2022.pdf.
[152] See ACLU & Brennan Ctr. Comment, supra note 10, at 2.
[153] See id.
[154] See Artificial Intelligence Civil Rights Act of 2024, S. 5152, 118th Cong. (2024).
[155] See id. § 2(3).
[156] See id. § 401(a).
[157] See id. § 403.
[158] See id. § 102(2)(A)(III).
[159] See Trump Administration Civil and Human Rights Rollbacks, Leadership Conf. on Civ. & Hum. Rts., https://civilrights.org/trump-rollbacks/ (last visited Dec. 10, 2024).
[160] See Trump on Immigration, ACLU, https://www.aclu.org/trump-on-immigration (last visited Dec. 10, 2024).
[161] See Artificial Intelligence Act: MEPs Adopt Landmark Law, Eur. Parliament News (Mar. 13, 2024), https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law.
[162] See id.
[163] See id.
[164] See id.
[165] See id.
[166] See Artificial Intelligence Civil Rights Act of 2024, S. 5152, § 2(15).
[167] See id. § 301.
[168] See id. § 301(a)-(g).
[169] See id.
[170] See id.
[171] See id.
[172] See id.
[173] See id.
[174] See id. § 102(1)-(2).
[175] See id.
[176] See id.
[177] See id. § 102(b).
[178] See id.
[179] See id. § 102(e).
[180] See id.