Here’s the Company That Sold DHS ICE’s Notorious Face Recognition App

DHS Unveils New Details on Notorious Face Recognition App Used by Immigration Agents

The Department of Homeland Security (DHS) has published new details about Mobile Fortify, a face recognition app that federal immigration agents use to identify people in the field, including undocumented immigrants and US citizens. The information was released as part of DHS's 2025 AI Use Case Inventory, a disclosure federal agencies are required by law to produce.

According to the inventory, Mobile Fortify has been deployed by both Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE). CBP says the app became operational in May of last year, while ICE gained access to it on May 20, 2025, about a month before its use was first reported. The inventory also names the app's vendor as NEC, a detail that had not previously been made public.

The inventory reveals that Mobile Fortify can perform both one-to-many searches and one-to-one matches against databases of any size. CBP says the app helps agents quickly confirm people's identities, while ICE says it assists in confirming identities when officers and agents have limited information and access to multiple disparate systems.
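
For readers unfamiliar with the terminology, the two modes work differently: a one-to-one match checks a live photo against a single claimed identity, while a one-to-many search compares it against an entire gallery and returns candidate identities, which is why it draws more scrutiny. The sketch below is a generic illustration of that distinction using cosine similarity between face embeddings; the embedding size, threshold, and gallery are hypothetical and do not reflect NEC's or CBP's actual implementation.

```python
# Generic sketch of one-to-one verification vs. one-to-many identification.
# Purely illustrative: the embeddings, threshold, and gallery are made up and
# are not based on NEC's or CBP's systems.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (higher means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one: does the probe match a single claimed identity?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> list[tuple[str, float]]:
    """One-to-many: rank every gallery identity whose score clears the threshold."""
    scored = ((name, cosine_similarity(probe, emb)) for name, emb in gallery.items())
    return sorted((pair for pair in scored if pair[1] >= threshold),
                  key=lambda pair: pair[1], reverse=True)

# Toy usage with random 512-dimensional embeddings.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=512) for i in range(10_000)}
probe = gallery["person_42"] + rng.normal(scale=0.1, size=512)  # noisy re-capture

print(verify(probe, gallery["person_42"]))  # one-to-one check against one record
print(identify(probe, gallery)[:3])         # top candidates from a gallery search
```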

ICE also uses the app to extract text from identity documents for additional checks, and states that it does not own or interact directly with the underlying AI models, which belong to CBP. The agency has acknowledged the potential consequences of incorrect matches and says it is developing an appeals process and incorporating feedback from users and the public.
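
The document-text extraction ICE describes is, in general terms, optical character recognition run over a photographed ID. Here is a minimal generic sketch assuming the open-source Tesseract engine via pytesseract; it illustrates the technique only and is not ICE's or CBP's actual pipeline.

```python
# Minimal OCR sketch: pull raw text from a photographed identity document.
# Illustrative only; the library choice (Tesseract via pytesseract) and the
# file name are assumptions, not details from the DHS inventory.
from PIL import Image
import pytesseract

def extract_document_text(image_path: str) -> str:
    """Run OCR over a document photo and return the raw text it contains."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image)

if __name__ == "__main__":
    print(extract_document_text("id_card_sample.png"))  # hypothetical sample image
```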

The use of face recognition technology by federal immigration agents has raised concerns about accuracy, bias, and civil liberties. While CBP says it has sufficient monitoring protocols in place, ICE says it is still developing its own.
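
The inventory does not publish error rates, but rough arithmetic shows why one-to-many searches amplify whatever error rate does exist: every probe is compared against a very large number of records, so even a tiny per-comparison false match rate adds up. The numbers below are illustrative assumptions, not figures from DHS, CBP, or NEC.

```python
# Back-of-the-envelope arithmetic on false matches in a one-to-many search.
# Both numbers below are illustrative assumptions, not published DHS/CBP/NEC figures.
false_match_rate = 1e-5      # assumed chance one comparison wrongly says "same person"
gallery_size = 10_000_000    # assumed number of identities searched per query

# Chance that at least one wrong identity clears the threshold for a single probe.
p_any_false_match = 1 - (1 - false_match_rate) ** gallery_size
print(f"chance of at least one false match: {p_any_false_match:.2%}")

# Expected number of false candidates returned per search.
print(f"expected false candidates per search: {false_match_rate * gallery_size:.0f}")
```

In practice, vendors tune thresholds to push the per-comparison false match rate far lower than this, but the scaling logic is the point: accuracy that sounds impressive for a single comparison can still produce routine errors at database scale.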

It remains unclear exactly how NEC's face recognition solution was trained or fine-tuned, though CBP states that the Vetting/Border Crossing Information/Trusted Traveler Information system was used for that purpose. The disclosure has added to concerns about potential misuse of facial recognition technology by law enforcement agencies.
 
🤔 I'm so not comfy with this new info on Mobile Fortify, it's like they're snoopin' on us even when we're in public! 📸 One-to-many searches? That's just too much power for our feds to wield. What if they match a wrong face or identity? The potential for bias and errors is huge 🤕 And what about people who don't have access to tech, like the elderly or those in rural areas? How are they gonna get verified then? 🚫 It's like they're relying on us to be 'digitally literate' but forgetting about the rest of us who aren't 📊 We need more transparency and accountability from our government agencies 👮‍♀️
 
I'm really worried about these new face recognition apps being used by immigration agents 🤯. I mean, think about it - we're talking about identifying people on the fly with supposedly sky-high accuracy, but what if that's not good enough? What if there are false positives or negatives? It's not just about security, it's about our civil liberties too 👮‍♀️. I want to know how this tech is being trained and fine-tuned so we can trust it, but so far there isn't much info 🤔. I'm all for safety, but we need to be careful with our personal data and make sure these apps aren't being misused 🚫. We should be having a national conversation about this ASAP 💬.
 
I'm still trying to wrap my head around this Mobile Fortify app 🤯. I mean, it's like they're using this high-tech face recognition tech to identify people in real-life situations, but at the same time, we don't really know how well it works or if it can be hacked 🚨. ICE says it helps confirm identities when they have limited info, but what about those who might not fit the mold? And what's with CBP saying it speeds up checks while also acknowledging monitoring protocols? It just doesn't add up to me 🤔. I'm still waiting for more concrete evidence on how this app is used and its accuracy rate 👀.
 
so yeah I'm a bit worried about these face rec apps being used by immigration agents, but at the same time I think it's cool that DHS is being all transparent about it 🤔👀. like, we need to know what's going on in our country and how these agencies are using tech, right? 💻💡

I'm also kinda curious about how NEC's face rec solution was trained... like, who does the training for this stuff, and how do they make sure it's accurate? 🤷‍♀️🔍

but what really stood out to me is that ICE is acknowledging the potential consequences of incorrect matches and is working on an appeals process 😊. that's like, super positive, right? 🌞 it means they're taking these concerns seriously and trying to make things right.

anyway, I'm not gonna freak out about this face rec app thing just yet... but I do think we need to stay vigilant and keep pushing for transparency and accountability 💪👊.
 
omg yaaas i'm lowkey freaked out rn about this new info on Mobile Fortify 🤯 like how did we not even know who was behind it till now? and what's up with NEC being a vendor the public never heard about lol. anyway, i'm glad ICE is acknowledging the potential consequences of incorrect matches, but i wish they'd do more to address the concerns about accuracy & bias... like, we all know facial recognition tech ain't perfect 💁‍♀️👀 what if it mistakenly identifies us as undocumented? that's just too much for me 😱. i'm also kinda curious what CBP's monitoring protocols actually look like, but i'm sure they'll figure it out 🤔.
 
I'm gettin' a bad vibe from this whole face rec app thing 🤔... like, I get it, security is important, but we gotta make sure our tech is accurate and fair for everyone 🤝... can't have some random US citizen mistaken for an undocumented immigrant or whatever 😬... what's the threshold for accuracy here? Like, how many wrong matches can we afford before we start thinkin' this tech is unreliable? 💡... also, why doesn't ICE say who trains and fine-tunes these AI models? Transparency matters, you know? 🤐
 
Ugh, another tech company making a ton of cash off our faces 🤦‍♂️... I mean, NEC's Mobile Fortify app is being used by the government to track and identify people without our consent, which is super sketchy. The lack of transparency on how this app was trained or fine-tuned is even more concerning 🙅‍♂️. And let's not forget that ICE claims it doesn't own any AI models but uses CBP's instead - sounds like a dodgy deal to me 😒... the government's 2025 AI Use Case Inventory better include some accountability and oversight, or we'll keep questioning the ethics of these tech giants 👀
 
I'm not sure I'm too comfortable with the idea of face recognition apps being used by immigration agents... 🤔 it's a lot to unpack, but essentially, these apps can perform one-to-many searches or one-to-one matches against massive databases. The fact that ICE is extracting text from identity documents adds another layer of complexity and raises questions about data protection. I also think it's interesting that NEC, the vendor behind Mobile Fortify, hasn't been transparent about how their solution was trained or fine-tuned... that lack of transparency is a bit unsettling. The potential for bias in these systems is real, and I worry that we're playing with fire when it comes to accuracy and civil liberties.
 
OMG 🤯 I'm literally shakin' my head over this face rec app thingy... like, what's up with our gov't? Are they really using this tech to scan people's faces without consent or anything? It sounds super sketchy and biased too 🤔. I mean, how can we be sure it's accurate and doesn't lead to false positives? We gotta have more info on how this app was trained and who's behind it. The fact that ICE is saying they don't own the AI models but are still using them is just red flag after red flag 🚨.

And what about all these civil liberties concerns? Like, we need to make sure our gov't isn't abusing its power here. I know some people will say "it's for security" and all that, but what about when it goes wrong? We gotta be careful with this kind of tech. Can't wait to see how this all plays out 🤞.
 
Omg, can you even imagine having your face scanned on the street and matched to a database without knowing? 🤯 It's wild that the government is putting a face recognition app like Mobile Fortify in the hands of immigration agents, especially with how much bias there can be in these tech systems. I mean, what if an officer makes a wrong match and it leads to someone being deported or worse? 🚫 The fact that ICE doesn't own or interact directly with the AI models raises so many red flags... who's really in control of this technology? 😬
 
Ugh, great, just what we need - more ways for the government to get our info wrong 🤦‍♂️. I mean, who needs a vetting process when you've got an app that can make mistakes? And now it's out in the open, so we know exactly how much data they're collecting on us... not. I'm sure it'll be used for good, like to find out who's really in our country and who's not 🙄. Like, what's the point of having an app that can do a one-to-many search if it's just going to give us a bunch of wrong matches? And ICE thinks they're working on it, but I bet it'll still be a mess by the time they figure it out 😒. We should all just get used to having our faces scanned and wondering what other info they've got on us 💭.
 
I'm getting a major chill just thinking about this Mobile Fortify app 🤯💻. Like, how do we know it's not being misused? I mean, we already have enough issues with bias in tech and law enforcement... it's like they're playing with fire 🔥. And what's up with the lack of transparency on how this thing was trained or fine-tuned? That's just a recipe for disaster 🌪️. And can someone please tell me why NEC's not being more open about their AI models? This whole thing is giving me major pause 🤔. We need to be super careful about who's handling our biometric data, you know? 💸
 
I'm low-key freaking out about this face recognition app 🤯🚨. Like, I get that it's supposed to help with immigration and border control, but the lack of transparency around how this tech is being used is super worrying 🙅‍♂️. We don't know how accurate it is or if there's any room for bias, which makes me nervous about its potential misuse 💔.

And what's up with ICE extracting text from identity documents without owning the AI models? Sounds like they're playing with fire 🔥, especially when it comes to civil liberties. I'm all for improving efficiency and accuracy, but we need to make sure this tech is being used responsibly 🤝.

The fact that CBP says they have monitoring protocols in place, but ICE admits they're still working on developing them, makes me think there's more going on behind the scenes 🔮. We deserve better than a black box of AI and facial recognition without proper oversight 👀.

Anyway, this just feels like another example of how tech is being used to further polarize our society 🤝. Can't we find ways to balance security with accountability and transparency? 💕
 
omg did u guys know that a lot of face rec systems r trained on datasets that skew heavily toward lighter-skinned faces lol its like how can we be sure it works for ppl of color or those with dark skin? 🤔 anyway, CBP has been using Mobile Fortify since may last year and ICE just got it in may 2025... thats like, what is the timeline for reviewing these apps 4 accuracy & bias issues 📊

and btw a lot of facial rec errors aren't even about the biometrics itself, they come from human error or dataset quality issues 👀 so let's be real, we need more transparency here 💡
 
OMG you guys I cant even believe whats going on with this face rec app 🤯 DHS just dropped more deets and it's getting weirder. Mobile Fortify is literally used by immigration agents to ID people in the field, including undocumented immigrants, like wut?! CBP says it helps confirm identities but ICE claims its for when they got limited info & access to multiple systems which is just shady. And get this, NEC is the vendor & that was never public before lol. The use of face rec tech has raised so many concerns about accuracy bias & civil liberties but CBP says they have monitoring protocols in place which sounds like a total cop-out 🚫 ICE is working on an appeals process tho so that's a plus 🤞
 
🤔 I'm not surprised to see Mobile Fortify being used by ICE and CBP, it's only a matter of time before the tech is implemented in our daily lives, right? 🚀 The question remains though, are these face recognition apps really accurate? I mean, what if they're wrong? How do we hold these agencies accountable when there's no clear way to dispute an incorrect match? 💼 It also got me thinking, who really owns the AI models here? Is it CBP or ICE? 🤝 The lack of transparency around this is concerning, especially with regards to how NEC's face recognition solution was trained. We need more open discussions about these kinds of technologies before they're widely adopted. 🔍
 
Ugh, I'm so over these face rec apps 🤯. Like, what's next? They're gonna start using them on us citizens too? 😒 I mean, I get it, security is important and all that jazz, but can't they see how this tech is being misused? The fact that ICE is relying on AI models it doesn't even own is just red flag alert 🚨. And don't even get me started on the whole "one-to-many" searches... what if it matches my face with some random person from the database? 🤔

And have you seen the timeline here? ICE gets access to this app in May, but nobody knew about it until a month later? That's just sloppy governance 🙄. And CBP is all like "oh, we've got protocols in place"... yeah right 😒. I'm not buying it.

I just wish these agencies would be more transparent about how they're using this tech and what kind of safeguards are in place to prevent misuse. Until then, I'll be keeping a close eye on things... 👀
 
OMG 🤯 just read that Mobile Fortify is being used by both CBP and ICE to identify people, including undocumented immigrants and US citizens 🙅‍♂️ I'm so worried about this app's accuracy and bias - what if it misidentifies innocent ppl? 🤔 And what about the lack of transparency around how it was trained and fine-tuned? 📊 It's like, we're moving into a whole new era where facial recognition is being used to control people's lives... 🚫
 
omg, another tech update 🤖! so DHS is releasing info on this face rec app Mobile Fortify and I gotta say, it's kinda wild how much detail they're giving out 📊. ICE says it helps confirm identities when they don't have all the info, but what about false positives? that's like, super scary 😱. and CBP claims it's got monitoring protocols in place, but we already know those don't always work... like remember that whole Amazon Rekognition debacle a few years back 🤦‍♀️. I'm not saying this tech is bad per se, but we need to be super careful about how we use it 💡.
 