
Dupe Culture & Digital Deception: Inside the AI-Driven Counterfeit Boom

While generative AI transforms how Americans shop, it’s also quietly powering a counterfeit crisis now spiraling out of control. A new report from Red Points and OnePoll, The Counterfeit Buyer Teardown, reveals that AI is no longer just helping consumers find the best deals; it’s helping them find fakes. From influencer-driven “dupe culture” to hyper-realistic fake storefronts, the study exposes a booming underground economy supercharged by technology. With 28% of counterfeit buyers now using AI tools to seek out knock-offs, and fraudulent social media ads spiking 179% in just one year, the findings deliver a wake-up call for brands, regulators, and shoppers alike.

AI Supercharging U.S. and Global E-Commerce Counterfeit Crisis


Image courtesy of Red Points.

An explosive new report, “The Counterfeit Buyer Teardown,” paints a concerning picture of a rapidly evolving and increasingly sophisticated counterfeit goods market, driven by a new factor: artificial intelligence. Forget the back alleys; findings from the research, conducted by market research firm OnePoll and AI company Red Points in February 2025, highlight that the future of fakes is digital, AI-assisted, and alarmingly mainstream.

The convergence of technology, social media, and shifting consumer mindsets is reshaping e-commerce—and not always for the better. As AI accelerates both the spread and appeal of counterfeit goods, the challenge is no longer just spotting fakes—it’s confronting a counterfeit economy that’s growing smarter, faster, and harder to contain.

“As counterfeiters adopt advanced tools like AI, the fight against fakes is becoming more complex and more urgent,” said Laura Urquizu, CEO & President of Red Points. “We’re now seeing AI shape both the threat and the solution. In 2024 alone, our firm detected 4.3 million counterfeit infringements online—an alarming 15% increase year-over-year.”


Alarming indeed. Here are 5 key revelations from the study.

1. AI is the New Enabler of Counterfeiting – A Two-Sided Threat:

  • The Counterfeiters’ Edge: AI is dramatically lowering the barrier to entry for bad actors. They can now mimic brand listings and impersonate social media accounts with unprecedented ease and speed. They can also effortlessly create professional-looking fake websites; according to Red Points’ data, such sites are projected to surge 70% in 2025. This isn’t just about cheap knock-offs anymore; it’s about sophisticated deception at scale.
  • The Consumers’ Assistant: Shockingly, 28% of online shoppers who bought fake goods used AI tools to find them. This isn’t a fringe behavior; it’s a growing trend, especially among Gen X, suggesting consumers are actively leveraging AI in their pursuit of cheaper alternatives. This fundamentally shifts the narrative – it’s not just about being tricked; some are actively seeking fakes with AI’s help.

2. Accidental Counterfeiting is a Major Problem – Trust Signals are Being Hijacked:

  • 1 in 4 luxury counterfeit purchases are unintentional. This shatters the perception that buyers knowingly seek out high-end fakes. Realistic pricing, secure payment promises, and active (but fake) social media presence are successfully deceiving consumers. AI-generated legitimacy cues are becoming indistinguishable from the real deal.
  • Brands are Paying the Price for These Mistakes: A staggering one in three shoppers stop buying from the genuine brand after an accidental counterfeit experience. This highlights the significant damage to brand loyalty and future sales, even when the brand isn’t directly selling the fake. High-trust categories like luxury and toys are particularly vulnerable.

3. The “Dupe Economy” is Real and Influencer-Driven:

  • Nearly a third (31%) of intentional counterfeit buyers were swayed by influencer promotions. Social media is driving the demand for “dupes” – budget-friendly replicas. Authenticity is taking a backseat to price and perceived identical appearance, especially among younger demographics.
  • This isn’t just about saving money; it’s a shift in consumer mindset. The report suggests a growing acceptance of fakes as clever alternatives, fueled by social validation and influencer endorsements.


4. Marketplaces Remain Key, But Social Media and Fake Websites are Surging:

  • Marketplaces (both US and China-based) are still the primary channels for counterfeit purchases. However, fake websites (accounting for 34% of unintentional purchases) and social media are rapidly gaining ground as sophisticated avenues for distribution, amplified by AI’s ability to create convincing facades.
  • Social media ads redirecting to infringing websites saw a massive 179% year-over-year growth. This highlights the increasing sophistication of counterfeiters in leveraging advertising platforms to drive traffic to their fake storefronts.

5. Younger Generations are More Vulnerable in Key Categories:

  • Millennials are significantly more likely to have their personal data stolen after purchasing from fake websites (44% vs. 34% average). This suggests a higher susceptibility to sophisticated phishing scams disguised as legitimate e-commerce sites.
  • Gen Z and Millennials are 2-4 times more likely to accidentally purchase counterfeit luxury goods and toys compared to Baby Boomers. Their online savviness might be a double-edged sword, making them more exposed to deceptive listings.

This study serves as both a consumer alert and a brand wake-up call. The rise of AI as a tool for both counterfeiters and consumers is a seismic shift that demands urgent attention. With compelling data and a clear-eyed look at accidental purchases, influencer-driven “dupe culture,” and the growing sophistication of fake storefronts, the findings paint a stark warning for the future of online shopping. 

“Counterfeiting poses a serious and evolving threat to innovative businesses and consumer safety,” notes Piotr Stryszowski, Senior Economist at the Organization for Economic Co-operation and Development (OECD). “Criminals constantly adapt, exploiting new technologies and shifting market trends—particularly in the online environment. To effectively counter this threat, policymakers need detailed, up-to-date information. This study makes an important contribution to our understanding of how counterfeiters operate and how consumers behave online.”
Ultimately, The Counterfeit Buyer Teardown report underscores a new reality: counterfeiting is no longer confined to shady sellers or easily spotted scams—it’s embedded in the very technologies shaping modern commerce. As AI continues to blur the lines between real and fake, the pressure is on for brands, platforms, and policymakers to respond with equal speed and sophistication. Combating this growing threat will require more than just awareness—it demands collaboration, innovation, and a commitment to restoring trust in the digital marketplace before the counterfeit economy becomes the new normal. For the Silo, Merilee Kern.

Merilee Kern, MBA is a brand strategist and analyst who reports on industry change makers, movers, shakers and innovators: field experts and thought leaders, brands, products, services, destinations and events. Merilee is a regular contributor to the Silo. Connect with her at www.TheLuxeList.com and on LinkedIn at www.LinkedIn.com/in/MerileeKern.

Source: https://get.redpoints.com/the-counterfeit-buyer-teardown-2025

Feds’ False News Checker Tool to Use AI, at Risk of Language and Political Bias

Ottawa-Funded Misinformation Detection Tool to Rely on Artificial Intelligence

Canadian Heritage Minister Pascale St-Onge speaks to reporters on Parliament Hill after Bell Media announces job cuts, in Ottawa on Feb. 8, 2024. (The Canadian Press/Patrick Doyle)

A new federally funded tool being developed with the aim of helping Canadians detect online misinformation will rely on artificial intelligence (AI), Ottawa has announced.

Heritage Minister Pascale St-Onge said on July 29 that Ottawa is providing almost CA$300,000 to researchers at Université de Montréal (UdeM) to develop the tool.

“Polls confirm that most Canadians are very concerned about the rise of mis- and disinformation,” St-Onge wrote on social media. “We’re fighting for Canadians to get the facts” by supporting the university’s independent project, she added.

Canadian Heritage says the project will develop a website and web browser extension dedicated to detecting misinformation.

The department says the project will use large AI language models capable of detecting misinformation across different languages in various formats such as text or video, and contained within different sources of information.

“This technology will help implement effective behavioral nudges to mitigate the proliferation of ‘fake news’ stories in online communities,” says Canadian Heritage.


With the browser extension, users will be notified if they come across potential misinformation, which the department says will reduce the likelihood of the content being shared.

Project lead and UdeM professor Jean-François Godbout said in an email that the tool will rely mostly on AI-based systems such as OpenAI’s ChatGPT.

“The system mostly uses a large language model, such as ChatGPT, to verify the validity of a proposition or statement by relying on its corpus (the data used to train it),” Godbout wrote in French.

The political science professor added the system will also be able to consult “distinct and reliable external sources.” After considering all the information, the system will produce an evaluation to determine whether the content is true or false, he said, while qualifying its degree of certainty.

Godbout said the reasoning for the decision will be provided to the user, along with the references that were relied upon, and that in some cases the system could say there’s insufficient information to make a judgment.
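Godbout’s description amounts to a familiar pattern: prompt a language model with the claim plus any trusted external sources, then parse a structured verdict with a confidence score, reasoning, and references, falling back to “insufficient information” when the model cannot decide. A minimal Python sketch of that flow, assuming a pluggable `ask_llm` callable standing in for a real chat-completion API (the `fake_llm` stub and all names here are illustrative, not UdeM’s actual code):

```python
import json

def verify_claim(claim, ask_llm, external_sources=()):
    """Ask a language model to assess a claim, optionally grounded in
    trusted external sources, and return a structured verdict."""
    context = "\n".join(external_sources)
    prompt = (
        "Assess whether the following claim is true, false, or undecidable "
        "given your training data and the sources below. Reply as JSON with "
        'keys "verdict", "confidence" (0-1), "reasoning", "references".\n'
        f"Claim: {claim}\nSources:\n{context}"
    )
    raw = ask_llm(prompt)  # e.g. a wrapper around a chat-completion API call
    result = json.loads(raw)
    # Mirror the described fallback: anything other than a clear true/false
    # verdict is reported as insufficient information to judge.
    if result.get("verdict") not in ("true", "false"):
        result["verdict"] = "insufficient information"
    return result

# Stub model standing in for a real LLM, for illustration only.
def fake_llm(prompt):
    return json.dumps({"verdict": "false", "confidence": 0.9,
                       "reasoning": "Contradicted by the cited source.",
                       "references": ["https://example.org/fact-check"]})

print(verify_claim("The moon is made of cheese.", fake_llm)["verdict"])  # prints "false"
```

A browser extension built on this shape would call `verify_claim` on statements extracted from a page and surface the verdict, confidence, and references to the user as a nudge before sharing.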

Asked about concerns that the detection model could be tainted by AI shortcomings such as bias, Godbout said his previous research has demonstrated his sources are “not significantly ideologically biased.”

“That said, our system should rely on a variety of sources, and we continue to explore working with diversified and balanced sources,” he said. “We realize that generative AI models have their limits, but we believe they can be used to help Canadians obtain better information.”

The professor said that the fundamental research behind the project was conducted before receiving the federal grant, which only supports the development of a web application.

Bias Concerns

The reliance on AI to determine what is true or false could have some pitfalls, with large language models being criticized for having political biases.

Such concerns about the neutrality of AI have been raised by billionaire Elon Musk, who owns X and its AI chatbot Grok.

British and Brazilian researchers from the University of East Anglia published a study in January that sought to measure ChatGPT’s political bias.

“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” they wrote. Researchers said there are real concerns that ChatGPT and other large language models in general can “extend or even amplify the existing challenges involving political processes posed by the Internet and social media.”

OpenAI says ChatGPT is “not free from biases and stereotypes, so users and educators should carefully review its content.”

Misinformation and Disinformation

The federal government’s initiatives to tackle misinformation and disinformation have been multifaceted.

The funds provided to the Université de Montréal are part of a larger program to shape online information, the Digital Citizen Initiative. The program supports researchers and civil society organizations that promote a “healthy information ecosystem,” according to Canadian Heritage.

The Liberal government has also passed major bills, such as C-11 and C-18, which impact the information environment.

Bill C-11 has revamped the Broadcasting Act, creating rules for the production and discoverability of Canadian content and giving increased regulatory powers to the CRTC over online content.

Bill C-18 created the obligation for large online platforms to share revenues with news organizations for the display of links. This legislation was promoted by then-Heritage Minister Pablo Rodriguez as a tool to strengthen news media in a “time of greater mistrust and disinformation.”

These two pieces of legislation were followed by Bill C-63 in February to enact the Online Harms Act. Along with seeking to better protect children online, it would create steep penalties for saying things deemed hateful on the web.

There is some confusion about what the latest initiative with UdeM specifically targets. Canadian Heritage says the project aims to counter misinformation, whereas the university says it’s aimed at disinformation. The two concepts are often used in the same sentence when officials signal an intent to crack down on content they deem inappropriate, but a key characteristic distinguishes the two.

The Canadian Centre for Cyber Security defines misinformation as “false information that is not intended to cause harm”—which means it could have been posted inadvertently.

Meanwhile, the Centre defines disinformation as being “intended to manipulate, cause damage and guide people, organizations and countries in the wrong direction.” It can be crafted by sophisticated foreign state actors seeking to gain politically.

Minister St-Onge’s office had not responded to a request for clarification as of this post’s publication.

In describing its project to counter disinformation, UdeM said events like the Jan. 6 Capitol breach, the Brexit referendum, and the COVID-19 pandemic have “demonstrated the limits of current methods to detect fake news,” which struggle to keep pace with the volume and rapid evolution of disinformation. For the Silo, Noe Chartier/ The Epoch Times.

The Canadian Press contributed to this report.