
In The Future Cyberwar Will Be Primary Theater For Superpowers

Cybersecurity expert explains how virtual wars are fought

With the Russia-Ukraine war in full swing, cybersecurity experts point to a cyber front that had been forming online long before Russian troops crossed the border. Even in the months leading up to the outbreak of war, Ukrainian websites were attacked and altered to display threatening messages about the coming invasion.

“In response to Russian warfare actions, the hacking collective Anonymous launched a series of attacks against Russia, with the country’s state media being the main target. So we can see cyber warfare in action with new types of malware flooding both countries, thousands of sites crashing under DDoS (distributed denial-of-service) attacks, and hacktivism thriving on both sides of barricades,” Daniel Markuson, a cybersecurity expert at NordVPN, says.

The methods of cyberwarfare

In the past decade, the amount of time people spend online has risen drastically. Research by NordVPN has shown that Americans spend around 21 years of their lives online. With our lives so dependent on the internet, cyber wars can cause very real damage. Some of the goals online “soldiers” pursue include:

  • Sabotage and terrorism

The intent of many cyber warfare actions is to sabotage and cause indiscriminate damage. From taking a site offline with a DDoS attack to defacing webpages with political messages, cyber terrorists launch multiple operations every year. One of the most impactful events occurred in Turkey, where Iranian hackers reportedly knocked out the power grid for around twelve hours, affecting more than 40 million people.

  • Espionage

While cyber espionage also occurs between corporations, with competitors vying for patents and sensitive information, it’s an essential strategy for governments engaging in covert warfare. Chinese intelligence services are regularly named as the culprits in such operations, although they consistently deny the accusations.

  • Civilian activism (hacktivism)

The growing trend of hacktivism has seen civilian cyber activists take on governments and authorities around the world. One example of hacktivism is Anonymous, a group that has claimed responsibility for assaults on government agencies in the US. In 2022, after Russia invaded Ukraine, Anonymous began a targeted cyber campaign against the country in an attempt to disrupt government systems and combat Russian propaganda.

  • Propaganda and disinformation

In 2020, 81 countries were found to have used some form of social media manipulation, usually ordered by government agencies, political parties, or politicians. Such campaigns, which largely involve the spread of fake news, tended to focus on three key goals: distracting or diverting conversations away from important issues; increasing polarization between religious, political, or social groups; and suppressing fundamental human rights, such as the right to freedom of expression or freedom of information.

The future of cyber warfare

“Governments, corporations, and the public need to understand this emerging landscape and protect themselves by taking care of their physical security as well as cybersecurity. From the mass cyberattacks of 2008’s Russo-Georgian War to the cyber onslaught faced by Ukraine today, this is the new battleground for both civil and international conflicts,” Daniel Markuson says.

Markuson predicts that in the future, cyber war will become the primary theater of war for global superpowers. He also thinks that terrorist cells may focus their efforts on targeting civilian infrastructure and other high-risk networks, since such attackers would be even harder to detect and could strike from anywhere in the world. Lastly, Markuson thinks that activism will become more virtual, allowing citizens to hold large governmental authorities to account.

A regular person can’t do much to fight in a cyber war or to protect themselves from the consequences.

However, educating yourself, paying attention to the reliability of your sources of information, and maintaining a critical attitude toward everything you read online can increase your awareness and make you less susceptible to propaganda. For the Silo, Darija Grobova.

Feds' False News Checker Tool To Use AI, At Risk Of Language & Political Bias

Ottawa-Funded Misinformation Detection Tool to Rely on Artificial Intelligence

Canadian Heritage Minister Pascale St-Onge speaks to reporters on Parliament Hill after Bell Media announces job cuts, in Ottawa on Feb. 8, 2024. (The Canadian Press/Patrick Doyle)

A new federally funded tool being developed with the aim of helping Canadians detect online misinformation will rely on artificial intelligence (AI), Ottawa has announced.

Heritage Minister Pascale St-Onge said on July 29 that Ottawa is providing almost $300,000 CAD to researchers at Université de Montréal (UdeM) to develop the tool.

“Polls confirm that most Canadians are very concerned about the rise of mis- and disinformation,” St-Onge wrote on social media. “We’re fighting for Canadians to get the facts” by supporting the university’s independent project, she added.

Canadian Heritage says the project will develop a website and web browser extension dedicated to detecting misinformation.

The department says the project will use large AI language models capable of detecting misinformation across different languages in various formats such as text or video, and contained within different sources of information.

“This technology will help implement effective behavioral nudges to mitigate the proliferation of ‘fake news’ stories in online communities,” says Canadian Heritage.

Related:

OpenAI, Google DeepMind Employees Warn of ‘Serious Risks’ Posed by AI Technology

With the browser extension, users will be notified if they come across potential misinformation, which the department says will reduce the likelihood of the content being shared.

Project lead and UdeM professor Jean-François Godbout said in an email that the tool will rely mostly on AI-based systems such as OpenAI’s ChatGPT.

“The system uses mostly a large language model, such as ChatGPT, to verify the validity of a proposition or a statement by relying on its corpus (the data which served for its training),” Godbout wrote in French.

The political science professor added the system will also be able to consult “distinct and reliable external sources.” After considering all the information, the system will produce an evaluation to determine whether the content is true or false, he said, while qualifying its degree of certainty.

Godbout said the reasoning for the decision will be provided to the user, along with the references that were relied upon, and that in some cases the system could say there’s insufficient information to make a judgment.
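The workflow Godbout describes (an LLM weighs a claim against its training corpus and external references, then returns a verdict, a degree of certainty, and its reasoning) can be sketched as a simple pipeline. The sketch below is a hypothetical illustration, not the UdeM system: the `check_claim` function, the prompt format, and the stand-in model are all assumptions for demonstration, and a real deployment would call an actual LLM API against curated sources.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Verdict:
    label: str         # "true", "false", or "insufficient"
    confidence: float  # the model's stated certainty, 0.0 to 1.0
    reasoning: str     # the explanation shown to the user
    sources: List[str] = field(default_factory=list)

# Hypothetical prompt asking the model for a machine-parseable reply.
PROMPT = (
    "Assess the following claim using your training data and the "
    "reference excerpts provided. Reply on three lines:\n"
    "VERDICT: TRUE | FALSE | INSUFFICIENT\n"
    "CONFIDENCE: a number between 0 and 1\n"
    "REASONING: one short paragraph\n\n"
    "Claim: {claim}\n\nReferences:\n{refs}"
)

def check_claim(claim: str, llm: Callable[[str], str],
                references: List[str]) -> Verdict:
    """Ask an LLM to rate a claim, then parse its structured reply."""
    reply = llm(PROMPT.format(claim=claim, refs="\n".join(references)))
    fields = {}
    for line in reply.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().upper()] = value.strip()
    return Verdict(
        label=fields.get("VERDICT", "INSUFFICIENT").lower(),
        confidence=float(fields.get("CONFIDENCE", "0")),
        reasoning=fields.get("REASONING", ""),
        sources=references,
    )

# A stand-in for a real LLM call, used so the sketch runs without
# network access. This stub and its fixed reply are hypothetical.
def stub_llm(prompt: str) -> str:
    return ("VERDICT: FALSE\n"
            "CONFIDENCE: 0.9\n"
            "REASONING: The claim is contradicted by the references.")

verdict = check_claim(
    "The moon is made of cheese",
    stub_llm,
    ["Excerpt from a lunar geology reference"],
)
print(verdict.label, verdict.confidence)  # false 0.9
```

Returning the parsed confidence and source list alongside the label mirrors the behaviour Godbout describes: the user sees not just a true/false call but the certainty and the references behind it, including the "insufficient information" case.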

Asked about concerns that the detection model could be tainted by AI shortcomings such as bias, Godbout said his previous research has demonstrated his sources are “not significantly ideologically biased.”

“That said, our system should rely on a variety of sources, and we continue to explore working with diversified and balanced sources,” he said. “We realize that generative AI models have their limits, but we believe they can be used to help Canadians obtain better information.”

The professor said that the fundamental research behind the project was conducted before receiving the federal grant, which only supports the development of a web application.

Bias Concerns

The reliance on AI to determine what is true or false could have some pitfalls, with large language models being criticized for having political biases.

Such concerns about the neutrality of AI have been raised by billionaire Elon Musk, who owns X and its AI chatbot Grok.

British and Brazilian researchers from the University of East Anglia published a study in January that sought to measure ChatGPT’s political bias.

“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” they wrote. Researchers said there are real concerns that ChatGPT and other large language models in general can “extend or even amplify the existing challenges involving political processes posed by the Internet and social media.”

OpenAI says ChatGPT is “not free from biases and stereotypes, so users and educators should carefully review its content.”

Misinformation and Disinformation

The federal government’s initiatives to tackle misinformation and disinformation have been multifaceted.

The funds provided to the Université de Montréal are part of a larger program to shape online information, the Digital Citizen Initiative. The program supports researchers and civil society organizations that promote a “healthy information ecosystem,” according to Canadian Heritage.

The Liberal government has also passed major bills, such as C-11 and C-18, which impact the information environment.

Bill C-11 has revamped the Broadcasting Act, creating rules for the production and discoverability of Canadian content and giving increased regulatory powers to the CRTC over online content.

Bill C-18 created the obligation for large online platforms to share revenues with news organizations for the display of links. This legislation was promoted by then-Heritage Minister Pablo Rodriguez as a tool to strengthen news media in a “time of greater mistrust and disinformation.”

These two pieces of legislation were followed by Bill C-63 in February to enact the Online Harms Act. Along with seeking to better protect children online, it would create steep penalties for saying things deemed hateful on the web.

There is some confusion about what the latest initiative with UdeM specifically targets. Canadian Heritage says the project aims to counter misinformation, whereas the university says it’s aimed at disinformation. The two concepts are often used in the same sentence when officials signal an intent to crack down on content they deem inappropriate, but a key characteristic distinguishes the two.

The Canadian Centre for Cyber Security defines misinformation as “false information that is not intended to cause harm”—which means it could have been posted inadvertently.

Meanwhile, the Centre defines disinformation as being “intended to manipulate, cause damage and guide people, organizations and countries in the wrong direction.” It can be crafted by sophisticated foreign state actors seeking to gain politically.

Minister St-Onge’s office had not responded to a request for clarification as of this post’s publication.

In describing its project to counter disinformation, UdeM said events like the Jan. 6 Capitol breach, the Brexit referendum, and the COVID-19 pandemic have “demonstrated the limits of current methods to detect fake news, which have trouble keeping up with the volume and rapid evolution of disinformation.” For the Silo, Noe Chartier / The Epoch Times.

The Canadian Press contributed to this report.

Disinformation Tops Global Risks 2024 

  • Misinformation and disinformation are the biggest short-term risks, while extreme weather and critical change to Earth systems are the greatest long-term concern, according to the Global Risks Report 2024.
  • Two-thirds of global experts anticipate a multipolar or fragmented order to take shape over the next decade.
  • Report warns that cooperation on urgent global issues could be in short supply, requiring new approaches and solutions.

Geneva, Switzerland, January 2024 – Drawing on nearly two decades of original risk perception data, the World Economic Forum’s Global Risks Report 2024 warns of a global risks landscape in which progress in human development is slowly being chipped away, leaving states and individuals vulnerable to new and resurgent risks. Against a backdrop of systemic shifts in global power dynamics, climate, technology and demographics, global risks are stretching the world’s adaptive capacity to its limit.

These are the findings of the Global Risks Report 2024, released today, which argues that cooperation on urgent global issues could be in increasingly short supply, requiring new approaches to addressing risks. Two-thirds of global experts anticipate a multipolar or fragmented order to take shape over the next decade, in which middle and great powers contest and set – but also enforce – new rules and norms.

The report, produced in partnership with Zurich Insurance Group and Marsh McLennan, draws on the views of over 1,400 global risks experts, policy-makers and industry leaders surveyed in September 2023. Results highlight a predominantly negative outlook for the world in the short term that is expected to worsen over the long term. While 30% of global experts expect an elevated chance of global catastrophes in the next two years, nearly two thirds expect this in the next 10 years.

“An unstable global order characterized by polarizing narratives and insecurity, the worsening impacts of extreme weather and economic uncertainty are causing accelerating risks – including misinformation and disinformation – to propagate,” said Saadia Zahidi, Managing Director, World Economic Forum. “World leaders must come together to address short-term crises as well as lay the groundwork for a more resilient, sustainable, inclusive future.” 

Rise of disinformation and conflict

Concerns over a persistent cost-of-living crisis and the intertwined risks of AI-driven misinformation and disinformation, and societal polarization dominated the risks outlook for 2024. The nexus between falsified information and societal unrest will take centre stage amid elections in several major economies that are set to take place in the next two years. Interstate armed conflict is a top five concern over the next two years. With several live conflicts under way, underlying geopolitical tensions and corroding societal resilience risk creating conflict contagion.

Economic uncertainty and development in decline

The coming years will be marked by persistent economic uncertainty and growing economic and technological divides. Lack of economic opportunity is ranked sixth among risks over the next two years. Over the longer term, barriers to economic mobility could build, locking out large segments of the population from economic opportunities. Conflict-prone or climate-vulnerable countries may increasingly be isolated from investment, technologies and related job creation. In the absence of pathways to safe and secure livelihoods, individuals may be more prone to crime, militarization or radicalization.

Planet in peril


Environmental risks continue to dominate the risks landscape over all timeframes. Two-thirds of global experts are worried about extreme weather events in 2024. Extreme weather, critical change to Earth systems, biodiversity loss and ecosystem collapse, natural resource shortages and pollution represent five of the top 10 most severe risks perceived to be faced over the next decade. However, expert respondents disagreed on the urgency of the risks posed – private sector respondents believe that most environmental risks will materialize over a longer timeframe than civil society or government respondents do, pointing to the growing risk of passing a point of no return.

Responding to risks


The report calls on leaders to rethink action to address global risks. The report recommends focusing global cooperation on rapidly building guardrails for the most disruptive emerging risks, such as agreements addressing the integration of AI in conflict decision-making. However, the report also explores other types of action that need not be exclusively dependent on cross-border cooperation, such as shoring up individual and state resilience through digital literacy campaigns on misinformation and disinformation, or fostering greater research and development on climate modelling and technologies with the potential to speed up the energy transition, with both public and private sectors playing a role.

Carolina Klint, Chief Commercial Officer, Europe, Marsh McLennan, said: “Artificial intelligence breakthroughs will radically disrupt the risk outlook for organizations with many struggling to react to threats arising from misinformation, disintermediation and strategic miscalculation. At the same time, companies are having to negotiate supply chains made more complex by geopolitics and climate change and cyber threats from a growing number of malicious actors. It will take a relentless focus to build resilience at organizational, country and international levels – and greater cooperation between the public and private sectors – to navigate this rapidly evolving risk landscape.”

John Scott, Head of Sustainability Risk, Zurich Insurance Group, said: “The world is undergoing significant structural transformations with AI, climate change, geopolitical shifts and demographic transitions. Ninety-one per cent of risk experts surveyed express pessimism over the 10-year horizon. Known risks are intensifying and new risks are emerging – but they also provide opportunities. Collective and coordinated cross-border actions play their part, but localized strategies are critical for reducing the impact of global risks. The individual actions of citizens, countries and companies can move the needle on global risk reduction, contributing to a brighter, safer world.”

About the Global Risks Initiative


The Global Risks Report is a key pillar of the Forum’s Global Risks Initiative, which works to raise awareness and build consensus on the risks the world faces, to enable learning on risk preparedness and resilience. The Global Risks Consortium, a group of business, government and academic leaders, plays a critical role in translating risk foresight into ideas for proactive action and supporting leaders with the knowledge and tools to navigate emerging crises and shape a more stable, resilient world.