A Pathway To Trusted AI

Artificial Intelligence (AI) has been infiltrating our lives for decades, but since the public launch of ChatGPT showcased generative AI in 2022, society has faced an unprecedented pace of technological change.

With digital technology already a constant part of our lives, AI has the potential to alter the way we live, work, and play – but exponentially faster than conventional computers have. With AI comes staggering possibilities for both advancement and threat.

The AI industry creates unique and dangerous opportunities and challenges. AI can do amazing things humans can’t, but in many situations, a phenomenon known as the black box problem, experts cannot explain why a particular decision was made or where a piece of information came from. These outcomes can sometimes be inaccurate because of flawed data, bad decisions or infamous AI hallucinations. There is little regulation or guidance for software, and effectively none for AI.

How do researchers find a way to build and deploy valuable, trusted AI when there are so many concerns about the technology’s reliability, accuracy and security?

That was the subject of a recent C.D. Howe Institute conference. In my keynote address, I commented that it all comes down to software. Software is already deeply intertwined in our lives, from health, banking, and communications to transportation and entertainment. Along with its benefits comes huge potential for disruption of and tampering with societal structures: power grids, airports, hospital systems, private data, trusted sources of information, and more.

Consumers might not incur great consequences if a shopping application goes awry, but our transportation, financial or medical transactions demand rock-solid technology.

The good news is that experts have the knowledge and expertise to build reliable, secure, high-quality software, as demonstrated across Class A medical devices, airplanes, surgical robots, and more. The bad news is that this is rarely standard practice.

As a society, we have often tolerated compromised software for the sake of convenience. We trade privacy, security, and reliability for ease of use and corporate profitability. We have come to view software crashes, identity theft, cybersecurity breaches and the spread of misinformation as everyday occurrences. We are so used to these trade-offs with software that most users don’t even realize that reliable, secure solutions are possible.

With the expected potential of AI, creating trusted technology becomes ever more crucial. Allowing unverifiable AI in our frameworks is akin to building skyscrapers on silt. Security and functionality by design trump whack-a-mole retrofitting. Data must be accurate, protected, and used in the way it’s intended.

Striking a balance between security, quality, functionality, and profit is a complex dance. The BlackBerry phone, for example, set a standard for secure, trusted devices. Data was kept private, activities and information were secure, and operations were never hacked. Devices were used and trusted by prime ministers, CEOs and presidents worldwide. The security features it pioneered live on and are widely used in the devices that outcompeted BlackBerry.

Innovators have the know-how and expertise to create quality products. But often the drive for profits takes precedence over painstaking design. In the AI universe, however, where issues of data privacy, inaccuracies, generation of harmful content and exposure of vulnerabilities have far-reaching effects, trust is easily lost.

So, how do we build and maintain trust? Educating end-users and leaders is an excellent place to start. They need to be informed enough to demand better, and corporations need to strike a balance between caution and innovation.

Companies can build trust through strong adherence to safe software practices, education in AI evolution and adherence to evolving regulations. Governments and corporate leaders can keep abreast of how other organizations and countries are enacting policies that support technological evolution, instituting accreditation, and creating financial incentives that support best practices. Across the globe, countries and regions are already developing strategies and laws to encourage responsible use of AI.

Recent years have seen the creation of codes of conduct and regulatory initiatives such as:

  • Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, September 2023, signed by AI powerhouses such as the Vector Institute, Mila-Quebec Artificial Intelligence Institute and the Alberta Machine Intelligence Institute;
  • The Bletchley Declaration, November 2023, an international agreement to cooperate on the development of safe AI, signed by 28 countries;
  • US President Biden’s 2023 executive order on the safe, secure and trustworthy development and use of AI; and
  • Governing AI for Humanity, UN Advisory Body Report, September 2024.

We have the expertise to build solid foundations for AI. It’s now up to leaders and corporations to ensure that much-needed practices, guidelines, policies and regulations are in place and followed. It is also up to end-users to demand quality and accountability. 

Now is the time to take steps to mitigate AI’s potential perils so we can build the trust that is needed to harness AI’s extraordinary potential. For the Silo, Charles Eagan. Charles Eagan is the former CTO of BlackBerry and a technical advisor to AIE Inc.

Feds’ False News Checker Tool To Use AI, At Risk Of Language & Political Bias

Ottawa-Funded Misinformation Detection Tool to Rely on Artificial Intelligence

Canadian Heritage Minister Pascale St-Onge speaks to reporters on Parliament Hill after Bell Media announces job cuts, in Ottawa on Feb. 8, 2024. (The Canadian Press/Patrick Doyle)

A new federally funded tool being developed with the aim of helping Canadians detect online misinformation will rely on artificial intelligence (AI), Ottawa has announced.

Heritage Minister Pascale St-Onge said on July 29 that Ottawa is providing almost $300,000 CAD to researchers at Université de Montréal (UdeM) to develop the tool.

“Polls confirm that most Canadians are very concerned about the rise of mis- and disinformation,” St-Onge wrote on social media. “We’re fighting for Canadians to get the facts” by supporting the university’s independent project, she added.

Canadian Heritage says the project will develop a website and web browser extension dedicated to detecting misinformation.

The department says the project will use large AI language models capable of detecting misinformation across different languages, in various formats such as text or video, and within different sources of information.

“This technology will help implement effective behavioral nudges to mitigate the proliferation of ‘fake news’ stories in online communities,” says Canadian Heritage.

With the browser extension, users will be notified if they come across potential misinformation, which the department says will reduce the likelihood of the content being shared.

Project lead and UdeM professor Jean-François Godbout said in an email that the tool will rely mostly on AI-based systems such as OpenAI’s ChatGPT.

“The system mostly uses a large language model, such as ChatGPT, to verify the validity of a proposition or a statement by relying on its corpus (the data used for its training),” Godbout wrote in French.

The political science professor added the system will also be able to consult “distinct and reliable external sources.” After considering all the information, the system will produce an evaluation to determine whether the content is true or false, he said, while qualifying its degree of certainty.

Godbout said the reasoning for the decision will be provided to the user, along with the references that were relied upon, and that in some cases the system could say there’s insufficient information to make a judgment.
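
Godbout’s description amounts to a simple claim-verification loop: pose the statement to a model, ask for a verdict with a confidence level, reasoning and references, and allow an “insufficient information” escape hatch. As a rough illustration only, and not the UdeM team’s actual code, a minimal Python sketch against OpenAI’s chat API could look like the following; the model name, prompt wording and output schema are all assumptions.

```python
# A minimal, hypothetical sketch of the kind of LLM-based claim check
# Godbout describes: the model judges a statement against its training
# corpus and returns a verdict, a confidence level, its reasoning, and
# references, or declines when the evidence is insufficient.
# Model name, prompt wording, and JSON schema are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def verify_claim(statement: str) -> dict:
    """Ask the model to fact-check one statement and return a structured verdict."""
    prompt = (
        "Assess whether the following statement is true, false, or cannot be "
        "determined. Reply in JSON with the keys: verdict ('true', 'false', or "
        "'insufficient_information'), confidence (a number from 0 to 1), "
        "reasoning (a short explanation), and references (a list of sources "
        "you relied on).\n\n"
        f"Statement: {statement}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # request machine-readable output
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    result = verify_claim("The Bletchley Declaration was signed by 28 countries.")
    print(result["verdict"], result["confidence"])
    print(result["reasoning"])
```

A production system would, as Godbout notes, also consult distinct external sources and weigh them alongside the model’s own corpus; the sketch above covers only the model-against-its-training-data step.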

Asked about concerns that the detection model could be tainted by AI shortcomings such as bias, Godbout said his previous research has demonstrated his sources are “not significantly ideologically biased.”

“That said, our system should rely on a variety of sources, and we continue to explore working with diversified and balanced sources,” he said. “We realize that generative AI models have their limits, but we believe they can be used to help Canadians obtain better information.”

The professor said that the fundamental research behind the project was conducted before receiving the federal grant, which only supports the development of a web application.

Bias Concerns

The reliance on AI to determine what is true or false could have some pitfalls, with large language models being criticized for having political biases.

Such concerns about the neutrality of AI have been raised by billionaire Elon Musk, who owns X and its AI chatbot Grok.

British and Brazilian researchers from the University of East Anglia published a study in January that sought to measure ChatGPT’s political bias.

“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” they wrote. Researchers said there are real concerns that ChatGPT and other large language models in general can “extend or even amplify the existing challenges involving political processes posed by the Internet and social media.”

OpenAI says ChatGPT is “not free from biases and stereotypes, so users and educators should carefully review its content.”

Misinformation and Disinformation

The federal government’s initiatives to tackle misinformation and disinformation have been multifaceted.

The funds provided to the Université de Montréal are part of a larger program to shape online information, the Digital Citizen Initiative. The program supports researchers and civil society organizations that promote a “healthy information ecosystem,” according to Canadian Heritage.

The Liberal government has also passed major bills, such as C-11 and C-18, which impact the information environment.

Bill C-11 has revamped the Broadcasting Act, creating rules for the production and discoverability of Canadian content and giving increased regulatory powers to the CRTC over online content.

Bill C-18 created the obligation for large online platforms to share revenues with news organizations for the display of links. This legislation was promoted by then-Heritage Minister Pablo Rodriguez as a tool to strengthen news media in a “time of greater mistrust and disinformation.”

These two pieces of legislation were followed by Bill C-63 in February to enact the Online Harms Act. Along with seeking to better protect children online, it would create steep penalties for saying things deemed hateful on the web.

There is some confusion about what the latest initiative with UdeM specifically targets. Canadian Heritage says the project aims to counter misinformation, whereas the university says it’s aimed at disinformation. The two concepts are often used in the same sentence when officials signal an intent to crack down on content they deem inappropriate, but a key characteristic distinguishes the two.

The Canadian Centre for Cyber Security defines misinformation as “false information that is not intended to cause harm”—which means it could have been posted inadvertently.

Meanwhile, the Centre defines disinformation as being “intended to manipulate, cause damage and guide people, organizations and countries in the wrong direction.” It can be crafted by sophisticated foreign state actors seeking to gain politically.

Minister St-Onge’s office had not responded to a request for clarification as of this post’s publication.

In describing its project to counter disinformation, UdeM said events like the Jan. 6 Capitol breach, the Brexit referendum, and the COVID-19 pandemic have “demonstrated the limits of current methods to detect fake news, which have trouble following the volume and rapid evolution of disinformation.” For the Silo, Noe Chartier/The Epoch Times.

The Canadian Press contributed to this report.

Disinformation Tops Global Risks 2024 

  • Misinformation and disinformation are biggest short-term risks, while extreme weather and critical change to Earth systems are greatest long-term concern, according to Global Risks Report 2024.
  • Two-thirds of global experts anticipate a multipolar or fragmented order to take shape over the next decade.
  • Report warns that cooperation on urgent global issues could be in short supply, requiring new approaches and solutions.

Geneva, Switzerland, January 2024 – Drawing on nearly two decades of original risk perception data, the World Economic Forum’s Global Risks Report 2024 warns of a global risks landscape in which progress in human development is being chipped away slowly, leaving states and individuals vulnerable to new and resurgent risks. Against a backdrop of systemic shifts in global power dynamics, climate, technology and demographics, global risks are stretching the world’s adaptive capacity to its limit.

These are the findings of the Global Risks Report 2024, released today, which argues that cooperation on urgent global issues could be in increasingly short supply, requiring new approaches to addressing risks. Two-thirds of global experts anticipate a multipolar or fragmented order to take shape over the next decade, in which middle and great powers contest and set – but also enforce – new rules and norms.

The report, produced in partnership with Zurich Insurance Group and Marsh McLennan, draws on the views of over 1,400 global risks experts, policy-makers and industry leaders surveyed in September 2023. Results highlight a predominantly negative outlook for the world in the short term that is expected to worsen over the long term. While 30% of global experts expect an elevated chance of global catastrophes in the next two years, nearly two-thirds expect this in the next 10 years.

“An unstable global order characterized by polarizing narratives and insecurity, the worsening impacts of extreme weather and economic uncertainty are causing accelerating risks – including misinformation and disinformation – to propagate,” said Saadia Zahidi, Managing Director, World Economic Forum. “World leaders must come together to address short-term crises as well as lay the groundwork for a more resilient, sustainable, inclusive future.” 

Rise of disinformation and conflict

Concerns over a persistent cost-of-living crisis and the intertwined risks of AI-driven misinformation and disinformation, and societal polarization dominated the risks outlook for 2024. The nexus between falsified information and societal unrest will take centre stage amid elections in several major economies that are set to take place in the next two years. Interstate armed conflict is a top five concern over the next two years. With several live conflicts under way, underlying geopolitical tensions and corroding societal resilience risk creating conflict contagion.

Economic uncertainty and development in decline

The coming years will be marked by persistent economic uncertainty and growing economic and technological divides. Lack of economic opportunity is ranked sixth among risks over the next two years. Over the longer term, barriers to economic mobility could build, locking out large segments of the population from economic opportunities. Conflict-prone or climate-vulnerable countries may increasingly be isolated from investment, technologies and related job creation. In the absence of pathways to safe and secure livelihoods, individuals may be more prone to crime, militarization or radicalization.

Planet in peril

Environmental risks continue to dominate the risks landscape over all timeframes. Two-thirds of global experts are worried about extreme weather events in 2024. Extreme weather, critical change to Earth systems, biodiversity loss and ecosystem collapse, natural resource shortages and pollution represent five of the top 10 most severe risks perceived to be faced over the next decade. However, expert respondents disagreed on the urgency of the risks posed: private sector respondents believe that most environmental risks will materialize over a longer timeframe than civil society or government respondents do, pointing to the growing risk of passing a point of no return.

Responding to risks

The report calls on leaders to rethink action to address global risks. The report recommends focusing global cooperation on rapidly building guardrails for the most disruptive emerging risks, such as agreements addressing the integration of AI in conflict decision-making. However, the report also explores other types of action that need not be exclusively dependent on cross-border cooperation, such as shoring up individual and state resilience through digital literacy campaigns on misinformation and disinformation, or fostering greater research and development on climate modelling and technologies with the potential to speed up the energy transition, with both public and private sectors playing a role.

Carolina Klint, Chief Commercial Officer, Europe, Marsh McLennan, said: “Artificial intelligence breakthroughs will radically disrupt the risk outlook for organizations with many struggling to react to threats arising from misinformation, disintermediation and strategic miscalculation. At the same time, companies are having to negotiate supply chains made more complex by geopolitics and climate change and cyber threats from a growing number of malicious actors. It will take a relentless focus to build resilience at organizational, country and international levels – and greater cooperation between the public and private sectors – to navigate this rapidly evolving risk landscape.”

John Scott, Head of Sustainability Risk, Zurich Insurance Group, said: “The world is undergoing significant structural transformations with AI, climate change, geopolitical shifts and demographic transitions. Ninety-one per cent of risk experts surveyed express pessimism over the 10-year horizon. Known risks are intensifying and new risks are emerging – but they also provide opportunities. Collective and coordinated cross-border actions play their part, but localized strategies are critical for reducing the impact of global risks. The individual actions of citizens, countries and companies can move the needle on global risk reduction, contributing to a brighter, safer world.”

About the Global Risks Initiative

The Global Risks Report is a key pillar of the Forum’s Global Risks Initiative, which works to raise awareness and build consensus on the risks the world faces, to enable learning on risk preparedness and resilience. The Global Risks Consortium, a group of business, government and academic leaders, plays a critical role in translating risk foresight into ideas for proactive action and supporting leaders with the knowledge and tools to navigate emerging crises and shape a more stable, resilient world.