
A Pathway To Trusted AI

Artificial Intelligence (AI) has been part of our lives for decades, but since the public launch of ChatGPT showcased generative AI in 2022, society has faced technological change at an unprecedented pace.

With digital technology already a constant part of our lives, AI has the potential to alter the way we live, work, and play – and to do so far faster than conventional computing ever did. With AI comes staggering possibilities for both advancement and threat.

The AI industry creates unique and dangerous opportunities and challenges. AI can do amazing things humans can’t, but in many situations experts cannot explain why a system produced a particular decision or piece of information, a limitation known as the black box problem. These outcomes can sometimes be inaccurate because of flawed data, bad decisions or the now-infamous AI hallucinations. There is little regulation or guidance in software, and effectively none in AI.

How do researchers find a way to build and deploy valuable, trusted AI when there are so many concerns about the technology’s reliability, accuracy and security?

That was the subject of a recent C.D. Howe Institute conference. In my keynote address, I commented that it all comes down to software. Software is already deeply intertwined in our lives, from health, banking, and communications to transportation and entertainment. Along with its benefits, there is huge potential for disruption and tampering across societal structures: power grids, airports, hospital systems, private data, trusted sources of information, and more.

Consumers might not incur great consequences if a shopping application goes awry, but our transportation, financial or medical transactions demand rock-solid technology.

The good news is that experts have the knowledge and expertise to build reliable, secure, high-quality software, as demonstrated across Class A medical devices, airplanes, surgical robots, and more. The bad news is this is rarely standard practice. 

As a society, we have often tolerated compromised software for the sake of convenience. We trade privacy, security, and reliability for ease of use and corporate profitability. We have come to view software crashes, identity theft, cybersecurity breaches and the spread of misinformation as everyday occurrences. We are so used to these trade-offs with software that most users don’t even realize that reliable, secure solutions are possible.

With the expected potential of AI, creating trusted technology becomes ever more crucial. Allowing unverifiable AI in our frameworks is akin to building skyscrapers on silt. Security and functionality by design trump whack-a-mole retrofitting. Data must be accurate, protected, and used in the way it’s intended.

Striking a balance between security, quality, functionality, and profit is a complex dance. The BlackBerry phone, for example, set a standard for secure, trusted devices. Data was kept private, activities and information were secure, and operations were never hacked. Devices were used and trusted by prime ministers, CEOs and presidents worldwide. The security features it pioneered live on and are widely used in the devices that outcompeted BlackBerry.

Innovators have the know-how and expertise to create quality products. But often the drive for profits takes precedence over painstaking design. In the AI universe, however, where issues of data privacy, inaccuracies, generation of harmful content and exposure of vulnerabilities have far-reaching effects, trust is easily lost.

So, how do we build and maintain trust? Educating end-users and leaders is an excellent place to start. They need to be informed enough to demand better, and corporations need to strike a balance between caution and innovation.

Companies can build trust through strong adherence to safe software practices, education in AI evolution and compliance with evolving regulations. Governments and corporate leaders can keep abreast of how other organizations and countries are enacting policies that support technological evolution, and can institute accreditation and financial incentives that support best practices. Across the globe, countries and regions are already developing strategies and laws to encourage responsible use of AI.

Recent years have seen the creation of codes of conduct and regulatory initiatives such as:

  • Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, September 2023, signed by AI powerhouses such as the Vector Institute, Mila-Quebec Artificial Intelligence Institute and the Alberta Machine Intelligence Institute;
  • The Bletchley Declaration, November 2023, an international agreement to cooperate on the development of safe AI, signed by 28 countries;
  • US President Biden’s 2023 executive order on the safe, secure and trustworthy development and use of AI; and
  • Governing AI for Humanity, UN Advisory Body Report, September 2024.

We have the expertise to build solid foundations for AI. It’s now up to leaders and corporations to ensure that much-needed practices, guidelines, policies and regulations are in place and followed. It is also up to end-users to demand quality and accountability. 

Now is the time to take steps to mitigate AI’s potential perils so we can build the trust that is needed to harness AI’s extraordinary potential. For the Silo, Charles Eagan. Charles Eagan is the former CTO of BlackBerry and a technical advisor to AIE Inc.

OPED: Made by Human: The Threat of Artificial Intelligence to Human Labor

This Article is 95.6% Made by Human / 4.4% by Artificial Intelligence

One of the most concerning uncertainties surrounding the emergence of artificial intelligence is the impact on human jobs.

100% Satisfaction Guarantee

Let us start with a specific example – the customer support specialist. This is a human-facing role whose primary objective is to ensure customer satisfaction.

The Gradual Extinction of Customer Support Roles

Within the past decade or so, several milestone transformations have influenced the decline of customer support specialists. Automated responses for customer support telephone lines. Globalization. And chat-bots. 

Chat-bots evolved with the human input of information to service clients. SaaS-based products soon engineered fancy pop-ups for everyone. Just look at Uber if you want a solid case-study – getting through to a person is like trying to contact the King of Thailand. 

The introduction of new artificial intelligence for customer support solutions will make chat-bots look like an AM/FM frequency radio at the antique market. 

The Raging Battle: A Salute to Those on the Front Lines

There are a handful of professions waging a battle against the ominous presence of artificial intelligence. This is a new frontier – not only for technology, but for legal precedent and our appetite for consumption. 

OpenAI is serving our appetite in two fundamental ways: text-based content (ChatGPT) and visual-based content (DALL·E). How we consume this content boils down to our own taste-buds, perceptions and individual needs. It is all very human-driven, and it is our degree of palpable fulfillment that will ultimately dictate how deeply this reaches into the fate of other professions.

Sarah Silverman, the writer, comedian and actress, has sued ChatGPT developer OpenAI and Mark Zuckerberg’s Meta for copyright infringement.

We need a way to leave a human mark. Literally, a Made by Human insignia that traces origins of our labor, like certifying products as “organic”.

If we’re building the weapon that threatens our very livelihood, we can engineer the solution that safeguards it. 

The Ouroboros Effect

If we seek fair recompense for labor and the preservation of human work, we need to remain ahead of innovation. Several action-items may safeguard human interests:

  • Consolidation of Interest. Concentrate efforts within existing formal structures or establish new ones tailored to this subject;
  • Litigation. Swift legal action based on existing laws to remedy breaches and establish legal precedents for future litigation;
  • Technological Innovation. Cutting-edge technology that: (a) engineers firewalls to prevent AI scraping (a sketch follows this list); (b) analyzes human work products; and (c) permits tracking of intellectual property;
  • Regulatory Oversight. Formation of a robust framework for monitoring, enforcing and balancing critical issues arising from artificial intelligence. United Nations, but without the thick, glacial layers of bureaucracy.  
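
On the technological front, even the firewall idea is prototypable today. Here is a minimal sketch, assuming a Flask web application; the blocklist and policy are illustrative rather than a prescription, and a determined scraper can spoof its User-Agent, so this would complement robots.txt and other defenses rather than replace them.

```python
# Minimal sketch: refuse requests from known AI crawlers by User-Agent.
# Assumes Flask. GPTBot (OpenAI) and CCBot (Common Crawl) are published
# crawler names; the rest of the policy is illustrative.
from flask import Flask, abort, request

app = Flask(__name__)

AI_CRAWLERS = ("GPTBot", "CCBot")  # extend with whatever bots a site refuses

@app.before_request
def block_ai_crawlers():
    user_agent = request.headers.get("User-Agent", "")
    if any(bot in user_agent for bot in AI_CRAWLERS):
        abort(403)  # refuse the request outright

@app.route("/")
def index():
    return "Made by Human"
```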

These front-line professionals are just the first wave – yet if this front falls, it will be a fatal blow to intellectual property rights. We will have denied ourselves the ideological shields and weapons needed to preserve and protect the origins of human creativity.

At present, the influence of artificial intelligence on labor markets is in our own hands. If you think this is circular reasoning, like some ouroboros, you would be correct. The very nature of artificial intelligence relies on humans.

Ouroboros expresses the unity of all things, material and spiritual, which never disappear but perpetually change form in an eternal cycle of destruction and re-creation.

Equitable Remuneration 

Human productivity will continue to blend with artificial intelligence. We need to account for what is of human origin versus what has been interwoven with artificial intelligence. Like royalties for streaming music, with the notes of your original melody plucked out. Even if it’s mashed-up, Mixed by Berry and sold overseas.

These would be complex algorithms, but the technology exists. It runs along the same lines of code that empower artificial intelligence. Consider a brief example:

A 16-year-old boy named Olu decides to write a book about growing up in a war-torn nation.

 Congratulations on your work, Olu! 

47.893% Human / 52.107% Artificial

Meanwhile, back in London, a 57-year-old historian named Elizabeth receives an email:

 Congratulations Elizabeth, your work has been recycled! 

34.546% of your writing on the civil-war-torn nation has been used in an upcoming book publication. Click here to learn more.
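
The percentages in this example are fictional, but even a crude version of such accounting is easy to prototype. Below is a minimal sketch, assuming word n-gram overlap as a stand-in for real provenance tracking; an actual attribution system would need far more robust matching (paraphrase detection, embeddings, provenance metadata) than raw n-grams.

```python
# Minimal sketch: estimate how much of a new work overlaps a source text
# using word n-gram overlap. Purely illustrative, not a real provenance
# system.
def ngrams(text: str, n: int = 4) -> set:
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reuse_share(source: str, new_work: str, n: int = 4) -> float:
    """Fraction of the new work's n-grams that also appear in the source."""
    src, new = ngrams(source, n), ngrams(new_work, n)
    return len(new & src) / len(new) if new else 0.0

# Toy stand-ins for Elizabeth's history and Olu's manuscript.
source = "growing up in a war torn nation was a daily test of courage"
new_work = "this book recounts growing up in a war torn nation with hope"
print(f"{reuse_share(source, new_work):.1%} of the new work overlaps the source")
```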

We need a framework that preserves and protects sweat-of-the-brow labor. 

As those on the front-line know: Progress begets progress while flying under the banner of innovation. If we’re going to spill blood to save our income streams – from content writers and hand models to lawyers and software engineers – the fruit of our labor cannot be genetically modified without equitable remuneration. 

Feds’ False News Checker Tool To Use AI – At Risk Of Language & Political Bias

Ottawa-Funded Misinformation Detection Tool to Rely on Artificial Intelligence

Canadian Heritage Minister Pascale St-Onge speaks to reporters on Parliament Hill after Bell Media announces job cuts, in Ottawa on Feb. 8, 2024. (The Canadian Press/Patrick Doyle)

A new federally funded tool being developed with the aim of helping Canadians detect online misinformation will rely on artificial intelligence (AI), Ottawa has announced.

Heritage Minister Pascale St-Onge said on July 29 that Ottawa is providing almost $300,000 CAD to researchers at Université de Montréal (UdeM) to develop the tool.

“Polls confirm that most Canadians are very concerned about the rise of mis- and disinformation,” St-Onge wrote on social media. “We’re fighting for Canadians to get the facts” by supporting the university’s independent project, she added.

Canadian Heritage says the project will develop a website and web browser extension dedicated to detecting misinformation.

The department says the project will use large AI language models capable of detecting misinformation across different languages in various formats such as text or video, and contained within different sources of information.

“This technology will help implement effective behavioral nudges to mitigate the proliferation of ‘fake news’ stories in online communities,” says Canadian Heritage.


With the browser extension, users will be notified if they come across potential misinformation, which the department says will reduce the likelihood of the content being shared.

Project lead and UdeM professor Jean-François Godbout said in an email that the tool will rely mostly on AI-based systems such as OpenAI’s ChatGPT.

“The system uses mostly a large language model, such as ChatGPT, to verify the validity of a proposition or a statement by relying on its corpus (the data which served for its training),” Godbout wrote in French.

The political science professor added the system will also be able to consult “distinct and reliable external sources.” After considering all the information, the system will produce an evaluation to determine whether the content is true or false, he said, while qualifying its degree of certainty.

Godbout said the reasoning for the decision will be provided to the user, along with the references that were relied upon, and that in some cases the system could say there’s insufficient information to make a judgment.
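
As a rough illustration of the architecture Godbout describes, the core loop can be as simple as asking a model for a verdict, a degree of certainty, reasoning and references. This is my own sketch against OpenAI’s public Python client, not the UdeM team’s code; the model name and prompt are assumptions.

```python
# Minimal sketch of LLM-assisted claim checking in the spirit Godbout
# describes: verdict, degree of certainty, reasoning and references.
# Illustrative only, not the UdeM project's actual code.
# Assumes the OPENAI_API_KEY environment variable is set.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are a fact-checking assistant. Assess the user's claim and reply "
    "in JSON with keys 'verdict' ('true', 'false' or 'insufficient "
    "evidence'), 'confidence' (0 to 1), 'reasoning' and 'references'."
)

def check_claim(claim: str) -> dict:
    """Ask the model to evaluate a single proposition."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": claim},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

result = check_claim("Ottawa is funding a misinformation detection tool.")
print(result["verdict"], result["confidence"])
```

A production version would also consult the “distinct and reliable external sources” Godbout mentions, retrieving vetted references rather than relying on the model’s training corpus alone.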

Asked about concerns that the detection model could be tainted by AI shortcomings such as bias, Godbout said his previous research has demonstrated his sources are “not significantly ideologically biased.”

“That said, our system should rely on a variety of sources, and we continue to explore working with diversified and balanced sources,” he said. “We realize that generative AI models have their limits, but we believe they can be used to help Canadians obtain better information.”

The professor said that the fundamental research behind the project was conducted before receiving the federal grant, which only supports the development of a web application.

Bias Concerns

The reliance on AI to determine what is true or false could have some pitfalls, with large language models being criticized for having political biases.

Such concerns about the neutrality of AI have been raised by billionaire Elon Musk, who owns X and its AI chatbot Grok.

British and Brazilian researchers from the University of East Anglia published a study in January that sought to measure ChatGPT’s political bias.

“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” they wrote. Researchers said there are real concerns that ChatGPT and other large language models in general can “extend or even amplify the existing challenges involving political processes posed by the Internet and social media.”

OpenAI says ChatGPT is “not free from biases and stereotypes, so users and educators should carefully review its content.”

Misinformation and Disinformation

The federal government’s initiatives to tackle misinformation and disinformation have been multifaceted.

The funds provided to the Université de Montréal are part of a larger program to shape online information, the Digital Citizen Initiative. The program supports researchers and civil society organizations that promote a “healthy information ecosystem,” according to Canadian Heritage.

The Liberal government has also passed major bills, such as C-11 and C-18, which impact the information environment.

Bill C-11 has revamped the Broadcasting Act, creating rules for the production and discoverability of Canadian content and giving increased regulatory powers to the CRTC over online content.

Bill C-18 created the obligation for large online platforms to share revenues with news organizations for the display of links. This legislation was promoted by then-Heritage Minister Pablo Rodriguez as a tool to strengthen news media in a “time of greater mistrust and disinformation.”

These two pieces of legislation were followed by Bill C-63 in February to enact the Online Harms Act. Along with seeking to better protect children online, it would create steep penalties for saying things deemed hateful on the web.

There is some confusion about what the latest initiative with UdeM specifically targets. Canadian Heritage says the project aims to counter misinformation, whereas the university says it’s aimed at disinformation. The two concepts are often used in the same sentence when officials signal an intent to crack down on content they deem inappropriate, but a key characteristic distinguishes the two.

The Canadian Centre for Cyber Security defines misinformation as “false information that is not intended to cause harm”—which means it could have been posted inadvertently.

Meanwhile, the Centre defines disinformation as being “intended to manipulate, cause damage and guide people, organizations and countries in the wrong direction.” It can be crafted by sophisticated foreign state actors seeking to gain politically.

Minister St-Onge’s office has not responded to a request for clarification as of this post’s publication.

In describing its project to counter disinformation, UdeM said events like the Jan. 6 Capitol breach, the Brexit referendum, and the COVID-19 pandemic have “demonstrated the limits of current methods to detect fake news which have trouble following the volume and rapid evolution of disinformation.” For the Silo, Noe Chartier/ The Epoch Times.

The Canadian Press contributed to this report.

AI Aggregates, But Dyslexia Innovates

The rise of AI is truly remarkable. It is transforming the way we work, live, and interact with each other, and so many other touchpoints of our lives. However, while AI aggregates, dyslexic thinking skills innovate. If used in the right way, AI could be the perfect co-pilot for dyslexics to really move the world forward. In light of this, Virgin and Made By Dyslexia have launched a brilliant campaign to show what is possible if AI and dyslexic thinking come together. The film below says it all.

As the film shows, AI can’t replace the soft skills that index high in dyslexics – such as innovating, lateral thinking, complex problem solving, and communicating.

If you ask AI for advice on how to scale a brand that owns a record company, it offers valuable insights, but the solution lacks creative instinct and spontaneous decision making. If I hadn’t relied on my intuition, lateral thinking and willingness to take a risk, I would never have jumped from scaling a record company to launching an airline – the move that scaled Virgin into the brand it is today.

Together, dyslexic thinkers and AI are an unstoppable force, so it’s great to see that 72% of dyslexics see AI tools (like ChatGPT) as a vital starting point for their projects and ideas – according to new research by Made By Dyslexia and Randstad Enterprise. With help from AI, dyslexics have limitless power to change the world, but we need everyone to welcome our dyslexic minds. If businesses fail to do this, they risk being left behind. As the Value of Dyslexia report highlighted, dyslexic skillsets will mirror the World Economic Forum’s future skills needs by 2025. Given the speed at which technology and AI have progressed, this cross-over has arrived two years earlier than predicted.


With all of this in mind, it’s concerning to see a big difference between how HR departments think they understand and support dyslexia in the workplace, versus the experience of dyslexic people themselves.

The new research also shows that 66% of HR professionals believe they have support structures in place for dyslexia, yet only 16% of dyslexics feel supported in the workplace. It’s even sadder to see that only 14% of dyslexic employees believe their workplace understands the value of dyslexic thinking. There is clearly work to be done here.

To empower dyslexic thinking in the workplace (which has the two-fold benefit of bringing out the best in your people and in your business), you need to understand dyslexic thinking skills. To help with this, Made By Dyslexia is launching a workplace training course later this year on LinkedIn Learning – and you can sign up for it now. The course will be free to access, and I’m delighted that Virgin companies from all across the world have signed up for it – from Virgin Australia, to Virgin Active Singapore, to Virgin Plus Canada and Virgin Voyages. It’s such an insightful course, designed by experts at Made By Dyslexia to educate people on how to understand, support, and empower dyslexic thinking in the workplace, and make sure businesses are ready for the future.


It’s always inspiring to see how Made By Dyslexia empowers dyslexics, and shows the world the limitless power of dyslexic thinking. If businesses can harness this power, and if dyslexics can harness the power of AI – we can really drive the future forward. Richard Branson, Founder at Virgin Group.

Can C3.AI Stock Keep Rallying with AI in the Spotlight?

The recent rise of artificial intelligence (AI) programs such as ChatGPT has created a frenzy around AI-related stocks.


C3.AI, a pure play AI stock, is up over 100% since late December.

But is this rally sustainable? After all, the public was already surrounded by AI without realizing it. Almost everything people use in daily life is affected by AI already: 

  • advertising
  • entertainment streaming services
  • social media
  • cars (collision detection and blind spot monitoring)
  • fraud prevention
  • screening job applicants
  • email spam filters
  • many other applications

C3.AI is a company that creates software to help other companies deploy AI projects. C3 software is being used in multiple ways, including managing inventories, monitoring for energy inefficiencies, and predicting system failures. [Of particular note is one new product from C3 called Ex Machina, which allows users to build AI initiatives without writing any code, via a series of visual programming tools. CP]

AI stocks, and technology stocks as a whole, were a neglected market in 2022. The Nasdaq 100, an index heavy in technology stocks, fell more than 30% in 2022. C3.AI fell over 65% in 2022, and is currently down almost 90% from its 2020 high (even after the 100% rally in 2023). All currency quotes that follow are in USD.

C3.AI recently peaked at $30.92 on February 6. It then fell to a low of $20.31 on March 1 before rallying back to $29.98. It has since fallen and is back near the $20.31 low.

This puts the stock at a crucial level.

An analyst from SafeTradeBinaryOptions.com had this input: “Right now, the stock is in an uptrend, albeit a precarious one. The price has been making higher swing lows and higher swing highs throughout 2023. But if the price drops much below $20, that will no longer be the case. The price will have made a lower high on March 6 (compared to February 6) and if the price drops below the March 2 low, that is a lower low. These are signs of a downtrend starting — not an uptrend.”
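
The analyst’s definition is mechanical enough to express in a few lines. Here is a minimal sketch of detecting higher swing highs and higher swing lows; the price series is illustrative, not actual quotes.

```python
# Minimal sketch of the swing-structure logic in the quote above: an
# uptrend is intact while the series keeps printing higher swing highs
# and higher swing lows.

def swing_points(prices, window=1):
    """Return (highs, lows) as lists of (index, price) local extremes."""
    highs, lows = [], []
    for i in range(window, len(prices) - window):
        neighbours = prices[i - window:i] + prices[i + 1:i + 1 + window]
        if prices[i] > max(neighbours):
            highs.append((i, prices[i]))
        elif prices[i] < min(neighbours):
            lows.append((i, prices[i]))
    return highs, lows

def is_uptrend(prices):
    """True while the last two swing highs and swing lows both ascend."""
    highs, lows = swing_points(prices)
    if len(highs) < 2 or len(lows) < 2:
        return False
    return highs[-1][1] > highs[-2][1] and lows[-1][1] > lows[-2][1]

# A toy series ending in a lower high, like the pattern the analyst
# warns about: the trend test fails once the structure breaks.
prices = [20.31, 25.00, 23.00, 29.98, 26.00, 27.50, 24.50]
print(is_uptrend(prices))  # False: 27.50 is a lower high than 29.98
```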


This $20 region is important because if the area holds, it indicates the price is moving in a range, with the possibility of the price moving back up to the top of the range near $29. If that happens, there is still hope that the price will eventually break out of the range to the upside, continuing its advance to $40, for example.

However, if the price drops below the $20 region, the range is broken and the uptrend is in jeopardy. 

It’s important to watch C3.AI to see how investors are perceiving the future of AI, and what that may mean for the industry’s future. 

As of March 2023, C3 doesn’t have a lot of direct competition. The company is not yet even profitable. How the stock moves is based on whether investors believe the company can eventually generate profits — and in this case, its profits largely depend on whether AI becomes even more widespread than it already is. For the Silo, Kat Fleischman.