One of only two McLaren F1 road cars finished in the striking Marlboro White exterior
Attractive interior configuration with light blue Alcantara driver’s seat and dark blue/grey leather and Alcantara passenger seats
Incredibly low mileage with just 1,291 kilometers (802 miles) from new
Unaltered and maintained exclusively by McLaren Special Operations in Woking throughout its life
Received a fuel cell replacement in 2021, followed by comprehensive recent maintenance in November 2024
Documented ownership history from new, beginning with Japanese racing team owner Kazumichi Goh
Complete with original owner’s manuals, fitted luggage set, tool roll, and Facom toolchest
Chassis No. SA9AB5AC6S1048053
The McLaren F1 emerged from what might be the most consequential airport delay in automotive history. In 1988, following the Italian Grand Prix, TAG-McLaren Group executives Ron Dennis and Mansour Ojjeh found themselves stranded at Linate Airport alongside McLaren’s Technical Director Gordon Murray and head of marketing Creighton Brown. Their conversation turned to creating the ultimate road car—not just another supercar, but in Dennis’s words, “…the finest sports car the world had ever seen.”
In May 1992 at Le Sporting Club Monaco, the McLaren F1 redefined the supercar genre upon its unveiling. Built around a carbon fiber monocoque—a world first for a production road car—and powered by a bespoke 6.1-liter BMW Motorsport V12 engine, the F1 delivered 627 horsepower and a power-to-weight ratio of 550 horsepower per ton. Its unique central driving position, gold-lined engine bay, and no-compromise approach to performance and driver engagement set new standards that remain unmatched to this day. Of the 106 examples built across all variants, only 64 were standard road cars, making them the most revered and sought-after supercars of the modern era.
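The quoted figures are easy to sanity-check. A minimal sketch, assuming metric tons (the ~1,138 kg dry weight is the widely cited figure for the F1, not from this listing):

```python
# Sanity check: 627 hp at a ratio of 550 hp per (metric) ton implies
# a weight of about 1.14 tons, i.e. roughly 1,140 kg -- consistent
# with the F1's widely cited dry weight of around 1,138 kg.
power_hp = 627
power_to_weight_hp_per_ton = 550

implied_weight_kg = power_hp / power_to_weight_hp_per_ton * 1000
print(f"Implied weight: {implied_weight_kg:.0f} kg")  # → 1140 kg
```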
This superlative example of the F1, chassis number 053, was ordered on 31 March 1995, and assigned production sequence 044. Assembly began on 28 July 1995, with the car being officially delivered “ex-works” on 27 November 1995, showing just 193 kilometers on the odometer. The original purchaser was Kazumichi Goh, the Japanese businessman behind Team Goh, which would go on to win the All Japan Grand Touring Car Championship (JGTC) in 1996 with a pair of McLaren F1 GTRs sponsored by Philip Morris cigarette brand Lark.
Chassis 053 featured a highly distinctive specification highlighted by its Marlboro White exterior finish—one of only two F1 road cars to wear this color. The cabin was finished with a unique blue-themed interior featuring dark blue/grey leather with pierced blue/grey Alcantara cloth central panels for the passenger seats, while the central driver’s seat was entirely covered in light blue Alcantara. This bespoke specification was completed with blue Wilton carpet, light blue Alcantara headlining, and an optional black suede steering wheel. The car was also delivered with a matching bespoke luggage set in dark grey leather with a blue Alcantara strip carrying the embossed chassis number. Factory driver settings were configured with the steering wheel at height position A (highest), pedals at position C (long), reach at position A (near), clutch foot rest at position D (extra long), and standard seat with extra long rails.
In 2004, chassis 053 was purchased by another Japanese collector. By late 2006, the car showed just 432 kilometers when it was sold by WHA Corporation of Nagoya, Japan, to dealer SPS Automotive Ltd. (Hong Kong) on 28 November. The car subsequently came to Europe in 2007 when it was acquired by dealer Lukas Huni AG in Switzerland on behalf of a European client, with a recorded mileage of 482 kilometers. On 14 March 2014, chassis 053 was sold via Morris & Welford to a collector in the United States. During this ownership, the car spent time in both the U.S. and U.K., and the mileage increased to 1,108 kilometers. On 16 November 2016, the car was acquired by its next long-term European owner via McLaren Special Operations in Woking and subsequently registered in the U.K. with the appropriate license plate “53 MCL.”
Throughout its life, chassis 053 has been meticulously maintained by McLaren Special Operations. The service book records regular maintenance with all work completed at McLaren’s Woking headquarters on 12 December 2006 (481 kilometers), 14 June 2010 (998 kilometers), 25 October 2016 (1,185 kilometers), 24 April 2018 (1,238 kilometers), and most recently on 12 November 2024 (1,290 kilometers). In July 2021, the car received a comprehensive fuel cell service at McLaren Special Operations totaling £52,061.55 (excluding VAT), which included £31,624.50 in labor charges and £15,472.74 for the fuel cell unit itself. A covering letter from McLaren Heritage Manager Thomas Reinhold noted the return of a “favourite F1” to MSO, an F1 that also “drives extremely well.”
Further work was carried out in November 2021, including replacement of various pipes, fittings, suspension rose-joints and bushes, plus a new water pump at a cost of £23,992.05 (excluding VAT). Most recently, the car returned to MSO in late 2024 for a “3 Year Life Items” service, during which the steering wheel battery, instrument binnacle battery, key fob battery, air conditioning receiver dryer, engine oil and filters, gearbox oil, and coolant were all replaced. The car also underwent a full suspension set-up and headlamp alignment at a cost of £4,861.10 (excluding VAT). Heritage Manager Michael Wrigley’s covering email following this most recent service summed up the car’s exceptional condition: “It’s a truly lovely example so there is very little of note to comment on!”
With just 1,291 kilometers recorded from new, chassis 053 represents one of the lowest mileage and most original McLaren F1 road cars in existence. Its unique color combination, low mileage, comprehensive documentation, complete set of factory tools, owner’s manuals, and fitted luggage make it an unparalleled example of Gordon Murray’s masterpiece. Maintained throughout its life without regard to cost and exclusively serviced by McLaren Special Operations, this McLaren F1 offers its next custodian the opportunity to acquire the ultimate modern collector car in truly museum-quality condition.
A RIG THAT’S SEEN IT ALL (And would probably rather not have)
Born in the 50s for the People’s Liberation Army, the Type 56 Chicom Chest Rig is without a doubt a Cold War legend of the East. The Viet Cong rocked it in Vietnam, Soviet Spetsnaz snagged it in Afghanistan, and every commie-aligned rebel and LARP-ist from Rhodesia to the borders of South Africa copied it.
Naturally the Type 56 caught the attention of US Special Forces in Vietnam, and of other Western countries too. The US used the rig in conjunction with AKMs taken from fallen VC to blend in with the enemy as much as a 6’4” Iowan MACV-SOG commando could – its high speed and ease of use changed the Western world’s opinion of belt-mounted kit as a means of combating insurgency.
A spiritual evolution of the bandoleers of old, the Type 56 would go on to inspire the Russian Lifchik and spread the gospel of chest-stowed ammo to the Western world via soldiers of fortune in Rhodesia & South Africa. It still saw use deep within enemy territory in the past few decades. And of course, we could even credit the USA’s very own Pattern 84 rig to the Type 56’s legacy.
To us Zoomers, it’s been made famous again by its depictions in cyberspace and on the big screen – CoD Black Ops, Escape from Tarkov, and hit films like Apocalypse Now & Platoon all show the influence of the OD canvas OG.
So stay loaded, unhinged, and within the limits of Xi Jinping’s social credit system with the Type 56. Or go hog wild and modify it. It’s only $30 USD / $42.94 CAD from our friends at kommandostore.com and is great for getting your sewing and seam-ripping practice in. Just make sure the party isn’t watching – they don’t like when us filthy capitalists misuse their gear, and we wouldn’t want a Cold War II: Electric Boogaloo (电动布加洛).
In case this message is seen by CCP members: Zǎoshang hǎo Zhōngguó! Xiànzài wǒ yǒu Chicom rig—wǒ hěn xǐhuān! (“Good morning China! Now I have a Chicom rig—I like it very much!”) We love TEMU, Alibaba, and Xiaohongshu!
Boulder, Colorado, March 2025 – PS Audio announces the release of The Audiophile’s Guide, a comprehensive 10-volume series covering every aspect of audio system setup, equipment selection, analog and digital technology, speaker placement, room acoustics, and other topics related to getting the most musical enjoyment from an audio system. Written by PS Audio CEO Paul McGowan, it’s the most complete body of high-end audio knowledge available anywhere.
The Audiophile’s Guide hardcover book series is filled with clear, practical wisdom and real-life examples that guide readers toward getting the most from their audio systems, regardless of cost or complexity. The books include how-to tips, step-by-step instructions, and real-world stories and examples, including actual listening rooms and systems. Paul McGowan noted, “Think of it as sitting down with a knowledgeable friend who’s sharing hard-won wisdom about how to make music come alive in your home.”
The 10 books in the series include:
The Stereo – learn the essential techniques that transform good systems into great ones, including speaker placement, system matching, developing critical listening skills, and more.
The Loudspeaker – even the world’s finest loudspeakers will not perform to their potential without proper setup. Master the techniques that help speakers disappear, leaving the music to float in three-dimensional space.
Analog Audio – navigate the world of turntables, phono cartridges, preamps, power amplifiers, and vacuum tubes, and find out how analog sound continues to offer an extraordinary listening experience.
Digital Audio – from sampling an audio signal to reconstructing it in high-resolution sound, this volume explains and demystifies the digital audio signal path and the various technologies involved in achieving ultimate digital sound quality.
Vinyl – discover the secrets behind achieving the full potential of analog playback in this volume that covers every aspect of turntable setup, cartridge alignment, and phono stage optimization.
The Listening Room – the space in which we listen is a critical yet often overlooked aspect of musical enjoyment. This volume tells how to transform even challenging spaces into ideal listening environments.
The Subwoofer – explore the world of deep bass reproduction, its impact on music and movies, and how to achieve the best low-frequency performance in any listening room.
Headphones – learn about dynamic, planar magnetic, electrostatic, closed-back and open-air models and more, and how headphones can create an intimate connection to your favorite music.
Home Theater – enjoy movies and TV with the thrilling, immersive sound that a great multichannel audio setup can deliver. The book explains how to bring the cinema experience home.
The Collection – this volume distills the knowledge of the preceding books, drawing on more than 50 years of Paul McGowan’s experience in audio. Like the other volumes in the series, it’s written in an accessible style yet filled with technical depth, providing the ultimate roadmap to audio excellence and musical magic.
Volumes one through nine of The Audiophile’s Guide are available for a suggested retail price of $39.99 USD, with Volume 10, The Collection, offered at $49.99 USD. In addition, The Audiophile’s Guide Limited Run Collectors’ Edition is available as a deluxe series with case binding, with the books presented in a custom-made slipcase. Each Collectors’ Edition set is available at $499.00 USD with complimentary worldwide shipping.
About PS Audio
Celebrating 50 years of bringing music to life, PS Audio has earned a worldwide reputation for excellence in manufacturing innovative, high-value, leading-edge audio products. Located in Boulder, Colorado at the foothills of the Rocky Mountains, PS Audio’s staff of talented designers, engineers, production and support people build each product to deliver extraordinary performance and musical satisfaction. The company’s wide range of award-winning products includes the all-in-one Sprout100 integrated amplifier, audio components, power regenerators, and power conditioners.
ArtyA unveils an avant-garde horological creation:
“Purity Wavy HMS Mirror”: a fully in-house caliber reimagined through masterful handcraftsmanship. The perfect union of design and comfort, encased in the groundbreaking Wavy case, crafted from titanium with a transparent protective DLC coating, and featuring the first-ever mirror caseback.
Stairway To Heaven: The Movement
At the heart of this exceptional timepiece is ArtyA’s latest in-house caliber: Stairway To Heaven HMS. This manual-winding movement embodies the Manufacture’s dedication to both visual spectacle and horological excellence:
• Microbead-frosted and hand-chamfered minute wheel train and balance bridges.
• Pulsing at 4 Hz, the spectacularly “starified” escapement is positioned like a podium centerpiece, suspended in mid-air.
• Traditional fine regulation – a hallmark of haute horlogerie that ensures optimal precision. This process involves meticulously adjusting the balance wheel’s inertia using peripheral weights (inertia blocks) to maintain the hairspring’s steady and consistent oscillation. The result: optimized caliber performance and lasting chronometric stability.
• Twin barrels, working in parallel, equipped with longer, finer mainsprings for a more stable and linear energy release. The polished barrel blade reduces friction for improved efficiency. The redesigned drum barrels, with fluid, curving lines, seamlessly integrate with the bold bridge architecture of the movement. Proudly bearing the manufacture’s name and caliber designation, this subtle detail completes the movement’s refined aesthetic.
Wavy Titanium Case
A bold evolution in the Wavy collection, this is the first case crafted from grade 5 titanium, a material prized for its strength and lightness.
• Ultra-light yet incredibly strong, titanium embodies both modernity and innovation, delivering exceptional comfort without compromising durability.
• The matte finish results from meticulous hand polishing, followed by microbead frosting for a refined texture.
• A transparent DLC coating boosts resistance to scratches, shocks, and fingerprints. For comparison, stainless steel has a Vickers hardness of 200 HV, titanium 400 HV, and transparent DLC-treated titanium an impressive 1,200 HV (1,800 HV for the black DLC version).
• A mirror-polished lug-to-lug contour adds a discreet yet sophisticated touch, enhancing the timepiece’s elegance without diluting its avant-garde appeal. This meticulous finish – exceptionally complex to achieve on titanium – creates a striking contrast with the case’s matte surface, balancing power with refinement.
• Designed by Jérémie Arpa, son of Yvan Arpa, the case embodies independent, family-driven watchmaking at its finest. Its flowing, organic contours evoke the power of ocean waves, an effect heightened by titanium’s natural opacity – delivering a case design unlike anything seen before in haute horlogerie.
Mirror Effects
The Wavy Titanium’s mirror caseback introduces an unprecedented innovation, a world first in watchmaking:
• A fully reflective surface that offers a striking new way to experience the movement.
• From the front, seeing through to the mirrored bottom creates the illusion of depth, with the movement seemingly floating in space, enhancing the ethereal purity of the skeletonized caliber’s aesthetic.
• From the back, the one-way mirror effect teases the complexity of the movement without fully revealing it, adding an element of mystery and sophistication.
Limited Edition: 99 pieces
Case: Grade 5 titanium, satin-finished, mirror-polished lug-to-lug contour; transparent or black DLC protective coating
Diameter: 40 mm
Thickness: 13 mm
Water resistance: 50 meters
Caseback: Screw-in, engraved, fitted with a one-way mirror
Crystal: Sapphire, triple anti-reflective coating, laser-engraved chapter ring
Hands: Brushed and diamond-polished
Crown: Engraved with the ArtyA signet
Caliber: ArtyA Purity Stairway To Heaven HMS in-house movement
Winding: Manual
Indications: Hours, minutes, and central seconds
Power reserve: Minimum 72 hours, thanks to twin parallel barrels
Frequency: 4 Hz (28,800 vph)
Finishes: Fine regulation through precision adjustment of inertia blocks on the balance wheel; microbead-frosted minute wheel train and balance bridges; hand-chamfered edges; polished mainsprings to optimize friction in the barrel assemblies
Strap: Alligator or grey nubuck leather
Buckle: ArtyA pin buckle, available with or without black DLC coating
Swiss Made: Entirely designed and crafted between Geneva and the Swiss Jura
Price (excluding VAT): Titanium & Black Titanium – CHF 25,900 (reference price); EUR 27,900, USD 29,900, CAD 42,754 (subject to exchange rate)
Also available with transparent, hued, or NanoSaphir case: from CHF 44,900 (reference price); from EUR 47,900, from USD 50,900 (subject to exchange rate)
If you have them, should you keep them? Read on via this interesting article from our friends at Hagerty.
The nuts and bolts that make up our beloved automobiles have not changed that much over the last 150 years. But the tools needed to maintain them? Those have changed a lot. Software has cemented itself as part of a service technician’s day-to-day regimen, relegating a handful of tools to the history books. (Or, perhaps, to niche shops or private garages that keep many aging cars alive and on the road.)
How many of these now-obsolete tools do you have in your garage? More to the point, which are you still regularly using?
Spark plug gap tool
Though spark-plug gap tools can still be found in the “impulse buy” section of your favorite parts store, they have been all but eliminated from regular use by the growing popularity of iridium and platinum plugs. These precious metals are extremely resistant to degradation but are very delicate when it comes time to set the proper gap between the ground strap and electrode. That’s why the factory sets the gap when the plug is produced.
These modern plugs often work well in older engines, meaning that gapping plugs is left for Luddites—those who like doing things the old way just because. Nothing wrong with that; but don’t be surprised if dedicated plug-gapping tools fade from common usage fairly quickly.
Verdict: Keep. Takes up no real space.
Dwell meter
Fifty years ago, an engine tuneup centered on the ignition system. The breaker points are critical to a properly functioning ignition system, and how long those points stay closed (the “dwell”) determines how much charge is built up in the ignition coil and thus discharged through the spark plug. Poorly timed ignition discharge is wasted energy, but points-based ignition systems disappeared from factory floors decades ago, and drop-in electronic ignition setups have never been more reliable (or polarizing—but we’ll leave that verdict up to you).
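The arithmetic behind dwell is worth seeing, because it explains why coil charge time gets scarce at speed. A back-of-the-envelope sketch (not from the article; the ~30° dwell angle is a typical V8 figure used here only for illustration):

```python
def dwell_time_ms(dwell_deg: float, engine_rpm: float) -> float:
    """Time the points stay closed per firing event, in milliseconds.

    The distributor turns at half crankshaft speed, so one distributor
    revolution takes 120 / engine_rpm seconds; the dwell angle is the
    fraction of that revolution during which the points are closed.
    """
    seconds_per_distributor_rev = 120.0 / engine_rpm
    return (dwell_deg / 360.0) * seconds_per_distributor_rev * 1000.0

# A typical ~30-degree V8 dwell: the coil gets 10 ms to charge at
# 1,000 rpm, but only 2 ms at 5,000 rpm.
print(f"{dwell_time_ms(30, 1000):.1f} ms")  # → 10.0 ms
print(f"{dwell_time_ms(30, 5000):.1f} ms")  # → 2.0 ms
```

That shrinking window is why a mis-set dwell that is merely sloppy at idle can mean a weak spark at high rpm.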
Setting the point gap properly is usually enough to keep an engine running well, and modern multifunction timing lights can include a dwell meter for those who really need it. A dedicated dwell meter is an outdated tool for a modern mechanic, and thus most of the vintage ones are left to estate sales and online auction sites.
Verdict: Toss once it stops working. Modern versions are affordable and multifunctional.
Distributor wrench
When mechanics did a lot of regular timing adjustments and tuning, a purposely bent distributor wrench made their lives much easier. However, much like ignition points, distributors have all but disappeared. Thanks to coil-on-plug ignition systems and computer-controlled timing, the distributor is little more than a messenger: It simply tells the computer where the engine is in its rotation.
Timing adjustments have become so uncommon that a job-specific tool is likely a waste of space. If you’ve got room in your tool chest, keep yours around; but know that a standard box-end wrench can usually get the job done and is only fractionally less convenient than the specialized version.
Verdict: Keep if you have them. No need to buy if you don’t.
OEM scan tools
Prior to the U.S.-mandated standardization of on-board diagnostic computers in 1996, a single car could host a wild mix of analog and digital diagnostic methods. OBDII, short for On-Board Diagnostics II, wasn’t the first time a small computer was used to pull information from the vehicle via an electronic connection; it merely standardized the language.
Throughout the 1980s and early 1990s, each OEM had its own version of a scan tool. Now those tools can be reverse-engineered and functionally spoofed by a modern computer, allowing access to diagnostic info that, at the time, was only available to dealers. Since many pre-OBDII cars are now treated as classics or antiques and driven far less frequently, the need for period-correct diagnostic tools is dropping.
Verdict: Keep. These will only get harder to find with time, and working versions will be even rarer.
Distributor machine
A distributor is simple in concept, but balancing the performance and economy of the ignition system, with the distributor attached to a running engine, gets pretty complicated. That’s where a distributor machine comes in.
A distributor is attached to the apparatus and spun at engine speed by an electric motor. This allows you to literally see how the points open and close, and to evaluate the function of vacuum or mechanical advance systems. These machines are still great, but the need for this service is now few and far between, especially when weighed against keeping a large tool around and properly calibrated.
Verdict: Keep, if you are a specialty shop or tool collector.
Engine analyzer
Even a casual enthusiast can see there is a lot more information that can be gleaned from a running engine than whatever readouts might be on the dash. Enter the engine analyzer, a rolling cabinet of sensors and processors designed to fill in the data gaps between everything that is happening in a car and what its gauges report.
An engine analyzer is essentially a handful of additional instruments, packaged either into a small box knocking around the bottom of your tool drawers or into a giant cabinet of sensors that was likely wheeled into the corner of the shop in 1989 and left to gather dust. Engine analyzers can now be found listed online for as little as $200 USD / $287 CAD.
The funny thing is that many of the sensors in these engine analyzers are often the same systems that come built into modern dynamometer tuning systems. In a dyno, the sensors allow the operator to see more than max power; they also show how changes to an engine’s tune affect emissions. Maybe engine analyzers didn’t disappear so much as change clothes.
Verdict: Toss. The opportunity cost of the space these take up can be tough for most home garages. Sensors went out of calibration decades ago so the information you might get from one is dubious at best.
Most pneumatic tools (for home shops)
Air tools hold an odd place in the hearts of many gearheads. For many years the high-pitched zizzzz and chugging hammers of air-driven die grinders and impact drills were the marks of a pro. Or, at least, of someone who decided that plumbing high-pressure air lines around the shop was easier than installing outlets and maintaining corded tools. Air tools are fantastic for heavy use, as they are much easier to maintain and can be rebuilt and serviced.
Those tools can really suffer from lack of use, though, since pneumatic tools rely on seals and valves, neither of which deal well with dry storage. Battery tools have caught up to air tools for most DIY folks. No more air lines or compressors taking up space in the shop—and requiring additional maintenance—and, in return, a similarly sized yet more agile tool.
Verdict: Keep, if you already have the compressor. Don’t have one? Invest in battery tools.
Babbitt bearing molds/machining jigs
Every engine rebuild has to have bearings made for it in some fashion. Today’s cars use insert bearings that are mass-produced to surgical tolerances for a multitude of applications. If you wanted—or more accurately needed—new bearings in your Model T circa 1920, you needed to produce your own … in place … inside the engine. Welcome to Babbitt bearings.
The process is a true art form, from the setup of the jigs to the chemistry of pouring molten metal and machining the resulting castings to actually fit the crankshaft and connecting rods. Now there are newly cast blocks for your T that replace the Babbitt with insert bearings. Since those antique Ford engines just don’t get abused the way they used to, and lead fairly pampered lives, they need rebuilding far less often than they did in-period. Modern oils also do a better job of protecting these delicate bearings. Because the bearings are less and less in demand, the tooling and knowledge to make them are difficult to find, and precious when you do.
Verdict: Keep. It’s literally critical to keeping a generation of cars alive.
Split-rim tire tools
Among the realm of scary-looking tools that have earned their infamy, split-rim tools hold court. The concept is simple: The rim is sectioned, allowing it to contort into a slight spiral that can be “screwed” into a tire. (This is almost the reverse of a modern tire machine, which stretches the tire around a solid wheel rim.) In an era of tube-type tires, relatively fragile tires and rims, and rough roads, split rims were popular—and for good reason. Now the tooling for drop-center wheels is ubiquitous, and shops often won’t take on split-rim work. Success is hard to guarantee, even if techs are familiar with split rims—and they rarely are.
Verdict: Keep. No substitute for the right tools with this job.
These tools might not make much sense in a dealership technician’s work bay, but that doesn’t mean they should disappear forever. Knowing how to service antiquated technology is as important as ever, whether using old tools or new ones. If you’ve got any of these items, consider it your responsibility to document what the tool does and how to safely use it. Keeping alive the knowledge of where our modern tools came from is powerful.
JG O’Donoghue imagines a ‘versus’ scenario to demonstrate the struggle of ‘languages at risk’
There is a mass decline in linguistic diversity happening all over the planet, in places geographically far apart, and I think that if things don’t change, the loss of language diversity will be immense.
In the book Irish in the Global Context, Suzanne Romaine mentions that linguists believe 50 to 90% of the world’s estimated 6,900 languages will simply vanish over the next 100 years.
At this moment in time, 85% of the world’s languages have fewer than 100,000 speakers, and over half of the world’s remaining languages are spoken by just 0.2% of the world’s population. These facts have informed my work and have become the wider subject of my illustrations, specifically the linguistic decline of the Irish language.
In some ways the battle between the Irish and the English languages is one of the defining features in modern Irish culture, but it is Irish which defines this island more, and the Irish language tells the entire history of Ireland in its influences and in its form.
In his book The Irish Language, Ruairí Ó hUiginn notes the influence of Latin, via the Christianization of Ireland, in ecclesiastical words; of the Viking invasions in words for “seafaring, fishing and trade”; of the militaristic, French-speaking Normans in words for “architecture, administration and warfare”; and of English colonialism in everyday words.
“To create my intended mood, the English words are given a general typography while the Irish words are given a distinctive script reminiscent of Geoffrey Keating’s book Foras Feasa ar Éirinn”
Each influence shows an aspect of Irish culture. What people often fail to realize is that a language is much more than something spoken to express oneself. Ancient peoples created language in an attempt to describe the world around them and the world within them: in other words, their worldview.
An example in Irish: you don’t say ‘I’m angry’, you say ‘tá fearg orm’, which literally means ‘anger is on me’.
Nevertheless, Irish is important internationally too, and Irish is the third oldest written language in Europe, after Latin and Greek, and as a spoken language it may even be older than both.
How should an artist illustrate a language? And more specifically, the struggle of one language with another? I chose nature as my metaphor, from the ancient forests of Ireland, mostly gone now, to islands that stand for thousands of years but are slowly worn away by the tide. The words that make up these landscapes are either ‘for’ or ‘against’.
My illustrations therefore visualize the real life drama of ancient language versus modern language.
I imagine a “versus” scenario. On the “against” side I chose English words plucked from people’s statements in online forums and in letters to newspapers. On the “for” side I chose Irish words, drawn from recent investigations into the creation of the ancient Irish language. Irish words in my illustrations such as “dúchas (heritage), tír (country), litríocht (literature), and stair (history)” reflect the Irish language’s cultural importance, while “todhchaí (future), féinmhuinín (self-confidence), beatha (life), and anam (soul)” reflect its importance to Ireland in a metaphysical way.
The Irish language forest – An Coill Teanga Gaeilge
The English ‘against’ words range from the practical benefits of English within subjects such as “tourism, movies, business, and comics” to words that reflect the interaction of English speakers with Irish. To illustrate the concept, I chose words like “conform, bend, harass, and adapt”.
To create my intended mood, the English words are given a general, indistinctive typography reflecting uniform monolingualism, while the Irish words are given a distinctive Irish manuscript/Gaelic script reminiscent of Geoffrey Keating’s 17th-century book, Foras Feasa ar Éirinn (History of Ireland).
The core message in my illustrations is a positive one: the sun is rising on a new day as the Irish language holds on, like a lot of minority languages. It is diminished but not beyond hope. I believe it can make a comeback, and this is exactly what is happening all over this country today, thanks to the work of people far more dedicated than myself. I hope my work can help reinforce linguistic diversity, as well as all forms of heritage. I have the will to preserve these for future generations, so they too can live in a world full of diversity, spending their lives discovering and exploring it in all its beautiful variety.
In 2024, Canada’s labour market showed modest growth, with job creation continuing but lagging rapid population growth. This led to an increase in the unemployment rate, reflecting a mismatch between labour force expansion and job creation rather than a decline in sector-specific labour shortages.
Ongoing challenges persist, such as declining labour productivity, sector-specific labour shortages, underemployment, demographic shifts and disparities, and regional imbalances.
Our international comparisons show that Canada typically ranks at or below the Organisation for Economic Co-operation and Development (OECD) average in terms of labour force participation and employment rates for certain population segments. This is largely due to weaker performance in specific regions, such as the Atlantic provinces, and pension policies that incentivize early retirement.
This labour market review emphasizes the need for tailored policies to improve labour market outcomes for seniors and immigrants. Recommendations include gradually increasing the retirement age, offering high-quality training support, and easing labour mobility barriers.
Introduction
The labour market is where economic changes most directly affect working-age Canadians, influencing their job opportunities and income. The supply of labour also determines the availability of Canadians’ skills and knowledge to employers who combine them with capital to produce goods and services that drive our national income and its distribution among income classes. Therefore, the labour market is one of the most important components of Canada’s – or any – economy.
In 2024, Canada’s labour market saw moderate growth, with employment rising to 20.7 million jobs. However, the employment rate declined to 61.3 percent, down from 62.2 percent in 2023, and remains below the pre-pandemic level of 62.3 percent in 2019. While over 1.7 million employed persons have been added since 2019, employment growth has lagged behind population growth, partly due to an aging population, despite high levels of immigration.1 The unemployment rate also increased, reflecting a gap between job creation and labour force expansion as the economy’s absorptive capacity failed to keep pace with population growth.
Job vacancies have decreased since mid-2022, but over half a million positions remained unfilled during the third quarter of 2024 (12 percent higher than the pre-pandemic level). Of these vacancies, the majority were full-time (432,810 positions), with more than 31 percent remaining vacant for the long term – persisting for over 90 days. Despite high full-time vacancies, more than half a million workers were underemployed in 2024, seeking full-time work while employed part-time, indicating mismatches between the skills needed by employers and the skills offered by job seekers. Among sectors facing labour shortages, factors such as better relative wages and working conditions appear to be helping, particularly in industries like construction. Healthcare, on the other hand, may benefit from raising wages and reducing training costs to better attract and retain workers.
Further, Canada faces declining labour productivity, which can be attributed to factors such as stagnant capital investment and automation, high reliance on temporary foreign workers to fill low-paying positions, underemployment (including immigrants’ overqualification), a growing public sector with lower productivity, and shifts in industry composition.
This inaugural C.D. Howe Institute labour market review highlights major differences in the labour market across provinces and sectors and among socio-economic groups. It shows that labour force participation and employment of older workers and recent immigrants still have room for improvement.
Canada needs targeted workforce development policies to improve labour market participation and outcomes for diverse population groups and encourage a longer working life (Holland 2018 and 2019). Our recommendations are to:
Gradually raise the normal retirement age from 65 to 67 and delay pension access.
Support older workers with flexible work, part-time options, and self-employment, especially in the Atlantic provinces.
Invest in high-quality training programs for underrepresented groups, focusing on digital skills and job search strategies.
Streamline credential recognition and licensure for skilled immigrants and ease labour mobility in regulated occupations while maintaining the quality of professional services.
Enhance settlement strategies for immigrants, including workplace-focused language training.
Businesses should integrate automation and artificial intelligence (AI) to boost productivity while improving retention and encouraging later retirement by offering training2 and flexible scheduling (Mahboubi and Zhang 2023). Finally, better informing Canadians about learning and training opportunities and addressing financial and non-financial barriers would improve their training participation rates and empower them to acquire the skills needed in a changing labour market.
Overview of Canada’s Labour Market
Canada’s labour market has undergone major changes over time, influenced by factors such as the COVID-19 pandemic, globalization, technological progress, and demographic shifts. These forces have affected the functioning of the labour market, with demographic changes playing a particularly important role. This section reviews key indicators (i.e., labour-force participation, employment and unemployment) and highlights the major trends and disparities in provincial and national labour markets.
The labour force has grown steadily since 1976 but experienced a decline in 2020 due to the pandemic. The lockdowns and public health measures significantly reduced worker participation, especially among women, in the labour market. However, once the restrictions were lifted, workers returned, and the labour force fully recovered. By 2024, Canada had 22.1 million people in the labour force, an increase of about 1.9 million from 2019, mainly driven by the expansionary immigration policy that the country has followed until recently.3 Immigrants accounted for 56 percent of this increase in the labour force, while non-permanent residents made up 32 percent.4
Although the labour force has grown over time, the labour force participation rate (LFPR) has trended downward over the last two decades. This trend is largely driven by an aging population, as participation rates drop sharply after age 54 and continue to decline with age. While the LFPR among prime-aged workers (25-54) reached a record high in 2023, the overall rate remained below pre-pandemic levels and declined further in 2024, reaching 65.5 percent despite high levels of immigration.5 Three factors contributed to this decline compared to pre-pandemic levels: a lower participation rate among youth, a substantial increase in the older population (aged 55 and over) and a decline in the latter group’s participation rate. This decline in older workers’ participation is primarily due to aging, as the proportion of seniors aged 65 and over within the 55-and-over age group increased from 54.8 percent in 2019 to 60 percent in 2024.
The employment rate is more sensitive to economic conditions and fluctuates with cyclical changes in the unemployment rate. It is also influenced by factors such as government policies on education, training, and income support, as well as employers’ investments in skill development and their effectiveness in matching people to jobs. Despite some volatility during economic booms and recessions, the employment rate trended upward until 2008 but has declined since then, mirroring the impact of an aging population on the participation rate (Figure 1). The pandemic caused a sharp decline in the employment rate, followed by a modest recovery. In 2024, however, the rate declined again by approximately one percentage point to 61.3 percent, as employment growth (1.9 percent) failed to keep pace with population growth (3 percent).
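The arithmetic behind that last sentence is a simple ratio identity; a minimal Python sketch, using the rounded growth rates cited above (which is why the result only approximates the published 61.3 percent):

```python
# Minimal sketch: how employment and population growth translate into a
# change in the employment rate via rate_t = rate_{t-1} * (1 + g_emp) / (1 + g_pop).
# Inputs are the rounded figures cited in the text, so the result only
# approximates the published 2024 rate of 61.3 percent.

def updated_rate(prev_rate_pct, emp_growth, pop_growth):
    """Employment rate after one period of employment and population growth."""
    return prev_rate_pct * (1 + emp_growth) / (1 + pop_growth)

rate_2024 = updated_rate(62.2, 0.019, 0.03)  # 2023 rate; 1.9% and 3% growth
print(round(rate_2024, 1))  # ~61.5, close to the published 61.3
```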
Regional disparities in employment persist across Canada. Alberta consistently maintains the highest employment rate, while Newfoundland and Labrador lags. Despite significant improvements since 1976, the Atlantic provinces continue to face challenges with employment. For its part, Ontario’s employment rate – historically the second highest in the country – has been below the national average since 2008. Regional differences in economic development, sectoral specialization patterns, educational attainment, family policy, and demographic characteristics are factors behind these employment disparities. For example, Newfoundland and Labrador and New Brunswick had the highest old-age dependency ratios (OADs) in 2024 at 39 and 37 percent, respectively, while Alberta remains the youngest province with an OAD ratio of less than 23 percent.6
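For readers unfamiliar with the measure, the old-age dependency ratio cited here is the senior population relative to the working-age population; a minimal sketch (the head counts below are illustrative placeholders, not Statistics Canada data):

```python
# Old-age dependency (OAD) ratio: seniors (65+) per 100 working-age (15-64)
# residents. The head counts are illustrative, not actual provincial data.

def old_age_dependency(pop_65_plus, pop_15_to_64):
    return 100 * pop_65_plus / pop_15_to_64

# 390 seniors per 1,000 working-age residents -> the 39 percent ratio the
# text cites for Newfoundland and Labrador
print(old_age_dependency(390, 1000))  # 39.0
```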
The unemployment rate, a key short-term indicator, tends to rise during economic downturns and fall back during recovery, affecting employment outcomes in the opposite direction (Figure 1). The onset of the pandemic in 2020 led to a temporary surge in the unemployment rate to 9.7 percent – a four-percentage-point jump from the previous year. As the economy recovered, the unemployment rate plummeted to a record low of 5.3 percent in 2022. However, by 2024, it had risen to 6.3 percent, a figure that remains relatively low by historical standards but higher than the pre-pandemic rate in 2019.
While employment grew by 1.7 million people between 2019 and 2024, the labour force expanded even faster, increasing by 1.9 million people. This imbalance – where the labour force grew more quickly than employment – pushed the unemployment rate higher, reflecting a loosening labour market and making it more challenging for job seekers to secure employment.
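The accounting behind this imbalance is the standard unemployment-rate identity; a small sketch using the 2024 levels cited earlier (a labour force of 22.1 million and employment of 20.7 million):

```python
# Unemployment rate as the share of the labour force not employed.
# Levels (in millions) are the 2024 figures cited earlier in the text.

def unemployment_rate(labour_force_m, employment_m):
    return 100 * (labour_force_m - employment_m) / labour_force_m

print(round(unemployment_rate(22.1, 20.7), 1))  # 6.3, matching the cited 2024 rate
```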
Overall, the labour force and employment in Canada have been expanding due to a surge in immigration. Although the unemployment rate remains higher than its pre-pandemic level, this primarily reflects the exceptional growth in the labour force rather than a lack of job creation. The labour market continues to adjust to the increase in labour supply through strong job creation.
Looking ahead, several uncertainties and factors could influence unemployment rates. For example, the imposition of trade tariffs by the United States poses a direct risk to export-related jobs. In 2024, 8.8 percent of workers – equivalent to 1.8 million people – were employed in industries dependent on US demand for Canadian exports.7 Sectors most vulnerable to these risks include oil and gas extraction, pipeline transportation, and primary metal manufacturing.
On the other hand, stricter immigration policies that limit the inflow of permanent and non-permanent residents may reduce the growth of the labour force, which could, in turn, place downward pressure on the unemployment rate. However, the ongoing arrival of refugees, which contributes to the growing population of non-permanent residents, could lead to higher unemployment rates, particularly if newcomers face significant challenges integrating into the labour market.
To mitigate the negative impacts of aging on the labour market and address labour needs, it is important to encourage greater participation of underrepresented groups and seniors, ensure new entrants and young workers are equipped with the relevant skills to meet the labour market needs and enhance the productivity of the existing workforce. However, declining labour productivity poses an additional challenge that requires urgent attention.
Trends in Labour Productivity
Labour productivity8 in Canada trended generally upward until the pandemic, although its growth rate followed a long-term downward trend. In 2020, average productivity surged to $68.5 per hour worked (in 2017 dollars), mainly driven by compositional changes in employment towards more productive jobs, particularly in the business sector, since most job losses were among low-wage workers. However, this gain proved short-lived; by 2023, productivity fell to $63.6, returning to nearly the same level as in 2019 (Figure 2).
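The productivity measure used throughout this section is real output per hour worked; a minimal sketch (the GDP and hours figures are hypothetical round numbers, chosen only to reproduce the $63.6 level cited for 2023):

```python
# Labour productivity: real GDP (2017 dollars) divided by total hours worked.
# Both inputs below are hypothetical round numbers, not national accounts data.

def labour_productivity(real_gdp_dollars, hours_worked):
    return real_gdp_dollars / hours_worked

print(labour_productivity(636e9, 10e9))  # 63.6 dollars per hour worked
```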
Declining productivity has contributed to a reduction in real GDP per capita, which is a key indicator of Canadians’ living standards. Although Canada’s GDP rose by 6.9 percent (in 2017 dollars) between Q4 2019 and Q4 2023, GDP per capita decreased by 0.2 percent over that period. Since 2020, Canada’s GDP per capita growth has averaged an annual decline of 1.3 percent, compared to a growth rate of 1 percent per year between 2010 and 2019 (Wang 2022). Labour productivity continued to decline in 2024 as real GDP growth fell short of the growth of hours worked. This stands in stark contrast to the robust growth of labour productivity seen in the US during the same period.
Several factors, including human capital stock, skills utilization, overqualification, the concentration of immigrants in low-skilled jobs, limited capital investment, and slow adoption of technology, have likely contributed to recent poor labour productivity trends (Wang 2022; Robson and Bafale 2023, 2024). Notably, the combined influx of immigrants and non-permanent residents has driven the majority of employment growth between 2019 and 2024, accounting for 89 percent of the total increase in employment. Although immigrants and non-permanent residents are more likely than Canadian-born workers to have a university education, many are overqualified and work in jobs that require only a high-school diploma (Mahboubi and Zhang 2024). According to the 2021 census, the overqualification rate among immigrants9 and non-permanent residents was 21 percent and 32.4 percent, respectively, while only 8.8 percent of Canadian-born individuals with a bachelor’s degree or higher were overqualified (Schimmele and Hou 2024). With rising immigration, Canada’s productivity will increasingly depend on how effectively it leverages and develops the skills of new immigrants (Rogers 2024).
The recent influx of newcomers can help mitigate the impact of an aging population as they tend to be younger, typically being at their prime working age (Maestas, Mullen and Powell 2023). However, the concentration of immigrants and non-permanent residents in lower-skilled, low-paying sectors and occupations reduces productivity and, consequently, their contribution to GDP per capita. According to Lu and Hou (2023), between 2010 and 2019, non-permanent residents (work permit holders) were increasingly concentrated in several low-paying industries: accommodation and food services, retail trade, and administrative and support, waste management and remediation services.10 Collectively, these industries accounted for 45 percent of all temporary foreign workers in 2019. With the surge of non-permanent residents, one would expect the situation to have worsened in 2023 since the cap for hiring low-wage temporary foreign workers in 2022 increased from 10 percent to 30 percent in seven sectors, including accommodation and food services and to 20 percent for other industries.11 Similarly, Picot and Mehdi (2024) found that immigrants contribute approximately equal amounts of lower-skilled and higher-skilled labour, with 35 percent of those who landed in 2018 or 2019 working in lower-skilled jobs by 2021.
Relying on temporary foreign workers and immigrants to fill lower-skilled, low-paying jobs means that labour becomes a cheaper option than capital, which naturally disincentivizes businesses from investing in productivity-enhancing technology.12 Increases in the supply of labour also discourage business investment in skills upgrading for the existing workforce (Acemoglu and Pischke 1999).
Increases in labour supply without corresponding higher capital investment will also depress productivity. According to Robson and Bafale (2023), a larger labour force resulting from high immigration will not lead to higher living standards if workers are not equipped with better tools to produce and compete. Young and Lalonde (2024) also found that two-thirds of productivity declines since 2021 stem from this population shock.
Technological advancements, particularly digitalization and AI, offer opportunities to boost productivity. Mischke et al. (2024) find that digitalization and other technological advances could add up to 1.5 percentage points to annual productivity growth in advanced economies. Nevertheless, Canada has been slow in capital investment, automation and AI adoption.
The expansion of the public sector also poses challenges. Compared to 2019, public-sector employment increased by 19.6 percent in 2024, while private-sector employment saw only an 8.5 percent increase. Consequently, public-sector jobs in 2024 accounted for 21.5 percent of all employment in Canada, up from 19.6 percent in 2019. However, public-sector productivity has lagged behind the business sector since 2019. In 2023, it was $58.20 per hour worked, 1.5 percent lower than its 2019 level and 1.5 percent below that of the business sector. With a higher share of public employment in the economy, this lower public-sector productivity reduces overall labour productivity.
Lastly, significant variations in productivity across industries within the business sector shape Canada’s overall performance (Appendix Figure A1). Some industries, such as educational services, experienced notable productivity gains of 25 percent between 2019 and 2023. In contrast, some low-productivity industries faced substantial declines, with that of holding companies decreasing by 60 percent and construction and transportation dropping by 10 percent.13 Labour productivity in industries with the largest employment gains remained unchanged (professional, scientific, and technical services) or declined (public administration) during the same period (Appendix Figure A2). In contrast, agriculture and accommodation and food services witnessed productivity increases, likely due to investments in machinery and automation accompanying employment declines.
Therefore, the industrial distribution of jobs, shifts in industry composition, and demographic changes within industries can greatly affect Canada’s overall productivity. Tackling Canada’s productivity challenges will require substantial capital investment, targeted initiatives in skills development, technological advancements, and industry-specific strategies to promote sustainable economic growth.
Employment by Skill Level
Skill-biased technological changes – innovations that primarily benefit highly skilled workers, such as those proficient in technology, complex problem-solving, and critical thinking – have increased the demand for high-skilled labour in today’s job market. Education has generally been used as a proxy for skills, despite the limitations of that approach. In response to labour market needs, there has been a significant surge in higher education attainment among Canadians over time. The proportion of the population aged 25 and over having a postsecondary certificate, diploma or university degree rose from 37 percent in 1990 to 69 percent in 2024. According to OECD (2024), Canada has the highest postsecondary education attainment rate among core working-age individuals (25-64).
Despite these educational advancements, Canada faces productivity challenges and lags in technological adoption, particularly relative to the United States. One explanation is that although higher levels of education should translate into greater skills – leading to enhanced productivity, employability and adaptability to labour market changes – other factors such as education quality, experience, on-the-job training, capital investment, technological advancement, skill utilization, and age can substantially influence individuals’ skills levels (Mahboubi 2017b and 2019; Robson and Bafale 2023).
Skills and education levels heavily influence labour-market outcomes. For example, labour force participation, including among seniors, increases with educational attainment and those with higher education tend to remain in the labour market longer. This can mitigate some of the negative effects of an aging labour force, as significantly more seniors today possess a formal education above high school compared to decades ago and can take advantage of the ongoing shift from physical work to knowledge-based work.
In parallel with increases in the supply of highly educated labour, there has been a shift in skills requirements among employers.14 Figure 3 shows that employment in high-skill-level occupations has seen remarkable growth over the past three decades, increasing by 299 percent from 1987 to 2024. Notably, during the pandemic, employment in high-skill-level roles continued to grow, even as jobs in other skill categories declined. By 2024, high-skill-level occupations accounted for 23 percent of total employment. Despite this growth, medium- and low-skill-level occupations remain predominant, employing approximately 8.1 million and 5.8 million workers, respectively, compared to 4.8 million in high-skill roles. In the last two decades, immigrants and non-permanent residents have increasingly taken both high-skilled and low-skilled jobs. Between 2001 and 2021, they accounted for half of the employment growth in professional and technical skill occupations (Picot and Hou 2024). Over the same period, employment in lower-skilled occupations decreased by half a million. However, immigrants and non-permanent residents increasingly occupied low-skilled positions, while Canadian-born workers largely transitioned away from these roles (Picot and Hou 2024). By 2021, immigrants were more concentrated in professional and lower-skilled occupations compared to their Canadian-born counterparts.
In general, the Canadian labour market has performed well since the pandemic, with particularly strong employment growth for high-skill level occupations. As demand for high-skilled labour continues to grow, improving education quality, promoting on-the-job training, and better utilizing the skills of the workforce are essential for maintaining this balance, maximizing the benefits of educational advancements, enhancing productivity and meeting the evolving demands of the labour market.
Imbalances of Labour Supply and Demand
Studying the relationship between unemployment and job vacancies provides insight into labour supply and demand imbalances. It allows us to examine two problems that hinder business growth and slow the economy down: the lack of sufficient employment opportunities for job seekers and the absence of people with the right skills to fill existing jobs.
This relationship is often described by the Beveridge curve, which illustrates how job vacancy rates and unemployment typically move in opposite directions. However, as noted by Blanchard, Domash, and Summers (2022), shifts in this relationship can occur due to factors such as increased labour demand or structural changes in the economy, leading to both higher vacancy rates and higher unemployment simultaneously.
From 2021 to mid-2022, Canada experienced a tight labour market, with an increase in job vacancies alongside declining unemployment. In response, the federal government relaxed several immigration policies to help address these shortages. However, Fortin (2024, 2025) found that a surge in immigration, particularly driven by temporary immigrants, may aggravate job vacancy rates in the overall economy, as observed in Canada between 2019 and 2023. While immigration can initially alleviate skilled labour shortages, it can also intensify shortages in the broader economy due to increased demand from newcomers for goods and services.
In 2024, the labour market transitioned from tightness to slack. In the third quarter of 2024, job vacancies in Canada totalled more than 572,000,15 marking a 12 percent increase compared to the pre-pandemic level in Q4 2019. With 1.5 million unemployed people in the labour market, there were more than two job seekers for every vacant position during that quarter. However, the provincial situations varied (Figure 4). For example, while British Columbia experienced a relatively tighter labour market, with fewer than two unemployed persons for each vacant position, there were more than four unemployed persons available per vacant position in Newfoundland and Labrador. At the same time, the long-term vacancy rate – the share of openings that remained vacant for 90 days or more in total vacancies – in that province was 36.9 percent, four percentage points higher than British Columbia’s rate in the third quarter of 2024. This indicates both limited employment opportunities for the unemployed and a mismatch between existing skills and those demanded by employers.
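The slackness measure used in this paragraph is simply the number of unemployed persons per vacant position; a minimal sketch with the national Q3 2024 figures cited above:

```python
# Unemployed-per-vacancy ratio: a value above 1 means more job seekers than
# open positions. National Q3 2024 figures are those cited in the text;
# provincial numbers would come from the corresponding Statistics Canada tables.

def unemployed_per_vacancy(unemployed, vacancies):
    return unemployed / vacancies

ratio = unemployed_per_vacancy(1_500_000, 572_000)
print(round(ratio, 1))  # 2.6 -> "more than two job seekers for every vacant position"
```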
Imbalances between labour supply and demand in Canada also exist at the industry level (Figure 5). For example, while the healthcare sector faces severe labour shortages, the information, culture and recreation industry has the highest unemployment-to-vacancy ratio, indicating an excess labour supply. One interesting observation is that while both the construction and manufacturing sectors had similar levels of excess labour supply, the vacancy rate in construction was significantly higher at 3.6 percent, compared to 2.2 percent in manufacturing. This suggests that employers in the construction sector face more challenges in finding workers with the right skills.
The unemployment-to-job vacancy ratios across industries excluded some 612,000 unclassified unemployed persons: those who had never worked before or were employed more than a year earlier. According to Statistics Canada, about 43 percent of job vacancies in the third quarter of 2024 were for entry-level positions, which is helpful for those unclassified unemployed persons as these roles typically do not require prior experience. However, the specific skills and education requirements of these entry-level positions remain unclear.
An analysis of educational requirements for vacancies in the same quarter shows that 48 percent of all job vacancies required post-secondary training or education. Positions requiring post-secondary education below a bachelor’s degree had an unemployment-to-job vacancy ratio of 2.6, while those requiring a bachelor’s degree or higher faced a higher ratio of 4.1. In contrast, vacancies requiring only a high-school diploma or less had a lower unemployment-to-job vacancy ratio of 1.8. However, employers find it more challenging to secure suitable candidates for positions requiring higher educational levels and specialized skills, particularly at wage levels that candidates are willing to accept.
Wages play an important role in reducing labour market imbalances, as they affect both the supply and demand for labour and encourage labour mobility and reallocation. Between Q4 2019 and Q3 2024, the average offered hourly wage saw the largest increases in industries such as arts and entertainment, agriculture, and information and cultural industries (over 30 percent). These sectors also experienced the most significant reductions in job vacancies, suggesting that offering higher wages can help alleviate labour shortages. To address shortages more broadly, there may also need to be a restructuring of relative wages and working conditions between occupations with labour shortages and those with surplus labour.
Offered wage, or stated salary, rates for vacant positions should largely depend on the growth of job vacancies and the difficulties in finding candidates to fill them. However, Figure 6 shows that industries experiencing a surge in vacancies post-pandemic did not respond consistently. In fact, average offered-wage growth in these industries fell short of the national average of 27 percent between Q4 2019 and Q3 2024. For example, despite substantial growth in vacancies and a shortage of candidates in healthcare, the average offered wage in this industry increased by only 23 percent. This is largely due to government control over wages, making them less responsive to market forces. Policies like Ontario’s Bill 124, which capped annual wage increases at one percent for civil servants from 2019 to 2022, have contributed to this restraint. Additionally, multi-year labour contracts and provincial efforts to reduce deficits and debt post-COVID have further limited wage growth in the sector.
In Q3 2024, the average hourly offered wage in the utilities sector only increased by 2 percent compared to the pre-pandemic level, despite a 48 percent increase in job vacancies. Employers in this sector need to raise wages to attract and retain workers with the necessary skills. Otherwise, they will rely on their current workforce to work longer hours to maintain operations, which can lead to lower productivity per additional hour of work and retention challenges.
The average offered wage rate by occupation follows a similar trend (Appendix Figure A3). For example, despite a 59 percent increase in job vacancies, the wage rate for occupations in education, law and social, community and government services only rose by 16 percent, which is below the national average. This further highlights the need for employers to raise wages and improve working conditions to attract and retain workers.
Outcomes by Demographic Characteristics
While labour market indicators point to a strong post-pandemic recovery characterized by high employment, not all working-age Canadians have equally participated in and benefited from this resurgence, highlighting untapped potential across different population groups. Notably, recent demographic trends highlight that the older population and immigrants experience distinct labour market outcomes. Seniors (aged 65 and over) have substantially lower labour force participation rates compared to other demographics, raising concerns about both their economic security and potential contributions to the workforce. Additionally, immigrants frequently face employment barriers that limit their ability to fully integrate into the labour market and contribute to addressing the challenges posed by an aging population. Understanding the labour market outcomes for these groups is important for identifying the obstacles they face and formulating targeted policy recommendations to enhance their participation and success in the workforce.16
Age
There are significant variations in labour force participation across age groups. As expected, seniors exhibit the lowest participation rates, with their engagement in the labour market declining substantially after age 65 (Figure 7). Seniors’ participation rate is low across all provinces, albeit to varying degrees. For instance, Saskatchewan has the highest participation rate for seniors at 18.5 percent, while Newfoundland and Labrador records a notably lower rate of 11.5 percent. The four provinces in the Atlantic region, where the aging problem is more severe, have the lowest participation rates. A lack of employment opportunities for seniors in this region appears to be a major driver, with their unemployment rate significantly higher than both the national average and that of their counterparts aged 25 to 64 (except in Nova Scotia) (Figure 8).
While seniors participate far less than other Canadians in the labour market, Figure 9 shows significant shifts in their average retirement age over time and notable differences across employment types. Self-employed workers consistently retire later than other workers, with their average retirement age exceeding 68 in recent years, while public sector workers tend to retire earlier. These trends likely reflect variations in pension structures, job security, and financial incentives across employment types. Between 1976 and 1998, the average retirement age of all workers declined by four years to 60.9, likely influenced by the introduction of early retirement pension schemes in order to free up jobs for younger workers (OECD 2017). However, this shift had no obvious impact on younger workers’ employment. Many economists also warned that these measures were shortsighted, as the aging of the baby boomer generation would eventually create new challenges. Meanwhile, concerns about the financial sustainability of pension systems grew due to the increasing life expectancy and subsequent rising costs of providing retirement income (Banks et al. 2010; Herbertsson and Orszag 2003; Jousten et al. 2008; Kalwij et al. 2010; OECD 2017).
In response, the federal government in 2012 increased financial penalties for early retirement to encourage longer working lives.17 Consequently, the average retirement age of all workers began to rise and reached 65.3 in 2024, slightly surpassing its 1976 level. However, the persistent gap between the public sector and self-employed workers suggests that policy adjustments – such as pension reform or incentives for longer careers in the public sector – could be considered to encourage more uniform retirement patterns across employment types. The recent influx of immigrants may also help to alleviate the impact of the retirement wave, as immigrants are more likely to keep working and retire later. According to Fan (2024), the average retirement age among immigrant workers is around 66 over the last decade, two years older than that for Canadian-born workers.
Accordingly, the LFPR of seniors has increased substantially from a historical low of 6 percent in 2001 to 15 percent in 2024. The elimination of mandatory retirement, insufficient savings, higher educational attainment, and better health among seniors have all contributed to these LFPR increases.18 Hicks (2012) predicts that social and economic pressures will lead to further delays in retirement in the future. For example, of all seniors aged 65 to 74, including both Canadian-born and immigrants, one in ten were employed in 2022 (Morissette and Hou 2024). Nine percent reported working by necessity, with immigrant seniors more likely to do so than their Canadian-born counterparts.
In the long run, labour productivity growth is the primary driver of Canada’s GDP per capita growth, though the participation rate of seniors can also have a significant impact. Wang (2022) found that during the pandemic, declines in employment and participation rates driven by young people and seniors were major contributors to the sharp drop in GDP per capita. He estimated that if work intensity, the employment rate, and the participation rate had maintained their pre-pandemic momentum from 2010 onward, Canada’s GDP per capita could have been 4 percent higher in 2021 than it was.
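The arithmetic behind counterfactuals like Wang's can be sketched as a multiplicative decomposition of GDP per capita into productivity, work intensity, the employment rate, and the participation rate. The numbers below are purely illustrative assumptions, not Wang's (2022) estimates:

```python
# Illustrative decomposition (all inputs are hypothetical):
#   GDP per capita = (GDP per hour) x (hours per worker)
#                    x (employment rate) x (participation rate)

def gdp_per_capita(productivity, hours_per_worker, employment_rate, participation_rate):
    """Multiplicative decomposition of GDP per capita."""
    return productivity * hours_per_worker * employment_rate * participation_rate

# Hypothetical baseline year
base = gdp_per_capita(productivity=60.0,       # dollars of GDP per hour worked
                      hours_per_worker=1700,   # annual hours per employed person
                      employment_rate=0.94,    # employed / labour force
                      participation_rate=0.65) # labour force / population

# Counterfactual: employment and participation hold their pre-slump momentum
counter = gdp_per_capita(60.0, 1700, 0.95, 0.67)

print(f"GDP per capita gap: {100 * (counter / base - 1):.1f}%")
```

With these made-up inputs, modest gains in the employment and participation rates alone translate into a roughly 4 percent gap in GDP per capita, the same order of magnitude as the estimate cited above.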
As baby boomers gradually retire, their lower LFPR will continue to influence the overall participation rate. Vézina et al. (2024) found that the overall participation rate is expected to continue declining in the short term, regardless of the number of immigrants selected. Across various scenarios, the overall participation rate appears to be more sensitive to changes in the participation of seniors than to increases in immigration.19 As a result, keeping older workers, particularly those aged 55 and over, in the labour market could significantly impact the future overall participation rate. As more older workers remain employed, improvements in employment assistance, labour market flexibility, and skills upgrading will be essential (Vézina et al. 2024).
International Comparisons of Pension and Retirement Policies
An international comparison reveals that differences in pension and retirement policies play a crucial role in explaining disparities in employment and retirement decisions across countries (Figure 10). Factors such as the flexibility to choose between continuing to work or claiming a pension, legal provisions regarding age-based termination of employment, and employers’ retention strategies – such as offering on-the-job training and flexible work schedules – greatly influence retirement timing.
One of the most significant factors contributing to the variation in employment decisions across OECD countries is the normal age at which individuals can claim full pension benefits. For instance, in 2022, over 32 percent of Iceland’s population aged 65 and over was employed, although the normal retirement age is 67, with the earliest pension access at age 65. In contrast, only about 14 percent of Canada’s population in the same age group remained employed despite having a higher life expectancy. This discrepancy can be explained by Canada’s normal retirement age of 65, with pension benefits available as early as age 60.
Cross-country analyses show that policy reforms reducing financial incentives for early retirement were key drivers behind the increase in old-age employment (Coile et al. 2024). To address challenges related to aging populations, many countries such as Australia, Denmark, the UK, Japan and Italy have raised, or plan to gradually increase, the retirement age to encourage longer working lives. Denmark and Sweden have even indexed their mandatory retirement ages to life expectancy. Canada should consider similar approaches by raising the normal retirement age and delaying the earliest access age.
Immigrants
International immigration has significantly contributed to Canada’s population and labour force growth. Between 2019 and 2024, immigrants and non-permanent residents accounted for 68 percent of the population growth and over 88 percent of the increase in the labour force. However, immigrants often encounter various obstacles such as language barriers, a lack of Canadian work experience and varying recognition for foreign education and experience (Mahboubi and Zhang 2024). These challenges can limit their employment opportunities and earnings. Furthermore, as Canada faces an aging population, the challenge of integrating immigrants into the workforce becomes even more critical. While aging workers often possess valuable experience, they may struggle with the physical demands of certain jobs or require retraining. Newcomers, on the other hand, may not be immediately equipped to fill these gaps in employment. The productivity levels of immigrants can also be affected by their integration into the labour market, as they may require additional training and support to navigate workplace expectations and cultural nuances.
In 2024, immigrants aged 25 to 54 had a lower employment rate (by 4.3 percentage points) compared to non-immigrants (Figure 11). This gap has narrowed since 2006 and continued to decline even through the pandemic despite the latter’s greater impact on immigrants.20 The remaining gap is mainly due to the lower employment rate of female immigrants.
Employment outcomes of immigrants, particularly among women, depend predominantly on the number of years spent in Canada. For women aged 25-54, the employment gap between female non-immigrants and more recent immigrants (who landed less than 5 years earlier) was 15.5 percentage points. This gap narrowed to 10.6 percentage points for immigrants who landed between 6 and 10 years earlier, and further to 6.2 percentage points for those who have been in Canada for more than 10 years.
Over the last decade, improvements in immigrant employment rates are likely attributable to several factors. These include the increased selection of economic immigrants from among non-permanent residents with Canadian work experience, the implementation of the Express Entry21 system for immigration selection, and favourable economic conditions in which the demand for and supply of immigrant labour are broadly aligned (Hou 2024). In addition, the growth in managerial, professional, and technical occupations accelerated in the late 2010s (Frenette 2023), which would benefit recent immigrants with a university education. Recent immigrants in the prime age group of 25 to 54 have seen faster employment rate growth since the early 2010s, with a notable increase of 13.1 percentage points from 2010 to 2024, compared to a 3.5 percentage point increase among non-immigrants.
However, it’s important to note that some of these conditions may change in the short term. For example, the employment rate for recent immigrants stalled from 2022 to 2023, a period when labour shortages eased, and levels of both permanent and non-permanent immigration rose rapidly (Hou 2024). As such, the dynamics of labour supply and demand have changed, particularly with the increases in the labour supply of new immigrants and non-permanent residents coupled with a cooling labour market and rising unemployment. This could negatively affect the employment outcomes of foreign-born residents in Canada more than those of Canadian-born individuals, as immigrants are often disproportionately affected during economic downturns. In 2024, there was a large increase in the unemployment rate of recent permanent immigrants, rising from 8 percent in 2023 to 9.9 percent. This is more than double the unemployment rate of non-immigrants, indicating the difficulties recent immigrants face in securing employment.
In some provinces, such as Ontario and PEI, the employment rate of immigrants is below the national rate (Figure 12). The relatively poor employment outcomes among immigrants in these provinces may stem from employment barriers unique to immigrants, as the unemployment rate of non-immigrants there remains below the national rate. Immigrants in Newfoundland and Labrador, by contrast, have a higher employment rate than non-immigrants. The employment gap between immigrants and non-immigrants is most pronounced in Quebec, the province with the highest employment rate for non-immigrants in Canada. This gap can, to some extent, be traced to a large gap in the unemployment rates of the two groups: the unemployment rate of immigrants in Quebec is twice that of non-immigrants (a gap of 3.5 percentage points). Grenier and Nadeau (2011) show that lack of knowledge of French largely explains why the employment rate gap between immigrants and non-immigrants is larger in Montreal than in Toronto. Greater emphasis on official language training could enhance immigrants’ ability to fully participate in the local labour market.
Policy Discussion
While the Canadian labour market has shown resilience post-pandemic and continued to perform relatively well in 2024, significant disparities across regions, industries, and demographic groups highlight opportunities to improve participation and employment outcomes. Further, Canada’s declining productivity poses a challenge to the labour market’s ability to drive sustained economic growth and competitiveness.
Demographic shifts, particularly an aging population, continue to affect participation rates and contribute to some shortages. Notably, the expansion of the health industry and the associated labour shortages are closely tied to Canada’s aging population. However, in some industries, average offered wages have not risen enough to attract a larger labour supply, and employers have not sufficiently adopted alternative strategies, such as capital investment and automation, to address their workforce needs.
Addressing these challenges requires a holistic approach. Beyond automation and higher wages, investing in existing workers and removing barriers to labour-market participation by underrepresented groups – such as women, youth, Indigenous Peoples, and seniors – can significantly improve labour market outcomes.
Regional differences in economic conditions contribute to provincial variations in the participation of seniors, while differences in pension and retirement policies play an important role in driving discrepancies in retirement timing across countries. Gradually increasing the normal retirement age is a strategy adopted by some countries to encourage later retirement among seniors. In Canada, the federal government in Budget 2019 offered a way to make later retirement financially more attractive by increasing the Guaranteed Income Supplement (GIS) earnings exemption, allowing seniors to retain more of their increased income if they choose to work. However, provincial measures aimed at boosting older workers’ labour force participation have had mixed results. For instance, Lacroix and Michaud (2024) found that a tax credit in Quebec designed to boost employment among older workers had no significant impact on transitions in or out of the labour force, with only modest effects on earnings for those aged 60 to 64. The study concluded that this measure was not a cost-effective way to increase public revenue or employment rates for older workers.
While the Conservative government in 2012 announced a plan to gradually raise the eligibility age for Canada’s Old Age Security benefits from 65 to 67 starting in 2023, the newly elected Liberal government cancelled the plan in 2016. However, with an aging population and increasing longevity, Canada should reconsider gradual adjustments to the normal retirement age and the earliest access age to help sustain public pension systems and ease demographic pressures. This approach aligns with successful international models, though it requires careful implementation to account for differences in job types and income levels.
Seniors today are healthier and living longer, and delaying retirement can offer both personal and economic benefits and ease demographic transitions (Robson and Mahboubi 2018). Longer working lives allow individuals to accumulate greater retirement savings, reducing the risk of financial insecurity in old age. Working longer has also been linked to better cognitive function, mental well-being, and social engagement.
That said, raising the retirement age would affect workers differently depending on their occupations and financial situations. While high-income, knowledge-based workers may benefit from extended careers through flexible work arrangements or hybrid options, many low-income workers in physically demanding jobs – such as those in construction, manufacturing, or caregiving – may find it challenging to work longer. Policies promoting flexible work options, lifelong learning initiatives, and encouraging and monitoring training program uptake22 can help older workers stay in the workforce longer and maintain their skills (Mahboubi and Mokaya 2021).23 Targeted support, such as enhanced workplace accommodations, phased retirement options, and retraining programs for workers in physically demanding jobs, could ensure that a later retirement age does not disproportionately burden lower-income individuals.
In response to population aging and existing labour shortages, Canada has increasingly relied on higher levels of immigration. However, the overqualification of immigrants’ skills and credentials, particularly among those from non-Western countries, remains a persistent issue. The successful integration of newcomers into the workforce is important to mitigate the short-term impact of an aging population on the labour market and enhance productivity. For example, recognizing the credentials of foreign-trained professionals in fields like healthcare could increase their productivity and earnings, helping to address the chronic shortage of healthcare workers. However, many skilled immigrants hold qualifications in regulated fields overseen by provincial regulatory bodies, which creates considerable barriers to entering the labour market. While these regulations aim to uphold public safety, they differ among provinces. Over the past few years, several provincial governments have taken steps to reduce barriers for foreign-trained immigrants. For instance, British Columbia and Nova Scotia have expedited credential assessments for foreign-trained healthcare professionals, which helped expand their healthcare workforce. Other provinces should consider adopting similar initiatives.
Licensed workers, either immigrants or non-immigrants, in these occupations also face barriers if they wish to change their province of residence. Easing provincial labour mobility in regulated professions could help reduce regional labour shortages in these sectors. Ensuring immigrants’ skills and qualifications are recognized and accepted by employers is also important.
Canada also needs to adopt more effective settlement strategies, with a strong emphasis on improving language proficiency for immigrants who struggle with communication skills. Language training tailored to workplace culture can also bridge language gaps and help newcomers obtain licences to integrate into the labour market. A notable example is the Health English Language Pro (HELP) program, which was launched by ACCES Employment to support internationally educated physicians. The program pairs Canadian physician volunteers with internationally trained medical graduates to help them acquire the necessary medical English skills. Furthermore, in recent years, the expansion of language training facilities has not kept pace with the explosive increase in the number of permanent and temporary immigrants. Governments need to systematically evaluate settlement service agencies to assess the returns on investment and enhance the effectiveness of these services in the labour market.
In addition to reducing regional disparities and improving labour market fluidity – making it easier for workers to transition between jobs – Canada should also focus on increasing GDP per capita by encouraging greater capital investment (Robson, Kronick and Kim 2019; Gu 2024; Robson and Bafale 2023 and 2024) and promoting the adoption of new technologies (e.g., AI, robotics, and automation), with a focus on increasing productivity and complementing the skills of the existing workforce.
Canada’s labour productivity has declined recently – a worrisome trend. Enhancing labour productivity involves addressing skill shortages, overqualification and mismatches. Policies that encourage training and promote automation, as well as higher wages in high-demand sectors, are essential. The potential of AI should also be explored to support labour productivity and mitigate skills and labour shortages (Mahboubi and Zhang 2023). However, it is equally important to provide support for the displacement of low-skilled workers who may be impacted by automation. Governments and employers should focus on training programs that align with the evolving demands of the labour market, including reskilling and upskilling initiatives for those at risk of displacement.
Conclusion
Addressing the challenges of an aging population, a lower senior participation rate, the overqualification of immigrants’ skills, and declining labour productivity requires comprehensive and targeted policy interventions. Canada’s labour market will benefit from proactive measures that support both its existing workforce and newcomers while addressing the demographic pressures ahead.
To ensure sustainable economic growth and greater labour market participation, the following policy actions should be considered:
The federal government should gradually raise the normal retirement age to 67 and assess the benefits of delaying the earliest access age for pension benefits, in line with successful international models.
Provincial governments should adopt targeted policies to support older workers, such as promoting flexible work arrangements, part-time career opportunities, and self-employment options, particularly in regions like the Atlantic provinces, where senior participation is notably low.
All levels of government should invest in high-quality training programs that equip individuals with the skills needed for the evolving labour market, such as digital skills and job search strategies, with a focus on underrepresented groups like seniors, Indigenous Peoples, and youth.
Provinces and regulatory bodies should collaborate to streamline the licensing process for skilled immigrants, enabling foreign-trained professionals to meet local regulatory requirements more efficiently. They should also work together to ease labour mobility in regulated occupations, ensuring that qualifications are recognized across regions without compromising service quality.
The federal government should invest in enhancing settlement strategies for immigrants, including providing language training tailored to workplace culture. It is also important to evaluate the effectiveness of existing programs to ensure they adequately support newcomers’ integration into the workforce.
Employers, in collaboration with governments, should integrate automation and advanced technologies such as AI to boost productivity while ensuring that workers’ skills align with the evolving demands of the economy.
By implementing these policies, Canada can better navigate labour market imbalances, enhance its labour force participation, and position itself for sustainable economic growth in the face of demographic and technological change.
The authors extend gratitude to Pierre Fortin, Mikal Skuterud, Steven Tobin, William B.P. Robson, Rosalie Wyonch, and several anonymous referees for valuable comments and suggestions. The authors retain responsibility for any errors and the views expressed.
References
Acemoglu, Daron, and Pischke, Jörn-Steffen. 1999. “The structure of wages and investment in general training.” Journal of Political Economy 107(3): 539-572.
Banks, James, et al. 2010. “Releasing Jobs for the Young? Early Retirement and Youth Unemployment in the United Kingdom.” In Social Security Programs and Retirement around the World: The Relationship to Youth Employment. University of Chicago Press, pp. 319-344.
Busby, Colin, and Ramya Muthukumaran. 2016. Precarious Positions: Policy Options to Mitigate Risks in Non-standard Employment. Commentary. Toronto: C.D. Howe Institute. December.
Casey, Bernard. 1998. “Incentives and Disincentives to Early and Late Retirement.” Working Paper AWP3.3. Paris: OECD.
Coile, Courtney, David Wise, Axel Börsch-Supan, Jonathan Gruber, Kevin Milligan, Richard Woodbury, Michael Baker, James Banks, Luc Behaghel, Melika Ben Salem, Paul Bingley, Didier Blanchet, Richard Blundell, Michele Boldrín, Antoine Bozio, Agar Brugiavini, Tabea Bucher-Koenen, Raluca Elena Buia, Eve Caroli, and Naohiro Yashiro. 2025. “Social Security and Retirement around the World: Lessons from a Long-term Collaboration.” Journal of Pension Economics and Finance 24: 8-30.
Coile, Courtney, Kevin S. Milligan, and David A. Wise. 2018. “Social Security Programs and Retirement Around the World: Working Longer – Introduction and Summary.” National Bureau of Economic Research. Working Paper 24584. May.
Fortin, Pierre, 2024. “Does immigration help alleviate economy-wide labour shortages?” CLEF Working Paper Series 70, Canadian Labour Economics Forum (CLEF), University of Waterloo.
Fortin, Pierre, 2025. The Immigration Paradox: How an Influx of Newcomers Has Led to Labour Shortages. Commentary 677. Toronto: C.D. Howe Institute. February.
Gagnon, Morgan, and Sithandazile Kuzviwanza. 2023. “Mapping Employment Supports for Québec’s Racialized and Immigrant English-speaking Communities.” Provincial Employment Roundtable (PERT). July.
Grenier, Gilles, and Serge Nadeau. 2011. “Immigrant access to work in Montreal and Toronto.” Canadian Journal of Regional Science 34(1): 19-32.
Gu, Wulong. 2024. “Investment Slowdown in Canada After the Mid-2000s: The Role of Competition and Intangibles.” Statistics Canada. DOI: https://doi.org/10.25318/11f0019m2024001-eng
Guillemette, Yvan. 2006. “Misplaced Talent: The Rising Dispersion of Unemployment Rates in Canada.” E-Brief 33. Toronto: C.D. Howe Institute.
Haun, Chris. 2023. “A Detailed Analysis of Canada’s Post-2000 Productivity Performance and Pandemic-Era Productivity Slowdown.” Center for the Study of Living Standards. Available at https://www.csls.ca/reports/csls2023-11.pdf
Herbertsson, Tryggvi Thor, and Mike Orszag. 2003. “The Early Retirement Burden: Assessing the Costs of the Continued Prevalence of Early Retirement in OECD Countries.” IZA Discussion Paper, No. 816, July.
Holland, Brian. 2018. “Defining and Measuring Workforce Development in a Post-Bipartisan Era.” GLO Discussion Paper Series # 234, Global Labour Organization (GLO).
___________. 2019. “The Design and Uses of an Employability Index to Evaluate the Vertical Workforce Development Process.” New Horizons in Adult Education and Human Resource Development 31(2): 41-59.
Jousten, Alain, Mathieu Lefèbvre, Sergio Perelman, and Pierre Pestieau. 2010. “The Effects of Early Retirement on Youth Unemployment: The Case of Belgium.” In Social Security Programs and Retirement Around the World: The Relationship to Youth Employment. University of Chicago Press, pp. 47-76.
Kalwij, Adriaan, Arie Kapteyn, and Klaas de Vos. 2010. “Retirement of Older Workers and Employment of the Young.” De Economist 158 (4): 341-359.
Lacroix, Guy, and Pierre-Carl Michaud. 2024. “Tax Incentives and Older Workers: Evidence from Canada.” HEC Montreal. Working Paper. September.
Lefebvre, Pierre, and Philip Merrigan. 2008. “Child-Care Policy and the Labour Supply of Mothers with Young Children: A Natural Experiment from Canada.” Journal of Labour Economics 26 (3): 519-48.
Mahboubi, Parisa. 2017a. The Power of Words: Improving Immigrants’ Literacy Skills. Commentary 486. Toronto: C.D. Howe Institute. May.
___________. 2017b. “Talkin’ ‘Bout My Generation: More Educated, But Less Skilled Canadians.” E-Brief. Toronto: C.D. Howe Institute.
___________. 2024. Quality Over Quantity: How Canada’s Immigration System Can Catch Up With Its Competitors. Commentary 654. Toronto: C.D. Howe Institute. February.
Mahboubi, Parisa, and Amira Higazy. 2022. Lives Put on Hold: The Impact of the COVID-19 Pandemic on Canada’s Youth. Commentary 624. Toronto: C.D. Howe Institute. July
Mahboubi, Parisa, and Momanyi Mokaya. 2021. The Skills Imperative: Workforce Development Strategies Post-COVID. Commentary 609. Toronto: C.D. Howe Institute.
Mahboubi, Parisa, and Tingting Zhang. 2023. Empty Seats: Why Labour Shortages Plague Small and Medium-Sized Businesses and What to Do About It. Commentary 648. Toronto: C.D. Howe Institute. November.
___________. 2024. Harnessing Immigrant Talent: Reducing Overqualification and Strengthening the Immigration System. Commentary 672. Toronto: C.D. Howe Institute.
OECD. 2022. What has been the impact of the COVID-19 pandemic on immigrants? An update on recent evidence. OECD Publishing. Paris. https://doi.org/10.1787/65cfc31c-en
Picot, Garnett, and Tahsin Mehdi. 2024. “The Provision of Higher- and Lower-skilled Immigrant Labour to the Canadian Economy.” Statistics Canada. DOI: https://doi.org/10.25318/36280001202400900005-eng
Robson, William B.P. 2019. Thin Capitalization: Weak Business Investment Undermines Canadian Workers. Commentary 550. Toronto: C.D. Howe Institute. August.
Robson, William B.P., and Mawakina Bafale. 2024. Underequipped: How Weak Capital Investment Hurts Canadian Prosperity and What to Do about It. Commentary 666. Toronto: C.D. Howe Institute. September.
Robson, William B.P., and Mawakina Bafale. 2023. Working Harder for Less: More People but Less Capital Is No Recipe for Prosperity. Commentary 647. Toronto: C.D. Howe Institute. June.
Robson, William B.P., and Parisa Mahboubi. 2018. “Inflated Expectations: More Immigrants Can’t Solve Canada’s Aging Problem on Their Own.” E-Brief. Toronto: C.D. Howe Institute. November.
Robson, William B.P., Jeremy Kronick and Jacob Kim. 2018. Tooling Up: Canada Needs More Robust Capital Investment. Commentary 520. Toronto: C.D. Howe Institute. September.
Schimmele, Christoph, and Feng Hou. 2024. “Trends in Education–Occupation Mismatch among Recent Immigrants with a Bachelor’s Degree or Higher, 2001 to 2021.” Statistics Canada. May 22. DOI: https://doi.org/10.25318/36280001202400500002-eng
Skuterud, Mikal. 2023. “Canada’s Missing Workers: Temporary Residents Working in Canada.” E-Brief 345. Toronto: C.D. Howe Institute. September.
Skuterud, Mikal. 2025. “The Growing Data Gap on Canada’s Temporary Resident Workforce.” E-Brief 367. Toronto: C.D. Howe Institute. February.
Looking for the ultimate destination to celebrate Irish culture this March 17? The best cities to celebrate St Patrick’s Day have been revealed.
Our friends at Casino.com have taken the guesswork out of it by counting the Irish bars in the biggest North American cities. By comparing the number of Irish pubs with the population of each city, they ranked the best cities to grab a Guinness this St Patrick’s Day.
The best cities to celebrate St Patrick’s Day
Rank | City | State/Province | Country | Irish bars per 1 million people
1 | Boston | Massachusetts | United States | 41.5
2 | St. John’s | Newfoundland and Labrador | Canada | 32.3
3 | San Francisco | California | United States | 19.8
4 | Kelowna | British Columbia | Canada | 16.5
5 | Seattle | Washington | United States | 16.0
6 | Washington | District of Columbia | United States | 13.4
7 | Baltimore | Maryland | United States | 12.3
8 | Philadelphia | Pennsylvania | United States | 10.2
9 | Regina | Saskatchewan | Canada | 8.9
10 | Kitchener | Ontario | Canada | 7.6
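The per-capita methodology behind this ranking is easy to reproduce. In the sketch below, the bar counts and populations are rough illustrative assumptions (only St. John’s roughly matches the figures mentioned in this article: about 17 Irish bars for about half a million people):

```python
# Sketch of the per-capita ranking described above.
# Bar counts and populations are rough illustrative assumptions,
# not the article's underlying dataset.
cities = {
    "Boston":        {"irish_bars": 27, "population": 650_000},
    "St. John's":    {"irish_bars": 17, "population": 527_000},
    "San Francisco": {"irish_bars": 16, "population": 808_000},
}

def bars_per_million(irish_bars, population):
    """Normalize the raw bar count by city size."""
    return irish_bars / population * 1_000_000

# Sort cities by Irish bars per million residents, highest first
ranking = sorted(cities.items(),
                 key=lambda kv: bars_per_million(**kv[1]),
                 reverse=True)

for rank, (city, stats) in enumerate(ranking, start=1):
    print(f"{rank}. {city}: {bars_per_million(**stats):.1f} Irish bars per 1M people")
```

Normalizing by population is what lets a small city like St. John’s outrank much larger ones: the raw bar count matters less than bars per resident.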
Boston, Massachusetts, is the best city to celebrate St Patrick’s Day, with 41.5 Irish bars per 1 million people. The city has a longstanding Irish heritage: Irish immigrants arrived as early as 1654, and many more followed during the Great Irish Famine in the late 1840s and early 1850s. Unsurprisingly, Boston is the ultimate St Patrick’s Day destination, with pubs including Solas Irish Pub and Mr. Dooley’s, both open until late.
Celebrate the luck of the Irish in St. John’s, Newfoundland and Labrador, which offers 32.3 Irish bars per 1 million people – about 17 Irish bars for a population of roughly half a million. Newfoundland is a hub for Irish culture, often dubbed the ‘most Irish island in the world’, and many Irish settlers chose to live in the province’s most populous city, St. John’s. With such a prominent Irish community, St. John’s ranks second among the best cities to spend St Patrick’s Day.
San Francisco is the third-best city to celebrate St Patrick’s Day, with 19.8 Irish bars per 1 million people. The city’s Irish community was largely established during the late 1840s and early 1850s, at the time of the Great Irish Famine, and by the 1880s around a third of its population is thought to have identified with Irish heritage, making it one of the best cities for Irish culture and a top St Patrick’s Day destination. For the Silo, Grace Burton.
From the rolling vineyards of Provence to the luxury Champagne houses of France, celebrities are turning their star power into wine brands. But do these bottles offer more than just a famous name on the label?
Whether it’s Brad Pitt’s critically acclaimed Château Miraval rosé or Jay-Z’s ultra-premium Ace of Spades Champagne, the world of celebrity-backed wines is booming. Some are passion projects backed by top-tier winemakers, while others are little more than expensive marketing plays. Below, vinicultural expert Peter Douglas breaks down a few top celebrity wines.
Wines and celebrities are an iconic match.
Both evoke a sense of lifestyle, luxury, and aspiration. So, it’s no surprise that some celebrities have ventured into the wine business, adding another income stream and supporting their brand image. Celebrity wines, like their owners, come in various styles. Some hip-hop artists favor Champagne, while others opt for rosé or non-alcoholic options.
Even if a celebrity isn’t a wine connoisseur themselves, aligning with a quality product enhances their personal brand. It sends a message that they have discerning taste, appreciate the finer things, and are willing to put their name behind something they believe in (or at least want you to think they believe in!).
Regardless of their involvement, the perception of quality is crucial. No celebrity wants to be associated with a wine considered cheap or poorly made. It could damage their reputation and make the whole venture backfire.
1. Brad Pitt and Angelina Jolie – Château Miraval (Provence, France)
When Brad Pitt and Angelina Jolie bought Château Miraval in 2011, it wasn’t just another celebrity purchase. Partnering with the quality-driven Perrin family, they released their first vintage in 2013, making waves in the rosé world.
Even after their war of the roses, Miraval remains a top-selling rosé. And for good reason.
It’s a classic Provence style: dry, medium-bodied, with bright acidity. Think fresh red cherries, ripe raspberries, and a touch of wild herbs. It’s crisp, refreshing, and has a lingering finish.
Is it worth the hype? Absolutely.
2. Francis Ford Coppola – Coppola Winery (California, USA)
Before celebrity wines were a thing, there was Francis Ford Coppola. The legendary director of The Godfather, Apocalypse Now, and The Conversation wasn’t just making cinematic masterpieces; he was quietly pioneering a wine empire, starting with vineyard purchases in the 1970s.
Coppola’s wine ventures are twofold: his namesake brand and the historic Inglenook estate in Napa Valley. While he sold the Francis Ford Coppola winery to Delicato Family Wines in 2021, Inglenook, a true Napa Valley icon, remains under his ownership.
His most important wines include:
Francis Ford Coppola Reserve Cabernet Sauvignon: The flagship from his Sonoma winery, known for its rich flavors and elegant structure. This is the wine for a special occasion, or, you know, just a Tuesday night when you feel like channeling your inner Vito Corleone.
Director’s Cut: This series takes a clever approach, linking each wine to a different aspect of filmmaking. It’s a fun concept and an excellent-quality wine.
Inglenook Rubicon: Inglenook is one of Napa’s “grand wine estates,” and Rubicon is its crown jewel. This is a highly sought-after, exceptional wine that commands respect (and a higher price tag). If you want to experience Napa Valley at its finest, this is a must-try.
Whether you’re a film buff, a wine aficionado, or just curious, Coppola’s wines are definitely worth a try. They sit in the premium segment, but the extra cost is worth it. After all, it’s an offer you can’t refuse.
3. Post Malone – Maison No. 9 (Provence, France)
From hip-hop stages to the rolling hills of Provence, Post Malone has made a surprising, yet somehow fitting, foray into the wine world. The American rapper, producer, guitarist, and purveyor of distinctive face tattoos launched his rosé brand, Maison No. 9, in 2020.
More than just a celebrity endorsement, Maison No. 9 is a genuine expression of Post Malone’s passion for wine and his vision for a rosé he truly enjoys. The name, a reference to the Nine of Swords tarot card, lends the brand a touch of mystique.
Produced in partnership with Estandon Vignerons, Maison No. 9 comes in a distinctive bottle and expresses the classic pale salmon pink hue typical of Provençal rosés.
On the nose, it’s a bouquet of inviting aromas: think fresh strawberries and raspberries mingling with delicate white floral notes and a whisper of citrus. The palate is dry and refreshing, with a vibrant acidity that keeps things lively. Flavors of ripe strawberry and watermelon dance on the tongue, complemented by a subtle citrusy edge.
Maison No. 9 hits the spot for those who appreciate a refreshing rosé and enjoy the energy of Post Malone’s music. It’s a well-made wine that shows he’s got more than just “Circles” in his repertoire.
4. Jay-Z – Armand de Brignac (Champagne, France)
A gold bottle, the iconic Ace of Spades logo, and a price tag that raises eyebrows: Armand de Brignac, or “Ace of Spades” as it’s more commonly known, is Jay-Z’s foray into the world of luxury Champagne. Launched in partnership with the Champagne house Terroirs des Maison, the brand was conceived as a symbol of opulence, a statement piece as much as a drink.
The blend itself is a multi-vintage cuvée, typically composed of about 40% Chardonnay, 40% Pinot Noir, and 20% Pinot Meunier. In the glass, it shimmers with a luminous pale gold hue, releasing a complex bouquet of ripe orchard fruits, bright citrus zest, and delicate notes of brioche. The palate is both creamy and vibrant, revealing layers of golden apple and juicy peach, with a subtle hint of toasted almond. An elegant mousse (the delicate bubbles) underscores the experience, leading to a persistent finish with a touch of minerality.
Jay-Z’s commitment to the brand is clear. In 2014, he acquired full ownership, solidifying his dedication. Over the years, Ace of Spades has become a fixture in nightclubs and at high-end events, cementing its status as a prestige Champagne. This hard work culminated in 2021, when the luxury conglomerate LVMH (Moët Hennessy Louis Vuitton) purchased a 50% stake in the brand, officially placing it among the titans of Champagne, alongside names like Dom Pérignon and Krug.
Ace of Spades is about more than just the bling; it’s an excellent-quality Champagne. Jay-Z has transformed from successful musician into a genuine force in the wine world, achieving “Empire State of Mind” status in a Champagne category he helped redefine.
5. Thomas Anders – Modern Talking (Rheinhessen, Germany)
From the synth-driven sounds of 80s pop to the quiet vineyards of Rheinhessen, Thomas Anders of Modern Talking has embarked on a new venture. The “Titans of Pop,” known for hits like “You’re My Heart, You’re My Soul,” have traded the stage for the cellar, with Anders partnering with the St. Antony winery.
This collaboration, based near his hometown, reflects his passion for wine. Since 2022, they’ve produced the “Anders Grauburgunder” (Pinot Gris) and expanded into sparkling and rosé wines. As Anders himself puts it, “For me, wine is more than just a drink…It’s the same with music.” However, these wines are only available through their website in select markets.
6. Kylie Minogue – Kylie Minogue Wines (Various Regions)
Like a “Timebomb,” Kylie Minogue exploded onto the wine scene in 2020, partnering with Benchmark Drinks. With a career spanning decades, Kylie Minogue has cemented her status as a global pop icon.
From her early days on the Australian soap opera Neighbours to her chart-topping hits like “I Should Be So Lucky,” “Spinning Around,” and “Can’t Get You Out of My Head,” Kylie has consistently reinvented herself, captivating audiences worldwide. Now, she’s bringing that same energy and style to the world of wine with her self-titled brand, which includes:
Prosecco Rosé: This fancy bottle, adorned with Kylie’s signature, holds a fruit-driven, light, and elegant sparkling wine. The Australian singer brings her musical flair to life through this bubbly sensation.
Kylie Minogue No Alcohol Sparkling Rosé: A refreshing and finely balanced non-alcoholic sparkling rosé with elegant hints of strawberries. Perfect for those who want to “Spin Around” without the hangover.
And many more! Kylie’s wine offerings are constantly expanding, including Sauvignon Blanc, Rosé, and other varietals and regions.
Kylie Minogue Wines has swiftly established itself as a serious and stylish brand dedicated to producing excellent wines. Her expanding portfolio includes numerous award winners, solidifying Kylie’s presence as a noteworthy figure in both the music and wine industries, and making this a brand to watch closely.
So are celebrity wines worth the hype?
Yes, these celebrity wine brands make great gifts and often deliver on quality. You pay a premium, but you get a taste of a lifestyle.
For the Silo, Peter Douglas, VinoVoss.com Wine Expert
Peter Douglas, DipWSET, is a wine expert with VinoVoss, the “AI Sommelier” smartphone app and web-based semantic wine search and recommendation system developed by BetterAI, which uses an advanced artificial-intelligence assist to help pick the right wine for any occasion. Douglas is an experienced wine trade professional with a diverse background spanning the HORECA industry, specialist stores, purchasing, portfolio management, and the general wine trade, along with hands-on winemaking experience that further deepens his understanding of the industry. He holds the WSET Level 4 Diploma in Wines and Spirits and is currently in Stage 2 of pursuing the wine industry’s most prestigious title, Master of Wine. Peter also consults for distributors and importers, helps on-trade venues enhance their wine portfolios, and works as a wine agent, sourcing specific SKUs at favorable prices for clients. He can be reached via www.VinoVoss.com.
Brunette knockout Christina Estrada modeled for some of the world’s top brands and appeared in the famous Pirelli calendar. Born in the USA, the glamorous Estrada has been based in London since 1998. She was previously married to Saudi billionaire Sheikh Walid Juffali, but the couple divorced in 2016, leaving Estrada the sole owner of a fabulous Beverly Hills villa. The supermodel has listed the stunning estate for sale at $26 million USD / $37.7 million CAD. According to the listing agent, Gary Gold, “This is the quintessential Beverly Hills estate located on the best block of the best street in the Flats. This is the type of home you see in a movie portraying the good life.”
Built in the 1930s, the mansion has been painstakingly restored, blending the irreplaceable craftsmanship of a bygone era with all the latest in modern luxury.
Spanning 9,000 square feet with five bedrooms, eight bathrooms, and luxe modern amenities, the residence will be sold fully furnished.
The two-story home boasts an imposing Italian-style facade. Old World styling is evident throughout the mansion, with columns, archways, wrought-iron details, and exquisite beamed ceilings. Enter through the impressive two-story foyer, featuring double staircases, coved archways, a wonderful chandelier, and a hand-painted ceiling. The chef’s kitchen offers top-of-the-line appliances and a spacious butler’s pantry, while the formal dining room includes seating for 12. A cozy breakfast nook provides a more relaxed atmosphere, with yard access for al fresco dining.
The spacious primary suite includes big windows for lots of natural light, a generously appointed bathroom with marble accents and a glamorous mirrored powder room, plus a walk-in closet fit for a supermodel’s wardrobe. Upstairs, find three more bedrooms with their own en-suites, furnished in a classic style. A step-down living room with steel casement windows and an attractive great room with an inviting fireplace offer lots of space for lounging. Meticulous attention to detail is evident in every room, while the chic but understated furnishings allow the home’s timeless beauty to shine.
The spectacular yard offers a resort-like atmosphere with a stylish pool, manicured lawns, and tall hedges for lots of privacy. Multiple balconies offer pool views, while the den and family room connect with the outdoor spaces for seamless indoor-outdoor living, taking advantage of LA’s year-round pleasant weather. The covered loggia is especially lovely, with intricate columns and a curtained gazebo. Other amenities include a library with built-in bookshelves, a five-car garage, and a separate guest house with two bedrooms and two baths.
Located just off world-famous Sunset Blvd, the mansion is convenient to the music and entertainment venues of the Sunset Strip, the high-end shops on Rodeo Drive, the Getty Museum, and the Los Angeles Country Club. Known for its beautiful homes on large lots, the Flats is one of 90210’s most exclusive neighborhoods. Just a few of the zipcode’s illustrious residents include Adele, Taylor Swift, Jennifer Aniston, Jack Nicholson, Ashton Kutcher and Katy Perry. For the Silo, Bob Walsh/ toptenrealestatedeals.com.
The listing is held by Gary Gold at Forward One. Photo Credit: Jennifer Mann, The Luxury Level
Microfibers were invented by Japanese textile company Toray in 1970, but the technology wasn’t used for cleaning until the late 1980s.
The key, as the name suggests, is in the fiber: Each strand is really tiny—100 times finer than human hair—which allows them to be packed densely on a towel. That creates a lot of surface area to absorb water and pick up dust and dirt. Plus, microfibers have a positive electric charge when dry (you might notice the static cling on your towels), which further helps the towel to pick up and hold dirt. “They tend to trap the dirt in but not allow it to re-scratch the finish,” explains professional concours detailer Tim McNair, who ditched old T-shirts and terry cloths for microfibers back in the 1990s.
These days, the little towels are ubiquitous and relatively cheap, but in order to perform wonders consistently, they need to be treated with respect. Below, a miniature guide to microfibers.
Care for Your Towels: Dos and Don’ts
“They’re just towels,” you might say to yourself. But if you want them to last and retain their effectiveness, microfiber towels need more care than your shop rags:
DO: Keep your microfiber towels together in a clean storage space like a Rubbermaid container. They absorb dirt so readily that a carelessly stored one will be dirty before you even use it.
DON’T: Keep towels that are dropped on the ground. It’s hard to get that gunk out and it will scratch your paint.
DO: Reuse your towels. “I have towels that have lasted 15 years,” says McNair. That said, he recommends keeping track of how they’re used. “I’ll use a general-purpose microfiber to clean an interior or two, and I’ll take them home and wash them. After about two, three washings, it starts to fade and get funky, and then that becomes the towel that does lower rockers. Then the lower rocker towel becomes the engine towel. After engines, it gets thrown away.”
DON’T: Wash your microfibers with scented detergent, which can damage the fibers and make them less effective at trapping dirt. OxiClean works great, according to McNair.
DO: Separate your microfibers from other laundry. “Make sure that you keep the really good stuff with the really good stuff and the filthy stuff with the filthy stuff,” says McNair.
DO: Air-dry your towels. Heat from the dryer can damage the delicate fibers. If you’re in a rush, use the dryer’s lowest setting.
How Do You Know Whether Microfiber Is Split or Not?
A widespread misunderstanding is that you can “feel” if a microfiber towel is made from split microfiber or not by stroking it with your hand. This is false!
The theory goes like this: if the towel seems to “hook” onto tiny imperfections on dry, unmoisturized hands, it is because the fibers are split and microscopically grab your skin. This is only partially true. As the folks at classiccarmaintenance.com explain, those microscopic hooks are far too small to feel individually, though together they do generate a general surface resistance called “grab.” The distinct hooking sensation you feel when you touch most microfiber towels comes from something else entirely: the tiny loops in loop-woven microfiber, which are large enough to catch on the imperfections of your hands (minute skin scales).
Try it for yourself: gently stroke a loop-weave microfiber towel of any kind, split or not. If your hands are dry and unmoisturized, you will feel the typical “hooking” sensation most people hate. It’s simply the loops that catch around the scales on your skin like mini lassos. Take a picture of the microfiber material with your smartphone, zoom in and you can clearly see the loops.
Now try stroking a cut microfiber towel which is not loop-woven, split or not, and it will not give that awful hooking sensation. If you take a picture of this material, you will see a furry surface without those loops. Because there are no loops, it won’t “hook”.
Now you know the truth: it’s the loops that latch onto your skin when you touch a microfiber towel, regardless if the towel is split microfiber or not. Tightly woven microfiber towels without pile (e.g. glass towels) can also have the “hooking” effect, caused by the way their fibers are woven, but less pronounced than loop weave towels.
Another misunderstanding is that a towel made of non-split microfiber will “push away” water and is non-absorbent. This also is not true!
Although an individual non-split fiber does not absorb water, water is still caught between the fibers. You can test this yourself: submerge a 100% polyester fleece garment (check the label), which is always made of non-split fiber, in a bucket of water and take it out after about 10 seconds. Wring it out over an empty bucket and you’ll see that it holds quite a bit of water; the garment as a whole is absorbent even though its fibers are not.
So, another myth is busted: non-split microfiber can’t be determined simply by testing if it holds water. You can however test how much water it holds. Compare it to a similar dry-weight towel that is known to be split 70/30 microfiber: Submerge both in a bucket of water. If they hold about the same amount of water, they are both split microfiber. If the 70/30 towel holds more than twice as much water, the test towel is more than likely non-split material.
Tim’s Towels
The budget pack of microfiber towels will serve you fine, but if you want to go down the detailing rabbit hole, there’s a dizzying variety of towel types that will help you do specific jobs more effectively. Here’s what McNair recommends:
General Use: German janitorial supply company Unger’s towels are “the most durable things I’ve ever seen,” says McNair.
Wheels and other greasy areas: This roll of 75 microfiber towels from Walmart is perfect for down-and-dirty cleaning, like wire wheels. When your towel gets too dirty, throw it away and rip a new one off the roll.
Glass: There are specific two-sided towels for glass cleaning. One side has a thick nap that is good for getting bugs and gunk off the windshield. The other side has no nap—just a smooth nylon finish—that’s good for a streak-free final wipe down.
The gambling industry in Canada has changed a lot over the years. While land-based casinos remain popular, online platforms have opened up new opportunities for players who prefer the convenience of playing from home. With better internet access, secure payment methods, and a wider variety of games, more Canadians are choosing online casinos over traditional ones.
Online casinos allow players to enjoy everything from classic slot machines to live dealer games, making them a preferred choice for those looking for excitement without visiting a physical casino. Many platforms are now licensed and regulated to ensure safe and fair play. Whether you are a beginner or an experienced player, the options available today cater to everyone.
One of the biggest attractions of online casinos is the variety of games they offer. Websites like CasinosInCanada publish detailed reviews and recommendations on the best platforms available, making it easier for players to choose a trustworthy casino.
Understanding Online Casino Regulations in Canada
The legal status of online gambling in Canada is unique. While gambling is allowed, it is regulated at the provincial level. This means that each province has its own rules regarding online casinos. For example, Ontario has introduced a fully regulated market through iGaming Ontario, allowing licensed operators to provide legal online gambling services.
Other provinces, such as British Columbia and Quebec, operate their own online casino platforms. However, many Canadians still play on international sites that accept players from Canada. These offshore casinos operate under licenses from jurisdictions such as Malta, Gibraltar, and Curacao.
Since each province sets its own rules, it’s important for players to check whether an online casino is legally allowed to operate in their area. Choosing a licensed casino ensures a safer experience, as these platforms follow strict guidelines to protect players.
The Most Popular Online Casinos in Canada
With so many online casinos available, choosing the right one can be difficult. Here are some of the top-rated casinos that Canadian players trust:
1. JackpotCity Casino
Established in 1998, JackpotCity is known for its wide selection of games.
Offers over 700 games, including slots, table games, and live dealer games.
New players can claim up to $1,800 in welcome bonuses.
Holds a license from the Malta Gaming Authority, ensuring a secure experience.
2. Spin Casino
Provides a user-friendly platform with high-quality games.
Features over 700 slot machines and table games.
New users can receive a $1,000 match bonus plus free spins.
Licensed by the Malta Gaming Authority.
3. LeoVegas Casino
Specializes in mobile gaming, offering over 2,500 games.
Known for fast withdrawals, often processed within 24 hours.
Provides a welcome bonus of up to $1,000 plus 100 free spins.
Holds licenses from iGaming Ontario and the Malta Gaming Authority.
4. PartyCasino
Features a mix of classic and modern casino games.
Offers frequent promotions and a loyalty rewards program.
Licensed by the UK Gambling Commission and the Gibraltar Regulatory Authority.
5. Bet365 Casino
A well-known international platform with a strong reputation.
Provides over 2,000 games, including slots, blackjack, and roulette.
New players receive 100 free spins with no wagering requirements.
Regulated by the UK Gambling Commission and iGaming Ontario.
How to Choose the Right Online Casino
If you are new to online gambling, selecting a reliable casino is crucial. Here are a few things to look for:
1. Licensing and Security
Always choose a casino that is licensed by a trusted authority. This ensures fair gaming and secure transactions. Look for SSL encryption, which protects personal and financial data.
2. Game Selection
A good casino should offer a wide variety of games, including slots, table games, and live dealer options. Make sure the platform provides games from reputable software providers.
3. Bonuses and Promotions
Many casinos attract new players with welcome bonuses, free spins, and loyalty programs. Read the terms and conditions carefully to understand wagering requirements before claiming any offers.
4. Payment Methods
Choose a casino that supports convenient and secure payment options. Common methods include credit cards, e-wallets, and bank transfers. Fast withdrawal processing is a key factor to consider.
5. Customer Support
Reliable customer support is essential for a smooth gaming experience. Look for casinos that offer 24/7 assistance through live chat, email, or phone.
The Future of Online Gambling in Canada
The online gambling industry in Canada is evolving rapidly. Ontario’s success in regulating its online market may encourage other provinces to introduce similar frameworks. This could lead to safer and more transparent gaming environments across the country.
Technology is also changing the way people play. Virtual reality (VR) and augmented reality (AR) are expected to enhance the gaming experience, making online casinos more interactive. Additionally, cryptocurrency payments may become more common, providing faster and more secure transactions.
Final Thoughts
Online casinos offer a great way to enjoy casino games without visiting a physical location. With the right precautions, players can have a fun and safe experience. Checking for proper licensing, choosing reputable platforms, and setting limits on gameplay are important steps in responsible gambling.
As the industry grows, more Canadian players will have access to better platforms, improved security, and a wider range of gaming options. Whether you are playing for fun or real money, always gamble responsibly and choose casinos that prioritize player protection. For the Silo, Vinayak Gupta.
Newcomers increase consumption and spending, and in doing so contribute to demand for labour in other sectors.
Study in Brief
This study investigates the effects of Canada’s expansive immigration policy, implemented between 2016 and 2024, on labour shortages. It explores how the influx of permanent and temporary immigrants has affected the balance between labour supply and demand, with attention to whether the policy has met one of its key objectives – alleviating shortages in labour markets.
It provides an analysis of labour market dynamics through the lens of the Beveridge curve, which tracks the joint path of unemployment and job vacancies over time. The study compares labour market tightness before, during, and after the pandemic and evaluates how rapidly rising immigration and the adoption of remote work have affected job vacancy rates in Canada.
The arrival of immigrant workers has expanded the supply of labour to employers, but has also generated additional income and spending, and hence greater demand for labour throughout the economy. The macroeconomic evidence from this study indicates that, on balance, the increase in demand generated by immigration has more than likely outpaced the additional supply, potentially making economy-wide labour shortages more widespread rather than alleviating them.
Introduction
Canada’s immigration levels began to accelerate in 2016, following a period of relative stability. From 2001 to 2015, the annual inflow of immigrants, including both permanent and temporary admissions, was reasonably stable at around 0.85 percent of the overall population. In the following years, despite a temporary contraction during the pandemic, this rate rose roughly fourfold, reaching 3.2 percent of the population in 2023.
This post-2015 expansion was consistent with recommendations from the Advisory Council on Economic Growth, established by Minister of Finance Bill Morneau in 2016. The Council’s 2016 report suggested that the annual number of permanent economic immigrants should be increased from 300,000 in 2016 to 450,000 in 2021, and to nearly double this number later. Its stated objectives were to increase population growth, reduce the old age dependency ratio, generate a bigger GDP, and accelerate the rise in real GDP per capita by easing shortages of high-skilled workers and other means. Policymakers, encouraged by the perceived success of Canada’s immigration program, embraced the idea that higher immigration levels could deliver even greater economic and demographic benefits.1 The Council also urged the government to facilitate admissions of temporary workers and attract more international students. The government responded by increasing permanent immigration levels from 270,000 in 2015 to 480,000 in 2024, allowing uncapped increases in temporary immigration, and trying to address shortages of low- as well as high-skilled labour.
The C.D. Howe Institute’s research has shown that the benefits of immigration in mitigating population aging, and supporting the growth of GDP per capita, have been more limited than expected (Mahboubi and Robson 2018; Doyle, Skuterud and Worswick 2024). The present study is an attempt to assess whether the policy has succeeded in meeting the goal of easing the challenges employers face in finding suitable candidates for their job openings. The answer to this question has clearly been a big “yes” at the level of the individual employer. Many employers are benefiting from the contribution of their new immigrant workers, which is the basis for the unrelenting support for more immigration by representative national business organizations.
It is less clear whether immigration has helped alleviate labour shortages in the overall economy. Immigration not only expands the supply of labour, but also adds to the demand for labour. Putting more immigrants to work generates an expansionary multiplier effect on gross domestic product (GDP) and national income. As the additional income is spent on various consumption and investment goods by households, businesses and governments, the demand for labour increases. The net effect of immigration on the difference between supply and demand in the aggregate economy is, therefore, a priori uncertain. It could be negative or positive.
My goal in this study is to uncover what simple economic logic, and the statistical evidence from Canadian macrodata, reveal about the direction and quantitative importance of the net effect of rising immigration on the economy-wide balance between the demand for, and the supply of, labour. I find that the demand has likely matched or exceeded the supply and has therefore increased the overall job vacancy rate at any given level of unemployment.
Labour Shortages and Job Vacancies
What do “labour shortages” mean, and how have they evolved since Canada’s immigration rate began to increase eight years ago? Employers feel they are short of labour when the number of unfilled job openings significantly exceeds the number of available employees with the necessary skills and qualifications to meet their operational needs. Each month, Statistics Canada reports the extent of labour shortages in various sectors and regions from its Job Vacancy and Wage Survey. It is called the “job vacancy rate” and is an estimate of the number of job vacancies as a percentage of total labour demand, including all occupied and vacant salaried jobs.
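The definition above can be written compactly; the notation here is shorthand for illustration, not Statistics Canada’s official notation:

```latex
\text{job vacancy rate} \;=\; \frac{V}{E + V} \times 100
```

where \(V\) is the number of vacant salaried positions and \(E\) the number of occupied ones, so that the denominator \(E + V\) represents total labour demand. With round, hypothetical numbers, 57 vacancies alongside 943 occupied jobs would give a vacancy rate of \(100 \times 57/1{,}000 = 5.7\) percent.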
Data on the job vacancy rate have been available since 2015 (Figure 1). After the oil-induced economic slowdown of 2014-2015, job vacancies increased from 2.3 percent of labour demand in mid-2016 to 3.3 percent in early 2020. No vacancy data were available from April to September 2020 due to a six-month pandemic-related pause in Statistics Canada’s survey. Moving through the spring 2020 recession, but with the unemployment rate still very high, job vacancies then increased swiftly, reaching a peak of 5.7 percent of all occupied and vacant jobs in the second quarter of 2022. But with the economic slowdown and slackened labour markets subsequently accompanying high interest rates, vacancies fell back to 3.0 percent of labour demand in the third quarter of 2024.
Immigration and Labour Supply and Demand
Since 2015, Canada’s job vacancy rate has fluctuated in response to three key macroeconomic factors: rising immigration, the pandemic, and fluctuations in aggregate economic activity.
Immigration has risen steadily in recent years, with both permanent and temporary entries increasing in each non-pandemic year (Figure 2). Permanent admissions rose from 272,000 in 2015 to 472,000 in 2023. This upward trend was guided by the multi-year immigration-level targets set each year since 2017 by the government in its Annual Report to Parliament on Immigration. For example, the target for permanent admissions in 2023 was set at 465,000 in the 2022 Report.
Temporary immigration includes holders of study or temporary work permits, asylum seekers, and their family members. They are collectively referred to as “non-permanent residents” by Statistics Canada. Prior to 2024, temporary immigration was excluded from the government’s annual targets. It was uncapped and followed demand from businesses and educational establishments. The net annual addition to temporary permits (new entries less exits to permanent residence and to abroad) rose from basically zero in 2015 to 190,000 in 2019, and 821,000 in 2023 (Figure 2).
Overall, total immigration – the sum of permanent and temporary immigration – increased fivefold from 263,000 in 2015 to 1,293,000 in 2023. Was this fivefold surge in immigration over eight years able to lower the job vacancy rate and reduce labour shortages in the aggregate Canadian economy? How could it not? Prima facie, the arrival of new immigrant workers increases the supply of labour, allowing recipient employers to fill their staffing gaps, at least in part. The addition of immigrant labour might suggest the “common sense” inference that labour scarcity has been eased throughout the economy.
However, it is erroneous to assume that simply because immigration solves the personnel shortage of individual employers, it will necessarily solve the problem of labour scarcity in the aggregate economy.
The error comes from focusing narrowly on increasing the supply of labour, while neglecting the simultaneous increase in the demand for labour that is generated by immigration. With more immigrants in the workforce, employers can produce more goods and services and generate more income for themselves, their employees, and their suppliers – a good thing. However, to assess the overall effect of immigration on labour scarcity, it is crucial to consider that this additional income will be spent on various consumer and investment goods. Immigrants allocate their new income, along with any savings brought from abroad, to essentials such as food, clothing, housing, transportation, personal care, and leisure. In turn, employers and their chains of suppliers invest more in construction, machinery and intellectual property. Furthermore, immigrants, employers and suppliers all contribute to taxes, which governments allocate to meet the increased demand for social services, including public housing, education, and healthcare. The growing demand for private and public goods and services will expand aggregate labour demand.
In other words, the hiring of immigrants initially adds to the supply of labour, but it also ends up adding to the demand for labour once the new income generated is spent throughout the economy and a multiplier effect is generated on GDP. On net, it is a priori uncertain whether the supply increases more than the demand, in which case labour would be made less scarce overall, or whether it is the demand that increases more than the supply, in which case labour would be made scarcer.
As a first attempt to clarify the picture, let us see how the excess of labour supply over labour demand evolved from 2016 to 2024 (Figure 3). I take labour supply to be the entire labour force (all workers who are employed or are looking for work), and labour demand to be the sum of employment and job vacancies (all jobs that are occupied or ready to be filled). Expressed as a percentage of the labour force, the difference between the two – excess supply – boils down to the difference between unemployment and job vacancies. Excess supply goes up or down depending on whether unemployment increases more or less than job vacancies.
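The accounting identity behind this measure can be made explicit. A minimal sketch, with illustrative numbers rather than actual Statistics Canada figures:

```python
# Excess labour supply, as defined in the text:
#   supply = labour force (employed + unemployed)
#   demand = employment + job vacancies
#   excess supply (% of labour force) = unemployment rate - vacancy share
# The numbers below are illustrative, not actual Statistics Canada figures.
employed = 20_000_000
unemployed = 1_300_000
vacancies = 650_000

labour_force = employed + unemployed
labour_demand = employed + vacancies

unemployment_rate = 100 * unemployed / labour_force
vacancy_share = 100 * vacancies / labour_force  # vacancies as % of labour force
excess_supply = 100 * (labour_force - labour_demand) / labour_force

# The identity: excess supply = unemployment rate - vacancy share
print(f"{excess_supply:.2f} = {unemployment_rate:.2f} - {vacancy_share:.2f}")
```

Because employment appears in both supply and demand, it cancels out, which is why excess supply "boils down" to unemployment minus vacancies.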
Figure 3 shows that the excess supply of labour has fluctuated widely since 2016. In the pre-pandemic period 2016-2019, it declined from 5.4 percent to 2.9 percent of the labour force. Labour became scarcer. During the pandemic year 2020, it shot up to 6.1 percent of the labour force. But in the aftermath, labour demand outpaced supply again so that by mid-2022 excess supply had dropped to a low of 0.3 percent of the labour force. Since then, it has risen back to 4.1 percent.
The time path of the excess supply of labour cannot alone determine whether the rise in immigration since 2016 has increased labour supply more or less than labour demand. Excess supply results from the interplay of three simultaneous determinants: rising permanent immigration and accelerating temporary immigration, the disruptions caused by the pandemic and its potential after-effects, and fluctuations in aggregate activity. For example, the declining excess supply in the pre-pandemic period 2016-2019 was the combined outcome of rising immigration and aggregate economic expansion. But the impact of rising immigration cannot be separated out from that of aggregate economic expansion by just looking at the trend in excess supply. Correctly identifying the net effect of each of the two factors requires a more comprehensive economic and statistical analysis of the data.
The Shifting Beveridge Curve
To identify the net effect of immigration on labour shortages, I will use a well-established tool called the Beveridge curve. The Beveridge curve offers valuable insights by highlighting the observed inverse relation between vacancies and unemployment.
William Beveridge (Beveridge 1960) used the unemployment rate as a main marker of fluctuations in aggregate activity, a practice business cycle analysts still follow to this day (Romer and Romer 2019; Hazell et al. 2022). He observed that vacancies and unemployment typically move in opposite directions through business cycles. He attributed the negative relationship to the pressure exerted by aggregate activity on economic potential. When aggregate economic activity was moving up to its full potential (as in Canada in 2016-2019), there were fewer unemployed workers and more job vacancies. Conversely, when activity was moving away from potential (as in Canada in 2023-2024), there were more unemployed workers and fewer job vacancies. Since 1960, this inverse relation between the job vacancy rate and the unemployment rate – now called the Beveridge curve – has played a key role in macroeconomic analysis of labour markets. It has been abundantly studied by researchers and has been identified in job vacancy and unemployment data in many countries (e.g., Blanchard and Diamond 1989; Pissarides 2000; Archambault and Fortin 2001; Elsby, Michaels and Ratner 2018; Michaillat and Saez 2021).
It is instructive to examine the trajectory of the Canadian unemployment – job vacancy relation in two-dimensional space from 2015 to 2024 (Figure 4). First, following the 2015 economic slowdown, the economic expansion of 2016-2019 brought a decrease in the unemployment rate and an increase in the job vacancy rate along a path that was consistent with a negatively sloped Beveridge curve. The sudden outbreak of the pandemic in early 2020 shattered this trajectory. The unemployment – job vacancy pair was sent far outward toward the northeast corner of the chart. From then until the end of 2021, it followed a new Beveridge curve to the northwest. During the recovery following the pandemic recession in the spring quarter of 2020, the unemployment rate decreased and the job vacancy rate increased along a path that was about parallel to that of 2015-2019, but at a much higher level. For instance, whereas the unemployment rate was the same in the summer quarter of 2021 as in the winter quarter of 2016 (7.25 percent), the job vacancy rate was twice as large in the former (4.2 percent) as it was in the latter (1.9 percent). Finally, as the pandemic faded, the unemployment – job vacancy pair did a loop to the west. A new post-pandemic Beveridge curve emerged along a southeasterly trend that looked parallel to, but somewhat higher than, the old pre-pandemic path of 2015-2019.2
This visual check reveals three distinct branches of the Beveridge curve: pre-pandemic, pandemic and post-pandemic. The start and end of the pandemic significantly shifted the vertical position of the curve in the unemployment – job vacancy space. Although the three branches are not perfectly aligned, they appear to be nearly parallel. According to the statistical results in Table 1 below, a one percent change in the unemployment rate corresponds to about a 1.5 percent change in the opposite direction in the vacancy rate – the Beveridge curve’s “elasticity.”
The shifts in the Canadian Beveridge curve during the pandemic are not an entirely unexpected development. Shifts have occurred from time to time in the past.3 As Figure 4 has shown, the Canadian Beveridge curve looked relatively stable before the pandemic in 2016-2019. Figure 5 is an idealized illustration of the position it occupied in the unemployment – job vacancy space in this period. However, starting in 2020, it shifted significantly. It first moved outward during the pandemic in 2020-2021 and then returned inward after the pandemic in 2022-2024.
As an initial assessment of the magnitude of these movements, I use the actual values of unemployment and job vacancies to calculate the implied monthly shifts in the Figure 5 Beveridge curve from January 2016 to October 2024. I then illustrate the implied vertical movements of the job vacancy rate corresponding to a given reference unemployment rate of 5.5 percent4 by averaging the results for each year from 2016 to 2024. The vertical height of the Beveridge curve calculated in this way increased from 2.8 percent in 2019 to nearly 6 percent in 2020-2021, and dropped back to 3.2 percent in 2024 (Figure 6).
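The normalization described above can be sketched under the assumption of a log-linear Beveridge curve with the Table 1 elasticity of about 1.42; the observations below are illustrative, not the actual monthly data:

```python
import math

# Assuming a log-linear Beveridge curve log(v) = c - e*log(u) with
# elasticity e ~ 1.42 (Table 1), each observed (unemployment, vacancy)
# pair implies a curve height, which can be re-expressed as the vacancy
# rate the curve implies at a reference unemployment rate of 5.5 percent.
# The observations below are illustrative, not actual data.
ELASTICITY = 1.42
U_REF = 5.5  # reference unemployment rate, percent

def implied_vacancy_at_reference(u_obs: float, v_obs: float) -> float:
    """Vacancy rate the curve through (u_obs, v_obs) implies at U_REF."""
    return v_obs * (u_obs / U_REF) ** ELASTICITY

# Illustrative monthly observations: (unemployment %, vacancy %)
observations = [(5.0, 5.5), (5.2, 5.9), (5.4, 6.0)]
heights = [implied_vacancy_at_reference(u, v) for u, v in observations]
annual_height = sum(heights) / len(heights)
print(f"Implied vacancy rate at {U_REF}% unemployment: {annual_height:.1f}%")
```

Averaging these implied heights by year produces the series plotted in Figure 6.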
The Beveridge curve’s elevation at around 3.2 or 3.3 percent in the post-pandemic period 2023-2024 is higher than its height of 2.8 or 2.9 percent in the pre-pandemic period 2018-2019. This can be attributed to shifts in the ratio of two background factors: the intensity of labour reallocation across occupations, industries and regions, and the efficiency of the matching process between job openings and job seekers (Blanchard, Domash and Summers 2022). At any given unemployment rate, the job vacancy rate – and hence the Beveridge curve – will be higher the more intense the labour reallocation and the less efficient the job matching.
The first factor, the intensity of labour reallocation, is captured by the monthly flow of hires as a percentage of the labour force. It is shown as an index with 2019 = 100 in Figure 7. It increased by some 10 percent during the pandemic of 2020-2021. Labour moved from transport industries and those requiring person-to-person contact toward electronic communications and home deliveries. There was a displacement from traditional businesses and occupations to those allowing work from home. However, in 2022-2024 labour reallocation calmed down and its intensity decreased by some 15 percent below its 2019 level. This pushed the Beveridge curve downward.
The second factor, the efficiency of job matching, reflects the capacity of labour markets to generate hires at the observed levels of unemployment and job vacancies. It is shown as an index with 2019 = 100 in Figure 8. It experienced a sharp drop of nearly 20 percent during the pandemic (2020-2021). Factors contributing to this decline include the increasing physical distance between vacant positions and available candidates, as well as the widening gap between the demand for and supply of skills. Also, the rise in illnesses and the increased popularity of remote work during the pandemic may have contributed to a decline in job search intensity. As a result, employers found it more difficult to match job offers with suitable job seekers. Matching efficiency did not recover in 2022-2024. It remained some 20 percent below its pre-pandemic level of 2018-2019. This pushed the Beveridge curve upward.
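One common way to construct such an index (one of several; the text does not specify the exact method used) is to take the residual of a Cobb-Douglas matching function. A minimal sketch, assuming a benchmark matching elasticity of 0.5 and illustrative data:

```python
# Matching-efficiency index as the residual of an assumed Cobb-Douglas
# matching function: hires = A * u^alpha * v^(1-alpha). The residual A,
# normalized to 2019 = 100, is the efficiency index. alpha = 0.5 is an
# assumed benchmark elasticity; all figures are illustrative, not the
# paper's actual data or method.
ALPHA = 0.5

def matching_efficiency(hires: float, u: float, v: float) -> float:
    """Efficiency A implied by observed hires, unemployment u, vacancies v."""
    return hires / (u ** ALPHA * v ** (1 - ALPHA))

# Illustrative yearly averages: (hires rate, unemployment rate, vacancy rate)
data = {2019: (3.0, 5.7, 3.2), 2021: (3.1, 7.4, 4.6), 2024: (2.6, 6.4, 3.5)}

base = matching_efficiency(*data[2019])
index = {yr: 100 * matching_efficiency(*obs) / base for yr, obs in data.items()}
for yr, val in sorted(index.items()):
    print(yr, round(val, 1))
```

Under this construction, the same flow of hires coexisting with more unemployment and more vacancies registers as lower matching efficiency.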
Between 2019 and 2024, movements in labour reallocation and matching efficiency had opposite effects on the height of the Beveridge curve. But the upward pressure on the curve from the 20 percent drop in matching efficiency outweighed the downward pressure from the 15 percent decline in labour reallocation. Therefore, as already pictured in Figures 4 and 6, the net outcome is that the Beveridge curve emerged from the pandemic at a higher level in 2024 than in 2019, implying a higher job vacancy rate for any given unemployment rate.
So far, I have used the Beveridge relation between job vacancies and unemployment as a broad interpretive framework for macroeconomic developments in Canada over the 2015-2024 period. First, I have focused on the effect of fluctuations in aggregate economic activity (captured by changes in unemployment) on the job vacancy rate. Second, I have noted that the onset and ending of the pandemic were big shifters of this unemployment – job vacancy trade-off, upward in 2020-2021 and downward in 2022-2024. Nevertheless, third, I have shown that, mainly due to a persistent 20 percent drop in job matching efficiency since 2019, the Canadian Beveridge curve occupied a higher vertical position in 2023-2024 than before the pandemic.
In addition to the pandemic, Canada’s immigration policy, characterized by rising immigration levels, is another major development that has impacted labour markets in recent years. Like the pandemic, this policy may have affected the level of the unemployment rate along the Beveridge curve, as well as the vertical position of the curve, through its impacts on labour reallocation and matching efficiency. The following sections try to assess the existence and magnitude of these potential effects of immigration.
Economic Logic
The Beveridge framework can be used to explain how the expansion of immigration in Canada before and after the pandemic could have produced a lasting decrease or increase in labour shortages. Excluding the pandemic’s influence, rising immigration may affect aggregate labour shortages in two mechanical ways: by causing labour markets to slide up or down along the Beveridge curve, or by shifting the entire position of the Beveridge curve upward or downward, resulting in a larger or a smaller number of job vacancies for any given unemployment rate.
The first scenario involves a slide along the Beveridge curve. If rising immigration moves the economy up and to the northwest, unemployment decreases and vacancies increase; if the economy descends to the southeast, unemployment increases and vacancies decrease, as shown in Figure 5.
A permanent increase in unemployment along a given Beveridge curve is not what is generally hoped for by policymakers and the public. We want to achieve a permanent reduction in labour scarcity without being forced to suffer a permanent increase in unemployment. Nevertheless, it is important to understand how rising immigration could impact unemployment permanently, such that a higher or lower unemployment rate would be structurally needed to keep inflation low and stable over time.
A rough check on whether a higher immigration rate has raised or lowered the national unemployment rate consists of seeing if the excess of the national rate over the rate of the experienced group, formed by the Canadian-born plus the immigrants landed more than five years earlier, was higher or lower in 2023 than in 2015. Labour force data indicate that the excess of the national rate over the rate of this experienced group did increase in this period, but by just 0.1 percentage point, owing essentially to the rising labour force share of immigrants landed less than five years earlier. Seen in this light, rising immigration does not seem to have had a meaningful direct effect on structural unemployment. This result is consistent with research by Dion and Dodge (2023), who found no significant change in the national unemployment rate needed to keep inflation stable, known as the noninflationary rate of unemployment, that could be attributed to rising immigration.
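The arithmetic behind this rough check is a share-weighted decomposition of the national unemployment rate. A minimal sketch, with illustrative figures rather than actual Labour Force Survey data:

```python
# The national unemployment rate is a labour-force-share-weighted average
# of the rate for recent immigrants (landed < 5 years earlier) and the
# "experienced" group (Canadian-born plus immigrants landed > 5 years
# earlier). The gap between the national and experienced-group rates
# therefore depends on the recent-immigrant share and rate.
# All figures below are illustrative, not actual Labour Force Survey data.
def national_gap(share_recent, u_recent, u_experienced):
    """Excess of the national rate over the experienced group's rate."""
    national = share_recent * u_recent + (1 - share_recent) * u_experienced
    return national - u_experienced

gap_2015 = national_gap(0.03, 9.5, 6.8)   # small share of recent immigrants
gap_2023 = national_gap(0.05, 8.5, 5.3)   # larger share by 2023
print(f"Change in gap: {gap_2023 - gap_2015:.2f} percentage points")
```

The gap grows mainly with the labour force share of recent immigrants, which is the mechanism the text describes.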
It is a relief to see that rising immigration has not entailed a permanent reduction in the job vacancy rate by permanently pushing the national unemployment rate upward. There is evidence, though, that rising immigration has led to greater cyclical volatility of unemployment. First, the phenomenal expansion in the number of new residents since 2021 is known to have contributed to the strong demographic pressure on the demand for housing and, hence, to the significant increase in the cost of rented and owned accommodation. The Bank of Canada has acknowledged that the persistence of high shelter inflation consequently acts “as a material headwind against the return of inflation to the 2 percent target” (Bank of Canada 2024). In other words, through this channel, rising immigration is prolonging the current period of slower growth and higher unemployment. Second, the difference in cyclical sensitivity of the unemployment rate, between the above-defined experienced group and immigrants landed less than five years earlier, seems to have increased. In the economic slowdown during the spring quarter of 2024, the unemployment rate was 4.0 points higher than a year before for immigrants landed less than five years earlier, but only 0.7 point higher for the experienced group. The difference of 3.3 points between them was larger than in the 2009 and 2020 recessions. It could be due in part to the rising share of the low-skilled population of immigrant workers, which is more exposed to layoffs.
This study is primarily concerned with the permanent structural effects of rising immigration on unemployment, which look small, and not with the short-term economic and social costs associated with the greater cyclical volatility of unemployment around its steady state. Nevertheless, the possibility that these short-term costs are real should be kept in mind. Easing labour scarcity by tolerating more unemployment, whether of the short- or long-term variety, is an outcome our policies should try to avoid.
The other way rising immigration may have impacted aggregate labour shortages is by moving the vertical position of the entire Beveridge curve up or down in the unemployment-job vacancy space. Ultimately, we want to know whether rising immigration has increased the job vacancy rate and worsened labour shortages, or whether it has decreased the vacancy rate and alleviated the shortages, at every given level of unemployment.
The combined visual evidence presented by Figures 4 to 8 above implies that the Beveridge curve did shift upward somewhat from the pre-pandemic to the post-pandemic period, particularly due to a persistent 20 percent drop in job matching efficiency. Has rising immigration in Canada contributed to this evolution? Bowlus, Miyairi and Robinson (2016) conducted a longitudinal study of the job search behaviour of immigrants to Canada in 2002-2007. Results imply that heightened immigration may reduce matching efficiency in the short run, as new immigrants often face a lower rate of job offers than natives during their initial integration period. Based on US data, Barnichon and Figura (2015) focused on the two primary determinants of aggregate matching efficiency: worker heterogeneity and labour market segmentation. They pointed out that matching efficiency would decline if workers with a lower-than-average search efficiency became more represented among job seekers, or if the dispersion between tight labour submarkets and slack ones increased. These two conditions would seem to apply to the Canadian context with rising immigration. Lu and Hou (2023) have identified a major shift of immigration toward lower-skilled workers, and a significant relative tightening of labour markets such as construction, accommodation, food, business support services, education, healthcare, and social services. The statistical analysis below will provide a test of whether in recent years rising immigration has in fact shifted the Beveridge curve upward and intensified labour scarcity, or not.
Rising immigration is not the only macroeconomic development that may conceivably have affected aggregate labour shortages in the post-pandemic period. It is entirely conceivable that some of the changes triggered suddenly by the pandemic shock may have persisted into the post-pandemic era. Potentially, the most important of these is the widespread shift to work from home (Aksoy et al. 2023). The pandemic can be seen as a mass natural experiment that brought millions of workers in Canada, and other countries, to suddenly experience more work from home, to value its benefits, and to stick to it thereafter, often with a surprising upside in productivity.
The percentage of Canadian workers aged 15 to 69 who work most of their hours from home was 7 percent in early 2020. It sprang to 41 percent in the great confinement month of April 2020, and then declined as the pandemic evolved and faded out. But it was still holding up around 20 percent in the first half of 2024, which was three times as large as the 7 percent of early 2020.
The large increase in the percentage of Canadians working primarily from home has introduced an increase in worker heterogeneity compared to the pre-2020 period. With more workers satisfied with their work from home, fewer are incentivized to seek new jobs, particularly of the traditional variety. Following the Barnichon and Figura (2015) result, this could partly explain the reduction in job matching efficiency that has so far kept the Beveridge curve at a higher level than otherwise.
The economic logic developed in this section suggests that rising immigration and increased work from home may have contributed to the 20 percent loss of matching efficiency that has kept the post-pandemic height of the Canadian Beveridge curve at a level higher than before the pandemic. (However, fully confirming this hypothesis is beyond the scope of this study).
Statistical Analysis
This section summarizes an analysis of the factors influencing job vacancies in Canada, focusing on immigration and the rise of work-from-home arrangements. Introducing the rate of work from home as a factor is done to verify whether the shift to work from home that was initiated by the pandemic, but persisted in 2022-2024 (Schirle 2024), affected the position of the Beveridge curve.5
The analysis spans six Canadian regions – Atlantic Canada, Quebec, Ontario, the Prairies (Manitoba and Saskatchewan), Alberta, and British Columbia – across the periods from 2015 to 2019 (pre-pandemic) and 2022 to 2024 (post-pandemic).
Table 1 summarizes the key statistical results. Consistent with expectations, it shows that the Beveridge relationship between vacancies and unemployment is negative, with a precisely estimated elasticity of -1.42 in the two models. The results also show that immigration has been a significant contributor to the rise in job vacancies in Canada. Specifically, Model 1 estimates that a one percentage point increase in the immigration rate is associated with an 8.12 percent increase in the job vacancy rate after one year. This suggests that rising immigration has pushed the Beveridge curve upward, increasing the job vacancy rate at each unemployment rate over the period. However, when accounting for the rise in work-from-home arrangements in Model 2, the effect of immigration is smaller, at 3.21 percent,6 reflecting the additional impact of remote work.7 The positive effect of work-from-home arrangements is estimated at 0.85 percent.
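The structure of such a model can be sketched as a log-log panel regression with region fixed effects. The specification below is an assumption inferred from the text (the paper's exact model may differ), estimated on synthetic data generated with coefficients chosen to mimic Table 1:

```python
import numpy as np

# Sketch of a Model-1-style regression on *synthetic* data:
# log(vacancy rate) on log(unemployment rate), the immigration rate, and
# region fixed effects. The specification is an assumption based on the
# text; the true coefficients below (-1.42, 0.08) are chosen to mimic
# Table 1's elasticity and immigration effect.
rng = np.random.default_rng(0)
n_regions, n_quarters = 6, 27
region = np.repeat(np.arange(n_regions), n_quarters)

log_u = rng.normal(np.log(6.0), 0.15, n_regions * n_quarters)
imm_rate = rng.normal(1.0, 0.3, n_regions * n_quarters)  # % of population
region_fe = rng.normal(0.0, 0.1, n_regions)[region]

log_v = 1.0 - 1.42 * log_u + 0.08 * imm_rate + region_fe \
        + rng.normal(0, 0.02, n_regions * n_quarters)

# Design matrix: one dummy per region absorbs the fixed effects
X = np.column_stack([log_u, imm_rate, np.eye(n_regions)[region]])
beta, *_ = np.linalg.lstsq(X, log_v, rcond=None)
print(f"Beveridge elasticity: {beta[0]:.2f}")   # close to -1.42
print(f"Immigration effect:   {beta[1]:.3f}")   # close to 0.08
```

With coefficients in log space, the 0.08 estimate reads as roughly an 8 percent rise in the vacancy rate per one percentage point of immigration, matching the way Table 1's effects are quoted in the text.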
These results suggest that both factors – immigration and remote work – have played a significant role in pushing the Beveridge curve upward, making it more difficult to match available workers with job openings.
While both factors contribute to the rise in job vacancies, their high correlation complicates the ability to isolate their individual effects. The correlation between immigration and remote work is particularly strong, which makes it challenging to assess their independent impacts.8 As a result, the evidence for immigration’s effect on job vacancies in Model 2 is less powerful than it would be if the data allowed sharper estimation.9 However, the findings from Model 2 indicate that the combined effects of both immigration and remote work have contributed to higher job vacancies, suggesting that increasing immigration alone is unlikely to solve labour shortages in the short term.
To be specific, statistical calculation of Model 2 indicates an 82 percent chance that rising immigration has left the job vacancy rate unchanged or raised it, and only an 18 percent chance that it has lowered it.10 In other words, increased immigration is more than four times as likely to have raised aggregate labour demand by at least as much as it raised supply than to have raised demand by less than supply. In short, it is unlikely that rising immigration in Canada has helped the country solve its economy-wide problem of labour shortages by reducing the job vacancy rate at any given unemployment rate.
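A probability statement of this kind follows from a point estimate and its standard error under an approximate-normality assumption. A minimal sketch; the standard error below is hypothetical (chosen to be consistent with an 82 percent probability), not the paper's actual value:

```python
from statistics import NormalDist

# How a statement like "an 82 percent chance the effect is >= 0" follows
# from a point estimate and its standard error, assuming an approximately
# normal sampling distribution. The standard error below is hypothetical,
# chosen to be consistent with 82 percent; it is not the paper's value.
estimate = 3.21   # percent effect on the vacancy rate (Model 2)
std_error = 3.51  # hypothetical standard error

z = estimate / std_error
prob_nonnegative = NormalDist().cdf(z)
print(f"P(effect >= 0) = {prob_nonnegative:.2f}")
```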
A natural question is whether the effect of immigration on job vacancies varies between permanent and temporary immigration. So far, an expanded version of Model 2, which distinguishes between these factors by analyzing the permanent and temporary immigration rates separately, has found no significant difference in their four-quarter total effects.11 Future analyses could benefit from disaggregating data by industry, as the impact of immigration and working from home may vary across sectors. For instance, remote work affects sectors like technology differently than it does retail or construction.
Discussion and Conclusion
This paper’s conclusion, drawn from statistical analysis of the macrodata, runs contrary to the views of business organizations, which have campaigned relentlessly in favour of increases in permanent and temporary economic immigration in the past several years (e.g., Business Council of Canada 2022; Canadian Manufacturers & Exporters 2023; Canadian Federation of Independent Business 2021; Conseil du patronat du Québec 2022). Their position is understandable and grounded in a genuine concern to address labour shortages. By filling the vacancies, economic immigration enables firms to produce more and maintain or increase profitability.
The evidence presented here does not question the important role immigration can play for individual employers, whose need for additional employees is acute and urgent. However, in economics, everything depends on everything. The direction and importance of a phenomenon, confirmed at a microeconomic level with regard to a particular business, government organization, or sector, can be different or even reversed at the macroeconomic level, once all spillovers into the rest of the economy are accounted for. In his 1955 introductory textbook, the renowned American economist Paul Samuelson warned against the risk of the “fallacy of composition,” where it is assumed that what is true for individual parts is automatically true for the whole economy.
In the case of immigration, the fallacy of composition consists of believing that the advantages accruing to employers that hire immigrants can simply be added up and said to extend to the whole economy. What the present study has uncovered is that this belief is not corroborated by the macroeconomic evidence from the recent experience of Canadian regions. It is true that immigration eases up the dearth of personnel in firms that hire newcomers, which is clearly a good thing. But it is also true, conversely, that it worsens the shortage of labour in industries that must cater to the additional demand for goods and services generated by the addition to total GDP. The induced increase in the demand for labour in the aggregate economy can offset or even exceed the initial expansion of supply, and thereby amplify economy-wide labour shortages on net. The insights I have extracted from Canadian regional data suggest that rising immigration has more likely redistributed or increased labour scarcity across the economy than reduced it overall. The political implication is that, if labour shortages persist or intensify across the country despite fast-rising immigration, business organizations’ insistent demands for still more immigration will not abate.
The vision of immigration as an economy-wide offset to labour scarcity is also reductionist. To take account solely of the hoped-for benefits accruing directly to employers of new immigrants overlooks the fact that immigration is a global and transformative phenomenon. The purpose of immigration is not only to serve the interests of a particular group. It is of concern to a whole society for reasons that are no doubt partly economic, but also demographic, cultural, social, and humanitarian. Society is morally obligated to welcome and integrate all immigrants in the most humane manner. This requires much time and money. Society must also make sure that the pace of immigration is not so fast that it leads ethnic groups to “hunker down” (as Putnam 2007 found) and provokes serious economic disequilibria in sectors that must absorb the induced increase in demand, such as construction, housing, health, education and social services. The overall pace and composition of immigration must balance individual interests against the challenges it brings to society.
Among these costs are the potential negative repercussions on productivity and wage growth stemming from the open-door immigration policy that Canada followed until recently. Two key implications merit attention. First, investment in housing, business investment to equip newcomers with the required physical and human capital, and government investment in public infrastructure to provide social services have not kept pace with fast-rising immigration. Second, the open-door policy has made it easy for employers to rely on low-skilled foreign workers to meet high labour demand, which has been concentrated in low-wage industries (Lu and Hou 2023). While immigration alleviates immediate labour shortages, it may suppress the wage increases that would otherwise occur as labour markets tighten, and may likewise dampen capital investment.
For example, in the 12 months leading to 2024Q3, overall wages increased by 4 percent, outpacing inflation at 2 percent, but sectoral differences were stark: wages grew by 3.2 percent in the business sector compared to 6.3 percent in the non-commercial sector. These dynamics suggest that wage growth patterns are influenced by a blend of short-term factors and structural shifts, including immigration trends.
Data also show that business sector labour productivity in Canada is on a slippery slope. From 2021Q3 to 2024Q3, output per hour went down cumulatively by 2.3 percent, whereas it would have gone up by 3.2 percent if it had increased at the same rate as in 1999-2019 (Statistics Canada, table 36-10-0206). While there are many factors behind this slowdown in productivity growth, the high immigration rate may have been a contributor.
In March 2024, the government suddenly announced a reversal of its immigration policy. Immigration Minister Marc Miller committed his department to cutting Canada’s non-permanent resident population from 6.5 percent of the overall population in early 2024 to 5 percent in early 2027. In November, the details of the plan were set out in the 2024 Annual Report to Parliament on Immigration (Government of Canada 2024, Annex 4). There would be 446,000 fewer entries of new non-permanent residents than exits in each of 2025 and 2026. Annual net temporary immigration would thus be negative to this extent. The Annual Report also announced that the annual target for admissions to permanent immigration would be reduced from 485,000 in 2024 to 395,000 in 2025, 380,000 in 2026 and 365,000 in 2027.
If implemented as intended, scaling back the number of temporary and permanent immigrants will impact Canada’s aggregate labour supply significantly in 2025-2027. The working-age (15-64) population will stagnate instead of increasing by 800,000 or more, as it did in each of 2023 and 2024. An implication of the evidence reported above in Table 1 is that labour demand will likely decline alongside the reduction in labour supply because there will be 800,000 fewer consumers in the Canadian economy. While this policy reversal may not directly address the job vacancy rate, it could reduce vacancies by decreasing the overall demand for labour. As a result, while Canada’s aggregate GDP may contract, GDP per capita could increase, particularly if a smaller portion of national savings is directed toward demographic investments and the composition of immigration shifts toward fewer low-skilled immigrants.
The government’s policy reversal is a first step toward moderation. While it presents challenges, it also offers opportunities for improvement. When employers do not have the luxury of recruiting a rising stream of newcomers who are willing to accept low wages, it may push them to invest more in technology and work reorganization, and hence increase productivity. Furthermore, with a more moderate immigration level, the issue of the lack of absorptive capacity in the economy to provide enough skill-equivalent jobs to high-skilled immigrants will be less acute. Immigrants will see their skill utilization increase and their overqualification rate decrease. This shift could enhance Canada’s ability to attract global talent, aligning with the 2016 recommendation from the Advisory Council on Economic Growth that immigration should help address the shortage of high-skilled workers.
Appendix: Statistical Methodology and Data
This appendix provides a detailed description of the statistical analysis conducted to assess the factors influencing the job vacancy rate in Canada. The analysis spans 27 non-pandemic quarters, covering two periods: 2015Q2 to 2019Q4 (pre-pandemic) and 2022Q4 to 2024Q3 (post-pandemic). It includes data from six Canadian regions – Atlantic Canada, Quebec, Ontario, the Prairies (Manitoba and Saskatchewan), Alberta, and British Columbia. Each of these regions has a population of more than 2 million.
The dataset consists of 162 observations, representing the six regions across the 27 quarters. All labour market and population data are sourced from publicly available Statistics Canada tables. The job vacancy rate and unemployment rate are expressed as ratios of seasonally adjusted job vacancies and unemployment to the labour force. These variables are logarithmically transformed to account for the convexity of the Beveridge curve.
To estimate the relationship between the job vacancy rate and its key determinants, two regression models are specified:
• Model 1 includes the unemployment rate, the immigration rate (measured as the total number of new permanent immigrants and net additional non-permanent residents relative to the population, annualized), and three unconstrained lagged values of the immigration rate.
• Model 2 builds upon Model 1 by including the rate of work from home as an additional explanatory variable. The work-from-home rate is the fraction of workers aged 15 to 69 who work most of their hours from home in their main jobs. This model tests whether the pandemic-induced shift to remote work, which persisted post-pandemic, has affected the Beveridge curve and the job vacancy rate.
Both models incorporate regional and seasonal fixed effects to account for regional disparities and seasonal fluctuations in the labour market.
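As an illustration only, the specification of Model 1 can be sketched as an ordinary least squares regression with dummy variables for the regional and seasonal fixed effects. The panel below is entirely synthetic — the variable names, coefficient values, and ranges are my own stand-ins, not the study’s actual Statistics Canada data — but the design matrix mirrors the structure described above: log unemployment, the immigration rate with three lags, and region and season dummies over 6 regions and 27 quarters.

```python
import numpy as np

rng = np.random.default_rng(0)
regions, quarters = 6, 27
n = regions * quarters                   # 162 observations, as in the appendix

# Hypothetical synthetic panel standing in for the Statistics Canada tables
u = rng.uniform(0.04, 0.10, n)           # unemployment rate (share of labour force)
m = rng.uniform(0.00, 0.03, (n, 4))      # immigration rate and three lagged values
region = np.repeat(np.arange(regions), quarters)
season = np.tile(np.arange(quarters) % 4, regions)

# Illustrative data-generating process: log vacancy rate on the
# Beveridge-curve determinants plus regional and seasonal fixed effects
beta_u = -0.8                            # assumed unemployment elasticity
beta_m = np.array([5.0, 3.0, 2.0, 1.0])  # assumed immigration coefficients
fe_region = rng.normal(0.0, 0.10, regions)
fe_season = rng.normal(0.0, 0.05, 4)
log_v = (beta_u * np.log(u) + m @ beta_m
         + fe_region[region] + fe_season[season]
         + rng.normal(0.0, 0.01, n))

# Model 1 design matrix: log unemployment, immigration terms,
# region dummies (absorbing the intercept), season dummies (one dropped)
X = np.column_stack([np.log(u), m,
                     np.eye(regions)[region],
                     np.eye(4)[season][:, 1:]])
coef, *_ = np.linalg.lstsq(X, log_v, rcond=None)
print(f"estimated unemployment elasticity: {coef[0]:.2f}")
```

Model 2 would simply add a work-from-home column to the same design matrix.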
The author is grateful to Mario Fortin, Gilles Grenier, Jeremy Kronick, Nicolas Marceau, Parisa Mahboubi, Pascal Michaillat, Mario Polèse, Statistics Canada data analysts, Mikal Skuterud, Daniel Schwanen, Christopher Worswick and several anonymous referees for valuable comments and suggestions. The author retains responsibility for any errors and the views expressed.
References
Advisory Council on Economic Growth. 2016. “The Path to Prosperity: Resetting Canada’s Growth Trajectory.” Report of the Advisory Council on Economic Growth to the Minister of Finance of Canada. October.
Ahn, Hie Joo, and Leland Crane. 2020. “Dynamic Beveridge curve accounting.” Finance and Economic Discussion Series 2020-27. Board of Governors of the Federal Reserve System, Washington. March.
Aksoy, Cevat Giray, Jose Maria Barrero, Nicholas Bloom, Steven Davis, Mathias Dolls and Pablo Zarate. 2023. “Working from home around the world.” Brookings Papers on Economic Activity, Fall 2022, 281-330.
Archambault, Richard, and Mario Fortin. 2001. “The Beveridge Curve and Unemployment Fluctuations in Canada.” Canadian Journal of Economics 34(1): 58-81.
Bank of Canada. 2024. Monetary Policy Report. Ottawa, January.
Barlevy, Gadi, Jason Faberman, Bart Hobijn and Ayşegül Şahin. 2024. “The Shifting Reasons for Beveridge Curve Shifts.” Journal of Economic Perspectives 38(2): 83-106.
Barnichon, Régis, and Andrew Figura. 2015. “Labor market Heterogeneity and the Aggregate Matching Function.” American Economic Journal: Macroeconomics 7(4): 222-249.
Beveridge, William. 1960. Full Employment in a Free Society. Second edition. London: Allen & Unwin.
Blanchard, Olivier, and Peter Diamond. 1989. “The Beveridge Curve.” Brookings Papers on Economic Activity (1): 1-67.
Blanchard, Olivier, Alex Domash, and Lawrence Summers. 2022. “Bad news for the Fed from the Beveridge space.” Policy Brief. Peterson Institute for International Economics. July.
Bok, Brandyn, Nicolas Petrosky-Nadeau, Robert Valetta, and Mary Yilma. 2022. “Finding a soft landing along the Beveridge curve.” Economic Letter. Federal Reserve Bank of San Francisco. August.
Bowlus, Audra, Masashi Miyairi and Chris Robinson. 2016. “Immigrant job search assimilation in Canada.” Canadian Journal of Economics 49(1): 5-52.
Dion, Richard, and David Dodge. 2023. “Labour force growth and labour market gap in Canada: 2011 to 2032.” Working Paper. Ottawa: Bennett Jones, May.
Doyle, Matthew, Mikal Skuterud and Christopher Worswick. 2024. Optimizing Immigration for Economic Growth. Commentary No. 662. Toronto: C.D. Howe Institute.
Elsby, Michael, Ryan Michaels and David Ratner. 2015. “The Beveridge Curve: A Survey.” Journal of Economic Literature 53(3): 571-630.
Fortin, Pierre. 2024. “Does immigration help alleviate economy-wide labour shortages?” Working Paper #70, Canadian Labour Economics Forum, University of Waterloo. May.
Government of Canada. 2024. 2024 Annual Report to Parliament on Immigration. Immigration, Refugees and Citizenship Canada. October.
Hazell, Jonathon, Juan Herreño, Emi Nakamura, and Jón Steinsson. 2022. “The Slope of the Phillips Curve: Evidence from US states.” Quarterly Journal of Economics 137(3): 1299-1344.
Lam, Alexander. 2022. “Canada’s Beveridge curve and the outlook for the labour market.” Staff Analytical Note 2022-18, Bank of Canada, Ottawa. November.
Lu, Yuqian, and Feng Hou. 2023. “Foreign workers in Canada: distribution of paid employment by industry.” Economic and Social Reports. Catalogue No. 36-28-0001. Ottawa: Statistics Canada. December.
Mahboubi, Parisa, and William Robson. 2018. “Inflated Expectations: More Immigrants Can’t Solve Canada’s Aging Problem on Their Own.” E-Brief. Toronto: C.D. Howe Institute.
Michaillat, Pascal, and Emmanuel Saez. 2021. “Beveridgean unemployment gap.” Journal of Public Economics Plus 2. December.
Morissette, René, Vincent Hardy and Voltek Zolkiewski. 2023. “Working most hours from home: new estimates for January to April 2022.” Analytical Studies Branch Research Papers Series. Catalogue no. 11F0019M, No. 472. Ottawa: Statistics Canada. July.
Pissarides, Christopher. 2000. Equilibrium Unemployment Theory. Cambridge MA: MIT Press.
Putnam, Robert. 2007. “E Pluribus Unum: Diversity and community in the twenty-first century: The 2006 Johan Skytte Prize Lecture.” Scandinavian Political Studies 30(2): 137-174.
Romer, Christina, and David Romer. 2019. “NBER business cycle dating: Retrospect and prospect.” Paper prepared for the session “NBER and the Evolution of Economic Research, 1920–2020” at the ASSA Annual Meeting, San Diego, California, January 2020. Department of Economics, University of California, Berkeley.
Samuelson, Paul. 1955. Economics: An Introductory Analysis. New York: McGraw-Hill.
Schirle, Tammy. 2024. “Settling into a New Normal? Working from Home across Canada.” E-Brief. Toronto: C.D. Howe Institute.
Shimer, Robert. 2012. “Reassessing the Ins and Outs of Unemployment.” Review of Economic Dynamics 15(2): 127-148.
Gravitational action at a distance is non-Newtonian and independent of mass, but is proportional to intrinsic energy, distance, and time. Electrical action at a distance is proportional to intrinsic energy, distance, and time.
The conventional assumption that all energy is kinetic and proportional to velocity and mass has resulted in an absence of mechanisms to explain important phenomena such as stellar rotation curves, mass increase with increase in velocity, constant photon velocity, and the levitation and suspension of superconducting disks.
In addition, there is no explanation for the existence of the fine structure constant, no explanation for the value of the proton-electron mass ratio, no method to derive the spectral series of atoms larger than hydrogen, and no definitive proof or disproof of cosmic inflation.
All of the above issues are resolved by the existence of intrinsic energy.
Table of contents
Part One “Gravitation and the fine structure constant” derives the fine structure constant, the proton-electron mass ratio, and the mechanisms of non-Newtonian gravitation, including the precession rate of Mercury’s perihelion and stellar rotation curves.
Part Two “Structure and chirality” describes the structure of particles and the chirality meshing interactions that mediate action at a distance between particles and gravitons (gravitation) and between particles and quantons (electromagnetism), and describes the properties of photons (including the mechanisms of diffraction and constant photon velocity).
Part Three “Nuclear magnetic resonance” is a general derivation of the gyromagnetic ratios and nuclear magnetic moments of isotopes.
Part Four “Particle acceleration” derives the mechanism for the increase in mass (and mass-energy) in particle acceleration.
Part Five “Atomic spectra” reformulates the Rydberg equations for the spectral series of hydrogen, derives the spectral series of helium, lithium, beryllium, and boron, and explains the process to build a table of the spectral series for any elemental atom.
Part Six “Cosmology” disproves cosmic inflation.
Part Seven “Magnetic levitation and suspension” quantitatively explains the levitation of pyrolytic carbon, and the levitation, suspension and pinning of superconducting disks.
Part One
Gravitation and the fine structure constant
“That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has … any competent faculty of thinking can ever fall into it.”1
Intrinsic energy is independent of mass and velocity. Intrinsic energy is the inherent energy of particles such as the proton and electron. Neutrons are composite particles composed of protons, electrons, and binding energy. Atoms, composed of protons, neutrons, and electrons, are the substance of larger three-dimensional physical entities, from molecules to galaxies.
Gravitation, electromagnetism, and other action at a distance phenomena are mediated by gravitons, quantons, and neutrinos. Gravitons, quantons, and neutrinos are quanta that have a discrete amount of intrinsic energy and are emitted by particles in one direction at a time and absorbed by particles from one direction at a time. Emission-absorption events can be chirality meshing interactions that produce accelerations or achiral interactions that do not produce accelerations. Chirality meshing absorption of gravitons produces attractive accelerations, chirality meshing absorption of quantons produces either attractive or repulsive accelerations, and achiral absorption of neutrinos does not produce accelerations. The word neutrino is burdened with non-physical associations, thus achiral quanta are henceforth called neutral flux.
A single chirality meshing interaction produces a deflection (a change in position), but a series of chirality meshing interactions produces acceleration (serial deflections). A single deflection in the direction of existing motion produces a small finite positive acceleration (and inertia) and a single deflection in the direction opposite existing motion produces a small finite negative acceleration (and inertia).
There are two fundamental differences between the mechanisms of Newtonian gravitation and discrete gravitation. The first is that the Newtonian probability that two particles will gravitationally interact is 100%, while the discrete probability is significantly less. The second is the treatment of force. In Newtonian physics a gravitational force between objects always exists, the force is infinitesimal and continuous, and the strength of the force is inversely proportional to the square of the separation distance. In discrete physics the existence of a gravitational force depends on the orientations of the particles of which objects are composed, the force is discrete and discontinuous, and the number of interactions is inversely proportional to the square of the separation distance. While there are considerable differences in mechanisms, in many phenomena the solutions of the Newtonian and discrete gravitational equations are nearly identical.
There are similar fundamental differences between mechanisms of electromagnetic phenomena and in many cases the solutions of infinitesimal and discrete equations are nearly identical.
A particle emits gravitons and quantons at a rate proportional to particle intrinsic energy. A particle absorbs gravitons and quantons, subject to availability, at a maximum rate proportional to particle intrinsic energy. Each graviton or quanton emission event reduces the intrinsic energy of the particle and each graviton or quanton absorption event increases the intrinsic energy of the particle. Because graviton and quanton emission events continually occur but graviton and quanton absorption events are dependent on availability, these mechanisms collectively reduce the intrinsic energy of particles.
Only particles in nuclear reactions or undergoing radioactive disintegration emit neutral flux, but in the solar system all particles absorb all available neutral flux.
In the solar system, discrete gravitational interactions mediate orbital phenomena and, for objects in a stable orbit the intrinsic energy loss due to the emission-absorption of gravitons is balanced by the absorption of intrinsic energy in the form of solar neutral flux.
Within the solar system, particle absorption of solar neutral flux (passing through a unit area of a spherical shell centered on the sun) adds intrinsic energy at a rate proportional to the inverse square of orbital distance, and over a relatively short period of time, the graviton, quanton, and neutral flux emission-absorption processes achieve Stable Balance resulting in constant intrinsic energy for particles of the same type at the same orbital distance, with particle intrinsic energies higher the closer to the sun and lower the further from the sun.
The process of Stable Balance is bidirectional.
If a high energy body consisting of high energy particles is captured by the solar gravitational field and enters into solar orbit at the orbital distance of earth, the higher particle intrinsic energies will result in an excess of intrinsic energy emissions compared to intrinsic energy absorptions at that orbital distance, and the intrinsic energy of the body will be reduced to bring it into Stable Balance.
If, on the other hand, a low energy body consisting of low energy particles is captured by the solar gravitational field and enters into solar orbit at the orbital distance of earth, the lower particle intrinsic energies will result in an excess of intrinsic energy absorptions at that orbital distance compared to the intrinsic energy emissions, and the intrinsic energy of the body will be increased to bring it into Stable Balance.
In an ideal two-body earth-sun system, a spherical and randomly symmetrical earth is in Stable Balance orbit about a spherical and randomly symmetrical sun. A randomly symmetrical body is composed of particles that collectively emit an equal intensity of gravitons (graviton flux) through a unit area on a spherical shell centered on the emitting body.
Unless otherwise stipulated, in this document references to the earth or sun assume they are part of an ideal two-body earth-sun system.
The gravitational intrinsic energy of earth is proportional to the gravitational intrinsic energy of the sun because total emissions of solar gravitons are proportional to the number of gravitons passing into or through earth as it continuously moves on a spherical shell centered on the sun (and also proportional to the volume of the spherical earth, to the cross-sectional area of the earth, to the diameter of the earth and to the radius of the earth).
Likewise, because the sun and the earth orbit about their mutual barycenter, the gravitational intrinsic energy of the sun is proportional to the gravitational intrinsic energy of the earth because total emissions of earthly gravitons are proportional to the number of gravitons passing into or through the sun as it continuously moves on a spherical shell centered on the earth (and also proportional to the volume of the spherical sun, to the cross-sectional area of the sun, to the diameter of the sun and to the radius of the sun).
We define the orbital distance of earth equal to 15E10 meters and note earth’s orbit in an ideal two-body system is circular. If additional planets are introduced, earth’s orbit will become elliptical, and the radius of earth’s former circular orbit will be equal to the semi-major axis of the elliptical orbit.
We define the intrinsic photon velocity c equal to 3E8 m/s and equal in amplitude to the intrinsic constant Theta which is non-denominated. We further define the elapsed time for a photon to travel 15E10 meters equal to 500 seconds.
The non-denominated intrinsic constant Psi, 1E-7, is equal in amplitude to the intrinsic magnetic constant denominated in units of Henry per meter.
Psi is also equal in amplitude to the 2014 CODATA vacuum magnetic permeability divided by 4π (after 2014, CODATA values for permittivity and permeability are defined and no longer reconciled to the speed of light); to half the electromagnetic force (in units of Newton) between two straight ideal (constant diameter and homogeneous composition) parallel conductors with center-to-center distance of one meter, each carrying a current of one Ampere; and to the intrinsic voltage of a magnetically induced minimum amplitude current loop (3E8 electrons per second).
The intrinsic electric constant, the inverse of the product of the intrinsic magnetic constant and the square of the intrinsic photon velocity, is equal to the inverse of 9E9 and denominated in units of Farad per meter.
The Newtonian mass of earth, denominated in units of kilogram, is equal to 6E24, and equal in amplitude to the active gravitational mass of earth, denominated in units of Einstein (the unit of intrinsic energy).
The active gravitational mass is proportional to the number of gravitons emitted and the Newtonian mass is proportional to the number of gravitons absorbed. Every graviton absorbed contributes to the acceleration and inertia of the absorber, therefore the Newtonian mass is also the inertial mass.
We define the radius of earth, the square root of the ratio of the Newtonian inertial mass of earth divided by orbital distance, or the square root of the ratio of the active gravitational mass of earth divided by its orbital distance, equal to the square root of 4E13, 6.325E6, about 0.993 the NASA volumetric radius of 6.371E6. Our somewhat smaller earth has a slightly higher density and a local gravitational constant equal to 10 m/s² at any point on its perfectly spherical surface.
We define the Gravitational constant at the orbital distance of earth, the ratio of the local gravitational constant of earth divided by its orbital distance, equal to the inverse of 15E9.
The unit kilogram is equal to the mass of 6E26 protons at the orbital distance of earth, and the proton mass equal to the inverse of 6E26.
The proton intrinsic energy at the orbital distance of earth is equal to the inverse of the product of the proton mass and the mass-energy factor delta (equal to 100). Within the solar system, the proton intrinsic energy increases at orbital distances closer to the sun and decreases at orbital distances further from the sun. Changes in proton intrinsic energy are proportional to the inverse square of orbital distance.
The Newtonian mass of the sun, denominated in units of kilogram, is equal to 2E30, and equal in amplitude to the active gravitational mass of the sun, denominated in units of Einstein.
The active gravitational mass is proportional to the number of gravitons emitted and the Newtonian mass is proportional to the number of gravitons absorbed. Every graviton absorbed contributes to the acceleration and inertia of the absorber, therefore the Newtonian mass is also the inertial mass.
The active gravitational mass of earth divided by the active gravitational mass of the sun is equal to the intrinsic constant Beta-square and its square root is equal to the intrinsic constant Beta.
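The numerical definitions above can be cross-checked mechanically. The sketch below is a consistency check only: every value is the document’s own defined quantity (not a CODATA or NASA value), and the variable names are mine.

```python
import math

# The document's defined values for the ideal two-body earth-sun system
Theta = 3e8                       # intrinsic photon velocity, m/s
Psi = 1e-7                        # intrinsic magnetic constant amplitude
a_earth = 15e10                   # defined orbital distance of earth, m
M_earth, M_sun = 6e24, 2e30       # Newtonian masses, kg

assert a_earth / Theta == 500.0                      # photon travel time, s
assert math.isclose(1 / (Psi * Theta**2), 1 / 9e9)   # intrinsic electric constant

r_earth = math.sqrt(M_earth / a_earth)               # defined radius of earth
assert math.isclose(r_earth, math.sqrt(4e13))        # ≈ 6.325e6 m

G = 10.0 / a_earth                                   # Gravitational constant, = 1/15e9
assert math.isclose(G * M_earth / r_earth**2, 10.0)  # surface gravity, m/s²

m_proton = 1 / 6e26                                  # proton mass, kg
E_proton = 1 / (m_proton * 100)                      # proton intrinsic energy (delta = 100)
assert math.isclose(E_proton, 6e24)

beta_sq = M_earth / M_sun                            # intrinsic constant Beta-square
beta = math.sqrt(beta_sq)
print(f"Beta = {beta:.6e}")                          # ≈ 1.732051e-03
```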
The charge intrinsic energy ei, denominated in units of intrinsic Volt, is proportional to the number of quantons emitted by an electron or proton. The charge intrinsic energy is equal to Beta divided by Theta-square, the inverse of the square root of 27E38.
Intrinsic voltage does not dissipate kinetic energy.
The electron intrinsic energy Ee, equal to the ratio of Beta-square divided by Theta-cube, the ratio of Psi-square divided by Theta-square, the product of the square of the charge intrinsic energy and Theta, and the ratio of the intrinsic electron magnetic flux quantum divided by the intrinsic Josephson constant, is denominated in units of Einstein.
The intrinsic electron magnetic flux quantum, equal to the square root of the electron intrinsic energy, is denominated in units of intrinsic Volt second.
The intrinsic Josephson constant, equal to the inverse of the square root of the electron intrinsic energy, the ratio of Theta divided by Psi and the ratio of the photon velocity divided by the intrinsic sustaining voltage of a minimum amplitude superconducting current, is denominated in units of Hertz per intrinsic Volt.
The discrete (dissipative kinetic) electron magnetic flux quantum, equal to the product of 2π and the intrinsic electron magnetic flux quantum, is denominated in units of discrete Volt second, and the discrete rotational Josephson constant, equal to the intrinsic Josephson constant divided by 2π and the inverse of the discrete electron magnetic flux quantum, is denominated in units of Hertz per discrete Volt. These constants are expressions of rotational frequencies.
We define the electron amplitude equal to 1. The proton amplitude is equal to the ratio of the proton intrinsic energy divided by the electron intrinsic energy.
We define the Coulomb, ec, equal to the product of the charge intrinsic energy and the square root of the proton amplitude divided by two. The Coulomb denominates dissipative current.
We define the Faraday equal to 1E5, and the Avogadro constant equal to the Faraday divided by the Coulomb.
Lambda-bar, the quantum of particle intrinsic energy, equal to the intrinsic energy content of a graviton or quanton, is the ratio of the product of Psi and Beta divided by Theta-cube, the ratio of Psi-cube divided by the product of Beta and Theta-square, the product of the charge intrinsic energy and the intrinsic electron magnetic flux quantum, and the charge intrinsic energy divided by the intrinsic Josephson constant.
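The chain of identities for the charge intrinsic energy, the electron intrinsic energy, and Lambda-bar can likewise be verified arithmetically. The sketch below uses only the document’s definitions; variable names are mine.

```python
import math

# Intrinsic electromagnetic constants per the document's definitions
Theta, Psi = 3e8, 1e-7
beta = math.sqrt(6e24 / 2e30)            # intrinsic constant Beta

e_i = beta / Theta**2                    # charge intrinsic energy, intrinsic Volt
assert math.isclose(e_i, 1 / math.sqrt(27e38))

E_e = beta**2 / Theta**3                 # electron intrinsic energy, Einstein
assert math.isclose(E_e, Psi**2 / Theta**2)
assert math.isclose(E_e, e_i**2 * Theta)

flux_quantum = math.sqrt(E_e)            # intrinsic electron magnetic flux quantum
K_J = 1 / flux_quantum                   # intrinsic Josephson constant
assert math.isclose(K_J, Theta / Psi)    # = 3e15 Hz per intrinsic Volt

lam_bar = Psi * beta / Theta**3          # Lambda-bar, quantum of intrinsic energy
assert math.isclose(lam_bar, Psi**3 / (beta * Theta**2))
assert math.isclose(lam_bar, e_i * flux_quantum)
assert math.isclose(lam_bar, e_i / K_J)
print(f"Lambda-bar = {lam_bar:.6e}")     # ≈ 6.415e-36
```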
CODATA physical constants that are defined as exact have an uncertainty of 10 to 12 decimal places; therefore the exactness of Newtonian infinitesimal calculations is of a similar order of magnitude. We assert that Lambda-bar and proportional physical constants are discretely exact (equivalent to Newtonian infinitesimal calculations) because discretely exact physical properties can be expressed exactly to greater accuracy than can be measured in the laboratory.
All intrinsic physical constants and intrinsic properties are discretely rational. The ratio of two positive integers is a discretely rational number.
The ratio of two discretely rational numbers is discretely rational.
The rational power or rational root of a discretely rational number is discretely rational.
The difference or sum of discretely rational numbers is discretely rational. This property is important in the derivation of atomic spectra where it serves the same purpose as a Fourier transform in infinitesimal mathematics.
The intrinsic electron gyromagnetic ratio, equal to the ratio of the cube of the charge intrinsic energy divided by Lambda-bar square, is denominated in units of Hertz per Tesla.
The intrinsic proton gyromagnetic ratio, equal to the ratio of the intrinsic electron gyromagnetic ratio divided by the square root of the cube of the proton amplitude divided by two, and equal to the ratio of eight times the photon velocity divided by nine, is denominated in units of Hertz per Tesla.
The intrinsic conductance quantum, equal to the product of the intrinsic Josephson constant and the discrete Coulomb, is denominated in units of intrinsic Siemens.
The kinetic conductance quantum, equal to the intrinsic conductance quantum divided by 2π, is denominated in units of kinetic Siemens.
The CODATA conductance quantum is equal to 7.748091E-5.
The intrinsic resistance quantum, equal to the inverse of the intrinsic conductance quantum, is denominated in units of Ohm.
The kinetic resistance quantum, equal to the inverse of the kinetic conductance quantum, is denominated in units of Ohm.
The CODATA resistance quantum is equal to 1.290640E4.
The intrinsic von Klitzing constant, equal to the ratio of the discrete Planck constant divided by the square of the intrinsic electric constant, is denominated in units of Ohm.
The kinetic von Klitzing constant, equal to the ratio of the discrete Planck constant divided by the square of the discrete Coulomb, is denominated in units of Ohm.
The CODATA von Klitzing constant is equal to 2.581280745E4.
In Newtonian physics the probability particles at a distance will interact is 100% but in discrete physics a certain granularity is needed for interactions to occur.
A particle G-axis is a single-ended hollow cylinder. The mechanism of the G-axis is analogous to a piston which moves up and down at a frequency proportional to particle intrinsic energy. At the end of the up-stroke a single graviton is emitted and during a down-stroke the absorption window is open until the end of the downstroke or the absorption of a single graviton.
The difference (the intrinsic granularity) between the inside diameter of the hollow cylindrical G-axis and the outside diameter of the graviton allows absorption of incoming gravitons at angles that can deviate from normal (straight down the center) by plus or minus 20 arcseconds.
There are three kinds of intrinsic granularity: the intrinsic granularity in phenomena mediated by the absorption of gravitons and quantons; the intrinsic granularity in phenomena mediated by the emission of gravitons and quantons; and the intrinsic granularity in certain electromagnetic phenomena.
The intrinsic granularity in phenomena mediated by the absorption of gravitons or quantons by particles in tangible objects (with kilogram mass greater than one microgram, or 1E20 particles) is discretely infinite; therefore the average value of 20 arcseconds is discretely exact.
The intrinsic granularity in phenomena mediated by the emission of gravitons or quantons by particles is 20 arcseconds because gravitons and quantons emitted in the direction in which the emitting axis is pointing have an intrinsic granularity of not more than plus or minus 10 arcseconds.
The intrinsic granularity of certain electromagnetic phenomena arises in, for example, a Faraday disk generator, where the “Lorentz force” that causes the velocity of an electron to be at a right angle to the force also causes an additional directional change of 20 arcseconds in the azimuthal direction.
In the above diagram, the intrinsic granularity of graviton absorption is illustrated on the left.
Above center illustrates the aberration between the visible and the actual positions of the sun with respect to an observer on earth as the sun moves across the sky. Position A is the visible position of the sun, position B is the actual position of the sun, position B will be the visible position of the sun in 500 seconds, and position C will be the actual position of the sun in 500 seconds. The elapsed time between successive positions is proportional to the separation distance, but 20 arcseconds of aberration is independent of separation distance.
Above right illustrates the six directions within a Cartesian space and the six possible forms describing the six possible facing directions in which a vector can point. A vector pointing up the G-axis of particle A in the facing direction of particle B has one and only one of the six possible forms. The probability a gravitational interaction will occur, if the vector is facing in one of the other five facing directions, is zero. Therefore, a gravitational interaction involving a graviton emitted by a specific particle A and absorbed by a specific particle B is possible (not probable) in only one-sixth the total volume of Cartesian space.
We define the intrinsic steric factor equal to 6. The intrinsic steric factor is inversely proportional to the probability a specific gravitational intrinsic energy interaction can occur on a scale where the probability a Newtonian gravitational interaction will occur is 100%.
The intrinsic steric factor points outward from a specific particle located at the origin of a Cartesian space facing outward into the surrounding space. The intrinsic steric factor applies to action at a distance in phenomena mediated by gravitons and quantons.
To convert 20 arcseconds of intrinsic granularity into an inverse possibility, divide the 1,296,000 arcseconds in 360 degrees by the product of 20 arcseconds and the intrinsic steric factor.
A possibility is not the same as a probability. The possibility two particles can gravitationally interact (each with the other) is equal to 1 out of 10,800. The probability two particles will gravitationally interact (each with the other) is dependent on the geometry of the interaction.
Because Newtonian gravitational interactions are proportional to the quantum of kinetic energy, the discrete Planck constant, and discrete gravitational interactions are proportional to the quantum of intrinsic energy, Lambda-bar, the factor 10,800 is a conversion factor.
In a bidirectional gravitational interaction, the ratio of the square of the discrete Planck constant divided by the square of Lambda-bar is equal to 10,800.
In a one-directional gravitational interaction the ratio of the discrete Planck constant divided by Lambda-bar is equal to the square root of 10,800.
The discrete Planck constant is equal to Lambda-bar times the square root of 10,800 and denominated in units of Joule second.
The value of the discrete Planck constant, approximately 1.006 times larger than the 2018 CODATA value, is the correct value for the two-body earth-sun system and proportional to the intrinsic physical constants previously defined.
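Given the stated relations, Lambda-bar follows from the discrete Planck constant by division by the square root of 10,800. A sketch, assuming the factor 1.006 as the approximate scaling of the 2018 CODATA value (the text gives this only approximately):

```python
import math

CODATA_H = 6.62607015e-34      # 2018 CODATA Planck constant, Joule second
h_discrete = 1.006 * CODATA_H  # approximate; the text states the factor as ~1.006
lambda_bar = h_discrete / math.sqrt(10_800)

# one-directional ratio: h_discrete / lambda_bar = sqrt(10,800) ~ 103.923
print(h_discrete / lambda_bar)
# bidirectional ratio: the square of that ratio recovers 10,800
print((h_discrete / lambda_bar) ** 2)
```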
The CODATA fine structure constant alpha is equal to the ratio of the square of the CODATA electron charge divided by the product of two times the CODATA Planck constant, the CODATA vacuum permittivity and the CODATA speed of light (2018 CODATA values).
The intrinsic constant Beta is a transformation of the CODATA expression.
By substitution of the charge intrinsic energy for the CODATA electron charge, Lambda-bar for two times the CODATA Planck constant, the intrinsic electric constant for the CODATA vacuum permittivity and the intrinsic photon velocity for the CODATA speed of light, the dimensionless CODATA fine structure constant alpha is transformed into the dimensionless intrinsic constant Beta.
The existence of the fine structure constant and its ubiquitous appearance in seemingly unrelated equations is due to the assumption that phenomena are governed by kinetic energy, consequently measured values of phenomena governed or partly governed by intrinsic energy do not agree with the theoretical expectations.
A gravitational phenomenon governed by intrinsic energy is the solar system Kepler constant, equal to: the square root of the cube of the planet’s orbital distance divided by 4π-square times the orbital period of the planet; the product of the active gravitational mass of the sun and the Gravitational constant at the orbital distance of earth, divided by 4π-square; and the ratio of the product of the square of the planet’s velocity and the orbital distance of the planet, divided by 4π-square.
The intrinsic constant Beta-square, previously shown to be the ratio of the active gravitational mass of earth divided by the active gravitational mass of the sun, is also proportional to the key orbital properties of the sun, earth, and moon.
An electromagnetic phenomenon governed by intrinsic energy is the proton-electron mass ratio, here termed the electron-proton deflection ratio, equal to the square root of the cube of the proton intrinsic energy divided by the cube of the electron intrinsic energy, and to the square root of the cube of the proton amplitude divided by the cube of the unit electron amplitude.
The CODATA proton-electron mass ratio is a measure of electron deflection (1836.15267344) in units of proton deflection (equal to 1). Because the directions of proton and electron deflections are opposite, the electron-proton deflection ratio is approximately equal to the CODATA proton-electron mass ratio plus one.
In this document, unless otherwise specified (as in CODATA constants denominated in units of Joule proportional to the CODATA Planck constant), units of Joule are proportional to the discrete Planck constant.
The ratio of the discrete Planck constant divided by Lambda-bar, equal to the product of the mass-energy factor delta and omega-2, is denominated in units of discrete Joule per Einstein.
In the above equation the denomination discrete Joule represents energy proportional to the discrete Planck constant and the denomination Einstein represents energy proportional to Lambda-bar. The mass-energy factor delta converts non-collisional energy (action at a distance) into collisional energy in units of intrinsic Joule. The factor omega-2 converts units of intrinsic Joule into units of discrete Joule.
Omega factors correspond to the geometry of graviton-mediated and quanton-mediated phenomena.
We will begin with a brief discussion of electrical (quanton-mediated) phenomena then exclusively focus on gravitational phenomena for the remainder of Part One.
Electrical phenomena
The discrete steric factor, equal to 8, is the number of octants defined by the orthogonal planes of a Cartesian space.
Each octant is one of eight signed triplets (---, -+-, -++, --+, +++, +-+, +--, ++-) which correspond to the direction of the x, y, and z Cartesian axes.
A large number of random molecules, each with a velocity coincident with its center of mass, are within a Cartesian space. If the origin is the center of mass of a specific molecule1, then a random molecule2 is within one of the eight signed octants, and the same number of random molecules are within each octant. Likewise, the specific molecule1 is within one of the eight signed octants with respect to random molecule2. The possibility (not probability) of a center of mass collisional interaction between molecule2 and molecule1 is therefore equal to the inverse of the discrete steric factor (one in eight).
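The one-in-eight possibility can be illustrated with uniformly random molecule positions; a minimal sketch (the uniform distribution over a cube is an assumption made for illustration):

```python
import random

random.seed(1)
N = 100_000
counts = {}

for _ in range(N):
    # place a random molecule2 relative to a specific molecule1 at the origin
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    octant = ''.join('+' if c > 0 else '-' for c in (x, y, z))
    counts[octant] = counts.get(octant, 0) + 1

# each of the eight signed octants holds roughly one-eighth of the molecules
for octant in sorted(counts):
    print(octant, counts[octant] / N)
```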
The discrete and intrinsic steric factors correspond to the geometries of phenomena governed by discrete kinetic energy (proportional to the discrete Planck constant) and to phenomena governed by intrinsic energy:
The discrete steric factor points inward from a random molecule in the direction of a specific molecule and applies to phenomena mediated by collisional interactions.
The intrinsic steric factor points outward from a specific particle into the surrounding space and applies to phenomena mediated by gravitons and quantons (action at a distance).
The intrinsic molar gas constant, equal to the discrete steric factor, is the intrinsic energy denominated in units of intrinsic Joule per mole Kelvin.
The discrete molar gas constant, equal to the product of the intrinsic molar gas constant and omega-2, is the intrinsic energy denominated in units of discrete Joule per mole Kelvin. The discrete molar gas constant agrees with the CODATA value to within 1 part in 13,000.
The ratio of the CODATA electron charge (the elementary charge in units of Coulomb) divided by the charge intrinsic energy (in units of intrinsic Volt) is nearly equal to the discrete molar gas constant.
The intrinsic Boltzmann constant, equal to the ratio of the intrinsic molar gas constant divided by the Avogadro constant, is denominated in units of Einstein per Kelvin.
The discrete Boltzmann constant, equal to the product of omega-2 and the intrinsic Boltzmann constant, and to the ratio of the discrete molar gas constant divided by the Avogadro constant, is denominated in units of discrete Joule per Kelvin. The CODATA Boltzmann constant is equal to 1.380649×10⁻²³ Joule per Kelvin.
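Both agreement claims can be checked numerically. A sketch, assuming omega-2 equals the square root of 1.08 as defined in the gravitational phenomena section below, together with the 2018 CODATA values:

```python
import math

OMEGA_2 = math.sqrt(1.08)  # omega-2, one-directional geometric factor
AVOGADRO = 6.02214076e23   # CODATA Avogadro constant, per mole

R_intrinsic = 8            # intrinsic molar gas constant = discrete steric factor
R_discrete = R_intrinsic * OMEGA_2
R_CODATA = 8.314462618     # CODATA molar gas constant, Joule per mole Kelvin
print(R_discrete, abs(R_CODATA / R_discrete - 1))  # within 1 part in 13,000

k_discrete = R_discrete / AVOGADRO  # discrete Boltzmann constant
k_CODATA = 1.380649e-23             # CODATA Boltzmann constant, Joule per Kelvin
print(k_discrete, abs(k_CODATA / k_discrete - 1))
```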
Gravitational phenomena
Omega-2, the square root of 1.08, corresponds to one-directional gravitational interactions between non-orbiting objects (objects not by themselves in orbit, that is, the object might be part of an orbiting body but the object itself is not the orbiting body), for example graviton emission by the large lead balls or absorption by the small lead balls in the Cavendish experiment.
Omega-4, 1.08, corresponds to two-directional gravitational interactions (emission and absorption) between non-orbiting objects, for example the acceleration of the large lead balls or the acceleration of the small lead balls in the Cavendish experiment.
Omega-6, the square root of the cube of 1.08, corresponds to gravitational interactions between a planet and moon in a Keplerian orbit where the square root of the cube of the orbital distance divided by the orbital period is equal to a constant.
Omega-8, the square of 1.08, corresponds to four-directional gravitational interactions by non-orbiting objects, for example the acceleration of the small lead balls and the acceleration of the large lead balls in the Cavendish experiment.
Omega-12, equal to the cube of 1.08, corresponds to gravitational interactions between two objects in orbit about each other, for example the sun and a planet in orbit about their mutual barycenter.
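The omega factors enumerated above are all powers of the same base: writing omega-n as 1.08 raised to the power n/4 reproduces every listed value. A sketch:

```python
# omega-n = 1.08 ** (n / 4) reproduces each factor listed in the text
omega = {n: 1.08 ** (n / 4) for n in (2, 4, 6, 8, 12)}

print(omega[2])   # square root of 1.08, one-directional
print(omega[4])   # 1.08, two-directional
print(omega[6])   # square root of the cube of 1.08, Keplerian planet-moon orbits
print(omega[8])   # square of 1.08, four-directional
print(omega[12])  # cube of 1.08, mutual barycentric orbits
```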
Except where previously defined (the Gravitational constant at the orbital distance of earth, the orbital distance of earth, the mass and volumetric radius of earth, the mass of the sun) the following equations use the NASA2 values for the Newtonian masses, orbital distances, and volumetric radii of the planets.
The local gravitational constant for any of the planets is equal to the product of the Gravitational constant of earth and the Newtonian mass (kilogram mass) of the planet divided by the square of the volumetric radius of the planet.
The v2d value of a planetary moon is equal to the product of the Gravitational constant at the orbital distance of earth and the Newtonian mass of the planet.
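The v2d relation can be illustrated with a conventional example. Assuming NASA values for jupiter’s mass and Io’s orbit, and substituting the conventional gravitational constant for the text’s Gravitational constant at the orbital distance of earth, the square of Io’s orbital velocity times its orbital distance closely matches the product of G and jupiter’s Newtonian mass:

```python
import math

G = 6.674e-11            # conventional gravitational constant, m^3 kg^-1 s^-2
M_JUPITER = 1.898e27     # Newtonian (kilogram) mass of jupiter, NASA value

d_io = 4.218e8           # Io's orbital distance, m (NASA value)
T_io = 1.769138 * 86400  # Io's sidereal orbital period, s (NASA value)

v_io = 2 * math.pi * d_io / T_io  # mean orbital velocity of Io
v2d = v_io ** 2 * d_io            # the v2d value of the moon

print(v2d / (G * M_JUPITER))      # ratio close to 1
```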
The active gravitational mass of a planet, denominated in units of Einstein, is equal to the product of the square of the volumetric radius of the planet and the orbital distance of the planet, divided by the square of the orbital distance of the planet in units of the orbital distance of earth.
The mass of a planet in a Newtonian orbit about the sun (the planet and sun orbit about their mutual barycenter) is a kinetic property. The active gravitational mass of such a planet, denominated in units of Joule, is equal to the product of the active gravitational mass of the planet in units of Einstein and omega-12.
The Gravitational constant at the orbital distance of the planet is equal to the product of the local gravitational constant of the planet and the square of the volumetric radius of the planet, divided by the active gravitational mass of the planet.
The v2d value of a planetary moon is equal to the product of the Gravitational constant at the orbital distance of the planet and the active gravitational mass of the planet.
The v2d value calculated using the NASA orbital parameters for the moon is larger than the above calculated value by a factor of 1.00374; the v2d values calculated using the NASA orbital parameters for the major Jovian moons (Io, Europa, Ganymede, and Callisto) are larger than the above calculated values by factors of 1.0020, 1.0016, 1.00131, and 1.00133, respectively.
Newtonian gravitational calculations are extremely accurate for most gravitational phenomena but there are a number of anomalies for which the Newtonian calculations are inaccurate. The first of these anomalies to come to the attention of scientists in 1859 was the precession rate of the perihelion of mercury for which the observed rate was about 43 arcseconds per century larger than the Newtonian calculated rate.3
According to Gerald Clemence, one of the twentieth century’s leading authorities on the subject of planetary orbital calculations, the most accurate method for calculating planetary orbits, the method of Gauss, was derived for calculating planetary orbits within the solar system with distance expressed in astronomical units, orbital period in days and mass in solar masses.4
The Gaussian method was used by Eric Doolittle in what Clemence believed to be the most reliable theoretical calculation of the perihelion precession rate of mercury.5
With modifications by Clemence including newer values for planetary masses, newer measurements of the precession of the equinoxes and a careful analysis of the error terms, the calculated rate was determined to be 531.534 arc-seconds per century compared to the observed rate of 574.095 arc-seconds per century, leaving an unaccounted deficit of 42.561 arcseconds per century.
The calculations below are based on the method of Price and Rush.6 This method determines a Newtonian rate of precession due to the gravitational influences on mercury of the sun and the five outer planets external to the orbit of mercury (venus, earth, mars, jupiter, and saturn). The solar and planetary masses are treated as Newtonian objects, and in calculations of planetary gravitational influences the outer planets are treated as circular mass rings.
The Newtonian gravitational force on mercury due to the mass of the sun is equal to the ratio of the product of the negative Gravitational constant at the orbital distance of earth, the mass of the sun, and the mass of mercury divided by the square of the orbital distance of mercury.
The Newtonian gravitational force on mercury due to the mass of the five outer planets is equal to the sum of the gravitational force contributions of the five outer planets external to the orbit of mercury. The gravitational force contribution of each planet is equal to the ratio of the product of the Gravitational constant at the orbital distance of earth, the mass of the planet, the mass of mercury, and the orbital distance of mercury, divided by the product of twice the planet’s orbital distance and the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.
The gravitational force ratio is equal to the gravitational force on mercury due to the mass of the five outer planets external to the orbit of mercury divided by the gravitational force on mercury due to the mass of the sun.
The gamma factor is equal to the sum of the gamma contributions of the five outer planets external to the orbit of mercury. The gamma contribution of each planet is equal to the ratio of the product of the mass of the planet, the orbital distance of mercury, and the sum of the square of the planet’s orbital distance and the square of the orbital distance of mercury, divided by the product of 2π, the planet’s orbital distance and the square of the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.
Psi-mercury is equal to the product of π and the sum of one plus the difference between the negative of the gravitational force ratio and the ratio of the product of the Gravitational constant at the orbital distance of earth, π, the mass of mercury and the gamma factor divided by twice the gravitational force on mercury due to the mass of the sun.
The number of arc-seconds in one revolution is equal to 360 degrees times sixty minutes times sixty seconds.
The number of days in a Julian century is equal to 100 times the length of a Julian year in days.
The perihelion precession rate of mercury is equal to the ratio of the product of the difference between 2ψ-mercury and 2π, the number of arc-seconds in one revolution and the number of days in a Julian century, divided by the product of 2π and the NASA sidereal orbital period of mercury in units of day (87.969).
The Newtonian perihelion precession rate of mercury determined above is 0.139 arc-seconds per century less than the Clemence calculated rate of 531.534 arc-seconds per century.
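The Newtonian calculation described in the preceding lines can be sketched numerically. The planetary masses (in solar masses) and orbital distances (in astronomical units) below are NASA values assumed from standard references, and conventional units are used in which G, the solar mass, and the mass of mercury cancel out of the final ratios; this is a sketch of the method, not a reproduction of the text’s exact constants:

```python
import math

# NASA values (assumed): mass in solar masses, orbital distance in AU
planets = {              # the five planets external to the orbit of mercury
    'venus':   (2.447e-6, 0.7233),
    'earth':   (3.003e-6, 1.0000),
    'mars':    (3.227e-7, 1.5237),
    'jupiter': (9.543e-4, 5.2044),
    'saturn':  (2.857e-4, 9.5826),
}
r = 0.387098             # orbital distance of mercury, AU

# Units with G = 1, solar mass = 1, mercury mass = 1; these cancel in the ratios.
F_sun = -1 / r**2        # force on mercury due to the sun (negative: attractive)

# each outer planet is treated as a circular mass ring
F_planets = sum(m * r / (2 * R * (R**2 - r**2)) for m, R in planets.values())
gamma = sum(m * r * (R**2 + r**2) / (2 * math.pi * R * (R**2 - r**2)**2)
            for m, R in planets.values())

force_ratio = F_planets / F_sun
psi = math.pi * (1 + (-force_ratio - math.pi * gamma / (2 * F_sun)))

ARCSEC_PER_REVOLUTION = 360 * 60 * 60   # arc-seconds in one revolution
DAYS_PER_JULIAN_CENTURY = 100 * 365.25  # days in a Julian century
T_MERCURY = 87.969                      # NASA sidereal orbital period, days

rate = ((2 * psi - 2 * math.pi) * ARCSEC_PER_REVOLUTION * DAYS_PER_JULIAN_CENTURY
        / (2 * math.pi * T_MERCURY))
print(rate)  # roughly 531 arc-seconds per century
```

With these assumed inputs the sketch lands within a fraction of an arc-second per century of the Clemence rate quoted above.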
The following equations, the same format as the Newtonian equations, derive the non-Newtonian values (when different).
The Newtonian gravitational force on mercury due to the mass of the sun is equal to the ratio of the product of the negative Gravitational constant at the orbital distance of earth, the mass of the sun, and the mass of mercury divided by the square of the orbital distance of mercury.
The non-Newtonian gravitational force on mercury due to the mass of the five outer planets is equal to the sum of the gravitational force contributions of the five outer planets external to the orbit of mercury. The gravitational force contribution of each planet is equal to the ratio of the product of the Gravitational constant at the orbital distance of earth, the active gravitational mass (in units of Joule) of the planet, the Newtonian mass of mercury, and the orbital distance of mercury, divided by the product of twice the planet’s orbital distance and the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.
The non-Newtonian gravitational force ratio is equal to the gravitational force on mercury due to the mass of the five outer planets external to the orbit of mercury divided by the gravitational force on mercury due to the mass of the sun.
The gamma factor is equal to the sum of the gamma contributions of the five outer planets external to the orbit of mercury. The gamma contribution of each planet is equal to the ratio of the product of the mass of the planet, the orbital distance of mercury, and the sum of the square of the planet’s orbital distance and the square of the orbital distance of mercury, divided by the product of 2π, the planet’s orbital distance and the square of the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.
The non-Newtonian value for Psi-mercury is equal to the product of π and the sum of one plus the difference between the negative of the gravitational force ratio and the ratio of the product of the Gravitational constant at the orbital distance of earth, π, the mass of mercury and the gamma factor divided by twice the gravitational force on mercury due to the mass of the sun.
The non-Newtonian perihelion precession rate of mercury is equal to the ratio of the product of the difference between 2ψ-mercury and 2π, the number of arc-seconds in one revolution and the number of days in a Julian century, divided by the product of 2π and the NASA sidereal orbital period of mercury in units of day (87.969).
The non-Newtonian perihelion precession rate of mercury is 6.128 arc-seconds per century greater than the Clemence observed rate of 574.095 arc-seconds per century.
We have built a model of gravitation proportional to the dimensions of the earth-sun system. A different model, with different values for the physical constants, would be equally valid if it were proportional to the dimensions of a different planet in our solar system or a planet in some other star system in our galaxy.
Our sun and the stars in our galaxy, in addition to graviton flux, emit large quantities of neutral flux that establish Stable Balance orbits for planets that emit relatively small quantities of neutral flux.
Our galactic center emits huge quantities of gravitons and neutral flux, and its dimensional relationship with our sun is dependent on the neutral flux emissions of our sun. If the intrinsic energy of our sun was less, its orbit would be further out from the galactic center, and if it was greater, its orbit would be closer in.
Of two stars at the same distance from the galactic center with different velocities, the star with higher velocity has a higher graviton absorption rate (higher stellar internal energy) and the star with lower velocity has a lower graviton absorption rate (lower stellar internal energy).
Of two stars with the same velocity at different distances from the galactic center, the star closer in will have a higher graviton absorption rate (higher stellar internal energy) and the star further out will have a lower graviton absorption rate (lower stellar internal energy).
The active gravitational mass of the Galactic Center is equal to the active gravitational mass of the sun divided by Beta-fourth, and to the cube of the active gravitational mass of the sun divided by the square of the active gravitational mass of earth.
The second expression of the above equation, generalized and reformatted, asserts that the square root of the cube of the active gravitational mass of any star in the Milky Way divided by the active gravitational mass of any planet in orbit about the star is equal to a constant.
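The two expressions for the Galactic Center mass are algebraically equivalent whenever Beta-square equals the ratio of the active gravitational mass of earth to that of the sun, as stated earlier. The numbers below are placeholders chosen only to exhibit the identity, not the text’s actual values:

```python
import math

M_sun_active = 1.0e12    # placeholder active gravitational mass of the sun
M_earth_active = 3.0e6   # placeholder active gravitational mass of earth

beta_squared = M_earth_active / M_sun_active
lhs = M_sun_active / beta_squared**2        # sun mass divided by Beta-fourth
rhs = M_sun_active**3 / M_earth_active**2   # cube of sun mass over square of earth mass
print(lhs, rhs)                             # the two expressions agree

# generalized form: square root of the cube of the star mass over the planet mass
print(math.sqrt(M_sun_active**3) / M_earth_active)
```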
The above equation, combined with the detailed explanation of the chirality meshing interactions that mediate gravitational action at a distance, the derivation of solar system non-Newtonian orbital parameters, the derivation of the non-Newtonian rate of precession of the perihelion of mercury, and the detailed explanation of non-Newtonian stellar rotation curves, disproves the theory of dark matter.
Part Two
Structure and chirality
A particle has the property of chirality because its axes are orthogonal and directed, pointing in three perpendicular directions and, like the fingers of a human hand, the directed axes are either left-handed (LH) or right-handed (RH). The electron and antiproton exhibit LH structural chirality and the proton and positron exhibit RH structural chirality. The two chiralities are mirror images.
The electron G-axis (black, index finger) points into the paper, the electron Q-axis (blue, thumb) points up in the plane of the paper, and the north pole of the electron P-axis (red, middle finger) points right in the plane of the paper.
The orientation of the axes of an RH proton are the mirror image: the proton G-axis (black, index finger) points into the paper, the proton Q-axis (blue, thumb) points up in the plane of the paper, and the north pole of the proton P-axis (red, middle finger) points left in the plane of the paper.
Above, models are used to visualize the orientations; they are easier to manipulate than human hands.
When Michael Faraday invented the disk generator in 1831, he discovered the conversion of rotational force, in the presence of a magnetic field, into electric current. The apparatus creates a magnetic field perpendicular to a hand-cranked rotating conductive disk and, providing the circuit is completed through a path external to the disk, produces an electric current flowing inward from axle to rim (electron flow not conventional current), photograph below.7
Above left, the electron Q-axis points in the CCW direction of motion. The inertial force within a rotating conductive disk aligns conduction electron G-axes to point in the direction of the rim. The alignment of the Q-axes and G-axes causes the orthogonal P-axes to point down.
Above right, the electron Q-axis points in the CW direction of motion. The inertial force within a rotating conductive disk aligns conduction electron G-axes to point in the direction of the rim. The alignment of the Q-axes and G-axes causes the orthogonal P-axes to point up.
In generally accepted physics (GAP), the transverse alignment of electron velocity with respect to magnetic field direction is attributed to the Lorentz force but, as explained above, it is a consequence of electron chirality.
In addition to the transverse alignment of the electron direction with respect to the direction of the magnetic field, the electron experiences an additional directional change of 20 arcseconds in the azimuthal direction which causes the electron to spiral in the direction of the axle. Thus, in both a CCW rotating conductive disk and a CW rotating conductive disk, the current (electron flow not conventional current) flows from the axle to the rim.
The geometries of the Faraday disk generator apply to the orientation of conduction electrons in the windings of solenoids and transformers. CCW and CW windings advance in the same direction, below into the plane of the paper. In contrast to the rotating conductor in the disk generator, the windings are stationary, and the conduction electrons spiral through in the direction of the positive voltage supply (which continually reverses in transformers and AC solenoids).
Above left, the electron Q-axes point down in the direction of current flow through the CCW winding. The inertial force on conduction electrons moving through the CCW winding aligns the direction of the electron G-axes to the left. The electron P-axes, perpendicular to both the Q-axes and G-axes, point S→N out of the paper.
Above right, the electron Q-axes point up in the direction of current flow through the CW winding. The inertial force on conduction electrons moving through the CW winding aligns the direction of the electron G-axes to the left. The electron P-axes, perpendicular to both the Q-axes and G-axes, point S→N into the paper.
Above is a turnbuckle composed of a metal frame tapped at each end. On the left end an LH bolt passes through an LH thread, and on the right end an RH bolt passes through an RH thread. If the LH bolt is turned CCW (facing right into the turnbuckle frame), the bolt moves to the right and the frame moves to the left; if the LH bolt is turned CW, the bolt moves to the left and the frame moves to the right. If the RH bolt is turned CW (facing left into the turnbuckle frame), the bolt moves to the left and the frame moves to the right; if the RH bolt is turned CCW, the bolt moves to the right and the frame moves to the left.
In the language of this analogy, a graviton or quanton emitted by the emitting particle is a moving spinning bolt, and the absorbing particle is a turnbuckle frame with a G-axis, Q-axis or P-axis passing through.
In a chirality meshing interaction, absorption of a graviton or quanton by the LH or RH G-axis, Q-axis or P-axis of a particle, causes an attractive or repulsive acceleration proportional to the difference between the graviton or quanton velocity and the velocity of the absorbing particle.
An electron G-axis has a RH inside thread and a proton G-axis has a LH inside thread. An electron G-axis emits CW gravitons and a proton G-axis emits CCW gravitons.
In the bolt-turnbuckle analogy, a graviton is a moving spinning bolt, and the absorbing particle through which the G-axis passes is a turnbuckle frame:
If a CCW graviton emitted by a proton is absorbed into a proton LH G-axis, the absorbing proton is attracted, accelerated in the direction of the emitting proton.
If a CW graviton emitted by an electron is absorbed into an electron RH G-axis, the absorbing electron is attracted, accelerated in the direction of the emitting electron.
Protons and electrons do not gravitationally interact with each other: a proton is larger than an electron, a graviton emitted by a proton is larger than a graviton emitted by an electron, and the inside thread of a proton G-axis is larger than the inside thread of an electron G-axis. These size differences prevent a graviton emitted by an electron from meshing with a proton G-axis, and a graviton emitted by a proton from meshing with an electron G-axis.
Tangible objects are composed of atoms which are composed of protons, electrons and neutrons.
In gravitational interactions between tangible objects (with kilogram mass greater than one microgram or 1E20 particles) the total intensity of the interaction is the sum of the contributions of the electrons and protons of which the object is composed (note that neutrons themselves do not gravitationally interact but each neutron is composed of one electron and one proton both of which do gravitationally interact).
A particle Q-axis is a single-ended hollow cylinder. The mechanism of the Q-axis is analogous to a piston which moves up and down at a frequency proportional to charge intrinsic energy. At the end of each up-stroke a single quanton is emitted. The absorption window opens at the beginning of the up-stroke and remains open until the beginning of the downstroke or the absorption of a single quanton.
The difference (the intrinsic granularity) between the inside diameter of the hollow cylindrical Q-axis and the outside diameter of the quanton allows absorption of incoming quantons at angles that can deviate from normal (straight down the center) by plus or minus 20 arcseconds.
An electron Q-axis has a RH inside thread and a proton Q-axis has a LH inside thread. An electron Q-axis emits CCW quantons and a proton Q-axis emits CW quantons.
In the bolt-turnbuckle analogy, a quanton is a moving spinning bolt, and the absorbing particle through which the Q-axis passes is a turnbuckle frame:
If a CCW p-quanton emitted by a proton is absorbed into an electron RH Q-axis, the absorbing electron is attracted, accelerated in the direction of the emitting proton.
If a CCW p-quanton emitted by a proton (or the anode plate in a CRT) is absorbed into a proton LH Q-axis, the absorbing proton is repulsed, accelerated in the direction of the cathode plate (opposite the direction of the emitting proton).
If a CW e-quanton emitted by an electron is absorbed into an electron RH Q-axis, the absorbing electron is repulsed, accelerated in the direction opposite the emitting electron.
If a CW e-quanton emitted by an electron (or the cathode plate in a CRT) is absorbed into a proton LH Q-axis, the absorbing proton is repulsed, accelerated in the direction of the cathode plate (the direction opposite the emitting electron).
In a CRT, the Q-axis of an accelerated electron is oriented in the linear direction of travel and its P-axis and G-axis are oriented transverse to the linear direction of travel. After the electron is linearly accelerated, it passes between oppositely charged parallel plates that emit quantons perpendicular to the linear direction of travel, and these quantons are absorbed into the electron P-axes. The chirality meshing interactions between an electron with a linear direction of travel and the quantons emitted by either plate result in a transverse acceleration in the direction of the anode plate:
An incoming CCW p-quanton approaching an electron RH P-axis within less than 20 arcseconds deviation from normal (straight down the center) is absorbed in an attractive chirality meshing interaction in which the electron is deflected in the direction of the anode plate.
An incoming CW e-quanton approaching an electron RH P-axis within less than 20 arcseconds deviation from normal (straight down the center) is absorbed in a repulsive chirality meshing interaction in which the electron is deflected in the direction of the anode plate.
This is the mechanism of the experimental determination of the electron-proton deflection ratio.
The magnitude of the ratio between these masses is not equal to the ratio of the measured gravitational deflections but rather to the inverse of the ratio of the measured electric deflections. It would not matter which of these measurable quantities were used in the experimental determination if Newton’s laws of motion applied. However, for Newton’s laws to apply, the assumptions behind Newton’s laws, specifically the 100% probability that particles gravitationally and electrically interact, must also apply. But this is not the case for action at a distance.
The electron orientation below top left, rotated 90 degrees CCW, is identical to the electron orientations previously illustrated for a CW disk generator or a CW-wound transformer or solenoid; and the electron orientation bottom left is a 180 degree rotation of top left.
Above are reversals in Q-axis orientation due to reversals in the direction of incoming quantons.
Above top right and bottom right are the left-side electron orientations with the electron Q-axis directed into the plane of the paper (confirmation of the perspective transformation is easier to visualize with a model). These are the orientations of conduction electrons in an AC current.
In the top row, CW quantons emitted by the positive voltage source are absorbed in chirality meshing interactions by the electron RH Q-axis, attracting the absorbing electron. In the bottom row, CCW quantons emitted by the negative voltage source are absorbed in chirality meshing interactions into the electron RH Q-axis, repelling the absorbing electron.
In either case the direction of current is into the paper.
In an AC current, a reversal in the direction of current is also a reversal in the rotational chirality of the quantons mediating the current.
In a current moving in the direction of a positive voltage source each linear chirality meshing absorption of a CW p-quanton into an electron RH Q-axis results in an attractive deflection.
In a current moving in the direction of a negative voltage source each linear chirality meshing absorption of a CCW e-quanton into an electron RH Q-axis results in a repulsive deflection.
In an AC current, each reversal in the direction of current, reverses the direction of the Q-axes of the conduction electrons. This reversal in direction is due to a complex rotation (two simultaneous 180 degree rotations) that results in photon emission.
During a shorter or longer period of time (the inverse of the AC frequency) during which the direction of current reverses, a shorter or longer inductive pulse of electromagnetic energy flows into the electron Q and P axes and the quantons of which the electromagnetic energy is composed are absorbed in rotational chirality meshing interactions.
Above left, the electron P and Q axes mesh together at their mutual orthogonal origin in a mechanism analogous to a right angle bevel gear linkage.8
Above center and right, an incoming CCW quanton induces an inward CCW rotation in the Q-axis and causes a CW outward (CCW inward) rotation of the P-axis. The rotation of the Q-axis reverses the orientation of the P-axis and the G-axis, and the rotation of the P-axis reverses the orientation of the Q-axis and of the G-axis, thereby restoring the G-axis to its initial direction: pointing left, perpendicular to a tangent to the cylindrical wire.
Above center and right, an incoming CW quanton induces an inward CW rotation in the Q-axis and causes a CCW outward (CW inward) rotation of the P-axis. The rotation of the Q-axis reverses the orientation of the P-axis and the G-axis, and the rotation of the P-axis reverses the orientation of the Q-axis and of the G-axis, thereby restoring the G-axis to its initial direction: pointing left, perpendicular to a tangent to the cylindrical wire.
In either case the electron orientations are identical, but CCW electron rotations cause the emission of CCW photons and CW electron rotations cause the emission of CW photons.
The absorption of CCW e-quantons by the Q-axis rotates the Q-axis CCW by the square root of 648,000 arcseconds (180 degrees) and the P-Q axis linkage simultaneously rotates the P-axis CW by the square root of 648,000 arcseconds (180 degrees).
If the orientation of the electron G-axis is into the paper in a plane defined by the direction of the Q-axis, the CCW rotation of the Q-axis tilts the plane of the G-axis down by the square root of 648,000 arcseconds and the CW rotation of the P-axis tilts the plane of the G-axis to the right by the square root of 648,000 arcseconds.
The net rotation of the electron G-axis is equal to the product of the square root of 648,000 arcseconds and the square root of 648,000 arcseconds.
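The arithmetic of the compound rotation can be checked directly; a minimal sketch, multiplying the two tilts exactly as the text does and using only the conversion 180 degrees = 648,000 arcseconds:

```python
import math

ARCSEC_PER_HALF_TURN = 180 * 3600        # 648,000 arcseconds in 180 degrees
tilt = math.sqrt(ARCSEC_PER_HALF_TURN)   # each orthogonal tilt, ~805 arcseconds

# The net G-axis rotation is stated to be the product of the two tilts:
net = tilt * tilt
assert math.isclose(net, ARCSEC_PER_HALF_TURN)
print(net / 3600)  # ~180 degrees
```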
In the production of photons by an AC current, the photon wavelength is proportional to the current-reversal time (the frequency is inversely proportional to it), and the photon energy is proportional to the voltage.
Above, an axial projection of the helical path of a photon traces the circumference of a circle and the sine and cosine are transverse orthogonal projections.9 The crest to crest distance of the transverse orthogonal projections, or the distance between alternate crossings of the horizontal axis, is the photon wavelength.
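The projections described above can be sketched numerically, assuming the helical path of the text (the radius, wavelength, and sample count below are illustrative): the axial projection of sampled points lies on a circle, and successive crests of the transverse cosine projection are one wavelength apart.

```python
import math

# Sample a helix advancing along x with radius r and pitch equal to the
# wavelength: x = wavelength*t, (y, z) = r*(cos, sin) of the phase angle.
r, wavelength, n = 1.0, 2.0, 1000
points = [(wavelength * t, r * math.cos(2 * math.pi * t), r * math.sin(2 * math.pi * t))
          for t in (i / n for i in range(2 * n + 1))]

# Axial projection: every (y, z) pair lies on the circle of radius r.
assert all(math.isclose(math.hypot(y, z), r) for _, y, z in points)

# Transverse projection: crest-to-crest distance equals the wavelength.
crests = [x for x, y, _ in points if math.isclose(y, r)]
assert math.isclose(crests[1] - crests[0], wavelength)
```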
The helical path of photons explains diffraction by a single slit, by a double slit, by an opaque circular disk, or a sphere (Arago spot).
In a beam of photons with velocity perpendicular to a flat screen or sensor, each individual photon makes a separate impact that can be sensed or is visible somewhere on the circumference of one of many separate and non-overlapping circles, one for each photon in the beam. The divergence of the beam increases both the spacing between circles and the diameter of each individual photon circle, which is proportional to the wavelength of that photon. The sensed or visible photon impacts form a region of constant intensity.
Below, the top image shows those photons, initially part of a photon beam illuminating a single slit, which passed through the single slit.10
Above, the bottom image shows those photons, initially part of a photon beam illuminating a double slit, that passed through a double slit.
Below, the image illustrating classical rays of light passing through a double slit is equally illustrative of a photon beam illuminating a double slit but, instead of constructive and destructive interference, the photons passing through the top slit diverge to the right and photons passing through the bottom slit diverge to the left. The spaces between divergent circles are dark and, due to coherence, the photon circles are brightest at the distance of maximum overlap, resulting in the characteristic double slit brighter-darker diffraction pattern.11
The mechanism of diffraction by an opaque circular disk or a sphere (Arago spot) is the same. In either case the opaque circular disk or sphere is illuminated by a photon beam of diameter larger than the diameter of the disk or sphere.
The photons passing close to the edge of the disk or sphere diverge inwards, and the spiraling helical path of an inwardly diverging CW photon passing one side of the disk will intersect, in a head-on collision, the spiraling helical path of an inwardly diverging CCW photon passing the directly opposite side of the disk or sphere (if the opposite-chirality photons are equidistant from the center of the disk or sphere).
In the case of a sphere illuminated by a laser, the surface of the sphere must be smooth and the ratio of the square of the diameter of the sphere divided by the product of the distance from the center of the sphere to the screen and the laser wavelength must be greater than one (similar to the Fresnel number).
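The visibility condition stated above reduces to a one-line check; a sketch with illustrative numbers (the function name and example values are assumptions, not from the text):

```python
def arago_spot_visible(d_sphere, dist, wavelength):
    """The text's criterion (similar to a Fresnel number): the square of the
    sphere diameter, divided by (distance to screen * wavelength), must
    exceed one for the bright central spot to appear."""
    return d_sphere ** 2 / (dist * wavelength) > 1

# A 4 mm smooth ball, 1 m from the screen, lit by a 633 nm HeNe laser:
print(arago_spot_visible(4e-3, 1.0, 633e-9))    # ratio ~25 -> True
# A 0.05 mm speck at the same distance fails the criterion:
print(arago_spot_visible(5e-5, 1.0, 633e-9))    # ratio ~0.004 -> False
```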
Photon velocity
Constant photon velocity is due to a resonance driven by the emission of photon intrinsic energy which results in an increase in wavelength and a proportional decrease in frequency. In a related phenomenon, Arthur Holly Compton demonstrated Compton scattering in which the loss of photon kinetic energy does not change velocity but increases wavelength and proportionally decreases frequency.12
The mechanism of constant photon velocity is the emission of quantons and gravitons.
Below top, looking down into the plane of the paper, a photon G-axis points in the direction of photon velocity and the P and Q-axes are orthogonal. In the language of the turnbuckle analogy, the mechanisms of the photon P and Q-axes are analogous to pistons which move up and down or back and forth and emit a single quanton or graviton at the end of each stroke.
Above middle, in column A of the P-axis row, at the position of the oscillation the up-stroke has just completed, a single graviton has been emitted, and the current direction of the oscillation is now down. In column B of the P-axis row, the position of the oscillation is mid-way, and the direction of the oscillation is down. In column C of the P-axis row, at the position of the oscillation the downstroke has just completed, a single graviton has been emitted, and the current direction of the oscillation is up. In column D of the P-axis row, the position of the oscillation is mid-way, and the direction of the oscillation is up.
Above middle, in column A of the Q-axis row, the position of the oscillation is mid-way and the direction of oscillation is left. In column B of the Q-axis row, at the position of the oscillation the left-stroke has just completed, a single quanton has been emitted, and the current direction of the oscillation is right. In column C of the Q-axis row, the position of the oscillation is mid-way and the direction of the oscillation is right. In column D of the Q-axis row, at the position of the oscillation the right-stroke has just completed, a single quanton has been emitted, and the current direction of the oscillation is left.
Above left or right bottom, in each cycle of the photon frequency there are eight sequential CCW or CW alternating quanton/graviton emissions and the intrinsic energy of the photon is reduced by Lambda-bar on each emission.
This is the mechanism of intrinsic redshift.
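The claimed mechanism can be sketched numerically. This is a sketch on the text's assumptions only: eight emissions per cycle, each removing a fixed quantum of intrinsic energy; the per-emission loss ("Lambda-bar") is modeled here as an arbitrary placeholder constant, and E = hf is used to relate energy to frequency.

```python
H = 6.62607015e-34         # J*s, Planck constant
C = 299_792_458.0          # m/s, photon velocity, constant throughout
LOSS_PER_EMISSION = 1e-28  # J, placeholder size of one emission's loss

def redshifted_frequency(f0, cycles):
    # Eight quanton/graviton emissions per cycle of the photon frequency,
    # each reducing the intrinsic energy; frequency falls in proportion.
    energy = H * f0 - 8 * LOSS_PER_EMISSION * cycles
    return energy / H

f0 = 5.0e14                              # roughly green light
f1 = redshifted_frequency(f0, cycles=1_000)
assert f1 < f0          # frequency decreases ...
assert C / f1 > C / f0  # ... wavelength increases; velocity stays c
```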
Part Three
Nuclear magnetic resonance
In the 1922 Stern-Gerlach experiment, a molecular beam of identical silver atoms passed through an inhomogeneous magnetic field. Contrary to classical expectations, the beam of atoms did not diverge into a cone with intensity highest at the center and lowest at the outside. Instead, atoms near the center of the beam were deflected with half the silver atoms deposited on a glass slide in an upper zone and half deposited in a lower zone, illustrating “space quantization.”
The Stern-Gerlach experiment, designed to test directional quantization in a magnetic field as predicted by old quantum theory (the Bohr-Sommerfeld hypothesis)13, was conducted two years before intrinsic spin was conceived by Wolfgang Pauli and six years before Paul Dirac formalized the concept. Intrinsic spin became part of the foundation of new quantum theory.
The concept of intrinsic spin, in which the property that causes the deflection of silver atoms in two opposite directions (“space quantization”) is inherent in the particle itself, is incorrect.
However, a molecular beam composed of atoms with magnetic moments passed through a Stern-Gerlach apparatus does exhibit the numerical property attributed to intrinsic spin. This property, interactional spin, is not inherent in the atom but depends on external factors.
The protons within a nucleus are the origin of spin, magnetic moment, Larmor frequency, and other nuclear gyromagnetic properties. A nucleus contains “ordinary protons” which, for clarity, will be termed Pprotons, and “protons within neutrons” will be termed Nprotons.
In nuclei with an even number of Pprotons, the Pproton magnetic flux is contained within the nucleus and does not contribute to the nuclear magnetic moment.
With neutrons the situation is quite different. A neutron is achiral: it is a composite particle composed of an Nproton-electron pair and binding energy; it has no G-axis, and therefore does not interact gravitationally, and no Q-axis, and therefore is electrically neutral.
Within a nucleus, a neutron does not have a magnetic moment (a free neutron has a measurable magnetic moment during its mean lifetime of less than 15 minutes after emission from its nucleus, but there are no free neutrons within nuclei); the Nproton and electron of which a neutron is composed, however, do have magnetic moments.
The gyromagnetic properties of a nucleus, its magnetic moment, its spin, its Larmor frequency, and its gyromagnetic ratio are due to Pprotons and Nprotons.
A molecular beam (composed of nuclei, atoms and/or molecules) emerging from an oven into a vacuum will have a thermal distribution of velocities. Molecules within the beam are subject to collisions with faster or slower molecules that cause rotations and vibrations, and the orientations of unpaired Pprotons and unpaired Nprotons are constantly subject to change.
In a silver atom there is a single unpaired Pproton and the orientation of its P-axis, with respect to its direction of motion through an inhomogeneous magnetic field, will be either leading or trailing. Out of a large number of unpaired Pprotons, the P-axes will be leading 50% of the time and trailing 50% of the time, and a silver atom containing an unpaired Pproton with a leading P-axis can be deflected in the direction of the inhomogeneous magnetic north pole while a silver atom containing an unpaired Pproton with a trailing P-axis can be deflected in the direction of the south pole.
If the magnetic field is strong enough for a sufficient percentage of unpaired Pprotons (whose orientation is constantly changing) to encounter lines of magnetic flux within 20 arcseconds and be deflected up or down, the molecular beam of silver atoms deposited on a glass slide at the center of the magnetic field (where it is strongest) is split into two zones. Consistent with the definition of spin as the number of zones minus one, divided by two (S = (z − 1)/2), a Stern-Gerlach experiment therefore determines a spin of ½. This result is the only example of spin clearly determined by the position of atoms deposited on a glass slide.14
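The zone-counting rule can be written down directly; `spin_from_zones` is a hypothetical helper name for illustration:

```python
from fractions import Fraction

def spin_from_zones(z):
    """S = (z - 1)/2: spin from the number of deposition zones."""
    return Fraction(z - 1, 2)

assert spin_from_zones(2) == Fraction(1, 2)  # two zones on the slide: spin 1/2
assert spin_from_zones(3) == 1               # three zones would mean spin 1
```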
The above explanation is correct for silver atoms passed through the inhomogeneous magnetic fields of the Stern-Gerlach apparatus, but in the 1939 Rabi experimental apparatus15 (upon which modern molecular beam apparatus are modeled) the mechanism of deflection due to leading or trailing P-axes has nothing to do with the results achieved.
The 1939 Rabi experimental apparatus included back-to-back Stern-Gerlach inhomogeneous magnetic fields with opposite magnetic field orientations, but the result that dramatically changed physics, the accurate measurement of the Larmor frequency of nuclei, was done in a separate Rabi analyzer placed between the inhomogeneous magnetic fields. To Rabi, the importance of the Stern-Gerlach inhomogeneous magnets was for use in the alignment and tuning of the entire apparatus.
In a Rabi analyzer there is a strong constant magnetic field and a weaker transverse oscillating magnetic field. The purpose of the strong constant field is to decouple (increase the separation distance between) electrons and protons. The purpose of the transverse oscillating field is to stimulate the emission of photons by the decoupled protons.
When the Rabi apparatus is initially assembled, before installation of the Rabi analyzer, the Stern-Gerlach apparatus is set up and tuned such that the intensity of the molecular beam leaving the apparatus is equal to its intensity upon entering.
After the unpowered Rabi analyzer is mounted between the Stern-Gerlach magnets, and the molecular beam exiting the first inhomogeneous magnetic field passes through the Rabi analyzer and enters the second inhomogeneous magnetic field, the intensity of the molecular beam leaving the apparatus decreases. In this state the entire Rabi apparatus is tuned and adjusted until the intensity of the entering molecular beam is equal to the intensity of the exiting beam.
When the crossed magnetic fields of the Rabi analyzer are switched on, for a second time the intensity of the exiting beam decreases. Then, by adjustment of the relative positions and orientations of the three magnetic fields (and also adjustment of the detector position to optimally align with decoupled protons in the nucleus of interest) the intensity of the exiting beam is returned to its initial value.
During an operational run, the transverse oscillating field stimulates the emission of photons at the same frequency as the transverse oscillating magnetic field. At resonance this photon frequency is the Larmor frequency of the nucleus, and the Larmor frequency divided by the strong magnetic field strength is equal to the gyromagnetic ratio. The Larmor frequency has a very sharp resonant peak limited only by the accuracy of the two experimental measurables: the intensity of the strong magnetic field and the frequency of the oscillating weak magnetic field.
The gyromagnetic ratios of Li6, Li7, and F19, experimentally determined by Rabi in 1939, agree with the 2014 INDC16 values to better than 1 part in 60,000. Importantly, measurements of the gyromagnetic ratios of Li6 and Li7 were made in three different lithium molecules (LiCl, LiF, and Li2) requiring three separate operational runs, thereby demonstrating the Rabi analyzer was adjusted to optimally detect the nucleus of interest.
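The two measurables reduce to one division; a sketch with illustrative numbers (the field and frequency below are assumed for the example, not Rabi's data):

```python
def gyromagnetic_ratio(resonance_hz, field_tesla):
    """Larmor frequency per unit constant-field strength, in Hz per tesla."""
    return resonance_hz / field_tesla

B = 1.000    # tesla, strength of the strong constant field (assumed)
f = 16.55e6  # Hz, oscillating-field frequency at the resonance peak (assumed)
gamma = gyromagnetic_ratio(f, B)

# Agreement to 1 part in 60,000 corresponds to a window of ~276 Hz/T here:
window = gamma / 60_000
```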
Modern determinations of spin are based on various types of spectroscopy, the results of which stand out as peaks in the collected data.
The magnetic flux of nuclei with an even number of Pprotons and Nprotons circulates in flux loops between pairs of Pprotons and pairs of Nprotons, and such nuclei do not have magnetic moments. The flux loops within nuclei with an odd number of Pprotons and/or Nprotons do have magnetic moments. In order for all nuclei of the same isotope to have zero or non-zero magnetic moments of the same amplitude, it is necessary for the magnetic flux loops to be circulating in the same plane.
All of the 106 selected magnetic nuclear isotopes from Lithium to Uranium, including all stable isotopes with atomic number (Z) greater than 2 plus a number of important isotopes with relatively long half-lives, belong to one of twelve different Types. The Type is determined based on the spin of the isotope and the number of odd and even Pprotons and Nprotons.
An isotope contains an internal physical structure to which the property of magnetic moment correlates, but the magnetic moment is not entirely determined by the internal physical structure of a nucleus. The property of interactional spin is that portion of the magnetic moment due to factors external to the nucleus, including electromagnetic radiation, magnetic fields, electric fields and excitation energy.
Of significance to the present discussion, the detectable magnetic properties of 82 of the 106 selected isotopes (the relative spatial orientations of the flux loops associated with the Pprotons and Nprotons) can be manipulated by four different orientations of directed planar electric fields.
The magnetic signatures of the 106 selected isotopes can be sorted into twelve isotope Types with seven spin values.
Spin ½ isotopes with an odd number of Pprotons and even number of Nprotons are Type A-0. Of the 106 selected isotopes, 10 are Type A-0.
Spin ½ isotopes with an even number of Pprotons and odd number of Nprotons (odd/even Reversed) are Type RA-0. Of the 106 selected isotopes, 14 are Type RA-0.
Spin 1 isotopes with an odd number of Pprotons and an odd number of Nprotons are Type B-1. Of the 106 selected isotopes, 2 are Type B-1.
Spin 3/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-1. Of the 106 selected isotopes, 18 are Type C-1.
Spin 3/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-1. Of the 106 selected isotopes, 12 are Type RC-1.
Spin 5/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-2. Of the 106 selected isotopes, 13 are Type C-2.
Spin 5/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-2. Of the 106 selected isotopes, 11 are Type RC-2.
Spin 3 isotopes with an odd number of Pprotons and an odd number of Nprotons are Type B-3. Of the 106 selected isotopes, 2 are Type B-3.
Spin 7/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type A-3. Of the 106 selected isotopes, 9 are Type A-3.
Spin 7/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RA-3. Of the 106 selected isotopes, 8 are Type RA-3.
Spin 9/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-4. Of the 106 selected isotopes, 3 are Type C-4.
Spin 9/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-4. Of the 106 selected isotopes, 4 are Type RC-4.
Above, the horizontal line is in the inspection plane. The vertical line, the photon path to the Rabi analyzer, is parallel to the constant magnetic field. The circle indicates the diameter of the molecular beam, and the crosshairs indicate the velocity of the beam is directed into the paper.
A molecular beam is not needed for the operation of a Rabi analyzer; all that is required is an analytical sample (gas or liquid phase), comprising a large number of molecules containing a larger number of nuclei enclosing an even larger number of particles, located at the intersection of the crosshairs.
The position of the horizontal inspection plane is irrelevant to Rabi analysis but it is crucial for spectroscopic analysis of flux loops.
Above left, the molecular beam (directed into the paper in the previous illustration) is directed from right to left, and the photon path to the Rabi analyzer is in the same location as in the previous illustration.
For spectroscopic analysis, the inspection plane is the plane defined by the direction the molecular beam formerly passed and the direction of the positive electric field when pointing up.
Above right, the inspection plane for spectroscopic analysis is labelled at each corner. The dashed line in place of the former position of the molecular beam is an orthogonal axis (OA) passing through the direction of the positive side of the electric field when pointing up (UP), and passing through the direction of the spectroscopic detectors (SD).
The intersection of OA, UP and SD is the location where the analytical sample (gas or liquid phase) is placed in the inspection plane. The electric field that orients particle Q-axes is in the inspection plane.
The detection of ten of the twelve Types of magnetic signatures (in the 106 selected isotopes) requires one of four alignments of directed electric fields: the positive side of the electric field pointing up, the positive side of the electric field pointing right, the positive side of the electric field pointing down, or the positive side of the electric field pointing left.
The four possible alignments of the electric field are illustrated on either side of the inspection plane (but in operation the entire breadth of the electric field points in the same direction) and the directed lines on the edges of the inspection plane represent the positions of thin wire cathodes that produce planar electric fields.
Prior to an operational run, the spectroscopic detectors are adjusted to optimally detect the magnetic properties of the isotope to be analyzed.
Above is a summary of isotope magnetic signatures.
Column 1 lists the twelve magnetic isotope Types.
In column 2, with the P-axes of particles oriented by a constant magnetic field directed up in the direction of the magnetic north pole and in the absence of a directed electric field, the magnetic signatures due to flipping odd Pproton P-axes (the arrow on the left of the vignette) and odd Nproton P-axes (the arrow on the right of the vignette) are illustrated.
See below, in the detailed discussion of Type B-1, for the reason there is a zero instead of an arrow in Types B-1 and B-3.
The magnetic signatures due to flux loops in the presence of the four orientations of an electric field, are given in columns 3, 4, 5 and 6 for electric fields directed up, directed down, directed to the right, or directed to the left.
In illustrations of flux loop magnetic signatures: if the arrows are oriented up and down, the arrow on the left of the vignette represents the direction of Pproton flux loops and the arrow on the right represents the direction of Nproton flux loops; if the arrows are oriented left and right, the arrow on the top of the vignette represents the direction of Pproton flux loops and the arrow on the bottom represents the direction of Nproton flux loops.
In total there are six directed orthogonal planes in Cartesian space but only four of these are represented in columns 3, 4, 5 and 6. This omission is due to the elliptical planar shape of magnetic flux loops: the missing orientations provide edge-on views without detectable magnetic signatures.
Type A-0
7N15, with 7 Pprotons and 8 Nprotons, is the lowest atomic number Type A-0 isotope. In Type A-0 isotopes the flux loops associated with Pprotons and Nprotons lie in a directed Cartesian plane without detectable flux loop signatures.
In an analytical sample, 50% of the odd (unpaired) Pproton P-axes will be oriented in one direction and 50% in the opposite direction. The orientation of the magnetic axis of the odd Pproton is flipped by the transverse oscillating magnetic field, and the spectroscopic detectors sense two different magnetic signatures, resulting in two peaks corresponding to a spin of ½.
Above is the magnetic signature of Type A-0. The left arrow pointing up is the direction of the odd Pproton P-axis after emission of a photon (previously the constant magnetic field aligned the Pproton P-axis in this orientation, then absorption of intrinsic energy from the transverse oscillating magnetic field flipped the axis to pointing down then, due to the 180 degree rotation of the P-Q axes with respect to the direction of the G-axis, the absorbed intrinsic energy was released as a photon when the axis was flipped back to pointing up). The arrow pointing down is the antiparallel direction of the P-axis of a paired Nproton (which does not emit a photon).
The experimental detection of Type A-0 isotopes requires a constant magnetic field oriented in the direction of magnetic north.
Type RA-0
6C13, with 6 Pprotons and 7 Nprotons, is the lowest atomic number Type RA-0 isotope. In Type RA-0 isotopes the flux loops associated with Pprotons and Nprotons lie in a directed Cartesian plane without detectable flux loop signatures.
In an analytical sample, 50% of the odd (unpaired) Nproton P-axes will be oriented in one direction and 50% in the opposite direction. The orientation of the magnetic axis of the odd Nproton is flipped by the transverse oscillating magnetic field, and the spectroscopic detectors sense two different magnetic signatures, resulting in two peaks corresponding to a spin of ½.
Above is the magnetic signature of Type RA-0. The left arrow pointing up is the direction of the P-axis of a paired Pproton (which does not emit a photon). The right arrow pointing down is the direction of the odd Nproton P-axis after emission of a photon (previously the constant magnetic field aligned the Nproton P-axis in this orientation, then absorption of intrinsic energy from the transverse oscillating magnetic field flipped the axis to pointing up then, due to the 180 degree rotation of the P-Q axes with respect to the direction of the G-axis, the absorbed intrinsic energy was released as a photon when the axis was flipped back to pointing down).
The experimental detection of Type RA-0 isotopes requires a constant magnetic field oriented in the direction of magnetic north.
Type B-1
3Li6, with 3 Pprotons and 3 Nprotons, is the lowest atomic number Type B-1 isotope. In isotopes with an odd number of Pprotons and an odd number of Nprotons, the odd Pproton interacts with the electron in the odd Nproton, preventing electron-Nproton decoupling by the constant magnetic field, and the odd Nproton P-axis therefore cannot be flipped by the transverse oscillating magnetic field. The electron-Pproton pair, however, is decoupled, the orientation of the odd Pproton magnetic axis is flipped by the transverse oscillating magnetic field, and the spectroscopic detectors, adjusted to optimally recognize the magnetic signatures of 3Li6, sense one distinctive magnetic signature, resulting in one peak.
In Type B-1, the odd Nproton P-axis is unable to be flipped thus there is no magnetic signature due to the Nproton itself, but both the Nproton and the Pproton have associated flux loops and spectroscopic detectors can sense the magnetic signatures of the flux loops in the presence of a directed electric field pointing up.
In the analysis of isotopes with detectable flux loop signatures there are four possible orientations of the directed electric fields. The magnetic flux loops associated with Type-1 isotopes are detectable if the directed electric field is pointing up. The magnetic flux loops associated with Type-2 isotopes are detectable if the directed electric field is pointing down. The magnetic flux loops associated with Type-3 isotopes are detectable if the directed electric field is pointing right. The magnetic flux loops associated with Type-4 isotopes are detectable if the directed electric field is pointing left.
Each of these directed electric field orientations requires a different experiment; therefore the results of five experiments (including one experiment without directed electric fields) are needed to fully establish the Type of an unknown isotope.
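The decision procedure in the two paragraphs above can be laid out as a small table; the names below are assumptions for illustration:

```python
# Five runs, per the text: one with no directed electric field (odd-particle
# flips), plus one run per field orientation (flux-loop signatures).
EXPERIMENTS = ("no-field", "up", "down", "right", "left")

# The numeric suffix of a Type names the field orientation in which its
# flux loops are detectable (Type-1: up, Type-2: down, Type-3: right,
# Type-4: left).
FIELD_FOR_SUFFIX = {1: "up", 2: "down", 3: "right", 4: "left"}

assert len(EXPERIMENTS) == 5
assert FIELD_FOR_SUFFIX[1] == "up"    # e.g. Types B-1, C-1, RC-1
assert FIELD_FOR_SUFFIX[4] == "left"  # e.g. Types C-4, RC-4
```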
The flux loops circulating through particle P-axes can pass through all radial planes. The radial flux planes in the above diagram are in the plane of the paper, demonstrating that, when detected from opposite directions, flux loops will be CW (directed right-left) or CCW (directed left-right).
Since Pprotons and Nprotons are oppositely aligned, a CW Pproton signature is identical to an Nproton CCW signature, and a CCW Pproton signature is identical to an Nproton CW signature.
Because the magnetic signatures of the particles in the field of view of a detector are differently oriented, on average 50% of the flux loop magnetic signatures will be CW and 50% CCW. Of the 50% of the CW signatures 25% will be due to Pprotons and 25% due to Nprotons, and of the 50% of the CCW signatures 25% will be due to Pprotons and 25% due to Nprotons.
Thus, there will be two different magnetic signatures resulting in two peaks, but we are unable to distinguish which is due to CW Pproton flux loops or CCW Nproton flux loops, and which is due to CCW Pproton flux loops or CW Nproton flux loops.
In Type B-1, the magnetic signature due to the odd Pproton (experimentally determined in the absence of an electric field) has one peak, and the magnetic signature due to flux loops associated with Pprotons and Nprotons (experimentally determined in an electric field oriented parallel to the magnetic field) has two peaks, totaling three peaks corresponding to a spin of 1.
Here we come to a fundamental issue. Is the uncertainty in situations involving linked physical properties (complementarity) described by probability, or is it caused by probability? In 1925 Werner Heisenberg theorized this type of uncertainty was caused by probability, and that opinion became, along with intrinsic spin, an important part of the foundation of new quantum theory.
In nature, the orientation of the magnetic signatures of isotopes and the orientation of the nuclei containing the particles responsible for the magnetic signatures are random. The magnetic signatures due to a large number of randomly oriented particles are indistinguishable from background noise, but under the proper experimental conditions, the magnetic signatures are discernable.
The magnetic signatures of flux loops, imperceptible in nature, are perceptible when the Q-axes of the associated particles are aligned.
A constant magnetic field is not needed to detect the magnetic signatures of flux loops. Compared to the Rabi analyzer, however, the inspection plane used to detect the magnetic signatures of flux loops is in the identical position, and the directed orthogonal plane pointing up in the direction of magnetic north in the Rabi analyzer is identical to the directed orthogonal plane pointing up in the direction of the positive electric field in the flux loop analyzer; that is, the direction of the electric field is parallel to the magnetic field.
Therefore, even though the magnetic field is not needed to detect the magnetic signatures of flux loops, if the magnetic field is present in addition to the directed electric field, its presence would not alter the experimental results, but it might provide additional information.
Here is a prediction of the present theory. If the experiment detecting the magnetic signature of Type B-1 is conducted in the presence of a constant magnetic field and a directed electric field pointing up, that one experiment will determine the magnetic signatures shown above plus two additional signatures: (1) the magnetic signature due to CW Pproton flux loops and CCW Nproton flux loops and (2) the magnetic signature due to CW Nproton flux loops and CCW Pproton flux loops.
This result would demonstrate the uncertainty in at least one situation involving linked physical properties is described by probability but is not caused by probability. This experiment, and others yet to be devised, will overturn the concept of causation by probability and validate Einstein’s intuition that God “does not play dice with the universe.”17
Type C-1
3Li7, with 3 Pprotons and 4 Nprotons, is the lowest atomic number Type C-1 isotope.
As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. In total, Type C-1 isotopes have four peaks corresponding to a spin of 3/2.
Type RC-1
4Be9, with 4 Pprotons and 5 Nprotons, is the lowest atomic number RC-1 isotope.
As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. In total, Type RC-1 isotopes have four peaks corresponding to a spin of 3/2.
Type C-2
13Al27, with 13 Pprotons and 14 Nprotons, is the lowest atomic number Type C-2 isotope.
As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks.
In the identification of Type C-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. In total, Type C-2 isotopes have six peaks corresponding to a spin of 5/2.
Type RC-2
8O17, with 8 Pprotons and 9 Nprotons, is the lowest atomic number Type RC-2 isotope. 8O17 has one odd Nproton and no odd Pprotons.
As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks.
In the identification of Type RC-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. In total, Type RC-2 isotopes have six peaks corresponding to a spin of 5/2.
Type B-3
5B10, with 5 Pprotons and 5 Nprotons, is the lowest atomic number Type B-3 isotope.
As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks.
In the identification of Type B-3, the magnetic signature due to the odd Pproton flux loops, determined in an electric field pointing right, has two peaks. In total, Type B-3 isotopes have seven peaks corresponding to a spin of 3.
Type A-3
21Sc45, with 21 Pprotons and 24 Nprotons, is the lowest atomic number Type A-3 isotope.
As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In total, Type A-3 isotopes have eight peaks corresponding to a spin of 7/2.
Type RA-3
20Ca43, with 20 Pprotons and 23 Nprotons, is the lowest atomic number Type RA-3 isotope.
As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type RC-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In total, Type RA-3 isotopes have eight peaks corresponding to a spin of 7/2.
Type C-4
41Nb93, with 41 Pprotons and 52 Nprotons, is the lowest atomic number Type C-4 isotope.
As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In the identification of Type C-4, the magnetic signature due to the odd Nproton flux loops, determined in an electric field pointing left, has two peaks. In total, Type C-4 isotopes have ten peaks corresponding to a spin of 9/2.
Type RC-4
32Ge73, with 32 Pprotons and 41 Nprotons, is the lowest atomic number Type RC-4 isotope.
As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type RC-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In the identification of Type RC-4, the magnetic signature due to the odd Nproton flux loops, determined in an electric field pointing left, has two peaks. In total, Type RC-4 isotopes have ten peaks corresponding to a spin of 9/2.
Isotope   Z    N    Z+N   Spin  Peaks  Type
7N15      7    8    15    0.5   2      A-0
9F19      9    10   19    0.5   2      A-0
15P31     15   16   31    0.5   2      A-0
39Y89     39   50   89    0.5   2      A-0
45Rh103   45   58   103   0.5   2      A-0
47Ag109   47   62   109   0.5   2      A-0
47Ag107   47   60   107   0.5   2      A-0
69Tm169   69   100  169   0.5   2      A-0
81Tl203   81   122  203   0.5   2      A-0
81Tl205   81   124  205   0.5   2      A-0
6C13      6    7    13    0.5   2      RA-0
14Si29    14   15   29    0.5   2      RA-0
26Fe57    26   31   57    0.5   2      RA-0
34Se77    34   43   77    0.5   2      RA-0
48Cd111   48   63   111   0.5   2      RA-0
50Sn117   50   67   117   0.5   2      RA-0
50Sn115   50   65   115   0.5   2      RA-0
52Te125   52   73   125   0.5   2      RA-0
54Xe129   54   75   129   0.5   2      RA-0
74W183    74   109  183   0.5   2      RA-0
76Os187   76   111  187   0.5   2      RA-0
78Pt195   78   117  195   0.5   2      RA-0
80Hg199   80   119  199   0.5   2      RA-0
82Pb207   82   125  207   0.5   2      RA-0
3Li6      3    3    6     1.0   3      B-1
7N14      7    7    14    1.0   3      B-1
3Li7      3    4    7     1.5   4      C-1
5B11      5    6    11    1.5   4      C-1
11Na23    11   12   23    1.5   4      C-1
17Cl35    17   18   35    1.5   4      C-1
17Cl37    17   20   37    1.5   4      C-1
19K39     19   20   39    1.5   4      C-1
19K41     19   22   41    1.5   4      C-1
29Cu63    29   34   63    1.5   4      C-1
29Cu65    29   36   65    1.5   4      C-1
31Ga69    31   38   69    1.5   4      C-1
31Ga71    31   40   71    1.5   4      C-1
33As75    33   42   75    1.5   4      C-1
35Br79    35   44   79    1.5   4      C-1
35Br81    35   46   81    1.5   4      C-1
65Tb159   65   94   159   1.5   4      C-1
77Ir193   77   116  193   1.5   4      C-1
77Ir191   77   114  191   1.5   4      C-1
79Au197   79   118  197   1.5   4      C-1
4Be9      4    5    9     1.5   4      RC-1
10Ne21    10   11   21    1.5   4      RC-1
16S33     16   17   33    1.5   4      RC-1
24Cr53    24   29   53    1.5   4      RC-1
28Ni61    28   33   61    1.5   4      RC-1
54Xe131   54   77   131   1.5   4      RC-1
56Ba135   56   79   135   1.5   4      RC-1
56Ba137   56   81   137   1.5   4      RC-1
64Gd155   64   91   155   1.5   4      RC-1
64Gd157   64   93   157   1.5   4      RC-1
76Os189   76   113  189   1.5   4      RC-1
80Hg201   80   121  201   1.5   4      RC-1
13Al27    13   14   27    2.5   6      C-2
25Mn51    25   26   51    2.5   6      C-2
25Mn55    25   30   55    2.5   6      C-2
37Rb85    37   48   85    2.5   6      C-2
51Sb121   51   70   121   2.5   6      C-2
53I127    53   74   127   2.5   6      C-2
59Pr141   59   82   141   2.5   6      C-2
61Pm145   61   84   145   2.5   6      C-2
63Eu151   63   88   151   2.5   6      C-2
63Eu153   63   90   153   2.5   6      C-2
75Re185   75   110  185   2.5   6      C-2
8O17      8    9    17    2.5   6      RC-2
12Mg25    12   13   25    2.5   6      RC-2
22Ti47    22   25   47    2.5   6      RC-2
30Zn67    30   37   67    2.5   6      RC-2
40Zr91    40   51   91    2.5   6      RC-2
42Mo95    42   53   95    2.5   6      RC-2
42Mo97    42   55   97    2.5   6      RC-2
44Ru101   44   57   101   2.5   6      RC-2
44Ru99    44   55   99    2.5   6      RC-2
46Pd105   46   59   105   2.5   6      RC-2
66Dy161   66   95   161   2.5   6      RC-2
66Dy163   66   97   163   2.5   6      RC-2
70Yb173   70   103  173   2.5   6      RC-2
5B10      5    5    10    3.0   7      B-3
11Na22    11   11   22    3.0   7      B-3
21Sc45    21   24   45    3.5   8      A-3
23V51     23   28   51    3.5   8      A-3
27Co59    27   32   59    3.5   8      A-3
51Sb123   51   72   123   3.5   8      A-3
55Cs133   55   78   133   3.5   8      A-3
57La139   57   82   139   3.5   8      A-3
67Ho165   67   98   165   3.5   8      A-3
71Lu175   71   104  175   3.5   8      A-3
73Ta181   73   108  181   3.5   8      A-3
20Ca43    20   23   43    3.5   8      RA-3
22Ti49    22   27   49    3.5   8      RA-3
60Nd143   60   83   143   3.5   8      RA-3
60Nd145   60   85   145   3.5   8      RA-3
62Sm149   62   87   149   3.5   8      RA-3
68Er167   68   99   167   3.5   8      RA-3
72Hf177   72   105  177   3.5   8      RA-3
92U235    92   143  235   3.5   8      RA-3
41Nb93    41   52   93    4.5   10     C-4
49In113   49   64   113   4.5   10     C-4
83Bi209   83   126  209   4.5   10     C-4
32Ge73    32   41   73    4.5   10     RC-4
36Kr83    36   47   83    4.5   10     RC-4
38Sr87    38   49   87    4.5   10     RC-4
72Hf179   72   107  179   4.5   10     RC-4
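Across the 106 tabulated isotopes, the Peaks column is consistent with the relation peaks = 2 × spin + 1. A quick check against a sample of the tabulated values, one isotope per spin value:

```python
# (spin, peaks) pairs sampled from the table above, one isotope per spin value
samples = {
    "7N15":   (0.5, 2),   # Type A-0
    "3Li6":   (1.0, 3),   # Type B-1
    "3Li7":   (1.5, 4),   # Type C-1
    "13Al27": (2.5, 6),   # Type C-2
    "5B10":   (3.0, 7),   # Type B-3
    "21Sc45": (3.5, 8),   # Type A-3
    "41Nb93": (4.5, 10),  # Type C-4
}
for isotope, (spin, peaks) in samples.items():
    # each tabulated peak count equals 2 * spin + 1
    assert peaks == 2 * spin + 1, isotope
```

The same relation holds for every row of both tables, since each additional field orientation in the identification procedure contributes two peaks.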
Isotope   Z    N    Z+N   Spin  Peaks  Type
3Li6      3    3    6     1.0   3      B-1
3Li7      3    4    7     1.5   4      C-1
4Be9      4    5    9     1.5   4      RC-1
5B10      5    5    10    3.0   7      B-3
5B11      5    6    11    1.5   4      C-1
6C13      6    7    13    0.5   2      RA-0
7N14      7    7    14    1.0   3      B-1
7N15      7    8    15    0.5   2      A-0
8O17      8    9    17    2.5   6      RC-2
9F19      9    10   19    0.5   2      A-0
10Ne21    10   11   21    1.5   4      RC-1
11Na23    11   12   23    1.5   4      C-1
11Na22    11   11   22    3.0   7      B-3
12Mg25    12   13   25    2.5   6      RC-2
13Al27    13   14   27    2.5   6      C-2
14Si29    14   15   29    0.5   2      RA-0
15P31     15   16   31    0.5   2      A-0
16S33     16   17   33    1.5   4      RC-1
17Cl35    17   18   35    1.5   4      C-1
17Cl37    17   20   37    1.5   4      C-1
19K39     19   20   39    1.5   4      C-1
19K41     19   22   41    1.5   4      C-1
20Ca43    20   23   43    3.5   8      RA-3
21Sc45    21   24   45    3.5   8      A-3
22Ti47    22   25   47    2.5   6      RC-2
22Ti49    22   27   49    3.5   8      RA-3
23V51     23   28   51    3.5   8      A-3
24Cr53    24   29   53    1.5   4      RC-1
25Mn51    25   26   51    2.5   6      C-2
25Mn55    25   30   55    2.5   6      C-2
26Fe57    26   31   57    0.5   2      RA-0
27Co59    27   32   59    3.5   8      A-3
28Ni61    28   33   61    1.5   4      RC-1
29Cu63    29   34   63    1.5   4      C-1
29Cu65    29   36   65    1.5   4      C-1
30Zn67    30   37   67    2.5   6      RC-2
31Ga69    31   38   69    1.5   4      C-1
31Ga71    31   40   71    1.5   4      C-1
32Ge73    32   41   73    4.5   10     RC-4
33As75    33   42   75    1.5   4      C-1
34Se77    34   43   77    0.5   2      RA-0
35Br79    35   44   79    1.5   4      C-1
35Br81    35   46   81    1.5   4      C-1
36Kr83    36   47   83    4.5   10     RC-4
37Rb85    37   48   85    2.5   6      C-2
38Sr87    38   49   87    4.5   10     RC-4
39Y89     39   50   89    0.5   2      A-0
40Zr91    40   51   91    2.5   6      RC-2
41Nb93    41   52   93    4.5   10     C-4
42Mo95    42   53   95    2.5   6      RC-2
42Mo97    42   55   97    2.5   6      RC-2
44Ru101   44   57   101   2.5   6      RC-2
44Ru99    44   55   99    2.5   6      RC-2
45Rh103   45   58   103   0.5   2      A-0
46Pd105   46   59   105   2.5   6      RC-2
47Ag107   47   60   107   0.5   2      A-0
47Ag109   47   62   109   0.5   2      A-0
48Cd111   48   63   111   0.5   2      RA-0
49In113   49   64   113   4.5   10     C-4
50Sn115   50   65   115   0.5   2      RA-0
50Sn117   50   67   117   0.5   2      RA-0
51Sb121   51   70   121   2.5   6      C-2
51Sb123   51   72   123   3.5   8      A-3
52Te125   52   73   125   0.5   2      RA-0
53I127    53   74   127   2.5   6      C-2
54Xe129   54   75   129   0.5   2      RA-0
54Xe131   54   77   131   1.5   4      RC-1
55Cs133   55   78   133   3.5   8      A-3
56Ba135   56   79   135   1.5   4      RC-1
56Ba137   56   81   137   1.5   4      RC-1
57La139   57   82   139   3.5   8      A-3
59Pr141   59   82   141   2.5   6      C-2
60Nd143   60   83   143   3.5   8      RA-3
60Nd145   60   85   145   3.5   8      RA-3
61Pm145   61   84   145   2.5   6      C-2
62Sm149   62   87   149   3.5   8      RA-3
63Eu151   63   88   151   2.5   6      C-2
63Eu153   63   90   153   2.5   6      C-2
64Gd155   64   91   155   1.5   4      RC-1
64Gd157   64   93   157   1.5   4      RC-1
65Tb159   65   94   159   1.5   4      C-1
66Dy161   66   95   161   2.5   6      RC-2
66Dy163   66   97   163   2.5   6      RC-2
67Ho165   67   98   165   3.5   8      A-3
68Er167   68   99   167   3.5   8      RA-3
69Tm169   69   100  169   0.5   2      A-0
70Yb173   70   103  173   2.5   6      RC-2
71Lu175   71   104  175   3.5   8      A-3
72Hf177   72   105  177   3.5   8      RA-3
72Hf179   72   107  179   4.5   10     RC-4
73Ta181   73   108  181   3.5   8      A-3
74W183    74   109  183   0.5   2      RA-0
75Re185   75   110  185   2.5   6      C-2
76Os187   76   111  187   0.5   2      RA-0
76Os189   76   113  189   1.5   4      RC-1
77Ir191   77   114  191   1.5   4      C-1
77Ir193   77   116  193   1.5   4      C-1
78Pt195   78   117  195   0.5   2      RA-0
79Au197   79   118  197   1.5   4      C-1
80Hg199   80   119  199   0.5   2      RA-0
80Hg201   80   121  201   1.5   4      RC-1
81Tl203   81   122  203   0.5   2      A-0
81Tl205   81   124  205   0.5   2      A-0
82Pb207   82   125  207   0.5   2      RA-0
83Bi209   83   126  209   4.5   10     C-4
92U235    92   143  235   3.5   8      RA-3
In GAP, the gyromagnetic ratio of a nucleus is equal to the product of the INDC isotope g-factor and the CODATA nuclear magneton divided by the product of the INDC intrinsic spin and the CODATA reduced Planck constant, and the magnetic moment of a nucleus is equal to the product of the INDC isotope g-factor and the CODATA nuclear magneton.
In discrete physics, the magnetic moment of a nucleus is equal to the product of two times the interactional spin (which converts spin to the number of odd Pprotons and/or odd Nprotons), the kinetic steric factor (which converts molecular beam thermal energy into Joules), Lambda-bar, and the GAP value for the gyromagnetic ratio (assumed correct).
In the 106 isotopes tested, the ratio of the INDC isotope magnetic moment divided by the value denominated in discrete units is equal to 1.0288816.
The difference can be narrowed by adjustment but cannot be eliminated because CODATA constants are not exactly reconciled.
Part Four
Particle acceleration
Einstein believed mass was constant, and many of his revolutionary discoveries were based on that concept. Constancy of mass is an eminently reasonable assumption because Newtonian equations are also founded on mass conservation, and in the majority of situations those equations accurately predict the observables. But in fact, as Newton himself succinctly expressed in his letter to Richard Bentley, his equations do not correspond to physical reality.18
Einstein also believed the speed of light was constant and since kinetic energy is proportional to mass and velocity, he concluded that the mass of a particle increases with velocity and approaches (but never reaches) a maximum value as the velocity approaches the speed of light. In special relativity he was able to derive, in a few simple equations, the relativistic momentum and energy (mass-energy) of a particle.
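For reference, the special-relativistic momentum and energy relations referred to above are, in standard notation:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
p = \gamma m v, \qquad
E = \gamma m c^2, \qquad
E^2 = (pc)^2 + (mc^2)^2
```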
In general relativity, Einstein’s field equations described the curvature of space-time in intense gravitational fields in agreement with the measured value for the precession of the perihelion of Mercury. It seems likely the field equations were derived with that result in mind. Even so, this approach is eminently justifiable because measurables are valid assumptions for a physical theory.
Einstein’s prediction that the curvature of space-time in intense gravitational fields was not only responsible for the precession of the perihelion of Mercury but would also bend rays of light was verified in two astronomical expeditions led by Arthur Eddington and Andrew Crommelin. Their observations were acclaimed as verification of general relativity, and today the curvature of space-time is considered by most scientists to be undisputed.
Unfortunately, this undisputed theory cannot determine the velocity of a relativistically accelerated electron or proton and does not provide a mechanism for the increase in energy and mass (mass-energy).
The present theory derives the velocity and mass-energy of accelerated electrons and protons, and provides a mechanism.
In particle acceleration, charged particles are electrostatically formed into a linear beam and accelerated, then injected into a circular accelerator (or cyclotron) where they are magnetically formed into a circular beam and further accelerated by oscillating magnetic fields. Particle acceleration in linear and circular beams is mediated by chirality meshing interactions.
An electrostatic voltage is the emission of quantons:
In electrostatic acceleration of negatively charged particles between a negative cathode on the left emitting CCW quantons and a positive anode on the right emitting CW quantons, chirality meshing absorptions of CCW quantons result in repulsive deflections (voltage acceleration) to the right, and chirality meshing absorptions of CW quantons result in attractive deflections (voltage acceleration) to the right.
If positively charged particles are between a negative cathode on the left emitting CCW quantons and a positive anode on the right emitting CW quantons, chirality meshing absorptions of CCW quantons result in attractive deflections (voltage acceleration) to the left, and chirality meshing absorptions of CW quantons result in repulsive deflections (voltage acceleration) to the left.
Quantons are also produced transverse to a magnetic field with CCW quantons emitted by the magnetic North pole and CW quantons emitted by the magnetic South pole:
In acceleration by a transverse oscillating magnetic field, charged particles are alternately pushed (repulsively deflected) from one direction and pulled (attractively deflected) from the opposite direction.
Negatively charged particles are alternately pushed (deflected in the direction of the positive anode) due to the absorption of CCW quantons and pulled (deflected in the direction of the positive anode) due to the absorption of CW quantons.
Positively charged particles are alternately pulled (deflected in the direction of the negative cathode) due to the absorption of CCW quantons, and pushed (deflected in the direction of the negative cathode) due to the absorption of CW quantons.
In either case (electrostatic voltage or oscillating magnetic voltage) the energy of simultaneous acceleration by oppositely directed voltages is proportional to the square of the voltage.
A chirality meshing absorption of a quanton increases the intrinsic energy of a particle and produces an intrinsic deflection that increases the particle velocity. Like kinetic acceleration, an intrinsic deflection increases the velocity but does so without the dissipation of kinetic energy.
The number of particles and quantons is directly proportional to the intrinsic Josephson constant: 3.0000E15 quantons are absorbed by 3.0000E15 particles per second per Volt. At 400 Volts 1.2000E18 quantons are absorbed by 1.2000E18 particles per second; and at 250,000 Volts 7.5000E20 quantons are absorbed by 7.5000E20 particles per second.
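The proportionality described above can be sketched directly; the constant 3.0000E15 quantons per particle per second per Volt is the intrinsic Josephson constant quoted in the text:

```python
INTRINSIC_JOSEPHSON = 3.0000e15  # quantons absorbed per second per Volt (document value)

def quantons_per_second(volts):
    """Quantons absorbed per second (one per particle) at a given voltage."""
    return INTRINSIC_JOSEPHSON * volts

assert quantons_per_second(400) == 1.2000e18      # matches the quoted value at 400 Volts
assert quantons_per_second(250_000) == 7.5000e20  # matches the quoted value at 250,000 Volts
```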
Each quanton absorption produces a deflection (acceleration) equal to the square root of Lambda-bar divided by the particle amplitude. Quanton absorption by an electron produces a deflection of 2.5327E-18 meters, and quanton absorption by a proton produces a deflection of 2.0680E-19 meters.
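The two quoted deflections are mutually consistent if "the square root of Lambda-bar divided by the particle amplitude" is read as sqrt(Lambda-bar / amplitude), with an electron amplitude of 1 and a proton amplitude of 150 (the value used later in the text): the proton deflection is then the electron deflection divided by sqrt(150). A quick check, assuming that reading:

```python
import math

ELECTRON_DEFLECTION = 2.5327e-18  # meters per quanton absorption (document value)
PROTON_AMPLITUDE = 150            # proton amplitude per the text; electron amplitude taken as 1

# proton deflection = electron deflection / sqrt(proton amplitude)
proton_deflection = ELECTRON_DEFLECTION / math.sqrt(PROTON_AMPLITUDE)
assert abs(proton_deflection - 2.0680e-19) < 1e-22  # matches the quoted 2.0680E-19 m
```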
The number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar. The intrinsic energy absorbed by a particle in a chirality meshing interaction is equal to the product of the number of chirality meshing interactions and Lambda-bar, divided by the number of particles. The accelerated particle intrinsic energy is equal to the sum of the particle intrinsic energy plus the intrinsic energy absorbed by the particle in a chirality meshing interaction.
The kinetic mass-energy in units of Joule is equal to the product of the accelerated particle intrinsic energy, the square of the photon velocity, and the ratio of the discrete Planck constant divided by Lambda-bar.
Electron acceleration
The GAP equation for electron velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the CODATA elementary charge (units of Coulomb) and the voltage, divided by the CODATA electron mass (units of kilogram).
The discrete equation for electron velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the charge intrinsic energy and the voltage, divided by the electron intrinsic energy.
The velocity calculated by the GAP equation is higher than the discrete equation by a factor of 1.007697. The difference can be narrowed by adjustment but cannot be eliminated because CODATA constants are not reconciled.
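As a numerical check of the GAP form v = sqrt(2eV/m) using CODATA values (the discrete value, per the text, is lower by the factor 1.007697; the discrete constants themselves are not given here):

```python
import math

E_CHARGE = 1.602176634e-19     # CODATA elementary charge, C
M_ELECTRON = 9.1093837015e-31  # CODATA electron mass, kg

def gap_electron_velocity(volts):
    """GAP electron velocity: v = sqrt(2 e V / m)."""
    return math.sqrt(2 * E_CHARGE * volts / M_ELECTRON)

v = gap_electron_velocity(100)  # roughly 5.93e6 m/s for a 100 Volt acceleration
assert 5.92e6 < v < 5.94e6
```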
The analysis of electron acceleration includes a range of ten voltages between a minimum voltage and a maximum voltage. The maximum voltage is slightly less than the theoretical voltage required to accelerate an electron to the photon velocity (an impossibility), which, calculated to fifteen significant digits, is 259807.621135332 Volts.
Top row column 1, the voltages used in this example analysis are 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts. The highest voltage, calculated to thirteen significant digits, exactly converts to the photon velocity (an impossibility) to eleven significant digits but is less than the photon velocity (the correct result) at twelve significant digits (this is an excellent example of a discretely exact property).
The equations following, calculations for 100 Volts, are identical to the equations for any other of the nine voltages, or for any other range of ten voltages greater than zero and less than the theoretical maximum.
Top row column 2, the calculated electron velocity per the discrete equation.
Top row column 3, the number of accelerated (deflected) electrons is equal to the ratio of the voltage divided by the intrinsic electron magnetic flux quantum.
Top row column 4, the deflection per quanton is equal to the square root of Lambda-bar divided by the electron amplitude.
This is the deflection of a chirality meshing interaction between a quanton and an electron.
Bottom row column 1, the number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar.
Bottom row column 2, the increase in intrinsic energy per electron due to chirality meshing interactions, equal to the product of the number of chirality meshing interactions and Lambda-bar divided by the number of electrons, is denominated in units of Einstein.
Bottom row column 3, the accelerated electron energy is equal to the sum of the electron intrinsic energy and the increase in intrinsic energy per electron.
Bottom row column 4, the mass-energy in units of Joule is equal to the product of the accelerated electron intrinsic energy, the square of the photon velocity and the ratio of the discrete Planck constant divided by Lambda-bar.
Proton acceleration
The analysis of proton acceleration includes a range of ten voltages between a minimum voltage and a maximum voltage. For purposes of comparison, we specify the same voltages as used for the electron. The theoretical voltage required to accelerate a proton to the photon velocity (an impossibility), calculated to fifteen significant digits, is 38971143.1702997 Volts; any voltage less than this theoretical maximum will accelerate a proton to less than the photon velocity.
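Numerically, the two quoted theoretical maxima are related by a factor of exactly 150 (the proton amplitude used elsewhere in the text), and the quoted electron maximum coincides with 150000 × sqrt(3) Volts; a quick check of both observations:

```python
import math

V_MAX_ELECTRON = 259807.621135332  # Volts (document value, electron)
V_MAX_PROTON = 38971143.1702997    # Volts (document value, proton)

# the proton maximum is 150 times the electron maximum
assert abs(V_MAX_PROTON / V_MAX_ELECTRON - 150) < 1e-9
# the electron maximum equals 150000 * sqrt(3) to the quoted precision
assert abs(V_MAX_ELECTRON - 150_000 * math.sqrt(3)) < 1e-8
```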
The GAP equation for proton velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the CODATA elementary charge (units of Coulomb) and the voltage, divided by the CODATA proton mass (units of kilogram).
The discrete equation for proton velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the charge intrinsic energy (in units of intrinsic Volt) and the voltage, divided by the proton intrinsic energy (in units of Einstein).
The discrete proton velocity is lower than the discrete electron velocity by the square root of 150 (the square root of the proton amplitude).
Top row column 1, the voltages used in this example analysis are 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts. The equations following, calculations for 100 Volts, are identical to the equations for any other of the nine voltages, or for any other range of ten voltages greater than zero and less than the theoretical maximum.
Top row column 2, the calculated proton velocity per the discrete equation.
Top row column 3, the number of accelerated (deflected) protons is equal to the ratio of the voltage divided by the intrinsic electron magnetic flux quantum.
Top row column 4, the deflection per quanton is equal to the square root of Lambda-bar divided by the proton amplitude.
This is the deflection of a chirality meshing interaction between a quanton and a proton.
Bottom row column 1, the number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar.
Bottom row column 2, the increase in intrinsic energy per proton due to chirality meshing interactions, equal to the product of the number of chirality meshing interactions and Lambda-bar divided by the number of protons, is denominated in units of Einstein.
Bottom row column 3, the accelerated proton energy is equal to the sum of the intrinsic proton energy and the increase in intrinsic energy per proton.
Bottom row column 4, the mass-energy in units of Joule is equal to the product of the accelerated proton intrinsic energy, the square of the photon velocity and the ratio of the discrete Planck constant divided by Lambda-bar.
Part Five
Atomic Spectra
The Rydberg equations reproduce the hydrogen spectral series to high accuracy, and the Newtonian equations describe orbital motion to high accuracy, but despite many years of considerable effort physicists have been unable to account for the spectrum of helium or for non-Newtonian stellar rotation curves.
Previously, we reformulated the Newtonian equations and explained stellar rotation curves. In this chapter we will reformulate the Rydberg equations for the spectral series of hydrogen and derive a general explanation for atomic spectra.
The equation formulated by Johann Balmer in 1885, in which the hydrogen spectrum wave numbers are proportional to the product of a constant and the difference between the inverse square of two integers, is correct, but the Bohr Model is not.
The electron is not a point particle; the electron does not orbit the proton; the force conveyed by an electron is not transmitted an infinite distance; at an infinitesimal distance the force is not infinite; electrons with lower energy and lower wave number are closer to the proton; and electrons with higher energy and higher wave number are farther from the proton (the Bohr distance-energy relationship must be reversed).
In hydrogen an electron and proton are engaged in a positional resonance. In atoms larger than hydrogen many electrons and protons are engaged in positional resonances. Each resonance is between one electron external to the nucleus and one proton internal to the nucleus, in which the electron and the nuclear proton are facing in opposite directions and each particle emits quantons that are absorbed by the other particle. On emission by the electron the quanton is CCW and on emission by the nuclear proton the quanton is CW. On emission the emitting particle recoils by a distance proportional to the particle intrinsic energy and on absorption the absorbing particle is attractively deflected (a chirality meshing interaction) by a distance proportional to the particle intrinsic energy. The result is a sustained positional resonance of a CCW quanton emitted in one direction by the electron and absorbed by the nuclear proton and a CW quanton emitted in the opposite direction by the nuclear proton and absorbed by the electron.
In the hydrogen atom, the resonance can be situated at any one of several quantized positions proportional to energy and corresponding to spectral emission and absorption lines. On emission of a photon the energy of the resonance decreases, and the electron drops to the adjacent lower energy level. On absorption of a photon the energy of the resonance increases, and the electron jumps to the adjacent higher energy level. The highest stable energy level, corresponding to an emission-only line, the maximum electron-proton separation distance beyond which the positional resonance no longer exists, is the hydrogen ionization energy.
The above paragraphs summarize the spectral mechanism which, for the time being, shall be considered a hypothesis.
The intrinsic to kinetic energy factor is equal to: the ratio of the discrete Planck constant divided by the Coulomb, divided by the ratio of Lambda-bar divided by the charge intrinsic energy; the ratio of the discrete Planck constant divided by the product of Lambda-bar and the square root of the proton amplitude divided by two; and two times the intrinsic steric factor.
The ionization energy of hydrogen (in larger atoms, the ionization energy required to remove the last electron) is a discretely exact single value above which the atom no longer exists. The measured energy of hydrogen ionization is 1312 kJ/mol, and the corresponding CRC value is 13.59844 (units of kinetic electron Volts).19 Kinetic electron Volts divided by Omega-2 equals intrinsic Volts (units of Joule); dividing by 12 (the intrinsic to kinetic energy factor) gives intrinsic Volts (units of Einstein); multiplying by the intrinsic electron charge gives intrinsic energy; and dividing by Lambda-bar gives the photon frequency of hydrogen ionization.
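The two measured figures quoted above are the same quantity in different units; converting 1312 kJ/mol to electron Volts per atom with the CODATA Avogadro constant and elementary charge reproduces the CRC value to about three decimal places:

```python
AVOGADRO = 6.02214076e23    # CODATA Avogadro constant, 1/mol
E_CHARGE = 1.602176634e-19  # CODATA elementary charge, C

# 1312 kJ/mol -> Joules per atom -> electron Volts per atom
ev_per_atom = 1312e3 / AVOGADRO / E_CHARGE
assert abs(ev_per_atom - 13.59844) < 0.01  # CRC value: 13.59844 eV
```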
Working backwards from the calculation sequences above, the discretely exact value of the photon ionization frequency is 3.28000000E15.
The intrinsic energy of hydrogen ionization, denominated in units of Einstein, is equal to the product of the photon frequency and Lambda-bar.
The intrinsic energy of hydrogen ionization, denominated in units of Joule, is equal to the product of the photon frequency and the discrete Planck constant.
The intrinsic voltage of hydrogen ionization, denominated in units of Einstein, is equal to the product of the photon frequency and Lambda-bar, divided by the charge intrinsic energy.
The ratio of the intrinsic voltage of hydrogen ionization divided by Psi is equal to the discrete Rydberg constant, denominated in units of inverse meter (spatial frequency).
The intrinsic voltage of hydrogen ionization, denominated in units of Joule, is equal to the product of 12 (the intrinsic to kinetic energy factor) and the discrete Rydberg constant, and the product of the photon frequency and the discrete Planck constant, divided by the Coulomb.
The kinetic voltage of hydrogen ionization, denominated in units of electron Volt, is equal to the product of the intrinsic voltage of hydrogen ionization and omega-2.
The difference between the above calculated energy of ionization and the CRC value is less than 0.30%. The poor accuracy is due to the performance standards of calorimeters.20 In the measurement of a sample against a calibration standard, a statistical analysis of the results will show the data lie within three standard deviations (sigma-3) of the mean (the expected value) and the accuracy will be 0.15% (99.85% of the measurements will lie in the range of higher than the calibration standard by no more than 0.15% or lower than the calibration standard by no more than 0.15%). If the identical procedure is used without prior knowledge of the expected result and whether the measurement is higher or lower than the actual value is unknown, the accuracy falls to no more than 0.30%.
The difference between the calculated kinetic voltage of hydrogen ionization and the measured CRC value, expressed as a percentage, is 0.2666%.
Spectral series consist of a number of emission-absorption lines with a lower limit on the left and an upper limit on the right. Both limits are asymptotes: the lower limit corresponds to minimum energy, minimum frequency, and maximum wavelength; and the upper limit corresponds to maximum energy, maximum frequency, and minimum wavelength.
The below diagram of the Lyman spectral series consists of seven black emission-absorption lines to the left and a red emission-only line on the right. From left to right these lines are the Lyman lower limit (Lyman-A), Lyman-B, Lyman-C, Lyman-D, Lyman-E, Lyman-F, Lyman-G, and the Lyman upper limit.
The Rydberg equation expresses the wave numbers of the hydrogen spectrum as equal to the product of the discrete Rydberg constant and the difference between the inverse square of the m-index and the inverse square of the n-index.
The m-index has a constant value for each spectral series within the hydrogen spectrum. The six series, ordered by decreasing energy at the series upper limit, are Lyman, Balmer, Paschen, Brackett, Pfund, and Humphreys.
Each line of a spectral series can be expressed in terms of energy, wave number, wavelength and photon frequency. The energy, wave number, and frequency increase from left to right, but the wavelength decreases from left to right.
For each spectral series the m-index increases from lowest to highest positional energy (Lyman = 1, Balmer = 2, Paschen = 3, Brackett = 4, Pfund = 5, Humphreys = 6). Each spectral series is composed of a sequence of lines (A, B, C, D, E, F, G) in which the n-index is equal to m+1, m+2, m+3, m+4, etc.
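The indexing just described can be sketched as a small generator. The ionization frequency 3.280000E15 is the document's discretely exact value, and line frequencies follow the Rydberg form f = f_ion × (1/m² − 1/n²):

```python
F_ION = 3.280000e15  # photon frequency of hydrogen ionization (document's discretely exact value)

SERIES_M = {"Lyman": 1, "Balmer": 2, "Paschen": 3,
            "Brackett": 4, "Pfund": 5, "Humphreys": 6}

def line_frequency(series, line):
    """Photon frequency of a series line; line 'A' has n = m + 1, 'B' has n = m + 2, etc."""
    m = SERIES_M[series]
    n = m + 1 + "ABCDEFG".index(line)
    return F_ION * (1 / m**2 - 1 / n**2)

# Lyman-A (m = 1, n = 2): f = F_ION * (1 - 1/4)
assert abs(line_frequency("Lyman", "A") - 2.46e15) < 1e6
```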
In the following analysis we will apply the Rydberg formula to calculate, based on the discretely exact value of the photon ionization frequency of 3.280000E15 Hz, the values for energy, wave number and frequency of the six spectral series of hydrogen.
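As a minimal sketch of the Rydberg recipe just described, the structural frequency and energy of any line can be computed from the document's discretely exact Lyman-limit frequency of 3.28E15 Hz. The hydrogen ionization energy of 13.12 Vi(J) used below is not stated at this point; it is the value implied by the line energies quoted later in this section (e.g., Lyman-B at 11.662222 Vi(J)).

```python
# Rydberg line calculation using the document's discretely exact values.
F_LIMIT = 3.28e15   # Hz, Lyman upper limit frequency (m = 1, n -> infinity)
E_ION = 13.12       # Vi(J), hydrogen ionization energy implied by later figures

def structural_frequency(m, n):
    """Positional (structural) frequency of the line with indices m, n."""
    return F_LIMIT * (1 / m**2 - 1 / n**2)

def line_energy(m, n):
    """Line energy in Vi(J), proportional to the structural frequency."""
    return E_ION * (1 / m**2 - 1 / n**2)

# Lyman-B (m = 1, n = 3) reproduces the figures quoted later in the text:
print(structural_frequency(1, 3))  # ~2.915555e15 Hz
print(line_energy(1, 3))           # ~11.662222 Vi(J)
```

The same two functions generate every series in this section by changing the m-index.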
The below calculations begin with the discretely exact values for the Lyman limit photon frequency and the hydrogen ionization energy (intrinsic voltage units of Joule), and the value of the discrete Rydberg constant.
The Lyman upper limit is an emission-only line because at any energy above the Lyman upper limit the hydrogen atom no longer exists. The calculation for the line prior to the Lyman upper limit is based on an n-index equal to 8, but there are additional discernible lines after Lyman-G because the Lyman upper limit is an asymptote. The identical situation holds for the limit of any spectral series.
The spectral series lower limit, the A-line (Lyman-A, Balmer-A, etc.), is also an asymptote, and there are additional discernible lines between the C-line and the A-line. The number of lines included in a spectral series analysis is optional, but it is convenient to use the same number of lines in the spectral series being compared.
In this presentation, 8 Lyman and Balmer lines are included because these lines are specified in at least one of the easily available online sources. In the Paschen, Brackett, Pfund and Humphreys spectral series, 6 lines are included because these are also easily available.21
The ratio of the Lyman upper limit divided by the upper limit of another hydrogen spectral series is equal to the square of the m-index of the other series:
The Lyman upper limit divided by the Balmer upper limit is equal to 4.
The Lyman upper limit divided by the Paschen upper limit is equal to 9.
The Lyman upper limit divided by the Brackett upper limit is equal to 16.
The Lyman upper limit divided by the Pfund upper limit is equal to 25.
The Lyman upper limit divided by the Humphreys upper limit is equal to 36.
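The five ratios above follow directly from the Rydberg form: the upper limit of series m has a wave number proportional to 1/m-squared, so dividing the Lyman limit (m = 1) by the limit of any other series yields the square of that series' m-index. A quick check:

```python
# The upper limit of series m is proportional to 1/m^2, so the ratio of the
# Lyman upper limit (m = 1) to the upper limit of series m equals m^2.
def limit_ratio_to_lyman(m):
    return (1 / 1**2) / (1 / m**2)

for name, m in [("Balmer", 2), ("Paschen", 3), ("Brackett", 4),
                ("Pfund", 5), ("Humphreys", 6)]:
    print(name, limit_ratio_to_lyman(m))  # 4, 9, 16, 25, 36
```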
The ratio of the Lyman spectral series upper limit divided by the Lyman spectral series lower limit is equal to the ratio of the Rydberg wave number calculation for the upper limit divided by the Rydberg wave number calculation for the lower limit.
In all spectral series the Rydberg ratio is equal to the upper limit energy divided by the lower limit energy, the ratio of the upper limit structural frequency divided by the lower limit structural frequency, and the ratio of the lower limit wavelength divided by the upper limit wavelength.
The ratio of the Balmer spectral series upper limit divided by the Balmer spectral series lower limit is equal to the ratio of the Rydberg wave number calculation for the upper limit divided by the Rydberg wave number calculation for the lower limit.
The same calculation is used for the other four hydrogen spectral series:
The ratio of the Paschen spectral series upper limit divided by the Paschen lower limit is equal to 16/7 (1312/574; 2.285714).
The ratio of the Brackett spectral series upper limit divided by the Brackett lower limit is equal to 25/9 (2.777777).
The ratio of the Pfund spectral series upper limit divided by the Pfund lower limit is equal to 36/11 (3.272727).
The ratio of the Humphreys spectral series upper limit divided by the Humphreys lower limit is equal to 49/13 (3.769230).
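The upper/lower-limit ratios above have a closed form. The upper limit of series m has wave number proportional to 1/m-squared and the lower limit (A-line) to 1/m-squared minus 1/(m+1)-squared, so the ratio reduces to (m+1)-squared over (2m+1). Exact arithmetic confirms the quoted fractions:

```python
from fractions import Fraction

# Ratio of a series' upper limit to its lower limit (A-line):
#   (1/m^2) / (1/m^2 - 1/(m+1)^2)  =  (m+1)^2 / (2m + 1)
def limit_ratio(m):
    upper = Fraction(1, m**2)
    lower = Fraction(1, m**2) - Fraction(1, (m + 1)**2)
    return upper / lower

for name, m in [("Lyman", 1), ("Balmer", 2), ("Paschen", 3),
                ("Brackett", 4), ("Pfund", 5), ("Humphreys", 6)]:
    print(name, limit_ratio(m), float(limit_ratio(m)))
```

The Paschen, Brackett, Pfund and Humphreys values come out as 16/7, 25/9, 36/11 and 49/13, matching the text.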
Above, the frequencies under the A, B, C, D, E, F, G-lines and the series limit are the positional structural frequencies, and the transition frequencies between lines (B-A, C-B … F-E, G-F) are the photon emission-absorption frequencies.
The structural frequency of the G-line is equal to the product of the Rydberg calculated wave number and the photon velocity. The energy of the G-line (intrinsic Volts units of Joule) is equal to the product of the structural frequency of the G-line and the discrete Planck constant divided by the Coulomb.
The structural frequency of the F-line is equal to the product of the Rydberg calculated wave number and the photon velocity. The energy of the F-line (intrinsic Volts units of Joule) is equal to the product of the structural frequency of the F-line and the discrete Planck constant divided by the Coulomb.
The photon emission-absorption frequency of the G-F transition is equal to the structural frequency of the G-line minus the structural frequency of the F-line. The energy of the G-F transition (intrinsic Volts units of Joule) is equal to the energy of the G-line minus the energy of the F-line.
The identical process is used to calculate the emission-absorption frequencies and energies for all spectral series.
Note there is no transition frequency or energy between the G-line and the series limit because the series limit is emission-only.
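The structural-versus-transition distinction above can be sketched numerically for the Lyman G and F lines (n = 8 and n = 7), using the document's discretely exact Lyman-limit frequency:

```python
# Structural frequencies of the Lyman G-line (n = 8) and F-line (n = 7),
# and the photon frequency of the G-F transition, per the recipe above.
F_LIMIT = 3.28e15  # Hz, discretely exact Lyman-limit frequency

def structural(m, n):
    return F_LIMIT * (1 / m**2 - 1 / n**2)

f_G = structural(1, 8)
f_F = structural(1, 7)
transition_GF = f_G - f_F   # photon emission-absorption frequency
print(f_G, f_F, transition_GF)
```

The difference reproduces the Lyman G-F transition frequency of 1.568878E13 Hz quoted below.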
Lyman series transition photons identical to Balmer series photons:
When a Lyman-C positional resonance drops down to Lyman-B, the Lyman-C energy is emitted as two photons: an 11.662222 Vi(J) Lyman-B photon of frequency 2.915555E15 and a 0.637777 Vi(J) Lyman C-B photon of frequency 1.594444E14. The frequency and wavelength of the transition photon are identical to those of the Balmer B-A transition photon.
When a Lyman-D positional resonance drops down to Lyman-C, the Lyman-D energy is emitted as two photons: a 12.300000 Vi(J) Lyman-C photon of frequency 3.075000E15 and a 0.295200 Vi(J) Lyman D-C photon of frequency 7.380000E13. The frequency and wavelength of the transition photon are identical to those of the Balmer C-B transition photon.
When a Lyman-E positional resonance drops down to Lyman-D, the Lyman-E energy is emitted as two photons: a 12.595200 Vi(J) Lyman-D photon of frequency 3.148800E15 and a 0.160356 Vi(J) Lyman E-D photon of frequency 4.008888E13. The frequency and wavelength of the transition photon are identical to those of the Balmer D-C transition photon.
When a Lyman-F positional resonance drops down to Lyman-E, the Lyman-F energy is emitted as two photons: a 12.755555 Vi(J) Lyman-E photon of frequency 3.188888E15 and a 0.096689 Vi(J) Lyman F-E photon of frequency 2.41723E13. The frequency and wavelength of the transition photon are identical to those of the Balmer E-D transition photon.
When a Lyman-G positional resonance drops down to Lyman-F, the Lyman-G energy is emitted as two photons: a 12.852245 Vi(J) Lyman-F photon of frequency 3.21306E15 and a 0.062755 Vi(J) Lyman G-F photon of frequency 1.568878E13. The frequency and wavelength of the transition photon are identical to those of the Balmer F-E transition photon.
The equivalence between Lyman series transitions and the A-line of the Balmer series can be extended to the A-lines of the Paschen, Brackett, Pfund and Humphreys series.
The Lyman C-B transition is equal to the energy and frequency of Paschen-A.
The Lyman D-C transition is equal to the energy and frequency of Brackett-A.
The Lyman E-D transition is equal to the energy and frequency of Pfund-A.
The Lyman F-E transition is equal to the energy and frequency of Humphreys-A.
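The four equivalences above are algebraic identities in the Rydberg form: the Lyman transition from n to n-1 has frequency proportional to 1/(n-1)-squared minus 1/n-squared, which is exactly the A-line structural frequency of the series with m = n-1. A numerical check under the document's Lyman-limit value:

```python
# Each Lyman transition frequency equals the A-line structural frequency of
# the series whose m-index is one less than the upper n-index involved.
F_LIMIT = 3.28e15  # Hz

def structural(m, n):
    return F_LIMIT * (1 / m**2 - 1 / n**2)

checks = [("Lyman C-B vs Paschen-A",   (1, 4), (1, 3), (3, 4)),
          ("Lyman D-C vs Brackett-A",  (1, 5), (1, 4), (4, 5)),
          ("Lyman E-D vs Pfund-A",     (1, 6), (1, 5), (5, 6)),
          ("Lyman F-E vs Humphreys-A", (1, 7), (1, 6), (6, 7))]
for label, hi, lo, a_line in checks:
    transition = structural(*hi) - structural(*lo)
    print(label, transition, structural(*a_line))
```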
An explanation of atomic spectra begins with the ionization energies.
In atoms with more than one proton, the discretely exact elemental ionization energy (in red), above which the atom no longer exists, is equal to the product of the square of the number of protons and the discretely exact value for the hydrogen ionization energy. The intermediate ionization energies (in blue) are equal to the CRC value divided by omega-2.
The ionization frequency is equal to the product of the ionization energy and the Coulomb divided by the discrete Planck constant.
The ionization wave number is equal to the ionization frequency divided by the photon velocity.
The photon wavelength is the inverse of the wave number.
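The three-step chain above (energy to frequency to wave number to wavelength) can be sketched as follows. The Coulomb-to-discrete-Planck ratio is not given numerically in this section; the value 2.5E14 Hz per intrinsic volt used below is the ratio implied by the hydrogen pair of 13.12 Vi(J) and 3.28E15 Hz, and the photon velocity is a stand-in for the document's exact value.

```python
# Ionization energy -> frequency -> wave number -> wavelength.
C_OVER_H = 2.5e14           # Hz per Vi(J); implied by 3.28E15 / 13.12 (assumption)
PHOTON_VELOCITY = 2.9979e8  # m/s, stand-in for the document's photon velocity

def ionization_chain(energy_vi):
    frequency = energy_vi * C_OVER_H           # ionization frequency, Hz
    wave_number = frequency / PHOTON_VELOCITY  # ionization wave number, 1/m
    wavelength = 1 / wave_number               # photon wavelength, m
    return frequency, wave_number, wavelength

print(ionization_chain(13.12))  # hydrogen: frequency ~3.28e15 Hz
```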
The difference between the calculated and measured value for the hydrogen ionization energy, divided by the difference between the measured wavelength and calculated wavelength for hydrogen ionization is very nearly equal to the difference between the photon velocity and the speed of light.
The difference between these two values, independent of how it is calculated, is a measurement error term of approximately 0.00468%.
The differences between the measured and calculated values for hydrogen are of no concern and, even though the Rydberg equations derive the measurable wavelengths to high accuracy, the explanation requiring the simultaneous emission of two photons is not consistent with the spectral mechanism hypothesis.
The Rydberg explanation for the emission of atomic spectra requires two frequencies:
One frequency is the structural frequency. Structural frequency is proportional to the energy of the positional resonance between an electron and proton (the energy required to hold the electron and proton in the positional resonance).
The photon frequency, equal to the difference between adjacent structural frequencies, is proportional to an ionization energy (the energy required to remove an electron from the positional resonance).
The photon frequency and wavelength are not directly proportional to structural energy and, in atoms larger than hydrogen, cannot be calculated by a Rydberg equation.
Proofs that wavelength and frequency are not directly proportional to energy:
Spectral wavelengths emitted by sources differing greatly in energy, by a discharge tube in the laboratory, by the sun or by the galactic center, are indistinguishable.
In 60 Hertz power transformers the energy of the emitted photons is proportional to the energy of the current (or the magnetic field).
A general explanation for atomic spectra requires an examination of the measured ionization energies and the measured wavelengths of the first four elements larger than hydrogen.
The number of CRC ionization energies (electron Volts in units of kinetic Joule) for each elemental atom larger than hydrogen is equal to the number of nuclear protons; and the number of atomic energies (intrinsic Volts in units of discrete Joule) is also equal to the number of nuclear protons.
While it is true that measured wavelengths are not directly proportional to energy, it is also true that shorter wavelengths correspond to lower energies and longer wavelengths to higher energies. For example, ultraviolet photons have shorter wavelengths and lower energies, and visible photons have longer wavelengths and higher energies.
In any atomic spectrum, each measured wavelength corresponds to one specific energy and, in order for each measured wavelength to correspond to one specific energy, the number of wavelengths must either be equal to the number of energies or equal to an integer multiple of the number of energies.
For example, in helium there are two CRC ionization energies (electron Volts in units of kinetic Joule) corresponding to two atomic energies (intrinsic Volts in units of discrete Joule), fourteen measured wavelengths, and one transition between a wavelength proportional to a lower energy and a wavelength proportional to a higher energy.
In the below table, seven lower and seven higher helium atomic energies are in the first row, the measured wavelengths from shortest to longest are in the third row, and the second row is the ratio of the column wavelength divided by the adjacent lower wavelength. This is the definitive test for a transition from a wavelength corresponding to a lower energy to a wavelength corresponding to a higher energy. In the helium atom, the transition wavelength is also detectable by inspection of the previous wavelengths compared to the following wavelengths.
The transitions are less clear in lithium, beryllium, and boron.
In lithium, beryllium and boron the transition wavelengths are not definitively detectable by simple inspection. However, after the higher energy transitions are established by the ratios of the column wavelength divided by the adjacent lower wavelength, the first transition becomes apparent by inspection of the measured wavelengths.
The spectral mechanism hypothesis has been transformed into a general explanation for atomic spectra:
In hydrogen a single electron and proton are engaged in a positional resonance at a discretely exact frequency equal to 3.28E15 Hz. In atoms larger than hydrogen many electrons and protons are engaged in sustained positional resonances, equal to the product of the square of the number of nuclear protons and 3.28E15 Hz, in which CCW quantons are emitted in one direction by electrons and absorbed by nuclear protons, and CW quantons are emitted in the opposite direction by nuclear protons and absorbed by electrons. The positional resonances can be situated at any one of several quantized positions proportional to energy and corresponding to spectral emission and absorption lines. On emission of a photon the energy of the resonance decreases, and the electron drops to a lower energy level. On absorption of a photon the energy of the resonance increases and the electron jumps to a higher energy level.
Part Six
Cosmology
The purpose of this chapter is to disprove cosmic inflation:
The radiated intrinsic energy which drives the resonance of constant photon velocity is converted into units of intrinsic redshift per megaparsec.
A detailed general derivation of intrinsic redshift (applicable to any galaxy) is made.
The final results of the HST Key Project to measure the Hubble Constant are explained by intrinsic redshift.22
The only measurables in the determination of galactic redshifts are the photon wavelength emitted and received in the laboratory, the photon wavelength emitted by a galaxy and received by an observatory, and the ionization energies.
In the following equations Hydrogen-alpha (Balmer-A) wavelengths are used in calculations of intrinsic redshift.
Intrinsic redshift per megaparsec
The photon intrinsic energy radiated per second due to quanton/graviton emissions is equal to the product of 8 and the discrete Planck constant.
The 2015 IAU value for the megaparsec is proportional to the IAU exact SI definition of the astronomical unit (149,597,870,700 m).
The time of flight per megaparsec is equal to one mpc divided by the photon velocity.
The photon intrinsic energy radiated per megaparsec is equal to the product of time of flight per mpc and the photon intrinsic energy radiated per second due to quanton/graviton emissions.
The decrease in photon frequency due to the energy radiated is equal to the photon intrinsic energy radiated per megaparsec divided by the discrete Planck constant.
The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.
Note that wavelength and energy are independent, thus wavelength cannot be directly determined from energy; but frequency is proportional to energy, and the decrease in frequency is proportional to the increase in wavelength.
The intrinsic redshift per megaparsec is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.
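The per-megaparsec steps above can be sketched in order. The discrete Planck constant cancels out of the chain (it multiplies the radiated power and divides the frequency decrease), so a placeholder value suffices; the megaparsec, photon velocity, and Hydrogen-alpha wavelength below are likewise stand-ins for the document's exact figures.

```python
# Intrinsic redshift per megaparsec, following the five steps above.
H_DISCRETE = 6.6e-34        # J*s, placeholder for the discrete Planck constant
MPC = 3.0857e22             # m, one megaparsec
PHOTON_VELOCITY = 2.9979e8  # m/s, stand-in for the photon velocity
H_ALPHA = 656.3e-9          # m, Hydrogen-alpha (Balmer-A) emission wavelength

power = 8 * H_DISCRETE                    # intrinsic energy radiated per second
t_mpc = MPC / PHOTON_VELOCITY             # time of flight per mpc, seconds
energy = power * t_mpc                    # intrinsic energy radiated per mpc
delta_f = energy / H_DISCRETE             # frequency decrease (= 8 * t_mpc)
delta_lambda = PHOTON_VELOCITY / delta_f  # wavelength increase, m
shifted = H_ALPHA + delta_lambda          # intrinsic redshift per mpc
print(delta_f, delta_lambda, shifted)
```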
General derivation of galactic intrinsic redshift
The distance of the galaxy in units of mpc is that determined by the Hubble Space Telescope Key Project.23 Below, the example calculations are for NGC0300.
The time of flight of photons emitted by NGC0300 is equal to the product of the time of flight per megaparsec and the Hubble Space Telescope Key Project distance of the galaxy.
The photon intrinsic energy radiated by NGC0300 is equal to the product of the time of flight at the distance of NGC0300 and the photon intrinsic energy radiated per second due to quanton/graviton emissions.
The decrease in photon frequency is equal to the photon intrinsic energy radiated by NGC0300 divided by the discrete Planck constant.
The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.
The intrinsic redshift at the distance of NGC0300 is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.
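The general derivation can be wrapped as a single function of HST Key Project distance, following the NGC0300 steps above. The distance used in the example call is a placeholder, not the Key Project table value; the other constants are stand-ins as in the per-megaparsec sketch, and the discrete Planck constant again cancels.

```python
# General intrinsic-redshift derivation, applicable to any galaxy.
H_DISCRETE = 6.6e-34        # J*s, placeholder; cancels within the chain
MPC = 3.0857e22             # m per megaparsec
PHOTON_VELOCITY = 2.9979e8  # m/s, stand-in for the photon velocity
H_ALPHA = 656.3e-9          # m, Balmer-A emission wavelength

def intrinsic_redshift(distance_mpc):
    t = distance_mpc * MPC / PHOTON_VELOCITY  # time of flight, s
    energy = t * 8 * H_DISCRETE               # intrinsic energy radiated
    delta_f = energy / H_DISCRETE             # decrease in photon frequency
    delta_lambda = PHOTON_VELOCITY / delta_f  # increase in photon wavelength
    return H_ALPHA + delta_lambda             # shifted wavelength, m

d_example = 2.0  # mpc, placeholder; substitute the Key Project distance
print(intrinsic_redshift(d_example))
```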
Results of the HST Key Project to measure the Hubble Constant
The goal of this massive international project, involving more than fifteen years of effort by hundreds of researchers, was to build an accurate distance scale for Cepheid variables and use this information to determine the Hubble constant to an accuracy of 10%.
The inputs to the HST key project were the observed redshifts and the theoretical relativistic expansion rate of cosmic inflation.
In column 2 below, the galactic distances of 22 galaxies in units of mpc are the values determined by the HST Key Project.24
In column 3 below, the galactic distances are expressed in units of meter.
In column 4 below, the time of flight of photons emitted by the galaxy is equal to the distance of the galaxy in meters divided by the photon velocity.
The photon intrinsic energy radiated due to quanton/graviton emissions at the distance of the galaxy is equal to the product of the time of flight of photons emitted by the galaxy and the photon intrinsic energy radiated per second.
The decrease in photon frequency is equal to the photon intrinsic energy radiated by the galaxy divided by the discrete Planck constant.
The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.
In column 5 below, the intrinsic redshift at the distance of the galaxy is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.
The Hubble parameter for a galaxy, denominated in units of km/s per mpc, is equal to the product of two ratios: 2 omega-2 (which converts intrinsic energy to kinetic energy) divided by the time of flight of the photons emitted by the galaxy and received at the observatory, and the distance of the galaxy in units of kilometer divided by the distance of the galaxy in units of megaparsec.
The Hubble constant is equal to the sum of the Hubble parameters for the galaxies examined divided by the number of galaxies.
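The Hubble parameter and Hubble constant definitions above can be sketched as follows. Omega-2 is defined elsewhere in the document; the value below is a placeholder used only to show the structure of the calculation, and the unit conversions are stand-ins.

```python
# Hubble parameter per the definition above:
#   H = (2 * omega2 / time_of_flight) * (distance_km / distance_mpc)
OMEGA2 = 1.2e-4             # placeholder; defined in an earlier part (assumption)
MPC_M = 3.0857e22           # meters per megaparsec
MPC_KM = 3.0857e19          # kilometers per megaparsec
PHOTON_VELOCITY = 2.9979e8  # m/s, stand-in for the photon velocity

def hubble_parameter(d_mpc):
    t = d_mpc * MPC_M / PHOTON_VELOCITY        # time of flight, seconds
    return (2 * OMEGA2 / t) * (d_mpc * MPC_KM / d_mpc)

def hubble_constant(distances_mpc):
    """Average of the per-galaxy Hubble parameters, per the text."""
    return sum(hubble_parameter(d) for d in distances_mpc) / len(distances_mpc)

print(hubble_constant([2.0, 10.0, 25.0]))
```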
The theory of cosmic inflation has been disproved.
Part Seven
Magnetic levitation and suspension
This chapter was motivated by a video about quantum magnetic levitation and suspension in which superconducting disks containing thin films of YBCO are levitated and suspended on a track composed of neodymium magnet arrays in which a unit array contains four neodymium magnets (two diagonal magnets oriented N→S and the other two S→N).25
An understanding of levitation and suspension by neodymium magnet arrays begins with consideration of the differences between the levitation of a superconducting disk containing thin films of metal oxides and the levitation of a thin slice of pyrolytic carbon.
Oxygen is paramagnetic. An oxygen atom is magnetized by the magnetic field of a permanent magnet in the direction of the external magnetic field (for example, a S→N external magnetic field induces a S→N internal field) and reverts to a demagnetized state when the field is removed. The levitation of a superconducting disk requires an array of neodymium magnets and cooling below the critical temperature. In quantum levitation or suspension, the position of the disk is established by holding (pinning) it in the desired location and orientation, and if a pinned disk is forced into a new location and orientation, it remains pinned in the new location.
Carbon is diamagnetic. A carbon atom is magnetized by a magnetic field in the direction opposite to the magnetic field (for example, a N→S external magnetic field induces a S→N internal field) and reverts to a demagnetized state when the field is removed. Magnetic levitation occurs at room temperature, a thin slice of pyrolytic carbon levitates at a fixed distance parallel to the surface of an array of neodymium magnets, and a levitated slice forced closer to the surface springs back to the fixed distance once the force is removed.
In the levitation of pyrolytic carbon, CCW quantons are emitted by a magnetic North pole and CW quantons are emitted by a magnetic South pole (magnetic emission of quantons is discussed in Part Four).
The number of chirality meshing interactions required to exactly oppose the gravitational force on a thin slice of pyrolytic carbon (or any object) is equal to the local gravitational constant of earth divided by the product of the proton amplitude and the square root of Lambda-bar.
In the above equation, the local gravitational constant of earth (as derived in Part One) is equal to 10 meters per second per second and the proton amplitude (also derived in Part One) is equal to 150 and, (as derived in Part Four) the square root of Lambda-bar is the deflection distance (units of meter) of a single chirality meshing interaction between a quanton and an electron.
The above equation is proportional to energy: the higher the energy, the higher the number of chirality meshing interactions, and the higher the levitation distance; the lower the energy, the lower the number of chirality meshing interactions, and the lower the levitation distance.
Pyrolytic carbon is composed of planar sheets of carbon atoms in which a unit cell is composed of a hexagon of carbon atoms joined by double bonds. Carbon atoms are bonded by either lower energy single bonds proportional to the first ionization energy or higher energy double bonds proportional to the second ionization energy. The measured first and second ionization energies of carbon are 1086.5 and 2352.0 (units of kJ/mol)27.
Due to the discretely exact value of PE charge resonance, in carbon (or any elemental atom) the quanton emission-absorption frequency is equal to 3.28E15 Hz.
The quanton emission frequency of a unit cell of pyrolytic carbon is equal to the product of the discretely exact PE charge resonance frequency of 3.28E15 Hz and the ratio of the second ionization energy of carbon divided by the first ionization energy of carbon.
The levitation distance of a thin slice of pyrolytic carbon (in units of mm) is equal to the product of the ratio of quanton emission frequency of a pyrolytic carbon unit cell divided by six (the number of carbon atoms in a unit cell) times 1000 mm/m and the square root of Lambda-bar.
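The two pyrolytic-carbon formulas above can be sketched together. The local gravitational constant (10 m/s/s) and proton amplitude (150) are the values the text attributes to Part One; the square root of Lambda-bar is derived in Part Four and is not given numerically in this chunk, so it enters here as a symbolic input.

```python
# Pyrolytic carbon levitation, per the two recipes above.
PE_RESONANCE = 3.28e15  # Hz, discretely exact PE charge resonance
IE1_CARBON = 1086.5     # kJ/mol, first ionization energy of carbon
IE2_CARBON = 2352.0     # kJ/mol, second ionization energy of carbon
LOCAL_G = 10            # m/s/s, local gravitational constant (from Part One)
PROTON_AMPLITUDE = 150  # dimensionless (from Part One)

def meshing_interactions(sqrt_lambda_bar):
    """Chirality meshing interactions needed to exactly oppose gravity."""
    return LOCAL_G / (PROTON_AMPLITUDE * sqrt_lambda_bar)

def unit_cell_frequency():
    """Quanton emission frequency of a pyrolytic carbon unit cell."""
    return PE_RESONANCE * (IE2_CARBON / IE1_CARBON)

def levitation_distance_mm(sqrt_lambda_bar):
    """Levitation distance in mm; sqrt_lambda_bar (m) comes from Part Four."""
    return (unit_cell_frequency() / 6) * 1000 * sqrt_lambda_bar
```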
The oxygen atoms in YBCO oxides are bonded by either lower energy single bonds proportional to the first ionization energy or higher energy double bonds proportional to the second ionization energy. The measured first and second ionization energies of oxygen are 1313.9 and 3388.3 (units of kJ/mol).
The three YBCO metallic oxides are composed of low energy single bonds, high energy double bonds, or single and double bonds. In yttrium oxide (Y2O3), a single bond connects each yttrium atom with the inside oxygen, and a double bond connects each yttrium atom with one of the two outside oxygens. In barium oxide (BaO) the two atoms are connected by a double bond. Copper oxide is a mixture of cuprous oxide (copper I oxide), in which a single bond connects each of two copper atoms with the oxygen atom, and cupric oxide (copper II oxide), in which a double bond connects the copper atom with the oxygen atom.
Voltage is the emission of quantons either directly by the Q-axis of an electron or proton or transversely by a magnetic field from which CCW quantons are emitted by the North pole and CW quantons by the South pole.
The mechanism of magnetic levitation or suspension of a superconducting disk is the absorption of quantons, emitted by a neodymium magnet array, in chirality meshing interactions by electrons in the oxygen atoms of superconducting YBCO oxides, resulting in repulsive deflections due to CCW quantons (in quantum levitation) and attractive deflections due to CW quantons (in quantum suspension).
The levitation or suspension distance of a superconducting YBCO oxide is higher (the maximum distance) for double bonded oxides and lower (the minimum distance) for single bonded oxides. The initial position of the YBCO disk is established by momentarily holding (pinning) it in the desired location and orientation at some specific distance from the neodymium magnet array.
In each one-hundredth of a second, more than 2E14 chirality meshing interactions establish the intrinsic energy of electrons within the superconducting oxides. At the same time, at any specific distance above or below the neodymium magnet array, the number of quanton interactions, inversely proportional to the square of the distance, establishes the availability of quantons to be absorbed at that distance. The result is an electrical Stable Balance of the electrons in superconducting oxides at specific distances from the neodymium magnet array, analogous to the gravitational Stable Balance of particles in planets at a specific orbital distance from the sun.
This is the mechanism of pinning in YBCO superconducting disks.
The levitation or suspension distance (units of mm) of a single bonded superconducting YBCO oxide is equal to the product of the ratio of the first ionization energy of oxygen divided by itself, the discretely exact PE charge resonance of 3.28E15 Hz, the square root of Lambda-bar, the ratio of the discrete steric factor divided by 1 (single bond), and 1000 (to convert m to mm).
The levitation or suspension distance (units of mm) of a double bonded superconducting YBCO oxide is equal to the product of the ratio of the second ionization energy of oxygen divided by the first ionization energy of oxygen, the discretely exact PE charge resonance of 3.28E15 Hz, the square root of Lambda-bar, the ratio of the discrete steric factor divided by 2 (double bond), and 1000 (to convert m to mm).
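The single- and double-bond distance formulas above differ only in the ionization-energy ratio and the bond-order divisor. The discrete steric factor and the square root of Lambda-bar are defined elsewhere in the document, so both enter the sketch below as symbolic inputs.

```python
# YBCO levitation/suspension distance, per the two recipes above.
PE_RESONANCE = 3.28e15  # Hz, discretely exact PE charge resonance
IE1_O = 1313.9          # kJ/mol, first ionization energy of oxygen
IE2_O = 3388.3          # kJ/mol, second ionization energy of oxygen

def ybco_distance_mm(bond_order, steric, sqrt_lambda_bar):
    """Distance in mm; steric factor and sqrt(Lambda-bar) from earlier parts."""
    ratio = (IE1_O / IE1_O) if bond_order == 1 else (IE2_O / IE1_O)
    return ratio * PE_RESONANCE * sqrt_lambda_bar * (steric / bond_order) * 1000
```

For equal steric factor and Lambda-bar inputs, the double-bond distance exceeds the single-bond distance by the factor (IE2/IE1)/2, about 1.29, consistent with the text's maximum/minimum distinction.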
1 Original letter from Isaac Newton to Richard Bentley, 189.R.4.47, ff. 7-8, Trinity College Library, Cambridge, UK http://www.newtonproject.ox.ac.uk
2 https://nssdc.gsfc.nasa.gov/planetary/planetfact.html, accessed Dec 24, 2021
3 Urbain Le Verrier, Reports to the Academy of Sciences (Paris), Vol 49 (1859)
4 Clemence G.M. The relativity effect in planetary motions. Reviews of Modern Physics, 1947, 19(4): 361-364.
5 Eric Doolittle, The secular variations of the elements of the orbits of the four inner planets computed for the epoch 1850 GMT, Trans. Am. Phil. Soc. 22, 37(1925).
6 Michael P. Price and William F. Rush, Nonrelativistic contribution to mercury’s perihelion precession. Am. J. Phys. 47(6), June 1979.
7 Wikimedia, by Daderot made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication, location National Museum of Nature and Science, Tokyo, Japan.
8 Illustration from 1908 Chambers’s Twentieth Century Dictionary. Public domain.
9 Wikimedia “Sine and Cosine fundamental relationship to Circle and Helix” author Tdadamemd.
10 By Jordgette – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=9529698
11 By Ebohr1.svg: en:User:Lacatosias, User:Stanneredderivative work: Epzcaw (talk) – Ebohr1.svg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15229922
13 O. Stern, Z. für Physik, 7, 249 (1921), title in English: “A way to experimentally test the directional quantization in the magnetic field”.
14 Ronald G. J. Fraser, Molecular Rays, Cambridge University Press, 1931.
15 I.I. Rabi, S. Millman, P. Kusch, and J.R. Zacharias, “The Molecular Beam Resonance Method for Measuring Nuclear Magnetic Moments,” Physical Review, 1939.
16 INDC: N. J. Stone 2014. Nuclear Data Section, International Atomic Energy Agency, www-nds.iaea.org/publications
17 “Quantum theory yields much, but it hardly brings us close to the Old One’s secrets. I, in any case, am convinced He does not play dice with the universe.” Letter from Einstein to Max Born (1926).
18 “That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has … any competent faculty of thinking can ever fall into it.” Original letter from Isaac Newton to Richard Bentley, 189.R.4.47, ff. 7-8, Trinity College Library, Cambridge, UK http://www.newtonproject.ox.ac.uk
19 Ionization energies of the elements (data page), https://en.wikipedia.org/
20 How to determine the range of acceptable results for your calorimeter, Bulletin No. 100, Parr Instrument Company, www.parrinst.com.
21 See www.wikipedia.org, www.hyperphysics.com, www.shutterstock.com
22 Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.
23 Page 60, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.
24 Page 60, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.
26 This image has been released into the public domain by its creator, Splarka. https://commons.wikimedia.org/wiki/File:Diamagnetic_graphite_levitation.jpg
27 Ionization energies of the elements (data page), https://en.wikipedia.org/
A special ‘Study in Brief’ via our friends at cdhowe.org
This study estimates the economic benefits of a new, dedicated passenger rail link in the Toronto-Québec City corridor, either with or without high-speed capabilities.
Cumulatively, in present value terms over 60 years, economic benefits are estimated to be $11-$17 billion under our modelled conventional rail scenarios, and $15-$27 billion under high-speed rail scenarios.
This study estimates economic benefits, rather than undertaking a full cost-benefit analysis. The analysis is subject to a range of assumptions, particularly passenger forecasts.
Introduction
Canada’s plans for faster, more frequent rail services in the Toronto-Québec City corridor are underway.
In 2021, the federal government announced plans for a new, high-frequency, dedicated passenger rail link in the Toronto-Québec City corridor. More recently, the government has considered the potential for this passenger line to provide high-speed rail travel. These two options are scenarios within the current proposed rail project, which VIA-HFR has named “Rapid Train.” This paper analyzes the economic benefits of the proposed Rapid Train project, considering both scenarios, and by implication the costs of forgoing them.
The project offers substantial economic and social benefits to Canada. At a time when existing VIA Rail users must accept comparatively modest top speeds (by international standards) and regular delays, this project offers a dedicated passenger line to solve network capacity constraints. With Canada’s economy widely understood to be experiencing a productivity crisis (Bank of Canada 2024), combined with Canada seeking cost-effective approaches to reducing harmful CO2 emissions, the project offers both productivity gains and lower-emission transportation capacity. There are, in short, significant opportunity costs to postponing or not moving ahead with this investment and perpetuating the status quo in rail service.
The Toronto-Québec City corridor, home to more than 16 million people (Statistics Canada 2024) and generating approximately 41 percent of Canada’s GDP (Statistics Canada 2023), lacks the sort of fully modernized passenger rail service provided in comparable regions worldwide. For example, Canada is the only G7 country without high-speed rail (HSR) – defined by the International Union of Railways (UIC) as a train service having the capability to reach speeds of 250 km per hour. Congestion has resulted in reliability (on time performance) far below typical industry standards. Discussion about enhancing rail service in this corridor has persisted for decades. But delays come with opportunity costs. This Commentary adds up those costs in the event that Canada continues to postpone, or even abandons, investment in enhanced rail services.
The existing rail infrastructure in the Toronto-Québec City corridor was developed several decades ago and continues to operate within parameters set during that time. However, significant changes have occurred since then, including higher population growth, economic development, and shifting transportation patterns. Rising demand for passenger and freight transportation – both by rail and other modes – has increased pressure on the region’s transportation network. There is increasing need to explore the various mechanisms through which enhancements to rail service could affect regional economic outcomes.
According to Statistics Canada (2024), the Toronto-Québec City corridor is the most densely populated and heavily industrialized region in Canada. This corridor is home to 42 percent of the country’s total population and comprises 43 percent of the national labor market. Transport Canada’s (2023) projections indicate that by 2043, an additional 5 million people will reside in Québec and Ontario, marking a 21 percent increase from 2020. This population growth will comprise more than half of Canada’s overall population increase over the period. As the population and economy continue to expand, the demand for all modes of transportation, including passenger rail, will rise. The growing strain on the transportation network highlights the need for infrastructure improvements within this corridor. In 2019, passenger rail travel accounted for only 2 percent of all trips in the corridor, with the vast majority of journeys (94 percent) undertaken by car (VIA-HFR website). This distribution is more skewed than in other countries with high-speed rail. For example, between London and Paris, aviation capacity has roughly halved since the high-speed rail link (the Eurostar) opened 25 years ago, and rail has now achieved approximately 80 percent modal share on the route (Morgan et al. 2025, OAG website 2019). As such, there is potential for rail to have a greater modal share in Canada, particularly as the need for sustainable and efficient transportation solutions becomes more pressing in response to population growth and environmental challenges.
In practical terms, the cost of not proceeding with the Rapid Train project can be estimated as the loss of economic benefits that could have been realized if the project had moved forward. It should be noted that this study does not undertake a full cost-benefit analysis (CBA) of the proposed investment. Rather, it examines the various economic advantages associated with introducing the proposed Rapid Train service in the Toronto-Québec City corridor. Specifically, it analyzes five key dimensions of economic impact: rail-user benefits, road congestion reduction, road network safety improvements, agglomeration effects (explained below), and emission savings. The first three benefits primarily impact individuals who would have travelled regardless, or were induced to travel by rail or car. Agglomeration benefits extend to everyone living in the corridor, while emission savings contribute to both national and international efforts to combat climate change. In each of these ways, enhanced rail services can contribute to regional economic growth and sustainability. By evaluating these aspects, this study aims to develop quantitative estimates of the benefits that enhanced rail services could bring to the economy and society, and by doing so indicate the potential losses that could result from forgoing the proposed rail investment.
Rail user benefits constitute the most direct economic gains. Through faster rail transport with fewer delays, rail users experience reduced travel times, increased service reliability, and improved satisfaction. The Rapid Train project provides rail-user benefits because dedicated passenger tracks would remove the need to give way to freight transport, thus reducing delays. Faster routes would generate further benefits by cutting scheduled travel times.
Congestion effects extend beyond individual transportation choices to influence broader economic activity. This study considers how enhanced rail services might affect road congestion levels in key urban centres and along major highways within the corridor. Road network safety is a further aspect of the economic analysis in this study, as modal shift from road to rail could reduce road traffic accidents and their associated economic costs.
Agglomeration economies are positive externalities that arise from greater spatial concentration of industry and business, resulting in lower costs and higher productivity. Greater proximity results in improved opportunities for labour market pooling, knowledge interactions, specialization and the sharing of inputs and outputs (Graham et al. 2009). Improved transportation (both within and between urban areas) can support agglomeration economies by improving connectivity, lowering the cost of interactions and generating productivity gains.1 Supported by academic literature (Graham 2018), these wider economic benefits are included within international transportation appraisal guidance (Metrolinx 2021, UK Department for Transport 2024). Agglomeration effects from enhanced connectivity offer economic benefits distinct from (and additional to) benefits for rail users.
Environmental considerations, particularly emission savings, constitute a further economic benefit. This analysis examines potential reductions in transportation-related emissions and their associated economic value, including direct environmental costs. This examination includes consideration of how modal shifts might influence the corridor’s overall carbon footprint and its associated economic impacts.
The methodology employed in this analysis draws from established economic assessment frameworks while incorporating recent developments in transportation economics. The study utilizes data from VIA-HFR, Statistics Canada, and several other related studies and research papers. Where feasible, the analysis utilizes assumptions that are specific to the Toronto-Québec City corridor, recognizing its unique characteristics, economics, and demographic patterns.
The findings presented here help clarify how different aspects of rail service enhancement might influence economic outcomes across various timeframes and stakeholder groups. This analysis acknowledges that while some benefits are readily quantifiable, others involve more complex, long-term economic relationships that require careful consideration within the specific context of the Toronto-Québec City corridor.
Based on our modelling and forecasts, the proposals for passenger rail infrastructure investment in the Toronto-Québec City corridor would present substantial economic, environmental, and social benefits (see Table 4 in the Appendix for a full breakdown, by scenario). Our scenario modelling is undertaken over a 60-year period, with new services coming on-stream from 2039, reported in 2024 present value terms. The estimated total of present value benefits ranges from $11 billion in the most conservative passenger growth scenario, to $27 billion in the most optimistic growth scenario. Cumulatively, in present value terms, economic benefits are estimated to be $11-$17 billion under our modelled conventional rail scenarios, and larger – $15-$27 billion – under high-speed rail scenarios. This is subject to a range of assumptions and inputs, including passenger forecasts.
These estimated benefits are built up from several components. User benefits – stemming from time savings, increased reliability, and satisfaction with punctuality – are the largest component, with an estimated value of $3.1-$9.2 billion. Economic benefits from agglomeration effects (leading to higher GDP) are estimated at $2.6-$3.9 billion, while environmental benefits from reduced greenhouse gas emissions are estimated at $2.6-$7.1 billion. Additional benefits include reduced road congestion, valued at $2.0-$5.9 billion, and enhanced road safety, which adds an estimated $0.3-$0.8 billion. In addition, further sensitivity analysis has been undertaken alongside the main passenger growth scenarios.
Overall, the findings in this study demonstrate and underscore the substantial economic benefits of rail investment in the Toronto-Québec City corridor, and the transformative potential impact on the Toronto-Québec City region from economic growth and sustainable development.
Finally, there are several qualifications and limitations to the analysis in this study. It considers the major areas of economic benefit rather than undertaking a full cost-benefit analysis or considering wider opportunity costs, such as any alternative potential investments not undertaken. It provides an economic analysis, largely building on VIA-HFR passenger forecasts, rather than a full bottom-up transport modelling exercise. Quantitative estimates are subject to degrees of uncertainty.
The Current State of Passenger Rail Services in Ontario and Québec
The Toronto-Québec City corridor is the most densely populated and economically active region of the country. Spanning major urban centres such as Toronto, Ottawa, Montreal, and Québec City, this corridor encompasses more than 42 percent of Canada’s population and is a vital artery for both passenger and freight transport. Despite the significance of the corridor and the economic potential it holds, passenger rail services in Ontario and Québec face numerous challenges, and their overall state remains a topic of debate.
Passenger rail services in the region are primarily provided by VIA Rail, the national rail operator, along with commuter rail services like GO Transit in Ontario and Exo in Québec. VIA Rail operates intercity passenger trains connecting major cities in the Toronto-Québec City corridor, offering an alternative to driving or flying. VIA Rail’s most popular routes include the Montreal-Toronto and Ottawa-Toronto services, which run multiple times per day and serve business travellers, tourists, and daily commuters.
In addition to VIA Rail’s existing medium-to-long-distance services, commuter rail services play a key role in daily transportation for residents of urban centres like Toronto and Montreal. GO Transit, operated by Metrolinx, is responsible for regional trains serving the Greater Toronto and Hamilton Area, while Exo operates commuter trains in the Montreal metropolitan area. These services provide essential links for suburban commuters travelling to and from major employment hubs.
One of the primary challenges facing passenger rail services in Ontario and Québec is that the vast majority of rail infrastructure used by VIA Rail is owned by freight rail companies and is largely shared with freight trains, which means that passenger trains are regularly required to yield to freight traffic. This leads to frequent delays and slower travel times, making passenger rail less attractive compared to other modes of transport, especially for travellers who prioritize frequency, speed and punctuality. The absence of dedicated tracks for passenger rail is a major obstacle to improving travel times and increasing the frequency of service. Without addressing this issue, it is difficult to envisage a significant modal shift towards passenger rail, since cars offer greater flexibility and planes offer faster travel once airborne. Much of the rail network was constructed several decades ago and, despite periodic maintenance and upgrades, is increasingly outdated and unable to support higher speeds.
Passenger rail has the potential for low emission intensity. However, some of the potential environmental benefits of rail services in Ontario and Québec have yet to be fully realized. Many existing VIA Rail trains operate on diesel fuel, contributing to greenhouse gas emissions and air pollution. The transition to electrified rail, which would significantly reduce emissions, has been slow, and there is currently no comprehensive plan for widespread electrification of existing VIA Rail passenger rail services in the region.
The current state of rail passenger services in Ontario and Québec – and the opportunities for improvement – have prompted the development of the Rapid Train project along the Toronto-Québec City corridor, which proposes to reduce travel times between major cities and provide a more competitive alternative to air and car travel. The project would also generate significant environmental benefits by reducing greenhouse gas emissions associated with road and air transport. Furthermore, by investing in enhanced rail services, journey times would be further cut, generating additional time savings and associated economic benefits.
Current Government Commitment to Enhanced Rail Services
The Rapid Train project plans to introduce approximately 1,000 kilometres of new, mostly electrified, and dedicated passenger rail tracks connecting the major city centres of Toronto, Ottawa, Montreal, and Québec City. As such, it would be one of the largest infrastructure projects in Canadian history. It is led by VIA-HFR, a Crown corporation that collaborates with several governmental organizations, including Public Services and Procurement Canada; Housing, Infrastructure and Communities Canada; Transport Canada; and VIA Rail, all of which have distinct roles during the procurement phases. Subject to approval, a private firm or consortium is expected to be appointed to build and operate these new rail services, via a procurement exercise (see below).
This new rail infrastructure would improve the frequency, speed, and reliability of rail services, making it more convenient for Canadians to travel within the country’s most densely populated regions. The project has the potential to shift a significant portion of travel from cars (which currently account for 94 percent of trips in the Toronto-Québec City corridor) to rail (which represents just 2 percent of total trips).
The project also seeks to contribute to Canada’s climate goals by reducing greenhouse gas emissions. Electrified trains and the use of dual-powered technology (for segments of the route that may still require diesel) will significantly reduce the environmental footprint of intercity travel. The project is expected to improve the experience for VIA Rail users, as dedicated passenger tracks will reduce delays caused by freight traffic, offering passengers faster, more frequent departures, and shorter travel times.
Beyond environmental benefits, the project is expected to stimulate economic growth by creating new jobs in infrastructure development, supporting new economic centres, and enhancing connectivity between cities, major airports, and educational institutions.
The project is currently at the end of the procurement phase, following the issuance of a Request for Proposals (RFP) in October 2023. Through the procurement exercise, a private-sector partner will be selected to co-develop and execute the project. The design phase, which may last four or more years, will involve regulatory reviews, impact assessments, and the development of a final proposal to the government for approval. Once constructed, passenger operations are expected to commence by 2040.
The Rapid Train project also offers opportunities to improve services on existing freight-owned tracks. VIA Rail’s local services, which currently operate between these major cities, will benefit from integration with this project. Although final service levels are not yet determined, the introduction of a new dedicated passenger rail line is expected to enable VIA Rail to optimize operating frequencies and schedules, leading to more responsive and efficient service for passengers. In turn, this will mean that departure and arrival times can be adjusted to better suit travellers’ needs, reducing travel times and increasing the attractiveness of rail as a mode of transportation for both leisure and business. As many of VIA Rail’s existing passenger services switch onto dedicated tracks, there is potential to free up capacity on the existing freight networks. As such, freight rail traffic may benefit from reduced congestion, supporting broader economic growth by easing supply chains and by improving the efficiency of goods transportation across Canada.
The project design will enable faster travel compared to existing services, but as the co-development phase progresses, it will examine the possibility of achieving even higher speeds on certain segments of the dedicated tracks. Achieving higher speeds is not guaranteed, due to the extensive infrastructure changes required and their associated costs, e.g., full double-tracking and the closure of approximately 1,000 public and private crossings. However, the project design currently incorporates flexibility to explore higher speeds where there may be opportunities for operational and financial efficiencies and additional user benefits.
The current Rapid Train project proposal seeks to achieve wider social and government objectives. In the context of maintaining public ownership, private-sector development partners will be required to respect existing labor agreements. VIA Rail employees will retain their rights and protections, with continuity ensured under the Canada Labour Code and relevant contractual obligations.
International Precedent
High-Speed Rail (HSR) already exists in many countries, with notable examples of successful implementation in East Asia and Europe. As of mid-2024, China has developed the world’s largest HSR network, spanning over 40,000 kilometres, followed by Spain (3,661 km), Japan (3,081 km), and France (2,735 km) (Statista 2024). Among the G7 nations, Canada stands as the only country without HSR infrastructure, although the United States maintains relatively limited high-speed operations through the Acela Express in the Northeast Corridor. Recent significant HSR developments include China’s Beijing-Shanghai line (1,318 km), one of the busiest HSR routes in the world. In Europe, the UK’s High Speed 1 (HS1) connects London to mainland Europe via the Channel Tunnel. Italy has extended its Alta Velocità network with the completion of the Naples-Bari route in 2023, significantly reducing travel times between major southern cities (RFI 2023). Morocco recently became the first African nation to implement HSR with its Al Boraq service between Tangier and Casablanca (ONCF 2022). In Southeast Asia, Indonesia’s Jakarta-Bandung HSR, completed in 2023, is the region’s first HSR system (KCIC 2023). India is constructing the Mumbai-Ahmedabad HSR corridor, the country’s first bullet train project, which is scheduled to commence partial operations by 2024 (NHSRCL 2023).
The economic impacts of HSR have been extensively studied, particularly in Europe. In Germany, Ahlfeldt and Feddersen (2017) analyzed the economic performance of regions along the high-speed rail line between Cologne and Frankfurt: the study found that, on average, six years after the opening of the line, the GDP of regions along the route was 8.5 percent higher than their estimated counterfactual. In France, Blanquart and Koning (2017) found that the TGV network catalyzed business agglomeration near station areas, with property values increasing by 15-25 percent within a 5km radius of HSR stations. An evaluation of the UK’s HS1 project estimated cumulative benefits of $23-$30 billion (2024 prices, present value, converted from GBP) over the lifetime of the project, excluding wider economic benefits (Atkins 2014).
Modal shift and passenger growth are critical drivers of economic benefits. The Madrid-Barcelona corridor in Spain provides an example: HSR captured over 60 percent of the combined air-rail market within three years of operation, demonstrating that HSR can have a competitive advantage over medium-distance air travel (Albalate and Bel 2012). However, analysis by the European Court of Auditors (2018) suggests that HSR routes require certain volumes of passengers (estimated at nine million) to become net beneficial, and while some European HSR routes have achieved this level (including the Madrid-Barcelona route), others have not. In the US, the Amtrak Acela service between Boston and Washington D.C. is estimated to carry 3-4 million passengers annually (Amtrak 2023). For some high-speed rail lines, passenger volumes are supported by government environmental policy. For example, Air France was asked directly by the government to reduce the frequency of short-haul flights on routes where a feasible rail option existed (Reiter et al. 2022). Overall, passenger growth constitutes a key assumption regarding the benefits derived from the Rapid Train project.
Regarding the environmental benefits of HSR, a detailed study by the European Environment Agency (2020) found that HSR generates approximately 14g of CO2 per passenger-kilometre, compared to 158g for air travel and 104g for private vehicles. In Japan, the Central Japan Railway Company reports that the Shinkansen HSR system consumes approximately one-sixth the energy per passenger-kilometre compared to air travel. The UIC’s Carbon Footprint Analysis (2019) demonstrated that HSR infrastructure, despite high initial carbon costs during construction, typically achieves carbon neutrality within 4-8 years of operation through reduced emissions from modal shift.
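Taken at face value, the per-passenger-kilometre intensities cited above allow a back-of-envelope estimate of the CO2 avoided per passenger who shifts modes. The Python sketch below is illustrative only: the 540 km trip length is an assumed round figure for a Toronto-Montreal journey, and real intensities vary with load factors and power sources.

```python
# Emission intensities in grams of CO2 per passenger-km, per the EEA (2020)
# figures cited above (illustrative; actual values vary by load factor).
INTENSITY_G_PER_PKM = {"hsr": 14, "air": 158, "car": 104}

def co2_saved_kg(mode_from: str, distance_km: float) -> float:
    """Kilograms of CO2 avoided when one passenger shifts from mode_from to HSR."""
    delta = INTENSITY_G_PER_PKM[mode_from] - INTENSITY_G_PER_PKM["hsr"]
    return delta * distance_km / 1000.0

# Assumed ~540 km trip (roughly Toronto-Montreal by rail).
print(co2_saved_kg("car", 540))  # 48.6 kg avoided per shifted car passenger
print(co2_saved_kg("air", 540))  # 77.76 kg avoided per shifted air passenger
```

Scaled across millions of forecast switchers per year, per-trip savings of this order are what drive the multi-billion-dollar emission benefit estimates reported above.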
Socioeconomic benefits of HSR extend beyond direct impacts on rail users. In Spain, the Madrid-Barcelona high-speed rail line enhanced business interactions by allowing for more same-day return trips and improved business productivity (Garmendia et al. 2012). Research has found that Chinese cities connected by HSR experienced a 20 percent increase in cross-regional business collaboration, providing potential evidence of enhanced knowledge spillovers and innovation diffusion (Wang and Chen 2019).
However, the implementation of HSR is not without challenges. Flyvbjerg’s (2007) analysis of 258 transportation infrastructure projects found that rail projects consistently faced cost overruns averaging approximately 45 percent. For example, the costs of the California High-Speed Rail project in the United States rose from an initial estimate of $33 billion in 2008 to over $100 billion by 2022, highlighting the importance of realistic cost projections and robust project management.
Positive labor market impacts are also evident, although varied by region. Studies in Japan by Kojima et al. (2015) found that cities served by Shinkansen experienced a 25 percent increase in business service employment over a 10-year period after connection. European studies, particularly in France and Spain, show more modest but still positive employment effects, with employment growth rates 2-3 percent higher in connected cities compared to similar unconnected ones (Crescenzi et al. 2021).
For developing HSR networks, international experience suggests several critical success factors. These include careful corridor selection based on population density and economic activity, integration with existing transportation networks, and sustainable funding mechanisms. The European Union’s experience, documented by Vickerman (2018), emphasizes the importance of network effects in finding that the value of HSR increases significantly when it connects multiple major economic centres.
Methodology
This study integrates data from VIA-HFR, Statistics Canada, prior reports on rail infrastructure proposals in Canada, and related studies, to build an economic assessment of potential benefits of the proposed Rapid Train project. Key assumptions throughout this analysis are rooted in published transportation models, modelling guidelines, and an extensive body of research. The methodology draws extensively from the Business Case Manual Volume 2: Guidance by Metrolinx, which itself draws upon the internationally recognized transportation appraisal guidelines set by the UK government’s Department for Transport (DFT). These established guidelines offer best practices and standards that provide a structured and reliable framework for estimating benefits. By aligning with proven methodologies in transportation and infrastructure project appraisal, this study ensures rigor and robustness within the economic modelling and analysis.
The proposed route includes four major stations: Toronto, Ottawa, Montréal, and Québec City. These major urban centres are expected to experience the most significant ridership impacts and related benefits. There are three further stations on the proposed route – Trois-Rivières, Laval, and Peterborough – although these are anticipated to have a more limited effect on the overall modelling results, due to their smaller populations. Based on forecast ridership data provided by VIA-HFR for travel between the four main stations, our model designates these areas as four separate zones to facilitate the benefit estimation. Figure 1 below illustrates the proposed route for the Rapid Train project and highlights the different zones modeled in this analysis.
According to current VIA-HFR projections, the routes are expected to be operational between 2039 and 2042. In line with typical transport appraisals, this paper estimates and monetizes economic and social benefits of the project over a 60-year period, summing the cumulative benefits from 2039 through to 2098, inclusive. To calculate the total present value (as of 2024) of these benefits, annual benefits are discounted at a 3.5 percent social discount rate, in line with Metrolinx guidance, and then aggregated across all benefit years.
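The discounting step described above can be sketched in a few lines. The flat $1-per-year stream below is a hypothetical input used only to show the mechanics of a 3.5 percent social discount rate applied to the 2039-2098 benefit window, valued as of 2024.

```python
def present_value(annual_benefits: dict, base_year: int = 2024,
                  discount_rate: float = 0.035) -> float:
    """Discount a {year: benefit} stream back to base_year and sum."""
    return sum(
        benefit / (1.0 + discount_rate) ** (year - base_year)
        for year, benefit in annual_benefits.items()
    )

# Hypothetical flat stream: $1 of benefit in each year 2039-2098 inclusive.
stream = {year: 1.0 for year in range(2039, 2099)}
pv = present_value(stream)  # roughly 15.4: each $1/year is worth ~$15.4 today
```

Note how heavily the 15-year gap before services begin weighs on the total: benefits arriving from 2039 onward are discounted by more than a third before the 60-year summation even starts.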
Our model examines multiple scenarios to assess the range of potential benefits under various conditions. The primary scenarios within the Rapid Train project are for Conventional Rail (CR) and High-Speed Rail (HSR). These scenarios are distinguished by differences in average travel time, with HSR benefiting from significantly faster speeds than CR, and therefore lower travel times (see Table 2).
Within each of these scenarios, we consider three sub-scenarios from VIA-HFR’s modelled passenger projections – central, downside and upside – plus a further sub-scenario (referred to as the 2011 feasibility study in the Figures) based on previous modelled estimates of a dedicated passenger rail line in the corridor. The central sub-scenario provides VIA-HFR’s core forecast for passenger growth under CR and HSR. The upside sub-scenario reflects VIA-HFR’s most optimistic assumptions about passenger demand, while the downside represents the organisation’s more cautious assumptions.
The use of VIA-HFR’s passenger projections is cross-checked in two ways: First, our analysis models an alternative passenger growth scenario (2011 feasibility study), which is based upon the projected growth rate for passenger trips as outlined in the Updated Feasibility Study of a High-Speed Rail Service in the Québec City – Windsor Corridor by Transport Canada (2011).2 The analysis in that study was undertaken by a consortium of external consultants. Second, we have reviewed passenger volumes in other jurisdictions (discussed above and below).
In the absence of investment in the Rapid Train project, VIA-HFR’s baseline scenario passenger demand projections indicate approximately 5.5 million trips annually by 2050 using existing VIA Rail services in the corridor. In contrast, with investment, annual projected demand for CR ranges from 8 to 15 million trips, and for HSR between 12 and 21 million trips by 2050, across all the sub-scenarios described above. Figures 2 and 3 illustrate these projected ridership figures under CR and HSR scenarios across each sub-scenario, as well as compared to the baseline scenario.
Under the CR and HSR scenarios, while the vast majority of rail users are expected to use the new dedicated passenger rail services, VIA-HFR passenger forecasts indicate that some rail users within the corridor will continue to use services on the existing VIA Rail line, for example, due to travelling between intermediate stations (Kingston-Ottawa). The chart below illustrates the breakdown of benefits under the central sub-scenario for high-speed rail.
User Benefits
User benefits in transportation projects such as CR/HSR can be broadly understood as the tangible and intangible advantages that rail passengers gain from improved services. These benefits encompass the value derived from time saved, enhanced reliability, reduced congestion, and improved overall travel experience. For public transit projects like CR/HSR, user benefits are often key factors in justifying the investment due to their broad social and economic impact.
Rail infrastructure projects can reduce the “generalized cost” of travel between areas, which directly benefits existing rail users, as well as newly induced riders. The concept of generalized cost in transportation economics refers to the total cost experienced by a traveller, considering not just monetary expenses (like ticket prices or fuel) but also non-monetary factors such as travel time, reliability, comfort, and accessibility.
Investments that improve transit may reduce generalized costs in several ways. Consistent, on-time service lowers the uncertainty, inconvenience and dissatisfaction associated with delays. More frequent services provide passengers with greater flexibility and reduced waiting times. Reduced crowding can offer more comfortable travel, reducing the disutility associated with congested services. Enhanced services like better seating, Wi-Fi, or improved station facilities may increase user satisfaction. Better access to transit stations or stops may allow for easier integration into daily commutes, increasing the convenience for existing and new travellers. Faster travel can reduce travel time, which is often valued highly by passengers.
In this paper, user benefits are estimated based on three core components: travel time savings based on faster planned journey times, enhanced reliability (lower average delays on top of the planned journey time), and the psychological benefit of more reliable travel. In our analysis, the pool of users comprises the existing users who are already VIA Rail passengers within this corridor, plus new users who are not prior rail passengers. Within this category of new users there are two sub-groups. First, new users include individuals who are forecast to switch to rail from other modes of transport, such as cars, buses, and airplanes – known as “switchers.” Second, new users also include individuals who are induced to begin using CR/HSR as a result of the introduction of these new services – known as “induced” passengers. Overall, this approach captures the comprehensive user benefits of CR/HSR, recognizing that time efficiency, increased dependability, and greater customer satisfaction hold substantial value for both existing and new riders. The split of new users across switchers and induced users – including the split of induced users between existing transport modes, primarily road and air – is based on the federal government’s 2011 feasibility study, although the modelling in this Commentary also undertakes sensitivity analysis using VIA-HFR’s estimates for these proportions. The approach to estimating rail-user benefits is discussed below.
The modelling in this study incorporates projections of passenger numbers for both existing VIA Rail services (under a ‘no investment’ scenario) and for the proposed CR/HSR projects, sourced from VIA-HFR transport modelled forecasts. This enables the derivation of forecasts for both existing and new users.
In line with the formula (above) for user benefits, this study estimates the reduction in generalized costs (C1 – C0) arising from the new CR/HSR transportation service. Since the ticket price for the proposed CR/HSR is still undetermined, we have not assumed any changes versus current VIA Rail fares, although this is discussed as part of sensitivity analysis. The model reflects a reduction in generalized costs attributed to shorter travel times and enhanced service reliability under CR/HSR. Table 2 shows a comparison of the average scheduled journey times (as of 2023) for existing VIA Rail services, compared to forecast journey times under the proposed CR/HSR services, across different routes.
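Generalized-cost changes are typically combined with passenger volumes using the appraisal "rule of a half" found in the Metrolinx and DfT guidance this study follows. The sketch below assumes that conventional form rather than reproducing the study's own formula, and the cost and trip figures are made up for illustration.

```python
def user_benefits_rule_of_half(c0: float, c1: float, t0: float, t1: float) -> float:
    """Rule of a half: existing trips (t0) receive the full reduction in
    generalized cost (c0 - c1); new trips (t1 - t0) are credited half of it,
    since new travellers value the improvement somewhere between zero and
    the full per-trip reduction."""
    delta_c = c0 - c1
    return t0 * delta_c + 0.5 * (t1 - t0) * delta_c

# Hypothetical inputs: generalized cost per trip falls from $100 to $80,
# while annual trips rise from 5 million to 8 million.
benefit = user_benefits_rule_of_half(100.0, 80.0, 5e6, 8e6)  # $130M per year
```

The form above is algebraically identical to the usual ½(T0 + T1)(C0 − C1) expression; splitting it into "existing" and "new" terms mirrors the distinction between existing VIA Rail users and switchers/induced passengers drawn in the text.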
In addition to travel time savings based on scheduled journey times, an important feature of the CR/HSR project is that a new, dedicated passenger rail line can reduce the potential for delays. To estimate the reduction in travel delays under CR/HSR, we first calculated a lateness factor for both existing VIA Rail and the proposed CR/HSR, based on punctuality data and assumptions. Current data indicate that VIA Rail services are on time (reaching the destination within 15 minutes of the scheduled arrival time) for approximately 60 percent of journeys. Therefore, VIA Rail experiences delays (arriving more than 15 minutes later than scheduled) approximately 40 percent of the time. Data showing the average duration of delays are not available, and therefore we estimate that each delay is 30 minutes on average, based on research and discussions with stakeholders. CR/HSR would provide a dedicated passenger rail service, which would have a far lower lateness rate. Our model assumes CR/HSR would aim to achieve significantly improved on-time performance, with on-time arrivals (within 15 minutes) for 95 percent of journeys (Rungskunroch 2022), which equates to 5 percent (or fewer) of trains being delayed upon arrival.
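The lateness factors above reduce to a simple expected-delay calculation. The sketch below reproduces that arithmetic under the stated assumptions: a 30-minute average delay when a train is late, with 60 percent on-time performance for existing VIA Rail versus an assumed 95 percent for CR/HSR.

```python
def expected_delay_min(on_time_share: float, avg_delay_min: float = 30.0) -> float:
    """Average unscheduled delay per journey: the share of late trains
    multiplied by the assumed mean delay when a train is late."""
    return (1.0 - on_time_share) * avg_delay_min

via_delay = expected_delay_min(0.60)  # existing VIA Rail: 0.40 x 30 = 12.0 min
hsr_delay = expected_delay_min(0.95)  # assumed CR/HSR:    0.05 x 30 = 1.5 min
```

On these assumptions, the average trip sheds about 10.5 minutes of unscheduled delay, in addition to any savings from faster scheduled journey times.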
Combined, there are time savings to users from both faster scheduled journeys and fewer delays. The estimated travel time savings are derived from the difference between the forecast travel times of CR/HSR and the average travel times currently experienced with VIA Rail. The value of time is monetized by applying a value of $21.45 per hour, calculated by adjusting the value of time recommended by Business Case Manual Volume 2: Guidance-Metrolinx ($18.79 per hour, in 2021 dollars) to 2024 dollars using the Consumer Price Index (CPI). This value remains constant (in real terms) over our modelling period.
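The value-of-time adjustment can be sketched as a simple CPI rebasing. The index levels below are hypothetical, chosen only so that their ratio reproduces the study's adjustment of $18.79 (2021 dollars) to approximately $21.45 (2024 dollars):

```python
def adjust_for_inflation(value, cpi_base, cpi_target):
    """Rebase a dollar value from one year's prices to another using CPI."""
    return value * (cpi_target / cpi_base)

# Hypothetical CPI index levels, chosen only so the ratio reproduces the
# study's adjustment of $18.79 (2021 dollars) to $21.45 (2024 dollars).
vot_2024 = adjust_for_inflation(18.79, 100.0, 114.16)
print(round(vot_2024, 2))  # 21.45
```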
There is an additional psychological cost of unreliability associated with delays. Transport appraisal guidelines and literature typically ascribe a multiplier to the value of time for unscheduled delays. The modelling in this study utilizes a multiplier of 3 for lateness, which is consistent with government transport appraisal guidance in the UK and Ireland (UK’s Department for Transport 2015, Ireland’s Department of Transport 2023). Some academic literature finds that multipliers may be even higher, although these vary according to journey distance and purpose (Rossa et al. 2024). Overall, the lateness adjustment increases the value to rail users of CR/HSR due to its improved reliability and generates a small uplift to the total user benefits under CR/HSR.
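Combining the assumptions above, the expected per-trip cost of unreliability can be sketched as follows. This is a simplification that treats every delay as lasting the assumed 30-minute average:

```python
VOT = 21.45        # value of time, $/hour (2024 dollars)
LATE_MULT = 3      # multiplier on the value of time for unscheduled delay
AVG_DELAY_H = 0.5  # assumed average delay of 30 minutes

def expected_lateness_cost(p_delay):
    """Expected per-trip cost of unreliability, in dollars."""
    return p_delay * AVG_DELAY_H * VOT * LATE_MULT

via_cost = expected_lateness_cost(0.40)  # ~60% of VIA Rail trips on time today
hsr_cost = expected_lateness_cost(0.05)  # 95% on-time target for CR/HSR
print(f"VIA Rail: ${via_cost:.2f}/trip, CR/HSR: ${hsr_cost:.2f}/trip")
```

The difference between the two figures is the per-trip reliability gain credited to CR/HSR users.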
The modelling combines these user benefits and makes a final adjustment to net off indirect taxes, ensuring that economic benefits are calculated on a like-for-like basis with costs incurred by VIA-HFR (Metrolinx 2021). Individual users’ value of time implicitly takes into account indirect taxes paid, whereas VIA-HFR’s investments are not subject to indirect taxation. Ontario’s rate of indirect taxation (13 percent harmonized sales tax rate) has been used in the modelling (Metrolinx 2021).
The modelling does not assume any variation in ticket prices under the proposed CR/HSR services, relative to existing VIA Rail services. User benefits in the analysis are derived purely from the shorter journey times and improved reliability. This approach enables the estimation, in the first instance, of the potential benefits from time savings and reliability. While CR/HSR ticket prices are not yet determined, it is nevertheless possible to consider the impact of changes to ticket prices as a secondary adjustment, which is discussed in the sensitivity analysis further below.
Congestion and Safety on the Road Network
In addition to rail-user benefits, the proposed CR/HSR project would also provide benefits to road users via decongestion and a potential reduction in traffic accidents.
When new travel options become available, such as improved rail services, some travellers shift from driving to using transit, reducing the number of vehicles on the road. This reduction in vehicle-kilometres travelled (VKT) decreases road congestion, providing benefits to the remaining road users. Decreased congestion leads to faster travel times, and can also lower vehicle operating costs, particularly in terms of fuel efficiency and vehicle wear-and-tear.
Our research model includes a forecast of how improvements in rail travel could lead to decongestion benefits for auto travellers in congested corridors. Because CR/HSR would offer a faster and more reliable journey experience than existing VIA Rail services, VIA-HFR’s passenger modelling forecasts shifts in travel patterns, with a significant proportion of new rail users switching from roads. These shifts reduce road congestion and in turn generate welfare benefits for those continuing to use highways.
Analysis of Canadian road use data, cross-checked with more granular traffic data from the UK, suggests that the proportion of existing road VKT is 37 percent in peak hours and 63 percent in off-peak hours, based on Metrolinx’s daily timetable of peak versus non-peak hours (Metrolinx 2021, Statistics Canada 2014, Department for Transport 2024). Using this information, the estimated weighted average impact of road congestion is approximately 0.004 hours/VKT. Time savings are converted into monetary values (using $21.45/hour, in 2024 dollars) to estimate the economic benefits of reduced road congestion.
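The decongestion calculation reduces to multiplying removed road VKT by the congestion impact and the value of time. The VKT figure in the example below is illustrative only:

```python
CONGESTION_H_PER_VKT = 0.004  # weighted peak/off-peak impact (study figure)
VOT = 21.45                   # value of time, $/hour (2024 dollars)

def decongestion_benefit(vkt_removed):
    """Annual decongestion benefit ($) from road VKT removed by mode shift."""
    return vkt_removed * CONGESTION_H_PER_VKT * VOT

# Illustrative only: 100 million VKT shifted from road to rail in a year.
print(f"${decongestion_benefit(100e6) / 1e6:.1f}m")  # $8.6m
```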
In practice, road networks are unlikely to decongest by the precise number of transport users who are forecast to switch from road to rail. First, the counterfactual level of road congestion (without CR/HSR) will change over time, as a function of population growth, investment in road networks (such as through highway expansion), developments in air transport options, and wider factors. Many of these factors are not known precisely (e.g., investment decisions regarding highways expansion across the coming decades), therefore the counterfactual is necessarily subject to uncertainty. Second, if some road users switch to rail due to investment in CR/HSR, the initial (direct) reduction in congestion would reduce the cost of road travel, inducing a subsequent (indirect) “bounce-back” of road users (known as a general equilibrium effect). The modelling of congestion impacts in this study is necessarily a simplification, focusing on the direct impacts of decongestion, based on the forecast number of switchers from road to rail.
In addition to decongestion, CR/HSR may also improve the overall safety of the road network through fewer vehicle collisions. Collisions not only cause physical harm but also cause economic and social costs. These include the emotional toll on victims and families, lost productivity from injuries or fatalities, and the costs associated with treating accident-related injuries. Road accidents can cause disruptions that delay other travellers, adding additional economic costs, and can also incur greater public expenditure through emergency responses.
With CR/HSR expected to shift some users from road to rail, this study models the forecast reduction in overall road VKT. This estimate for the reduction in road VKT is converted into a monetary value assuming $0.09/VKT in 2024 prices, which is discounted in future years by 5.3 percent per annum to account for general safety improvements on the road network over time (such as through improvements in technology) and fewer accidents per year (Metrolinx 2018, Metrolinx 2021).
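A sketch of the safety-benefit calculation, applying the 5.3 percent annual discounting of accident costs described above; the VKT figures are illustrative only:

```python
SAFETY_COST_PER_VKT = 0.09  # $/VKT in 2024 prices (study figure)
ANNUAL_DECAY = 0.053        # yearly decline in accident costs (study figure)

def safety_benefit(vkt_removed, years_after_2024):
    """Accident-cost saving ($) from removed road VKT in a given year,
    decayed to reflect general road-safety improvements over time."""
    unit_cost = SAFETY_COST_PER_VKT * (1 - ANNUAL_DECAY) ** years_after_2024
    return vkt_removed * unit_cost

# Illustrative only: 100 million VKT removed in 2024 versus 16 years later.
print(round(safety_benefit(100e6, 0)), round(safety_benefit(100e6, 16)))
```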
Agglomeration
Agglomeration economies are the economic benefits that arise when firms and individuals are located closer to one another. This generates productivity gains which are additional to direct user benefits. These gains can stem from improved labor market matching, knowledge spillovers, and supply chain linkages, benefiting groups of firms within specific industries (localization economies) as well as across multiple industries (urbanization economies). Where businesses cluster more closely – such as within dense, urbanized environments – these businesses benefit from proximity to larger markets, varied suppliers, and accessible public services. For instance, if a manufacturing firm relocates to an urban hub such as Montreal, productivity benefits may ripple across industries as the economic density and activity scale of the area increases. Agglomeration can enable longer-term economic benefits, through collaboration across businesses, universities, and research hubs, stimulating research and development, supporting innovation and enabling new industries to develop and grow.
Transport investments generate economic benefits and increase productivity through urbanization and localization economies. Urbanization economies (Jacobs 1969) refer to benefits arising from a business being situated in a large urban area with a robust population and employment base. This type of agglomeration allows firms to leverage broader markets and infrastructure advantages, thus achieving economies of scale that are independent of industry. Conversely, localization economies (Marshall 1920) focus on productivity gains within a specific industry, where firms in close proximity can cluster together to benefit from a specialized labor pool and more efficient supply chains. For example, as multiple manufacturing firms cluster within an area, their proximity allows them to co-create a specialized workforce and share industry knowledge, creating productivity gains unique to that industry.
In practice, improved transportation can generate agglomeration effects in two ways. The first is “static clustering”, where improvements in connectivity facilitate greater movement between existing clusters of businesses and improved labor market access, without changing land use. For individuals and businesses in their existing locations, enhanced connectivity reduces the travel times and the costs of interactions, so people and businesses are effectively closer together and the affected areas have a higher effective density.
Second, “dynamic clustering” can occur when transport investments alter the location or actual density of economic activity. Dynamic clustering can lead to either increased or decreased density in certain areas, impacting the overall productivity levels across regions by altering labor and firm distributions. Conceptually, dynamic clustering’s benefits include the benefits from static clustering.
The analysis in this study is based on static clustering effects, focusing on productivity benefits arising from improved connectivity without modelling potential changes in land use or actual density. This approach estimates the direct economic gains of reduced travel times and enhanced accessibility within existing urban and industrial structures. Benefits arising from dynamic clustering are subject to greater uncertainty because they may involve displacement of economic activity between regions. In addition, variations in density across regions could be influenced by external factors – such as regional economic policies, housing availability, or industry-specific demands – that would require a much deeper and more granular modelling exercise. Overall, focusing on static clustering provides a more conceptually conservative estimate of the benefits.
To estimate the agglomeration economies associated with the CR/HSR project, we utilize well-established transport appraisal methodology for agglomeration estimation (Metrolinx 2021). The analysis in this study applies one simplification to accommodate data availability, which is to undertake the analysis at an economy-wide level, rather than performing and aggregating a series of sector-specific analyses.
Overall, the three-step model estimates these agglomeration effects through changes in GDP. In the first step, the generalized journey cost (GJC) between each zone pair is calculated. This GJC serves as an average travel cost across various transportation modes (e.g., road, rail, air), taking account of journey times and ticket prices. The GJC is estimated for both the baseline (existing VIA Rail) and investment scenarios (CR/HSR), across multiple projection years. Due to the sensitivity of agglomeration calculations, in the baseline the GJC for CR/HSR, road and air are assumed to be equivalent, and in the investment scenario the GJC for road and rail are reduced by utilizing the rule of half principle (see Figure 5). The baseline utilizes Canada-wide vehicle kilometre data from Statistics Canada to estimate passenger modal shares (across existing VIA Rail, road, and air) for 2024, with the modal shares remaining constant over time in the baseline (Transport Canada 2021, Transport Canada 2018, Statistics Canada 2016). In the scenarios, the modal shares are adjusted for passengers moving from existing VIA Rail (and other transport modes) to CR/HSR, as well as induced passengers.
In the second step, the effective density of each of the four zones is calculated under all scenarios. Effective density increases in the investment scenarios because CR/HSR reduces the GJC and enhances connectivity between zones.
In the third step, changes in effective density between scenarios are converted into productivity gains measured as changes in GDP, utilizing a decay parameter of 1.8 and an agglomeration elasticity of 0.046 (Metrolinx 2021). The decay parameter (being greater than 1) diminishes the agglomeration benefits between regions that are further away from each other, such that the estimated productivity gains (arising from greater connectivity) are higher for areas that are closer together. The agglomeration elasticity is – based on academic literature – the assumed sensitivity of GDP to changes in agglomeration. Approximately, an elasticity of 0.046 assumes that a 1 percent increase in the calculated estimate for effective density (see step 2) would correspond to a 0.046 percent increase in GDP. Data on GDP and employment are sourced from Statistics Canada’s statistical tables, and forecast employment growth is assumed to align with Statistics Canada’s projected population growth rates.
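The three-step logic can be sketched in simplified form below. The employment, GJC, and GDP figures are hypothetical; only the decay parameter (1.8) and elasticity (0.046) come from the study's methodology:

```python
DECAY = 1.8         # distance-decay parameter (Metrolinx 2021)
ELASTICITY = 0.046  # agglomeration elasticity of GDP (Metrolinx 2021)

def effective_density(employment, gjc, zone):
    """Effective density of a zone: employment in every zone, weighted
    down by generalized journey cost (GJC) raised to the decay parameter."""
    return sum(employment[j] / gjc[zone][j] ** DECAY for j in employment)

def agglomeration_gdp_uplift(gdp, density_base, density_invest):
    """GDP uplift implied by a change in effective density."""
    return gdp * ELASTICITY * (density_invest / density_base - 1)

# Hypothetical two-zone example: CR/HSR lowers the A-B journey cost.
emp = {"A": 2_000_000, "B": 1_000_000}
gjc_base = {"A": {"A": 10, "B": 60}, "B": {"A": 60, "B": 10}}
gjc_hsr = {"A": {"A": 10, "B": 45}, "B": {"A": 45, "B": 10}}

d_base = effective_density(emp, gjc_base, "A")
d_hsr = effective_density(emp, gjc_hsr, "A")
uplift = agglomeration_gdp_uplift(500e9, d_base, d_hsr)  # hypothetical GDP
```

Because the decay parameter exceeds 1, lowering the journey cost between nearby zones raises effective density (and hence the modelled GDP uplift) by more than the same cost reduction between distant zones.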
Emissions
Environmental effects from transportation create a further source of economic impact. This study considers the main dimensions – greenhouse gas (GHG) emissions and air quality – each contributing to external welfare impacts that affect populations and ecosystems.
Transportation accounts for approximately 22 percent of Canada’s GHG emissions (Canada’s 2024 National Inventory Report), primarily through automobile, public transit, and freight operations. Emissions from GHGs, particularly carbon dioxide, significantly impact the global climate by contributing to phenomena such as rising sea levels, shifting precipitation patterns, and extreme weather events. The social cost of carbon (SCC) framework, published by Environment and Climate Change Canada, assigns a monetary value to these emissions, reflecting the global damage caused by an additional tonne of CO₂ released into the atmosphere. The federal government’s SCC values were published in 2023, more recently than the values recommended by Metrolinx’s 2021 guidance, and therefore the government’s values are used for the modelling in this study. For SCC, data from Environment and Climate Change Canada’s Greenhouse Gas Estimates Table are used, adjusted to 2024 values using CPI. Within the modelling, SCC values increase from $303.6 (in 2024) to $685.5 (in 2098). Using SCC in cost-benefit analyses enables more informed decisions on transportation investments by calculating the welfare costs and benefits associated with emissions under both investment and business-as-usual scenarios.
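A sketch of how SCC values can be applied to avoided emissions. The linear interpolation between the 2024 and 2098 values is an assumption for illustration only; the published SCC schedule may follow a different path:

```python
SCC_2024, SCC_2098 = 303.6, 685.5  # $ per tonne CO2 (study figures)

def scc(year):
    """Social cost of carbon for a given year, interpolated linearly
    between the 2024 and 2098 values (an assumption for this sketch)."""
    frac = (year - 2024) / (2098 - 2024)
    return SCC_2024 + frac * (SCC_2098 - SCC_2024)

def emissions_benefit(tonnes_avoided, year):
    """Welfare value ($) of avoided CO2 emissions in a given year."""
    return tonnes_avoided * scc(year)

# Illustrative only: 50,000 tonnes of CO2 avoided in 2040.
print(f"${emissions_benefit(50_000, 2040) / 1e6:.1f}m")
```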
A wider set of pollutants emitted by vehicles – including CO, NOx, SO₂, VOCs, PM10s, and PM2.5s – pose further health risks, causing respiratory issues, heart disease, and even cancer. These harmful compounds, classified as Criteria Air Contaminants (CACs), impact individuals living or working in the vicinity of transport infrastructure, leading to external societal costs that are not fully perceived by direct users of the transport network. Health Canada’s Air Quality Benefits Assessment Tool (AQBAT) quantifies the health impacts of CACs, evaluating the total economic burden of poor air quality through a combination of local pollution data and Concentration Response Functions (CRFs), linking pollutants to adverse health effects. Furthermore, AQBAT considers air pollution’s effects on agriculture and visibility, allowing analysts to estimate the overall benefits of reducing transport-related emissions for communities across Canada.
This study identifies that CR/HSR has the potential to reduce emissions across multiple fronts. First, as an electrified rail system, CR/HSR is capable of operating with zero emissions, providing a cleaner alternative to existing rail services. If VIA Rail discontinues some services on routes overlapping with CR/HSR, as per its planning forecasts, emissions from rail transport in those areas would decrease. Additionally, CR/HSR’s higher speeds and greater reliability are expected to attract more passengers over time, encouraging a modal shift from more carbon-intensive forms of transportation, such as cars and airplanes. This anticipated shift would lead to a reduction in overall emissions from private vehicle and regional air travel, contributing to CR/HSR’s positive environmental impact.
By incorporating SCC and AQBAT metrics, the analysis offers a holistic appraisal of the environmental and social benefits of reducing emissions and improving air quality through CR/HSR, capturing the external welfare consequences beyond direct user impacts. Unit costs of CACs (see Table 3 below) are sourced from Metrolinx (2021) and are also adjusted by CPI into 2024 prices.
Results and Analysis
This section sets out the potential benefits of CR/HSR across various scenarios and sub-scenarios, spanning the 60-year project implementation period (2039 to 2098, inclusive). Results are reported in 2024 present value terms, cumulated over the 60-year period, as per cost-benefit analysis (CBA) literature (e.g., Metrolinx 2021). This cumulative present value represents the total value of benefits to 2098, with benefits in future years discounted to 2024 values. Figure 6 below illustrates the total cumulative present value of benefits for the proposed CR/HSR project, under different scenarios and passenger growth sub-scenarios in our model.
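The cumulative present value calculation can be sketched as below. The 3.5 percent real discount rate and the flat benefit stream are illustrative assumptions, not the study's inputs:

```python
def cumulative_present_value(benefits_by_year, base_year=2024, rate=0.035):
    """Cumulative present value of a stream of annual benefits.

    The 3.5 percent real discount rate is an illustrative assumption,
    not necessarily the rate used in the study's modelling.
    """
    return sum(
        b / (1 + rate) ** (year - base_year)
        for year, b in benefits_by_year.items()
    )

# Illustrative only: a flat $1bn per year over 2039-2098.
stream = {year: 1e9 for year in range(2039, 2099)}
pv = cumulative_present_value(stream)
print(f"${pv / 1e9:.1f}bn")
```

Because benefits only begin in 2039 but are discounted back to 2024, the cumulative present value is well below the undiscounted $60bn total.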
Since the HSR upside is the most optimistic sub-scenario, with a higher speed and the highest projected growth rate for rail passengers, it yields the largest total economic benefit, estimated at approximately $27 billion. Conversely, the CR downside assumes a comparatively lower speed and a smaller growth rate for rail passengers, resulting in the lowest benefit among all sub-scenarios, estimated at around $11 billion. This range of outcomes highlights that economic benefits are sensitive to assumptions around speed and passenger growth, underscoring the importance of these factors in the overall project evaluation.
Figure 7 illustrates the breakdown of benefits from the proposed CR/HSR project across different sub-scenarios and categories of benefits (see Table 4 in the Appendix for numerical values). User benefits form the largest component, indicating that rail passengers are expected to gain approximately $3.1–$9.2 billion in value over the modelling period, in present-day terms. Road decongestion effects, agglomeration impacts and emissions reductions are also forecast to deliver economic benefits. This study’s modelling estimates that CR/HSR could generate agglomeration effects that boost GDP by around $2.6–$3.9 billion over the 60-year analysis period, through enhancing productivity in the Ontario-Québec corridor. CR/HSR could significantly reduce greenhouse gas emissions and improve air quality, valued at approximately $2.6–$7.1 billion when considering the social cost of carbon and other pollutants. Benefits from reduced congestion on roads are estimated at $2.0–$5.9 billion. Finally, improved road safety offers an additional $0.3–$0.8 billion (approximately) in present value. Together, these impacts illustrate the wide-ranging economic, environmental, and social benefits anticipated from the CR/HSR project.
Given the potential sensitivity of economic benefits to assumptions around passenger growth, the 2011 federal government feasibility study provides a useful point of comparison for rail passenger growth under CR/HSR. The current outlook for rail passenger forecasts is not the same as it was in 2011, but some of the changes will have offsetting impacts. On one hand, Canada’s population has both grown faster (between 2011 and 2024) and is expected to grow faster in the future, relative to expectations in 2011. On the other hand, remote working has increased significantly since the COVID-19 pandemic. Passenger forecasts are discussed in more detail below.
Modelled agglomeration benefits are at the upper end of expectations. For example, the value of agglomeration effects for the HSR central scenario in this study ($3.4 billion) is almost 50 percent of the value of rail user benefits ($7.2 billion). Within academic literature, economic benefits from agglomeration are typically estimated to be in the region of 20 percent of direct user benefits on average (Graham 2018). However, across a range of studies, agglomeration benefits up to 56 percent have been identified (Oxera 2018). Therefore, the modelled estimates appear high relative to prior expectations, but within a plausible range.
To note, our agglomeration modelling (based on the Metrolinx methodology) forecasts significant economic benefits for all four of the zones. Our modelled agglomeration estimates for each zone are a function of the distance between zones (higher distance reduces agglomeration benefits due to the decay parameter), forecast uptake of CR/HSR services, and GDP. For example, Toronto’s agglomeration effect (as a percentage of GDP) is forecast to be one-third less than that of Montreal, due to Toronto being slightly further away (from Ottawa, Montreal and Quebec City) than those cities are to each other. The agglomeration modelling is complex and sensitive to input assumptions, therefore it is important to recognize a degree of uncertainty around the precise value of agglomeration-related economic benefits.
Sensitivity Analysis
Ticket prices for CR/HSR impact the total benefits. For example, under the HSR central scenario, if HSR ticket prices were set 20 percent above existing VIA Rail ticket prices, the forecast present value of user benefits falls by around 40 percent. The present value of economic benefits would fall by $4.2 billion compared to the HSR central case (from $20.7 billion to $16.5 billion), the majority being due to lower user benefits. However, recognizing cost of living concerns for Canadian households, it is also possible that median ticket prices could fall – such as through dynamic pricing – in which case economic benefits could also rise, by a similar amount.
The source of CR/HSR passengers will impact the estimated quantum of benefits, although relatively moderately. If proportions for “switchers” and “induced” passengers are sourced from VIA-HFR’s estimates, the level of economic benefits is $3.0 billion lower (falling from $20.7 billion to $17.7 billion). VIA-HFR’s forecasts assume a higher proportion of induced passengers, and also assume a greater share of switchers from air transport. As a result, the main impact of the VIA-HFR assumptions is to produce a smaller road decongestion effect, which reduces the potential benefits for road users.
The agglomeration calculation is relatively sensitive to the baseline assumption for passenger modal share. The modelling in this study is based on Canada-wide vehicle kilometre data, utilizing information from Transport Canada and Statistics Canada. Further analysis could be undertaken to refine this assumption across Ontario and Québec, while also ensuring that forecast agglomeration benefits align with wider estimates in existing transport literature.
Discussion and Qualifications
The analysis presented in this study is based on currently available information and projections, which are subject to certain limitations. Notably, there are uncertainties surrounding several key factors, including the precise routes and station locations, the design specifications (e.g., maximum achievable speed), ticket pricing, expected passenger numbers, the breakdown across “switchers” and “induced” passengers, and passenger modal shares more generally. These elements, if altered, could impact the economic outcomes considerably.
There are several important qualifications to the scope of this study. First, it provides an analysis of potential economic benefits from CR/HSR investment but does not seek to quantify or analyze the direct costs involved in procurement, financing, construction, operations, maintenance or renewals. As such, this study constitutes an analysis of economic benefits, rather than a full cost-benefit analysis exercise. Second, this study seeks to estimate national, aggregate-level impacts, rather than undertaking a full distributional analysis of the impacts across and between different population groups. Third, this study’s primary focus is an economic assessment, rather than a transportation modelling exercise. The economic analysis utilizes and relies upon detailed, bottom-up passenger forecasts developed by VIA-HFR (received directly), cross-checked against the 2011 federal government’s previous HSR study. All three of these scope issues are important inputs to a holistic transport investment appraisal and should be considered in detail as part of investment decision-making.
Specifically, regarding this final issue – passenger forecasts – it is relevant to consider the transport modelling assumptions in further detail. As noted above, this study has not developed a full transport model, nor does it seek to take a definitive view on VIA-HFR’s forecasts. We would recommend that independent technical forecasts are developed. However, there are several relevant observations.
On one hand, VIA-HFR’s estimates do not appear implausible. For example, HSR has achieved a 7-8 percent share of passenger travel within certain routes in the United States (New York-to-Boston and New York-to-Washington), which would appear to be broadly consistent with the level of ambition within VIA-HFR’s passenger growth forecasts for the HSR central scenario (LEK 2019). The Madrid-Barcelona high speed link is estimated to serve 14 million passengers per year (International Railway Journal 2024). Internationally, HSR has achieved high market shares in Europe and Asia, such as 36 percent modal share for Madrid-Barcelona and 37 percent for London-Manchester, albeit noting that Europe typically has lower road usage and a higher propensity to use public transport (LEK 2019).
On the other hand, it is important to recognize the historic tendency for optimism bias within transportation investment projects. For example, in the UK, the HS2 project was criticized as having “overstated the forecast demand for passengers using HS2 [and] overstated the financial benefits that arise from that demand” (Written evidence to the Economic Affairs Committee, UK 2014). A review of HS2 in 2020 revised downwards previous estimates of economic benefits (Lord Berkeley Review 2020). As noted further above, analysis by the European Court of Auditors (2018) posits that not all HSR projects induce sufficient passenger volumes to achieve net benefits over the project lifetime.
Overall, future passenger forecasts will depend upon a range of factors, including ticket prices, the availability and price of substitute modes (i.e., air), cultural preferences for private vehicle ownership, the impact of changing emission standards and the feasibility of construction plans.
This study applies some pragmatic, simplifying assumptions and approximations to best-practice transport appraisal methods (Metrolinx 2021; Department for Transport, UK, 2024). Across these modelling assumptions, there is variation in the directional impact on our estimates for economic benefits.
On one hand, some of the modelled benefits are likely to be relatively high-end estimates. First, for rail-user benefits, the modelling assumes no differential in ticket prices between existing VIA Rail services and CR/HSR. It also assumes that CR/HSR can deliver VIA-HFR’s proposed journey times with 95 percent reliability, which is achievable but not guaranteed. Second, for road congestion benefits, the forecast (direct) reductions in road congestion assume no indirect “bounce-back” effect where reduced traffic encourages new or longer trips (as noted above). For example, analysis of US highway demand suggests that capacity expansion only results in temporary congestion relief, for up to five years, before congestion returns to pre-existing levels (Hymel 2019). Third, for agglomeration, the modelled estimates for economic benefits are approximately 50 percent of rail-user benefits, which is close to the upper end of estimates from other transportation studies. Fourth, for emissions, the estimated benefits from forecast emissions savings do not seek to make assumptions about future changes to fuel efficiency for road and air transport, the emissions associated with power generation for CR/HSR, or the anticipated growth in electric vehicle adoption. In the case of electric vehicle deployment, there is uncertainty regarding the level of uptake, as well as the carbon intensity of electricity generation (albeit Ontario and Québec have relatively “clean” grids by international standards). Fifth, for benefits overall, this study leverages the VIA-HFR forecasts for passenger growth which are likely to be ambitious, though they have been robustly developed.
On the other hand, by focusing on the most material economic benefits, this study may exclude some smaller additional benefits that could be considered in further detail. First, there may be specific impacts on the tourism and hospitality sector. By enhancing travel convenience, CR/HSR is likely to draw more visitors to the various cultural, entertainment, and natural attractions across the corridor. While this influx would benefit local businesses by stimulating economic growth and job creation, these impacts are likely to be reflected already within the estimate of agglomeration benefits.
Second, CR/HSR would improve national and global competitiveness, enhancing the appeal of Canadian cities to investors and environmentally conscious travellers while helping Canada align more closely with global standards for sustainable, modern infrastructure. Again, the economic benefits are likely to align with the agglomeration estimates.
Third, this study does not seek to quantify the potential gains to individual productivity from CR/HSR ridership, e.g., from individuals having time to work on the train. There is not expected to be a benefit for existing rail users, as they can already utilize Wi-Fi on existing VIA Rail services. For individuals switching to rail from road or air, potential benefits would only accrue to business users. Although switchers from road and air could have opportunities for improved individual productivity, Wi-Fi is increasingly available on airlines and individuals are able to dial into meetings remotely whilst driving.
Fourth, CR/HSR could generate wider economic benefits by increasing competition between businesses along the corridor. International transport appraisal literature suggests that enhanced transport connectivity can erode price markups (and therefore increase consumer surplus) by overcoming market imperfections (Metrolinx 2021; Department for Transport 2024). However, such impacts are likely to be relatively small, e.g., the Department for Transport (UK) estimates them at 10 percent of the benefits for rail business users only. Furthermore, sources of market power in Canada are legal in nature (e.g., interprovincial trade barriers) which rail investment alone is unlikely to overcome.
A further group of issues has been consciously excluded from the methodology in this study. First, impacts on rail crowding are not considered. Some transport appraisals (such as the UK’s economic appraisal of the High Speed 2 project) do estimate the user benefits from reduced crowding. However, this is not as applicable for CR/HSR: in the UK, users of existing rail services may be required to stand if the train is overbooked, whereas users of existing VIA Rail services are guaranteed a seat with their booking. Second, impacts on land and property values are not included within the economic benefits. With greater access to efficient transportation, properties near rail stations typically see increased demand and value, boosting local tax revenues and promoting urban revitalization. While CR/HSR could increase values in areas close to the proposed stations, such changes are not additional to other wider economic benefits, but rather reflect a capitalization of those benefits. To avoid the risk of double counting the economic benefits already estimated, these are excluded (Department for Transport 2024).
CR/HSR may improve social equity and accessibility by offering affordable, reliable travel options for those without cars, including low-income individuals, students, and seniors. This expanded access enables broader employment, education, and healthcare opportunities, contributing to a more inclusive society. Whilst this study does not include a distribution analysis, social benefits from greater inclusion and social equity would constitute a benefit of CR/HSR investment and merit further detailed analysis.
Finally, in addition to policy considerations, major investment decisions have a substantial political dimension. For example, Canada is the only G7 country without HSR infrastructure. While cognizant of the political context, the analysis in this study is purely an economic assessment and does not consider political factors.
Conclusion
Canada’s population and economy continue to expand, particularly within the Toronto-Québec corridor. Existing transportation routes can expect greater congestion over time, particularly capacity-constrained VIA Rail services. In this context, can Canada afford not to progress with faster, more frequent rail services? There are significant opportunity costs to postponing investment.
This study has developed quantified estimates of the economic benefits of investing in the proposed Rapid Train project in the Toronto-Québec City corridor. Cumulatively, in present value terms, these economic benefits are estimated to be $11-$17 billion under our modelled conventional rail (CR) scenarios, and larger – at $15-$27 billion – under high-speed rail (HSR) scenarios. Economic benefits arise from several areas, including rail user time savings and improved reliability, reduced congestion on the road network, productivity gains through enhanced connectivity, and environmental benefits through emission reductions. With many commentators highlighting that Canada is experiencing a “productivity crisis” and a “climate emergency,” the projected productivity gains and lower-emission transportation capacity from the Rapid Train project present particularly valuable opportunities.
This study has assessed major economic benefit categories as identified within mainstream transport appraisal guidance. Further research could include additional sensitivity analysis around key parameters, as well as consideration of potential dynamic clustering effects, and projections for housing and land values.
Clearly, there is a cost to investment in a new dedicated passenger rail service: upfront capital investment, ongoing operations and maintenance expenditure, and any financing costs. These costs are not assessed in this study and will need to be considered carefully by policymakers. However, inaction – by continuing with the status quo rail infrastructure – also has a significant opportunity cost. Canada would forgo billions of dollars worth of economic advantage if it fails to deal with current challenges, including congestion on the rail and road networks, stifled productivity, and environmental concerns.
This study identifies the multi-billion-dollar economic benefits from the proposed Rapid Train project. While these benefits will need to be weighed alongside the forecast project costs, this study provides a basis for subsequent project evaluation and highlights the significant opportunity costs that Canada is incurring in the absence of investment.
Appendix
For the Silo, Tasnim Fariha, David Jones. The authors thank Daniel Schwanen, Ben Dachis, Glen Hodgson and anonymous reviewers for comments on an earlier draft. The authors retain responsibility for any errors and the views expressed.
References
Ahlfeldt, G., and Feddersen, A. 2017. “From periphery to core: measuring agglomeration effects using high-speed rail.” Journal of Economic Geography.
Albalate, D., and Bel, G. 2012. “High‐Speed Rail: Lessons for Policy Makers from Experiences Abroad.” Public Administration Review 72(3): 336-349.
Amtrak. 2023. Amtrak fact sheet: Acela service.
Atkins, AECOM and Frontier Economics. 2014. First Interim Evaluation of the Impacts of High Speed 1, Final Report, Volume 1. Prepared for the Department of Transport, UK.
Blanquart, C., and Koning, M. 2017. “The local economic impacts of high-speed railways: theories and facts.” European Transport Research Review 9(2): 12-25.
Bonnafous, A. 1987. “The Regional Impact of the TGV.” Transportation 14(2): 127-137.
California High-Speed Rail Authority. 2022. “2022 Business Plan.” Sacramento: State of California.
Central Japan Railway Company. 2020. “Annual Environmental Report 2020.” Tokyo: JR Central.
Crescenzi, R., Di Cataldo, M., and Rodríguez‐Pose, A. 2021. “High‐speed rail and regional development.” Journal of Regional Science 61(2): 365-395.
Dachis, B., 2013. Cars, Congestion and Costs: A New Approach to Evaluating Government Infrastructure Investment. Commentary. Toronto: C.D. Howe Institute. July.
Dachis, B., 2015. Tackling Traffic: The Economic Cost of Congestion in Metro Vancouver. Commentary. Toronto: C.D. Howe Institute. March.
Department of Transport (Ireland). 2023. “Transport Appraisal Framework, Appraisal Guidelines for Capital Investments in Transport, Module 8 – Detailed Guidance on Appraisal Parameters.”
Department for Transport (UK). 2024. “National Road Traffic Survey, TRA0308: tra0308-traffic-distribution-by-time-of-day-and-selected-vehicle-type.ods (live.com).”
International Railway Journal. 2024. “Spanish high-speed traffic up 37 percent in 2023.”
International Transport Forum-OECD. 2013. “High Speed Rail Performance in France: From Appraisal Methodologies to Ex-post Evaluation.”
International Union of Railways (UIC). 2022. “High-Speed Rail: World Implementation Report.” Paris: UIC Publications.
International Union of Railways (UIC). 2019. “Carbon Footprint of Railway Infrastructure.” Paris: UIC Publications.
Jacobs, J. 1969. The Economy of Cities. New York: Random House.
Kojima, Y., Matsunaga, T., and Yamaguchi, S. 2015. “Impact of High-Speed Rail on Regional Economic Productivity: Evidence from Japan.” Research Institute of Economy, Trade and Industry (RIETI) Discussion Paper Series 15-E-089.
Lawrence, M., Bullock, R. G., and Liu, Z. 2019. “China’s High-Speed Rail Development.” World Bank Publications.
LEK. 2019. New Routes to Profitability in High-Speed Rail.
Lord Berkeley Review. 2020. A Review of High Speed 2, Dissenting Report by Lord Tony Berkeley, House of Lords: Lord-Berkeley-HS2-Review-FINAL.pdf.
Marshall, A. 1920. Principles of Economics. London: Macmillan.
Metrolinx. 2018. GO Expansion Full Business Case.
________. 2021. Business Case Manual Volume 2: Guidance.
________. 2021. Traffic Impact Analysis Durham-Scarborough Bus Rapid Transit.
Morgan, M., Wadud, Z., and Cairns, S. 2025. “Can rail reduce British aviation emissions?” Transportation Research Part D 138.
National High Speed Rail Corporation Limited (NHSRCL). 2023. “Mumbai-Ahmedabad High Speed Rail Project Status Report.”
Office National des Chemins de Fer (ONCF). 2022. “Al Boraq High-Speed Rail Service: Five Year Performance Review.”
OAS. 2019. “High Speed Rail vs Air: Eurostar at 25, The Story So Far.”
Oxera. 2018. “Deep impact: assessing wider economic impacts in transport appraisal.”
Reiter, V., Voltes-Dorta, A., and Suau-Sanchez, P. 2022. “The substitution of short-haul flights with rail services in German air travel markets: A quantitative analysis.” Case Studies on Transport Policy.
Rete Ferroviaria Italiana (RFI). 2023. “Alta Velocità Network Expansion: Naples-Bari Route Completion Report.”
Rossa et al. 2024. “The valuation of delays in passenger rail using journey satisfaction data.” Transportation Research Part D 129.
Rungskunroch, P. 2022. “Benchmarking Operation Readiness of the High-Speed Rail (HSR) Network.”
Statistics Canada. 2023. Table 36-10-0468-01 Gross domestic product (GDP) at basic prices, by census metropolitan area (CMA) (x 1,000,000).
______________. 2024. Table 14-10-0420-01 Employment by occupation, economic regions, annual.
______________. 2024. Table 17-10-0057-01 Projected population, by projection scenario, age and gender, as of July 1 (x 1,000).
______________. 2016. Table 8-1: Domestic Passenger Travel by Mode, Canada.
______________. 2014. Canadian Vehicle Survey: passenger-kilometres, by type of vehicle, type of day and time of day, quarterly (statcan.gc.ca).
Transport Canada. 2021. Transportation in Canada 2020, Overview Report, Green Transportation.
_______________. 2018. RA16-Passenger and Passenger-Kms for VIA Rail Canada and Other Carriers.
Ministry of Transportation of Ontario & Transport Canada. 2011. “Updated feasibility study of a high-speed rail service in the Québec City – Windsor Corridor: Deliverable No. 13 – Final report.”
Vickerman, R. 2018. “Can high-speed rail have a transformative effect on the economy?” Transport Policy 62: 31-37.
Wang, X., and Chen, X. 2019. “High-speed rail networks, economic integration and regional specialisation in China.” Journal of Transport Geography 74: 223-235.
Coyotes, like other wild animals, sometimes come into conflict with humans. Since migrating to Ontario and the eastern provinces from western Canada more than 100 years ago, coyotes have adapted well to urban environments and can now be found in both rural and urban settings. Coyotes can be found across Ontario but are most abundant in southern and eastern agricultural Ontario and urban areas.
Changes in land use, agricultural practices, weather, supplemental feeding and natural food shortages may contribute to more coyote sightings in your community.
Homeowners can take steps to make sure coyotes aren’t attracted to their property and to keep their pets safe. To reduce the potential for coyote encounters, the Ministry of Natural Resources has the following tips for the public:
Do not approach or feed coyotes
Coyotes are usually wary of humans and avoid people whenever possible. However, they are wild animals and should not be approached.
People should NOT feed coyotes — either intentionally or unintentionally. Feeding makes them less fearful of humans and accustoms them to human-provided food.
Aggressive behavior towards people is unusual for coyotes, but people should always exercise caution around wildlife.
Secure garbage, compost and other attractants
Do not provide food to coyotes and other wildlife. Properly store and maintain garbage containers to help prevent coyotes from becoming a problem.
In the fall, pick ripe fruit from fruit trees, remove fallen fruit from the ground and keep bird feeders from overflowing as coyotes eat fruit, nuts and seeds.
In the summer, protect vegetable gardens with heavy-duty garden fences or place vegetable plants in a greenhouse. Check with your local nursery to see what deterrent products are available.
Place trash bins inside an enclosed structure to discourage the presence of small rodents, which are an important food source for coyotes.
Put garbage at curb-side the morning of the scheduled pickup, rather than the night before.
Use enclosed composting bins rather than exposed piles. Coyotes are attracted to dog and cat waste as well as products containing meat, milk and eggs.
Consider eliminating artificial water sources such as koi ponds.
Keep pet food indoors.
Use deterrents and fences to keep coyotes away from your home and gardens
Use motion-sensitive lighting and/or motion-activated sprinkler systems to make your property less attractive to coyotes and other nocturnal wildlife.
Fence your property or yard. It is recommended the fence be at least six-feet tall with the bottom extending at least six inches below the ground and/or a foot outward, so coyotes cannot dig under the fence. A roller system can be attached to the top of the fence, preventing animals from gaining the foothold they need to pull themselves up and over the top of a fence.
Electric fencing can also help deter coyotes from properties or gardens in some circumstances.
Clear away bushes and dense weeds near your home where coyotes may find cover and small animals to feed upon.
Install proper fencing.
As coyotes are primarily nocturnal, pets should be kept inside at night.
Keep all pets on leashes or confined to a yard.
Keep cats indoors and do not allow pets to roam from home.
Spay or neuter your dogs. Coyotes are attracted to, and can mate with, domestic dogs that have not been spayed or neutered.
If you encounter a coyote:
Do not turn your back on a coyote or run. Back away while remaining calm.
Use whistles and personal alarm devices to frighten an approaching or threatening animal.
If a coyote poses an immediate threat or danger to public safety, call 911.
Never attempt to tame a coyote.
Reduce risk of predation on livestock
Barns or sheds can provide effective protection from the threat of coyotes preying on livestock.
Guard animals, such as donkeys, llamas and dogs, can be a cost-effective way to protect livestock from coyotes. Guard animals will develop a bond with livestock if they are slowly integrated and will aggressively repel predators.
Managing problem wildlife
Landowners are responsible for managing problem wildlife, including coyotes, on their own property.
The Ministry of Natural Resources helps landowners and municipalities deal with problem wildlife by providing fact sheets, appropriate agency referrals, and information on steps they can take to address problems with wildlife.
Flecktarn is one of the most ubiquitous camouflage patterns in every military surplus enthusiast’s closet and I bet many of you guys and gals already own some. But every now and then, our friends at Kommandostore get in something arguably even more special: Flecktarn’s tropical cousin Tropentarn. Wait…what? What the heck are Germans doing making a desert camo?
via ufpro.com– Just like M81 Woodland and DPM, 5FT Flecktarn decisively influenced the development of other camouflage patterns and their adaptation by other countries. One might say these three patterns inspired the next generation of camouflage patterns, much as the three were inspired by the WW2 patterns that preceded them. Accordingly, several countries merit mention:
The People’s Republic of China outfitted its Border Defence Units with an unlicensed copy of Flecktarn. A recolored, brown-dominant variation (highly sought after by collectors) was also used in Tibet and the Beijing Military Region.
Belgium interpreted German 5FT Flecktarn in a variant that was worn by its Airbase Security Personnel until 2000.
Denmark developed a green-dominant variation using only three colors instead of five. Tested in 1978, it today calls attention to the close cooperation between textile companies at the time, since it is rumored to have been jointly developed with the French company Texunion.
The Netherlands briefly considered fielding Flecktarn as a camouflage pattern, but for political reasons decided against it (Dutch decision-makers felt there was too close a resemblance to the patterns used by the SS during the Second World War).
Japan created its own Flecktarn version and in 1991 fielded it within the Japan Ground Self-Defense Force (JGSDF).
And before you go and say, “Hey buddy, the Germans have had a bit of a history fighting in the desert,” know that Tropentarn comes from trials held after the successful implementation of Flecktarn. Good ol’ Fleck had a bit of a hard time getting fully fielded, as Germany was a bit sensitive about using any kind of pattern that resembled the various Waffen SS experiments of the 1940s, for obvious reasons. This was back in the late ’70s, after all.
But after ’Fleck got through the filter, Germany’s increased presence as a peacekeeping force brought it to the doorstep of everyone’s favorite sandbox, the Middle East. A new camo was needed. As early as ’93, Tropentarn appeared as a reduced three-color (versus five colors in normal Flecktarn) arid version of the now-beloved pattern. Unlike many desert patterns of the era, the Germans tastefully sprinkled in a few specks of green to really make the camo versatile beyond the dunes of Iraq.
If you’re not aroused by the typical brown, brown and more brown nature of a lot of desert camos, Tropentarn might be the right one for you. It even has a few bonus features over the normal field shirt that make it a little more breathable in the summer if you live in the south or simply yearn for temperatures over 70 Fahrenheit in polar vortexes like today’s…
It even works wonders in the Great Plains, since everything turns tan come wintertime, and it gets just as much attention from fellow milsurp enjoyers and normies alike. So if you’re in the mood for another flavor of Flecktarn in your wardrobe, you’ll definitely want to dive in and grab one on the Kommandostore site while you can. They’re always popular…and stock won’t last long. For the Silo, Jarrod Barker.
Featured image- Erbsentarn 44 dot peas pattern German WW2 Waffen SS standard camouflage pattern.
Our friends at MSN have really stirred the maple syrup pot up with this story- which one of the following companies is the most surprising for you? Leave us a note in the comments section at the bottom of the article.
Many beloved Canadian brands that fill shopping carts and homes across the country have something surprising in common—they’re actually owned by foreign investors and companies. Behind familiar logos and proud Canadian histories stand international corporations that have quietly acquired these businesses, often maintaining their strong local identity while decisions are made overseas.
This eye-opening list reveals 8 well-known Canadian companies that now operate under foreign ownership.
While these businesses still employ thousands of Canadians and remain important parts of communities nationwide, their profits and major corporate choices flow to boardrooms in places like the United States, Europe, and Asia. Each example shows how Canada’s business landscape has evolved in today’s global economy.
A Canadian fast-food icon, Tim Hortons has been owned by Restaurant Brands International since 2014, with its headquarters in Toronto but major control from Brazil-based 3G Capital. The beloved coffee chain started in Hamilton, Ontario in 1964 as a single donut shop. Today, it serves millions of customers daily across Canada and has expanded into 14 countries. The Brazilian investment firm maintains the Canadian feel of the brand while pushing for global growth.
Hudson’s Bay Company, founded in 1670, is now owned by American businessman Richard Baker’s NRDC Equity Partners. The historic retailer shifted from Canadian ownership in 2008 through a $1.1 billion deal. HBC continues to operate The Bay stores across Canada while managing an extensive real estate portfolio. The company maintains its Canadian identity despite being controlled from south of the border.
Cirque du Soleil, the Montreal-based entertainment company famous for its artistic circus shows, was acquired by TPG Capital, a U.S. private equity firm, in 2015. Following financial difficulties during the pandemic, ownership changed again in 2020 to a group including Catalyst Capital Group. The company still creates its shows in Montreal. The creative spirit of Cirque remains distinctly Quebec-based despite foreign investment control.
The luxury winter coat maker, started in Toronto in 1957, sold a majority stake to U.S.-based Bain Capital in 2013. The company continues to manufacture its core products in Canada, maintaining its made-in-Canada promise. The brand has expanded globally under foreign ownership while keeping its Canadian headquarters. The international success of Canada Goose proves that Canadian craftsmanship can thrive under foreign ownership.
The Canadian hardware retailer Rona underwent major ownership changes in recent years. After operating independently for decades, the Quebec-based chain was acquired by U.S. home improvement leader Lowe’s in a $3.2 billion deal completed in 2016. However, Lowe’s ownership proved relatively short-lived. In 2023, the American retailer divested Rona, selling it to Sycamore Partners, a private equity firm headquartered in New York, for $2.4 billion. Despite these corporate transitions, Rona maintained its distinct brand identity in the Canadian home improvement marketplace.
Ontario-based CARA Operations (now Recipe Unlimited) purchased Quebec’s St-Hubert restaurants for $537 million in 2016. The restaurant chain, founded in Montreal in 1951, maintains its distinct Quebec identity. Multiple foreign investment firms hold significant stakes in Recipe Unlimited through the parent company MTY Food Group. The company continues operating across Quebec while major business decisions are made outside the province.
In 2019, Toronto-based Onex Corporation acquired WestJet for $5 billion, with significant backing from international investors and foreign private equity firms. The airline maintains its headquarters in Calgary and continues operating as a Canadian carrier. Major foreign institutional investors hold substantial positions through Onex Corporation. While preserving its Canadian operations, the company’s ownership structure includes significant international investment.
Suncor Energy owns Petro-Canada stations, with significant foreign institutional investors holding major stakes. The company merged with Suncor in 2009 in a $19 billion deal. Petro-Canada remains a prominent Canadian retail fuel brand. International investment firms hold substantial voting shares in the parent company.
Loblaw Companies Limited, a Canadian company, acquired Shoppers Drug Mart in 2014 for $12.4 billion. Despite its Canadian roots, the pharmacy chain has significant foreign institutional investment. Under this ownership, Shoppers Drug Mart continues to expand its healthcare services across Canada.
It’s cold and snowy out there. Damn cold and snowy weather, so why not warm up with a Cider Buttered Rum made with Chic Choc Spiced Rum?
Chic Choc Spiced Rum, made in Canada, is produced with six indigenous spices, creating a fresh take on rum that features a spicy bouquet with nuances of sugar cane and cinnamon, complemented by a subtle peppery tone.
LANDSCAPES 2025 is an impressive online survey exhibition adjudicated by the notable public art gallery programmer Krista Young and the celebrated artist Clint Griffin. The John B. Aird Gallery is proud to present its first large group project organized around the landscape genre, a genre of art practiced for centuries around the world.
Broadly defined, a landscape practice is a migratory representation of an artist’s creativity within the fluid realms of two- or three-dimensional art, whether representational or non-representational.
This intentionally broad definition allows for a diverse range of artworks, reflecting the variety of contemporary art techniques and practices today.
LANDSCAPES features new work by fifty-five artists inclusive of John Abrams, Rhonda Abrams, Sue Archibald, Joe Atikian, Phill Atwood, Jarrod Barker, Ioana Bertrand, Matthew Brown, J. Lynn Campbell, Alyson Champ, Ava P Christl, Frances Cordero de Bolaños, Glen Cumming, Grace Dam, Fanny Desroches, Jennifer Dobinson, Edward M. Donald, Janice Evans, Tanya Fenkell, Marie Finkelstein, Julie Florio, Robert Fogel, Anna & Richelle Gaby-Trotz & Forsey, Elena Gaevskaya, Arnie Guha, Stev’nn Hall, Michael Hannan, Emily Honderich, Carol Hughes, Connie Ivany, Marlene Klassen, Lisa Litowitz, Ramona Marquez-Ramraj, Claudia McKnight, Susan Munderich, Mahnez Nezarati, Allan O’Marra, Sherry Park, Karen Perlmutter, Piera Pugliese, Jackie Rancourt, Katie Rodgers, Lynne Ryall, Kaija Savinainen, Lee Schnaiberg, Wendy Skog, Carolynn Smallwood, Margaret Stawicki, Kate Taylor, Robert Teteruck, Steph Thompson, Joanna Turlej, Dejana Veljko, Victoria Wallace and Don Woodiwiss.
The landscape work of these artists spans various themes, from expansive vistas and sophisticated gardens to untamed wilderness.
These pieces engage with the dialogue between traditional art history and contemporary interpretations. Some explore the connections between mythologies and landscapes, investigating the relationship between spirituality and nature, which may lead to more abstract representations. Conversely, other works critically examine the impacts of the railroad, displacement, and extraction industries, illustrating the lasting scars these forces leave on the land.
JURORS’ BIOGRAPHIES
Clint Griffin lives and works in Toronto. His work has been widely shown in Canada and the United States. Celebrated in both the contemporary and folk art worlds, Griffin’s work can be found in many private and public collections including the Art Gallery of Ontario, Bank of Montreal and Canada Council Art Bank. Clint currently owns and operates a fine art services business providing service to galleries, artists, collectors and institutions throughout Ontario.
Krista Young has held roles in both administrative and programming capacities at public art galleries in Northern and Northeastern Ontario. Krista has assisted in the development of programming, publications and touring exhibitions. Now based in Toronto, Krista is a small business owner and mother of three.
With the “circular economy” spurring a seismic shift toward sustainability and resource efficiency across sectors, industries are redefining themselves by transforming waste into lasting value. With projections of up to $4.5 trillion USD ($6.4 trillion CAD) in economic benefits by 2030, companies are rethinking traditional models to protect the environment, cater to green consumerism mindsets and boost financial performance. At the forefront of this movement in the fine jewelry category is Sonalore, an innovative eTailer turning gold jewelry into a smart, sustainable investment through its lifetime buyback guarantee. Sonalore not only ensures that every piece of 18-karat gold jewelry endures as a valuable asset, but also champions a closed-loop system that minimizes waste and preserves natural resources, as detailed below.
Gold Into Green Investments
A seismic “circular economy” shift is happening across industries, as businesses rush to minimize waste and maximize resource value. This transformation isn’t just about sustainability—it’s about creating smarter business models that benefit both customers and the environment. So impactful is this approach that Goldman Sachs projects the circular economy could deliver up to $4.5 trillion in economic benefits by 2030. Companies across sectors are describing how they’re “shaking things up” to waste less—and reap the financial benefits in the process. Geopolitics is also fostering this approach, as Goldman Sachs further asserts that “operating in this way could become ‘critical’ in the face of the rising cost of raw materials” among other drivers.
One innovative eTailer—Sonalore—is leading this change in fine jewelry, transforming how people buy, wear and sell their gold pieces. The company’s approach turns traditional gold jewelry from a depreciating purchase into a liquid investment, while naturally promoting sustainability. Its “lifetime buyback guarantee” program not only transforms its wares into bona fide commodity investments for consumers, but also ensures its products can be easily recycled and reborn.
“We sell 18-karat gold jewelry from the perspective of it being a sound investment—not an expense—that enduringly holds its value,” said Sonalore CEO Nidhi Singhvi. “Our buyback promise allows consumers to sell back their items at a fair current price valuation of gold, which has the potential to increase beyond the purchase price of the item. Gold is, after all, a heralded investment commodity amid its historically proven performance in various economic environments.”
In doing so, the company is also fostering circular economy principles. Here are three ways Sonalore is reshaping the industry:
1. Sustainability That Makes Business Sense –
Most sustainable initiatives ask you to pay more for less. Sonalore flips this on its head. When you’re done with a jewelry piece, they buy it back at market rates and recycle the gold into fresh designs. No more jewelry gathering dust, no unnecessary mining, and you get real value back. It’s sustainability that puts money in your pocket, not the other way around. This isn’t just good for the environment; it’s plain smart business. The closed-loop system reduces dependency on mining while preserving metal value. Sonalore’s market-rate buyback makes sustainability financially rewarding.
2. True Value, Finally Unlocked –
For a circular economy to work, you need a product with intrinsic value at the center, and fine jewelry is a natural candidate. However, traditional retail markups of 3-5x have broken this cycle, leaving customers overpaying at purchase and undervalued at resale. Sonalore strips away artificial markups, showing customers exactly what they’re paying for – from metal content to craftsmanship. When gold prices rise, so does the value of your piece. This transparency extends beyond purchase – customers can track their jewelry’s value in real time, treating each piece as the investment it should be.
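As a purely illustrative sketch of the arithmetic behind that real-time tracking (the weights and prices below are hypothetical, and this is not Sonalore’s published formula), the underlying melt value of a piece is simply its gold weight times its karat purity times the current spot price:

```python
# Illustrative only: how an 18-karat piece's melt value tracks the gold
# spot price. The weights and prices here are hypothetical examples,
# not Sonalore's actual buyback terms.

KARAT_PURITY = 18 / 24  # 18-karat gold is 75% pure gold by mass

def melt_value(weight_grams: float, spot_per_gram: float) -> float:
    """Value of the pure-gold content at a given spot price."""
    return weight_grams * KARAT_PURITY * spot_per_gram

# A hypothetical 10 g pendant, priced when gold traded at $80/g:
basis = melt_value(10, 80.0)   # 600.0
# The same piece if spot later rises to $100/g:
later = melt_value(10, 100.0)  # 750.0
print(f"melt value rose from ${basis:.2f} to ${later:.2f}")
```

The point of the sketch is that the asset value moves with the commodity price, not with a retail markup.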
3. Freedom to Change your Mind –
Life changes, and your jewelry should too. Most fine jewelry sits unworn, losing value in drawers or becoming a guilt-inducing reminder of money spent. Sonalore’s lifetime buyback guarantee transforms this dynamic. Want to switch styles? Trade up. Need liquidity? Convert to cash at market rates. Moving abroad? Downsize without loss. This flexibility turns jewelry from a static purchase into a dynamic asset that adapts to your life changes.
In today’s hugely competitive jewelry marketplace, Sonalore is reimagining not just how jewelry is sold, but also how it creates lasting value. By combining transparent pricing, investment potential and environmental responsibility, the company proves that luxury retail can evolve to meet modern demands for both sustainability and smart value.
As the circular economy gains momentum across industries, Sonalore’s innovative model shows how businesses can transform traditional products into dynamic assets that benefit customers, companies and the environment, alike. For the Silo, Karen Hayhurst.
Amid a surge of alarming headlines exposing the disruptive impact of deepfake photo and video scams on our digital, cultural, societal, financial and political landscapes, game-changing, readily available free solutions are emerging that let you instantly identify and flag AI-generated imagery.
The goal is to preserve the credibility of digital media and safeguard users from falling victim to scams. As synthetic media becomes more sophisticated, identifying AI-generated manipulations presents a unique challenge, but numerous free apps and tools are readily available that allow users to validate photo and video authenticity with ease—a major step forward in safeguarding trust in a world increasingly influenced by AI-generated visuals, ensuring transparency and security in the digital age. More below.
Amid the onslaught of highly concerning news headlines spotlighting how deepfake AI-generated photo and video scams are driving rampant misinformation and wreaking havoc across digital, cultural, workplace, political and other societal frameworks, solutions are emerging to combat AI-driven misinformation and fraud before people fall victim to scams.
One AI disruptor transforming the fight against AI fraud is BitMind—an AI deepfake detection authority that offers a suite of free apps and tools that instantly identify and flag AI-generated images before you fall victim.
Built by a team of AI engineers hailing from leading tech companies like Amazon, Poshmark, NEAR, and Ledgersafe, BitMind’s instant detection of deepfakes helps uphold the credibility of the media, guaranteeing the authenticity of the information we use. A strong deepfake detection enhances digital interactions, supports better decision making and strengthens the integrity of the modern digital world—serving to protect reputations, shield finances and maintain trust for celebrities, politicians, public figures … and everyone else.
For both B2C and B2B use, these 5 BitMind tools are free and accessible to anyone:
AI Detector App: A simple web page where users can drag-and-drop suspicious images for fast deepfake detection results;
Chrome Extension: Flags AI-created content in real time while browsing;
X Bot: Verifies if images on X/Twitter are real or AI-generated;
Discord Bot: Verifies if images are real or AI-generated via its Discord Integration;
AI or Not Game: Fun Telegram bot that tests your ability to distinguish between AI-generated and human-created images.
“Recognizing the need to integrate deepfake detection into everyday technology use, our applications fit seamlessly into users’ lives,” notes Ken Miyachi, BitMind CEO. “For example, the BitMind Detection App is a user-friendly application that allows individuals to upload images and quickly assess the likelihood of them being real or synthetic. Additionally, the Browser Extension enhances online security by analyzing images on web pages in real time and providing immediate feedback on their authenticity through our subnet validators. These tools are designed to empower users, enabling them to navigate digital spaces with confidence and security.”
As the world’s first decentralized Deepfake Detection System, BitMind is an open-source technology that enables developers to easily integrate the technology into their existing platforms to provide accurate real-time detection of deepfakes.
“Deepfake technology has emerged as both a marvel and a menace,” continued Miyachi. “With the capacity to create synthetic media that closely mimics reality, deepfakes present unprecedented challenges in privacy, security, and information integrity. Responding to these challenges, we introduced the BitMind Subnet, a breakthrough on the Bittensor network, dedicated to the detection and mitigation of deepfakes.”
According to Miyachi, here are key reasons why BitMind technology is a game changer:
The BitMind Subnet represents a pivotal advancement in the fight against AI-generated misinformation. Operating on a decentralized AI platform, this deepfake detection system employs sophisticated AI models to accurately distinguish between real and manipulated content, enhancing the security of digital media and preserving essential trust in digital interactions.
The BitMind Subnet is equipped with advanced detection algorithms that utilize both generative and discriminative AI technologies to provide a robust mechanism for identifying deepfakes.
BitMind employs cutting-edge techniques, including Neighborhood Pixel Relationships, ensuring competitive accuracy in detection. The operation of the subnet is decentralized, with miners across the network running binary classifiers. This setup ensures that the detection processes are widespread and not confined to any centralized repository, enhancing both the reliability and integrity of the detection results.
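To make the idea of pixel-level detection concrete, here is a minimal, illustrative sketch in Python. It is not BitMind’s actual model or algorithm; it only shows how a simple statistic over neighboring pixels can feed a crude binary classifier. The images, the threshold, and the scoring function are all invented for the example.

```python
# Toy illustration only: natural photographs tend to show smooth relationships
# between neighboring pixels, while some synthetic or noisy imagery breaks
# those local correlations. A crude "classifier" can threshold the mean
# absolute difference between horizontally adjacent pixels.

def neighbor_diff_score(image):
    """Mean absolute difference between horizontally adjacent pixels
    (image is a list of rows of 0-255 grayscale values)."""
    total, count = 0, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count

def looks_synthetic(image, threshold=40):
    """Flag images whose neighboring pixels vary implausibly sharply."""
    return neighbor_diff_score(image) > threshold

# A smooth gradient (camera-like) vs. alternating extremes (noise-like).
smooth = [[x for x in range(0, 256, 16)] for _ in range(8)]
noisy = [[0 if (x + y) % 2 == 0 else 255 for x in range(16)] for y in range(8)]

print(looks_synthetic(smooth))  # False: adjacent pixels differ by only 16
print(looks_synthetic(noisy))   # True: adjacent pixels differ by 255
```

Real detectors learn far richer features than a single hand-set threshold, but the shape of the task is the same: extract pixel-relationship statistics, then make a binary real-vs-synthetic call.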
Community collaboration is a cornerstone of the BitMind Subnet, which actively encourages the community to contribute to its evolving codebase. By engaging with developers and researchers, the subnet is continuously improved and updated with the latest advancements in AI.
BitMind combines its extensive industry expertise, cutting-edge academic research, and a deep passion for technology. The team has a proven track record in AI, blockchain, and systems architecture, successfully leading tech projects and founding innovative companies.
What truly sets BitMind apart is their commitment to creating a safer, more transparent digital world where AI benefits humanity, driven by their passion for innovation, security and community engagement. Their technologies are expressly designed to safeguard the integrity of digital media and foster a trustworthy digital ecosystem.
In a modern world full of fake news and increasing cyber threats, BitMind’s innovations are paving the way for a future in which digital trust is not an option but a necessity. As the threats increase, the global community must be equipped to consume digital information in a reliable and authentic way in order to realize AI’s true potential safely and efficiently. For the Silo, Marsha Zorn.
Plastics that break down into particles as tiny as our DNA—small enough to be absorbed through our skin—are released into our environment at a rate of 82 million metric tons a year. These plastics, and the mix of chemicals they are made with, are now major contributors to disease, affecting the risk of afflictions ranging from cancer to hormonal issues.
Plastic pollution threatens everything from sea animals to human beings, a problem scientists, activists, business groups, and politicians are debating as they draft a global treaty to end plastic pollution. These negotiations have only highlighted the complexity of a threat that seems to pit economic growth and jobs against catastrophic damage to people and the planet.
Rapid growth in plastics didn’t begin until the 1950s, and since then, annual production has increased nearly 230-fold, according to two data sets processed by Our World in Data. More than 20 percent of plastic waste is mismanaged—ending up in our air, water, and soil.
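As a rough back-of-the-envelope check (the 70-year window is an assumption for illustration, not a figure from the article), a roughly 230-fold increase since the early 1950s implies compound growth of about 8 percent per year:

```python
# What does a ~230-fold rise over roughly 70 years imply annually?
years = 70
factor = 230
annual_growth = factor ** (1 / years) - 1  # compound annual growth rate
print(f"{annual_growth:.1%}")  # about 8.1% per year
```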
Inescapable Problem
While plastic doesn’t biodegrade—at least not in a reasonable time frame—it does break down into ever smaller particles. We may no longer see it, but plastic constantly accumulates in our environment. These microscopic bits, known as microplastics and nanoplastics, can enter our bodies through what we eat, drink, and breathe.
Microplastics measure five millimeters or less. Nanoplastics are an invisible fraction of that size, down to one billionth of a meter or around the size of DNA.
While microplastics can be as small as a hair, they remain visible. Nanoplastics, however, are impossible to see without a microscope. (Illustration by The Epoch Times, Shutterstock)
Plastic is a chemical product of petroleum, with other chemicals added to change its durability, elasticity, and color. The PlastChem Project has cataloged more than 16,000 chemicals used in plastics—4,200 of them considered highly hazardous, according to the initiative’s report issued in March.
The astounding level and types of plastics, many with unknown health effects, should be a wakeup call for everyone, says Erin Smith, vice president and head of plastic waste and business at the World Wildlife Fund (WWF).
“Plastic pollution is absolutely everywhere,” she said. “What’s hard right now is the body of science, trying to understand what the presence of plastic inside us means from a human health perspective, is still new.”
Ms. Smith said we may be waiting for the science to reveal the full scope of plastic’s biological effects, but one thing is certain: “We know it’s not good.”
Reproductive and Neurological Issues
Newer human health studies have shown plastic has far-reaching effects.
“The research is clear: Plastics cause disease, disability, and death. They cause premature birth, low birth weight, and stillbirth as well as leukemia, lymphoma, brain cancer, liver cancer, heart disease and stroke. Infants, children, pregnant women, and plastics workers are the people at greatest risk of these harms. These diseases result in annual economic costs of $1.2 trillion,” said Dr. Phil Landrigan, pediatrician and environmental health expert, in a Beyond Plastics news release in March.
Beyond Plastics, an advocacy group for policy change, warns that new research indicates plastic could be leading to an increased risk of heart disease, stroke, and death.
Successive studies have found microscopic plastic particles affect every system of our bodies and at every age.
Nearly 3,600 studies indexed by the Minderoo Foundation have detailed the effects of polymers and additives like plasticizers, flame retardants, bisphenols, and per- and polyfluoroalkyl substances. The vast majority indicate that plastics affect endocrine and metabolic function and the reproductive system, and contribute to mental, behavioral, and neurodevelopmental issues.
One study published in Environmental Science & Technology looked at plastic food packaging from five countries and found hormone-disrupting chemicals were common.
“The prevalence of estrogenic compounds in plastics raises health concerns due to their potential to disrupt the endocrine system, which can, among others, result in developmental and reproductive issues, and an elevated risk of hormone-related cancers, such as breast and prostate cancer,” the authors noted.
Data mapped by Our World in Data shows national rates of per capita plastic pollution to the oceans. American individuals add about .01 kilograms (10 grams) of plastic waste to the world’s oceans each year. At 336,500,000 people today, that amounts to 3,311 tons or 7,418,555 pounds. (The Epoch Times)
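For readers who want to verify the caption’s arithmetic, here is a quick check using the stated population and per-capita rate. The conversion factors are standard; the caption’s “tons” figure appears to correspond to long tons (2,240 pounds), which matches to within rounding.

```python
# Sanity check of the caption's per-capita ocean plastic figures.
population = 336_500_000
per_capita_kg = 0.01                 # 10 grams per person per year

total_kg = population * per_capita_kg     # 3,365,000 kg
metric_tonnes = total_kg / 1_000          # 3,365 metric tonnes
pounds = total_kg * 2.2046226             # ~7.42 million lb
long_tons = pounds / 2_240                # ~3,312 long tons (caption: "3,311 tons")

print(round(metric_tonnes), round(pounds))
```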
The full scope of these chemical consequences is far from known. According to Minderoo, less than 30 percent of more than 1,500 plastics chemicals have been investigated for human health impacts. That includes the “substitution” chemicals used to replace additives that were restricted after being found problematic.
“All new plastic chemicals should be tested for safety before being introduced in consumer products, with ongoing post-introduction monitoring of their levels in human biospecimens and evaluation of health effects throughout the lives of individuals and across generations,” said professor Sarah Dunlop, Minderoo Foundation’s head of plastics and human health.
Absorbed Into Arteries and Skin
The relatively recent discovery that plastic particles can make their way into the human body through multiple routes has come with other unsettling insights. Microplastics and nanoplastics in human artery wall plaque were recently linked to a 350 percent increased risk of heart attack, stroke, and death.
Plastic pollution comes in all forms, from packaging and waste that clogs the Buckingham Canal in Chennai, India to plastic pellets from petrochemical companies that litter the ground in Ecaussinnes, Belgium. (R. SATISH BABU, Kenzo TRIBOUILLARD / AFP via Getty Images)
Published March 6 in the New England Journal of Medicine, the study followed 257 patients over 34 months. Among those involved in the study, 58.4 percent had polyethylene in carotid artery plaque and 12.1 percent had polyvinyl chloride.
Polyethylene is the most common plastic found in bottles and bags, including cereal box liners. Polyvinyl chloride, better known as PVC, is another common plastic, often used in medical and construction materials.
Besides entering through ingestion, polymers can also make their way into the bloodstream through our skin, according to another study published in April in Environment International. The findings, based on a human skin equivalent model, add to evidence suggesting that as plastics break down, it may be impossible for us to avoid absorbing them. Microscopic plastic has been found in our soil, water supply, air, and arctic ice.
Sweaty skin was found to be especially prone to absorbing the particles.
Once inside the body, plastic can mimic hormones, collect in arteries, and contribute to one of the most common disease pathologies today—an imbalance of free radicals and antioxidants known as oxidative stress.
Dr. Bradley Bale, a heart attack and stroke prevention specialist and co-author of “Beat The Heart Attack Gene,” says there’s plenty of evidence that plastic is causing oxidative stress.
“Plastics are ubiquitous on planet Earth,” Dr. Bale said. “You’re crazy to think you can eliminate your exposure to that. It would be next to impossible. But we can look at other issues that cause oxidative stress.”
Data processed by Our World in Data shows the increase in plastic production in metric tonnes. (Illustration by The Epoch Times, Shutterstock)
Those other issues, including poor diet and other toxic exposures, may be resolved through lifestyle approaches, supplements, or avoidance.
Dr. Bale suspects future nanoplastics research will reveal a relationship between plastics exposure and early death, dementia, cancer, diabetes, and any disease impacted by oxidative stress.
How to Stop the Plastic Onslaught
Since cleaning up plastic is nearly impossible once it breaks down, advocacy groups are pushing for legislation that would reduce single-use products such as food wrappers, bottles, takeout containers, and bags—some of the most prolific and problematic plastics.
The United Nations Environment Programme, a global environmental decision-making body with representatives from all UN member states, decided in March 2022 that the plastics issue needed a coordinated response. It committed to fast-tracking a treaty meant to address the world’s growing plastic problem.
However, after holding the fourth of five sessions in late April in Canada, the group still hasn’t decided whether to identify problematic plastics or call for new plastic to be phased out or scaled back. The final meeting begins in late November with a treaty expected in 2025.
(Left) The secretariat of the Intergovernmental Negotiating Committee (INC) to Develop an International Legally Binding Instrument on Plastic Pollution consults on the dais during the closing plenary in Ottawa on April 30, 2024; (Center) Members of Greenpeace hold up placards during the discussions in Ottawa, Canada, on April 23, 2024; (Right) Pro-plastic messaging seen at hotels in Ottawa during the UN INC meetings. (IISD-ENB/Kiara Worth, DAVE CHAN/AFP via Getty Images)
Meanwhile, U.S. lawmakers are on a third attempt to gain Congressional consideration of the Break Free From Plastic Pollution Act. First introduced in 2020, it remains stuck in committee. Among the act’s proposals are reducing and banning some single-use plastics, creating grants for reusable and refillable products, requiring corporations to take responsibility for plastic pollution, and temporarily banning new plastic facilities until protections are established.
The Economics of Plastics
Plastics are important for many businesses and the plastics industry itself is significant and influential. However, plastics aren’t as profitable as one may expect. New plastic facilities often get subsidies and tax breaks that make plastics artificially cheap to produce. These financial supports have increased substantially in the past three years.
In addition to direct fossil fuel subsidies, the plastics and petrochemical industries benefit from grants, tax breaks, and incentives. Because of a lack of transparency, exact figures on subsidies are hard to come by, according to the Center for International Environmental Law. The group is urging the UN to ban certain subsidies, including any that would reduce the price of raw goods used to make plastic.
Some organizations question whether these incentives are beneficial to local economies and taxpayers as a whole.
The Environmental Integrity Project issued a report in March that found 64 percent of 50 plastic plants built or expanded in the United States since 2012 received nearly $9 billion in state and local subsidies. Problems were common, including air pollution permit violations at 42 plants and more than 1,200 accidents such as fires and explosions. Modified state permits at 15 plants allowed additional emissions that were often detected beyond the plants’ property lines.
A case study report published June 2023 by the Ohio River Valley Institute examined the $6 billion Shell facility built in Beaver County, Pennsylvania, to produce plastic pellets.
“Since the project’s inception, industry executives and government officials alike have argued that it would spur local economic growth and renewed business investment. Yet prosperity still has not arrived. Beaver County has seen a declining population, zero growth in GDP, zero growth in jobs, lackluster progress in reducing poverty, and zero growth in businesses—even when factoring in all the temporary construction workers at the site,” the report says.
The Shell Pennsylvania Petrochemicals Complex makes plastic from “cracking” natural gas in Beaver County, near Pittsburgh, PA. (Mark Dixon/Flickr)
Conflicted Solutions for a Plastic World
The Plastics Industry Association argues that plastic “makes the world a better place”—language it wants in the plastics treaty.
The association represents more than one million workers throughout the entire supply chain. At $468 billion, plastics are the sixth-largest U.S. manufacturing industry, according to the association, which did not respond to media requests for an interview.
David Zaruk, a communications professor in Belgium with a doctorate in philosophy, said opposition to plastic is largely an attack on the fossil fuel industry—part of a larger “anti-capitalist political agenda.” The value of plastic to society, he said, is frequently understated.
He pointed to a 2024 study published in Environmental Science and Technology that concludes plastic is far more “sustainable” with lower greenhouse gas emissions than alternatives like paper, glass, and aluminum—many of which it was designed to replace. Arguments often overlook the environmental impact of alternatives, the study notes, and in some cases, there are no substitutions for plastic.
“This isn’t a recent revelation either. Academic scientists have said for years that plastic serves essential functions. Speaking specifically of short-lived plastic uses, a pair of supply chain experts argued in 2019 that ’some plastic packaging is necessary to prevent food waste and protect the environment.’ By the way, food waste produces roughly double the greenhouse emissions of plastic production,” Mr. Zaruk wrote recently on the Substack blog, Firebreak.
The Plastics Industry Association heavily promotes recycling and biodegradable plastics but critics say there are inherent problems with both.
Only 4 percent of plastic is recycled in the United States, while an equal amount ends up in rivers, oceans, and soil—breaking down into microplastics and nanoplastics that experts believe will persist for centuries.
The U.S. Plastics Pact—a collaboration of more than 100 businesses, nonprofit organizations, government agencies, and academic institutions initiated by The Recycling Partnership and World Wildlife Fund—identified 11 problematic plastics that its members aim to voluntarily eliminate by 2025. Members include major plastics users, and the targeted products are all finished items or components that either aren’t recycled or cause problems in the recycling system and could be eliminated or replaced.
While some major companies support the pact, the Plastics Industry Association has taken a dim view of it, describing it as an attempt to “tell others how to run their businesses by restricting their choices.”
The association says the best way to increase recycling is through education and innovation.
Recycled Mystery Chemicals
Unfortunately, recycling isn’t a perfect solution to the plastic problem. Recycled plastics present additional hazards because they are made from a blend of products and a more uncertain chemical makeup, according to Therese Karlsson, science advisor for the International Pollutants Elimination Network, a global consortium of public interest groups.
“We’ve looked a lot at recycled plastics. There you have a lot of different plastic materials that you don’t know what they contain and you combine that into a new plastic material that you have even less information about what it contains,” Ms. Karlsson said. “As a consumer, you can’t look at a piece of plastic to figure out if it’s safe or not. We just don’t know, but we know a lot of the chemicals used in plastic are toxic.”
An IPEN investigation in April found plastic pellets recovered from recycling facilities in 24 countries had hundreds of toxic chemicals—including pesticides, industrial chemicals, pharmaceuticals, dyes, and fragrances.
“For our recycling technology, it just doesn’t work, and a lot of that ends up in landfills anyway,” said Ms. Smith from the WWF. “It shouldn’t require a decoder ring to decide what goes in that blue bin because everything should be designed for that system.” For the Silo, Amy Denny.
Little Changes Make a Big Difference
In the absence of government intervention, Ms. Smith said there are some easy tips consumers can take to limit their own plastic exposure:
Shop with reusable shopping bags.
Don’t use plastic in the microwave or dishwasher because heat can release additional polymers.
Buy metal or glass snack containers to replace sealable plastic bags.
Use beeswax cloth in place of plastic wrap.
Replace dryer sheets with wool balls.
Carry a refillable cup for water and coffee.
Consider reusable trash bags.
Use and carry metal straws, stir sticks, and/or reusable cutlery.
Don’t litter, and pick up trash you find outdoors.
Whether it’s a cozy post-coital cuddle or the serene satisfaction following a solo session, the afterglow is that unmistakable halo of happiness we carry long after the climax. Far from being a fleeting sensation, the afterglow is a scientifically grounded phenomenon driven by hormones and emotional connectivity.
Our friends at LELO share everything you need to know about the science of the afterglow and why it deserves a central place in the conversation about pleasure, intimacy, and well-being.
The afterglow is the warm, contented feeling that lingers after sexual activity or orgasm. It’s that magical moment when you feel deeply connected to your partner or yourself. This glowing sensation can last minutes, hours, or even days, influencing how you approach your relationships, work, and personal life with a rejuvenated sense of calm and joy.
The Science Behind the Glow
Orgasm triggers the release of a powerful cocktail of hormones. Oxytocin, often called the “love hormone,” strengthens trust and bonding, especially during partnered intimacy. Dopamine delivers an intense rush of pleasure, while serotonin enhances relaxation and happiness. These hormones are universal, playing the same role whether the experience is shared with a partner or savored solo.
The parasympathetic nervous system also kicks in post-orgasm, reducing stress and fostering a profound sense of well-being. This physiological response underscores that pleasure isn’t just about feeling good in the moment; it’s about nurturing your mind and body in meaningful ways.
The Benefits of the Afterglow
Numerous studies show that the effects of post-orgasmic bliss can persist for up to 48 hours. During this time, the afterglow fosters emotional connection in relationships, boosts mood, and even strengthens immune function. It can also enhance self-esteem, helping you approach your day with confidence and optimism.
Partnered Pleasure
In relationships, the afterglow is an emotional glue, reinforcing bonds and increasing satisfaction. By prolonging this shared connection, couples can navigate conflicts more effectively and deepen their intimacy. The key is to be present and savor the moment together through touch, eye contact, or quiet conversation.
Solo Afterglow
Self-pleasure offers the same hormonal and emotional rewards as partnered sex, making it a powerful form of self-care. Beyond physical release, it’s an act of self-discovery and affirmation, promoting body positivity and emotional recharge. The afterglow from solo sessions is a reminder that connecting with yourself is just as vital as connecting with others.
Extending The Glow
To fully savor the magic of the afterglow, consider these tips:
Prioritize Connection: For couples, linger in the moment by sharing a cuddle, eye contact, or a few whispered words. For solo sessions, take time to appreciate your body and the joy it brings.
Set the Scene: Create an environment that invites relaxation. Soft lighting, calming music, or a warm blanket can extend the moment’s serenity.
Practice Gratitude: Whether with a partner or alone, reflect on the experience and express gratitude – for your body, your partner, or simply the pleasure itself.
Embrace Aftercare: Take a slow, mindful approach to aftercare. A shared bath, journaling about your experience, or meditative breathing can amplify the benefits.
The afterglow is more than a momentary sensation; it’s a testament to the beauty of connection, intimacy, and self-awareness. By leaning into these moments, you embrace the joy of pleasure and unlock a deeper understanding of your emotional and physical needs.
So, next time you bask in that warm, lingering glow, let it remind you of the transformative power of pleasure to nourish your body, mind, and soul. Stay glowing. For the Silo, Emilie Melloni Quemar/Lelo.
About LELO
LELO is not “just a sex toy brand”; it’s a self-care movement aimed at those who know that satisfaction transcends gender, sexual orientation, race, and age. We offer the experience of ecstasy without shame and the pleasure of discovering all the wonders of one’s body, giving our customers the confidence that leads to a fulfilled intimate life. LELOi AB is the Swedish company behind LELO, with offices extending from Stockholm to San Jose and from Sydney to Shanghai.
Seasoned Canadian Actor Randy Thomas Put Forward for Canadian Screen Award Nomination for Best Supporting Performance in a Drama.
Toronto, ON – Seasoned actor Randy Thomas has been officially put forward for nomination by Brainpower Studio for the Academy of Canadian Cinema & Television’s prestigious Canadian Screen Award for Best Supporting Performance in a Drama for his riveting role in The Jane Mysteries: Murder at Moseby starring Jodie Sweetin and Stephen Huszar.
Known for his commanding presence as stoic corporate executives and his undeniable charm as comedic father figures, Thomas stunned audiences with his transformative performance as an emotionally distraught and lonely antagonist in the Hallmark mystery franchise.
His portrayal—layered with heartbreak, volatility, and an unexpected path to redemption—took fans by surprise, further solidifying his range and talent as a seasoned performer.
The industry has taken notice, with Thomas’ nomination consideration reflecting the powerful impact of his performance.
Randy is a special actor. He has Leading Man charisma like George Clooney but with a unique knack for comedy. We are incredibly grateful to Director Marco Deufemia and Brainpower Studio CEO Beth Stevenson for seeing the depth of Randy’s talent and entrusting him with such a complex role. The fact that they put him forward for nomination tells us that Randy far exceeded their expectations. Let’s hope the Academy recognizes his extraordinary work as well. A win in this category would be a career highlight, and we anticipate this nomination will open even more doors for him across Canada and the U.S.
As the industry eagerly awaits the final nominations, Randy Thomas’ performance in Murder at Moseby stands as a testament to the rich and evolving talent pool within Canadian entertainment. With growing anticipation from his peers and fans alike, one thing is certain—this should be the beginning of a thrilling new chapter in Randy Thomas’ career that’s long overdue. For The Silo, Sandy Martinez. 514-286-6001
Hip-Hop is a culture, it’s a lifestyle, it’s an artistic expression and yes, it’s jewelry.
Since its inception, Hip-Hop has always been edgy, outside the box, and ready to make a statement at all times, not just musically but also in a fashion sense.
Rose gold, also known as red gold or pink gold, made a big comeback in the world of Hip-Hop a few years ago. But did you know rose gold was popular at the beginning of the nineteenth century in Russia, earning it the name “Russian gold”? That term is no longer used. This type of gold is also often used in making musical instruments.
Christian Jennings, one of the top flute players in the world, chooses to play a customized flute made out of rose gold.
Companies such as King Ice are selling Hip-Hop inspired Rose gold jewelry, stating that it is reappearing due to artists such as Rick Ross and Kanye West being seen wearing items made from this gold and copper alloy.
Drake can even be seen wearing an extremely expensive rose gold watch on the cover of his Grammy-winning sophomore album Take Care.
Whether you’re a “Hip-Hop head”, an occasional listener, or never venture into that world at all, Rose gold is becoming a statement piece and doesn’t seem to be disappearing anytime in the near future. For the Silo, Brent Flicks.
Stress is ever present in modern society; both personal and workplace stress contribute to the well-documented link between stress and chronic conditions. Data from Statistics Canada’s National Population Health Survey demonstrate that personal stress is predictive of the development of a chronic health condition over the following four years (Statistics Canada, 2003). The long-term impact of these chronic conditions can include significant activity limitation from heart attack, diabetes, migraine, arthritis, or back problems. Even more daunting is the higher predicted risk of death for individuals suffering from cancer, bronchitis/emphysema, heart disease, or diabetes.
The practice of forest bathing itself is not a new concept.
Prior to the industrial revolution being “in nature” was part of everyday life. The Japanese term Shinrin-yoku meaning “taking in the forest atmosphere or forest bathing” was officially coined by the Japanese Ministry of Agriculture, Forestry, and Fisheries in 1982. (Park et al. 2010)
This novel practice of bathing in nature offers a wide variety of health benefits from which individuals in modern society stand to gain. With an increasing number of individuals living in urban settings, exposure to nature is diminishing.
Field studies performed in Japan measured salivary cortisol levels (cortisol is more commonly known as the “stress hormone”) in university students.
The students were divided into two groups, one spending a day in a forest setting and the other in a city setting. Lower levels of stress hormone, as well as lower blood pressure and pulse rates, were found in the individuals in the forest setting. (Park et al. 2010)
Further evidence documents the stress reduction resulting from forest bathing through improved immune function with exposure to the natural environment. Given that immune function is key in the prevention of chronic diseases, this evidence is exciting. Natural killer cells, as they are ingeniously named, are cells within the immune system that kill tumor cells or virus-infected cells by releasing enzymes that break the cells down. In research studies, natural killer cell activity has been found to remain elevated for seven days after a forest bathing trip (Qing, 2010). This seven-day window of improved immune function is great news for the weekend warrior in all of us.
Many of us who live in North America are blessed with forests just outside our doorsteps. That said, it doesn’t mean we always take advantage of them: between commuting to work and family and social commitments, going from the house to the car may be the norm. For the Silo, Ashley Beeton.
References
Park, B.J., Tsunetsugu, Y., Kasetani, T., Kagawa, T., & Miyazaki, Y. (2010) The physiological effects of Shinrin-yoku (taking in the forest atmosphere or forest bathing): evidence from field experiments in 24 forests across Japan. Environ Health Prev Med, 15,18–26.
It’s that time of the year again…time for a perennial favorite read. Why favorite? Because we all want to know how much longer Winter will last. At this point on the calendar, at the second day of February, it feels like warm days are lost forever. But there is always hope. Hope in a critter and a shadow. Let’s begin. Again.
Maybe Groundhog Day can become a National or Provincial Stat Holiday because February 2nd isn’t officially known as Groundhog Day. Technically it isn’t a National Holiday. It isn’t a Provincial Holiday. [Is Quebec the only province with a Provincial Holiday? CP]
But maybe it should be.
Groundhog Day isn’t an exclusive celebration that targets a specific demographic such as Family Day. It isn’t religiously or politically motivated. It doesn’t specify Muslim, Buddhist, Marxist, agnostic or atheist beliefs. It is inclusive, quirky, wacky and fun. There is no need to worry about political incorrectness.
Maybe Groundhog Day can become a rallying point for Ontarians because in many ways they are like us: Groundhogs are robust creatures. They handle a long winter with style. Groundhogs might be cute but they are also tough!
Maybe the Groundhog can become Canada’s national animal.
Does anyone remember the politician who wanted to make the polar bear our national animal? Most of us aren’t likely to run into polar bears. It’s that old adage, “Out of sight, out of mind,” and since we’re more likely to see a groundhog and associate with a groundhog, it is an ideal choice. Incidentally, Canada’s national animal is the beaver, another obscure animal that most of us have never seen.
Maybe Groundhog Day is spiritual after all.
If a holiday ever needed to be justified on the basis of spirituality or community, consider the following short list:
Mysticism (Shadow casting or lack thereof = long-range weather forecast)
Fatalism (Let everyone believe that an animal can come out of the ground on a specific day and tell us how the next six weeks will turn out)
Anthropomorphism (Can groundhogs really see? Can they talk? How do we know if they have seen their shadow?)
Human/Animal Communication or Telepathy (Groundhog interpreters? Groundhog whisperers? Are they specific to Wiarton and Punxsutawney?) Feature image: Punxsutawney Phil. For the Silo, Rick Fess.