Category Archives: Sci-Tech

New Audiophile Equipment Guide Is a Comprehensive Ten-Volume Series

Boulder, Colorado, March 2025 – PS Audio announces the release of The Audiophile’s Guide, a comprehensive 10-volume series on every aspect of audio system setup, equipment selection, analog and digital technology, speaker placement, room acoustics, and other topics related to getting the most musical enjoyment from an audio system. Written by PS Audio CEO Paul McGowan, it’s the most complete body of high-end audio knowledge available anywhere.

The Audiophile’s Guide hardcover book series is filled with clear, practical wisdom and real-life examples that guide readers toward getting the most from their audio systems, regardless of cost or complexity. The books include how-to tips, step-by-step instructions, and real-world stories and examples, including actual listening rooms and systems. Paul McGowan noted, “Think of it as sitting down with a knowledgeable friend who’s sharing hard-won wisdom about how to make music come alive in your home.”

The 10 books in the series include:

The Stereo – learn the essential techniques that transform good systems into great ones, including speaker placement, system matching, developing critical listening skills, and more.

The Loudspeaker – even the world’s finest loudspeakers will not perform to their potential without proper setup. Master the techniques that help speakers disappear, leaving the music to float in three-dimensional space.

Analog Audio – navigate the world of turntables, phono cartridges, preamps and power amplifiers, and vacuum tubes, and find out how analog sound continues to offer an extraordinary listening experience.

Digital Audio – from sampling an audio signal to reconstructing it in high-resolution sound, this volume explains and demystifies the digital audio signal path and the various technologies involved in achieving ultimate digital sound quality.

Vinyl – discover the secrets behind achieving the full potential of analog playback in this volume that covers every aspect of turntable setup, cartridge alignment, and phono stage optimization.

The Listening Room – the space in which we listen is a critical yet often overlooked aspect of musical enjoyment. This volume tells how to transform even challenging spaces into ideal listening environments.

The Subwoofer – explore the world of deep bass reproduction, its impact on music and movies, and how to achieve the best low-frequency performance in any listening room.

Headphones – learn about dynamic, planar magnetic, electrostatic, closed-back and open-air models and more, and how headphones can create an intimate connection to your favorite music.

Home Theater – enjoy movies and TV with the thrilling, immersive sound that a great multichannel audio setup can deliver. The book explains how to bring the cinema experience home.

The Collection – this volume distills the knowledge of the preceding books, along with everything learned from Paul McGowan’s more than 50 years of experience in audio. Like the other volumes in the series, it’s written in an accessible style yet filled with technical depth, providing the ultimate roadmap to audio excellence and musical magic.

Volumes one through nine of The Audiophile’s Guide are available for a suggested retail price of $39.99 USD each, with Volume 10, The Collection, offered at $49.99 USD. In addition, The Audiophile’s Guide Limited Run Collectors’ Edition is available as a deluxe series with case binding, with the books presented in a custom-made slipcase. Each Collectors’ Edition set is available at $499.00 USD with complimentary worldwide shipping.

About PS Audio
Celebrating 50 years of bringing music to life, PS Audio has earned a worldwide reputation for excellence in manufacturing innovative, high-value, leading-edge audio products. Located in Boulder, Colorado at the foothills of the Rocky Mountains, PS Audio’s staff of talented designers, engineers, production and support people build each product to deliver extraordinary performance and musical satisfaction. The company’s wide range of award-winning products includes the all-in-one Sprout100 integrated amplifier, audio components, power regenerators and power conditioners.
 
www.psaudio.com

For the Silo, Frank Doris.

A Geek’s Guide To Microfiber Towels

Microfibers were invented by Japanese textile company Toray in 1970, but the technology wasn’t used for cleaning until the late 1980s.

The key, as the name suggests, is in the fiber: Each strand is really tiny—100 times finer than human hair—which allows the strands to be packed densely on a towel. That creates a lot of surface area to absorb water and pick up dust and dirt. Plus, microfibers have a positive electric charge when dry (you might notice the static cling on your towels), which further helps the towel to pick up and hold dirt. “They tend to trap the dirt in but not allow it to re-scratch the finish,” explains professional concours detailer Tim McNair, who ditched old T-shirts and terry cloths for microfibers back in the 1990s.

These days, the little towels are ubiquitous and relatively cheap, but in order to perform wonders consistently, they need to be treated with respect. Below, a miniature guide to microfibers.

Care for Your Towels: Dos and Don’ts

“They’re just towels,” you might say to yourself. But if you want them to last and retain their effectiveness, microfiber towels need more care than your shop rags:

DO: Keep your microfiber towels together in a clean storage space like a Rubbermaid container. They absorb dirt so readily that a carelessly stored one will be dirty before you even use it.

DON’T: Keep towels that are dropped on the ground. It’s hard to get that gunk out and it will scratch your paint.

DO: Reuse your towels. “I have towels that have lasted 15 years,” says McNair. That said, he recommends keeping track of how they’re used. “I’ll use a general-purpose microfiber to clean an interior or two, and I’ll take them home and wash them. After about two, three washings, it starts to fade and get funky, and then that becomes the towel that does lower rockers. Then the lower rocker towel becomes the engine towel. After engines, it gets thrown away.”

DON’T: Wash your microfibers with scented detergent, which can damage the fibers and make them less effective at trapping dirt. OxiClean works great, according to McNair.

DO: Separate your microfibers from other laundry. “Make sure that you keep the really good stuff with the really good stuff and the filthy stuff with the filthy stuff,” says McNair.

DO: Air-dry your towels. Heat from the dryer can damage the delicate fibers. If you’re in a rush, use the dryer’s lowest setting.

How Do You Know Microfiber Is Split Or Not?

A widespread misunderstanding is that you can “feel” if a microfiber towel is made from split microfiber or not by stroking it with your hand. This is false!

The theory is that if the towel feels like it “hooks” onto tiny imperfections on dry, unmoisturized hands, this is because the fibers are split and they microscopically grab your skin. Although this is partially true, you cannot feel split microfiber “hook” onto your skin. As our friends at classiccarmaintenance.com explain, “these microscopic hooks are way too small to feel, but do generate a general surface resistance called ‘grab.’” Yet this is not the “individual” hooking sensation you feel when you touch most microfiber towels. It’s the tiny loops in loop-woven microfiber that are large enough to actually feel grabbing imperfections on your hands (minute skin scales).

Try it for yourself: gently stroke a loop-weave microfiber towel of any kind, split or not. If your hands are dry and unmoisturized, you will feel the typical “hooking” sensation most people hate. It’s simply the loops that catch around the scales on your skin like mini lassos. Take a picture of the microfiber material with your smartphone, zoom in and you can clearly see the loops.

Now try stroking a cut microfiber towel that is not loop-woven, split or not, and it will not give that awful hooking sensation. If you take a picture of this material, you will see a furry surface without those loops. Because there are no loops, it won’t “hook”.

Now you know the truth: it’s the loops that latch onto your skin when you touch a microfiber towel, regardless of whether the towel is split microfiber or not. Tightly woven microfiber towels without pile (e.g. glass towels) can also have the “hooking” effect, caused by the way their fibers are woven, but it is less pronounced than in loop-weave towels.

Another misunderstanding is that a towel that is made of non-split microfiber will “push away” water and is non-absorbent. This also is not true!

Although a non-split microfiber strand is not absorbent, water is still caught in between the fibers. You can do the test: submerge a 100% polyester fleece garment (check the label), which is always non-split fiber, in a bucket of water and take it out after about 10 seconds. Wring it out over an empty bucket and you’ll see that it holds quite a bit of water, meaning it is absorbent.

So, another myth is busted: whether a towel is split microfiber can’t be determined simply by testing if it holds water. You can, however, test how much water it holds. Compare it to a towel of similar dry weight that is known to be split 70/30 microfiber: submerge both in a bucket of water. If they hold about the same amount of water, they are both split microfiber. If the 70/30 towel holds more than twice as much water, the test towel is more than likely non-split material.

Tim’s Towels

The budget pack of microfiber towels will serve you fine, but if you want to go down the detailing rabbit hole, there’s a dizzying variety of towel types that will help you do specific jobs more effectively. Here’s what McNair recommends:

General Use: German janitorial supply company Unger’s towels are “the most durable things I’ve ever seen,” says McNair.

Drying: Towels with a big heavy nap are great for drying a wet car (but not so great for taking off polish).

Griot’s Extra-Large Edgeless Drying Towel, $45 USD / $65.09 CAD (Griot’s Garage)

Polishing: Larger edgeless towels are good at picking up polishing compound residue without scratching the paint.

Wheels and other greasy areas: This roll of 75 microfiber towels from Walmart is perfect for down-and-dirty cleaning, like wire wheels. When your towel gets too dirty, throw it away and rip a new one off the roll.

Glass: There are specific two-sided towels for glass cleaning. One side has a thick nap that is good for getting bugs and gunk off the windshield. The other side has no nap—just a smooth nylon finish—that’s good for a streak-free final wipe down.

Griot’s Dual Weave Glass Towels, set of 4, $20 USD / $28.93 CAD (Griot’s Garage)

The Theory of Intrinsic Energy

Donald H. MacAdam

Abstract

Gravitational action at a distance is non-Newtonian and independent of mass, but is proportional to intrinsic energy, distance, and time. Electrical action at a distance is proportional to intrinsic energy, distance, and time.

The conventional assumption that all energy is kinetic and proportional to velocity and mass has resulted in an absence of mechanisms to explain important phenomena such as stellar rotation curves, mass increase with increase in velocity, constant photon velocity, and the levitation and suspension of superconducting disks.

In addition, there is no explanation for the existence of the fine structure constant, no explanation for the value of the proton-electron mass ratio, no method to derive the spectral series of atoms larger than hydrogen, and no definitive proof or disproof of cosmic inflation.

All of the above issues are resolved by the existence of intrinsic energy.

Table of contents

  • Part One “Gravitation and the fine structure constant” derives the fine structure constant, the proton-electron mass ratio, and the mechanisms of non-Newtonian gravitation including the precession rate of mercury’s perihelion and stellar rotation curves.
  • Part Two “Structure and chirality” describes the structure of particles and the chirality meshing interactions that mediate action at a distance between particles and gravitons (gravitation) and particles and quantons (electromagnetism) and describes the properties of photons (with the mechanism of diffraction and constant photon velocity).
  • Part Three “Nuclear magnetic resonance” is a general derivation of the gyromagnetic ratios and nuclear magnetic moments of isotopes.
  • Part Four “Particle acceleration” derives the mechanism for the increase in mass (and mass-energy) in particle acceleration.
  • Part Five “Atomic Spectra” reformulates the Rydberg equations for the spectral series of hydrogen, derives the spectral series of helium, lithium, beryllium, and boron, and explains the process to build a table of the spectral series for any elemental atom.
  • Part Six “Cosmology” disproves cosmic inflation.
  • Part Seven “Magnetic levitation and suspension” quantitatively explains the levitation of pyrolytic carbon, and the levitation, suspension and pinning of superconducting disks.

Part One

Gravitation and the fine structure constant

“That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has … any competent faculty of thinking can ever fall into it.”1

Intrinsic energy is independent of mass and velocity. Intrinsic energy is the inherent energy of particles such as the proton and electron. Neutrons are composite particles composed of protons, electrons, and binding energy. Atoms, composed of protons, neutrons, and electrons, are the substance of larger three-dimensional physical entities, from molecules to galaxies.

Gravitation, electromagnetism, and other action at a distance phenomena are mediated by gravitons, quantons and neutrinos. Gravitons, quantons and neutrinos are quanta that have a discrete amount of intrinsic energy and are emitted by particles in one direction at a time and absorbed by particles from one direction at a time. Emission-absorption events can be chirality meshing interactions that produce accelerations or achiral interactions that do not produce accelerations. Chirality meshing absorption of gravitons produces attractive accelerations, chirality meshing absorption of quantons produces either attractive or repulsive accelerations, and achiral absorption of neutrinos does not produce accelerations. The word neutrino is burdened with non-physical associations, thus achiral quanta are henceforth called neutral flux.

A single chirality meshing interaction produces a deflection (a change in position), but a series of chirality meshing interactions produces acceleration (serial deflections). A single deflection in the direction of existing motion produces a small finite positive acceleration (and inertia) and a single deflection in the direction opposite existing motion produces a small finite negative acceleration (and inertia).

There are two fundamental differences between the mechanisms of Newtonian gravitation and discrete gravitation. The first is the Newtonian probability two particles will gravitationally interact is 100% but the discrete probability two particles will gravitationally interact is significantly less. The second difference is the treatment of force. In Newtonian physics a gravitational force between objects always exists, the force is infinitesimal and continuous, and the strength of the force is inversely proportional to the square of the separation distance. In discrete physics the existence of a gravitational force is dependent on the orientations of the particles of which objects are composed, the force is discrete and discontinuous, and the number of interactions is inversely proportional to the square of the separation distance. While there are considerable differences in mechanisms, in many phenomena the solutions of Newtonian and discrete gravitational equations are nearly identical.

There are similar fundamental differences between mechanisms of electromagnetic phenomena and in many cases the solutions of infinitesimal and discrete equations are nearly identical.

A particle emits gravitons and quantons at a rate proportional to particle intrinsic energy. A particle absorbs gravitons and quantons, subject to availability, at a maximum rate proportional to particle intrinsic energy. Each graviton or quanton emission event reduces the intrinsic energy of the particle and each graviton or quanton absorption event increases the intrinsic energy of the particle. Because graviton and quanton emission events continually occur but graviton and quanton absorption events are dependent on availability, these mechanisms collectively reduce the intrinsic energy of particles.

Only particles in nuclear reactions or undergoing radioactive disintegration emit neutral flux but in the solar system all particles absorb all available neutral flux.

In the solar system, discrete gravitational interactions mediate orbital phenomena and, for objects in a stable orbit the intrinsic energy loss due to the emission-absorption of gravitons is balanced by the absorption of intrinsic energy in the form of solar neutral flux.

Within the solar system, particle absorption of solar neutral flux (passing through a unit area of a spherical shell centered on the sun) adds intrinsic energy at a rate proportional to the inverse square of orbital distance, and over a relatively short period of time, the graviton, quanton, and neutral flux emission-absorption processes achieve Stable Balance resulting in constant intrinsic energy for particles of the same type at the same orbital distance, with particle intrinsic energies higher the closer to the sun and lower the further from the sun.

The process of Stable Balance is bidirectional.

If a high energy body consisting of high energy particles is captured by the solar gravitational field and enters into solar orbit at the orbital distance of earth, the higher particle intrinsic energies will result in an excess of intrinsic energy emissions compared to intrinsic energy absorptions at that orbital distance, and the intrinsic energy of the body will be reduced to bring it into Stable Balance.

If, on the other hand, a low energy body consisting of low energy particles is captured by the solar gravitational field and enters into solar orbit at the orbital distance of earth, the lower particle intrinsic energies will result in an excess of intrinsic energy absorptions at that orbital distance compared to the intrinsic energy emissions, and the intrinsic energy of the body will be increased to bring it into Stable Balance.

In an ideal two-body earth-sun system, a spherical and randomly symmetrical earth is in Stable Balance orbit about a spherical and randomly symmetrical sun. A randomly symmetrical body is composed of particles that collectively emit an equal intensity of gravitons (graviton flux) through a unit area on a spherical shell centered on the emitting body.

Unless otherwise stipulated, in this document references to the earth or sun assume they are part of an ideal two-body earth-sun system.

The gravitational intrinsic energy of earth is proportional to the gravitational intrinsic energy of the sun because total emissions of solar gravitons are proportional to the number of gravitons passing into or through earth as it continuously moves on a spherical shell centered on the sun (and also proportional to the volume of the spherical earth, to the cross-sectional area of the earth, to the diameter of the earth and to the radius of the earth).

Likewise, because the sun and the earth orbit about their mutual barycenter, the gravitational intrinsic energy of the sun is proportional to the gravitational intrinsic energy of the earth because total emissions of earthly gravitons are proportional to the number of gravitons passing into or through the sun as it continuously moves on a spherical shell centered on the earth (and also proportional to the volume of the spherical sun, to the cross-sectional area of the sun, to the diameter of the sun and to the radius of the sun).

We define the orbital distance of earth equal to 15E10 meters and note earth’s orbit in an ideal two-body system is circular. If additional planets are introduced, earth’s orbit will become elliptical and the radius of earth’s former circular orbit will be equal to the semi-major axis of the elliptical orbit.

We define the intrinsic photon velocity c equal to 3E8 m/s and equal in amplitude to the intrinsic constant Theta, which is non-denominated. We further define the elapsed time for a photon to travel 15E10 meters equal to 500 seconds.

The non-denominated intrinsic constant Psi, 1E-7, is equal in amplitude to the intrinsic magnetic constant denominated in units of Henry per meter.

Psi is also equal in amplitude to the 2014 CODATA vacuum magnetic permeability divided by 4π (after 2014, CODATA values for permittivity and permeability are defined and no longer reconciled to the speed of light); to half the electromagnetic force (units of Newton) between two straight ideal (constant diameter and homogeneous composition) parallel conductors with center-to-center distance of one meter and each carrying a current of one Ampere; and to the intrinsic voltage of a magnetically induced minimum amplitude current loop (3E8 electrons per second).

The intrinsic electric constant, the inverse of the product of the intrinsic magnetic constant and the square of the intrinsic photon velocity, is equal to the inverse of 9E9 and denominated in units of Farad per meter.
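These definitions can be checked numerically. The following Python sketch (variable names are ours, chosen for convenience) confirms the 500-second photon travel time and the stated value of the intrinsic electric constant:

```python
# Numeric check of the intrinsic constants defined above (values from the text).
C = 3e8          # intrinsic photon velocity, m/s; Theta is equal in amplitude
PSI = 1e-7       # intrinsic magnetic constant amplitude, H/m
D_EARTH = 15e10  # defined orbital distance of earth, m

print(D_EARTH / C)           # 500.0 seconds of photon travel time
epsilon = 1 / (PSI * C**2)   # intrinsic electric constant, F/m
print(epsilon, 1 / 9e9)      # both 1.111...e-10, the inverse of 9E9 as stated
```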

The Newtonian mass of earth, denominated in units of kilogram, is equal to 6E24, and equal in amplitude to the active gravitational mass of earth, denominated in units of Einstein (the unit of intrinsic energy).

The active gravitational mass is proportional to the number of gravitons emitted and the Newtonian mass is proportional to the number of gravitons absorbed. Every graviton absorbed contributes to the acceleration and inertia of the absorber, therefore the Newtonian mass is also the inertial mass.


We define the radius of earth, the square root of the ratio of the Newtonian inertial mass of earth divided by orbital distance, or the square root of the ratio of the active gravitational mass of earth divided by its orbital distance, equal to the square root of 4E13, 6.325E6, about 0.993 times the NASA volumetric radius of 6.371E6. Our somewhat smaller earth has a slightly higher density and a local gravitational constant equal to 10 m/s² at any point on its perfectly spherical surface.

We define the Gravitational constant at the orbital distance of earth, the ratio of the local gravitational constant of earth divided by its orbital distance, equal to the inverse of 15E9.

The unit kilogram is equal to the mass of 6E26 protons at the orbital distance of earth, and the proton mass equal to the inverse of 6E26.

The proton intrinsic energy at the orbital distance of earth is equal to the inverse of the product of the proton mass and the mass-energy factor delta (equal to 100). Within the solar system, the proton intrinsic energy increases at orbital distances closer to the sun and decreases at orbital distances further from the sun. Changes in proton intrinsic energy are proportional to the inverse square of orbital distance.
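A short Python check, using only the values defined above, confirms that these definitions are mutually consistent:

```python
import math

M_EARTH = 6e24    # Newtonian mass of earth, kg
D_EARTH = 15e10   # orbital distance of earth, m

r_earth = math.sqrt(M_EARTH / D_EARTH)  # sqrt(4e13) ≈ 6.325e6 m
print(r_earth / 6.371e6)                # ≈ 0.993 of the NASA volumetric radius

G = 1 / 15e9                            # Gravitational constant at earth's orbital distance
print(G * M_EARTH / r_earth**2)         # local gravitational constant: 10 m/s^2

m_proton = 1 / 6e26                     # proton mass, kg, at the orbital distance of earth
delta = 100                             # mass-energy factor
print(1 / (m_proton * delta))           # proton intrinsic energy: 6e24 Einstein
```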

The Newtonian mass of the sun, denominated in units of kilogram, is equal to 2E30, and equal in amplitude to the active gravitational mass of the sun, denominated in units of Einstein.


The active gravitational mass of earth divided by the active gravitational mass of the sun is equal to the intrinsic constant Beta-square and its square root is equal to the intrinsic constant Beta.

The charge intrinsic energy ei, denominated in units of intrinsic Volt, is proportional to the number of quantons emitted by an electron or proton. The charge intrinsic energy is equal to Beta divided by Theta-square, the inverse of the square root of 27E38.

Intrinsic voltage does not dissipate kinetic energy.

The electron intrinsic energy Ee, equal to the ratio of Beta-square divided by Theta-cube, the ratio of Psi-square divided by Theta-square, the product of the square of the charge intrinsic energy and Theta, and the ratio of the intrinsic electron magnetic flux quantum divided by the intrinsic Josephson constant, is denominated in units of Einstein.

The intrinsic electron magnetic flux quantum, equal to the square root of the electron intrinsic energy, is denominated in units of intrinsic Volt second.

The intrinsic Josephson constant, equal to the inverse of the square root of the electron intrinsic energy, the ratio of Theta divided by Psi and the ratio of the photon velocity divided by the intrinsic sustaining voltage of a minimum amplitude superconducting current, is denominated in units of Hertz per intrinsic Volt.

The discrete (dissipative kinetic) electron magnetic flux quantum, equal to the product of 2π and the intrinsic electron magnetic flux quantum, is denominated in units of discrete Volt second, and the discrete rotational Josephson constant, equal to the intrinsic Josephson constant divided by 2π and the inverse of the discrete electron magnetic flux quantum, is denominated in units of Hertz per discrete Volt. These constants are expressions of rotational frequencies.

We define the electron amplitude equal to 1. The proton amplitude is equal to the ratio of the proton intrinsic energy divided by the electron intrinsic energy.

We define the Coulomb, ec, equal to the product of the charge intrinsic energy and the square root of the proton amplitude divided by two. The Coulomb denominates dissipative current.

We define the Faraday equal to 1E5, and the Avogadro constant equal to the Faraday divided by the Coulomb.

Lambda-bar, the quantum of particle intrinsic energy, equal to the intrinsic energy content of a graviton or quanton, is the ratio of the product of Psi and Beta divided by Theta-cube, the ratio of Psi-cube divided by the product of Beta and Theta-square, the product of the charge intrinsic energy and the intrinsic electron magnetic flux quantum, and the charge intrinsic energy divided by the intrinsic Josephson constant.
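The equalities asserted in this chain of definitions can all be confirmed numerically; a minimal Python sketch, using only the values defined above:

```python
import math

THETA, PSI = 3e8, 1e-7
beta = math.sqrt(6e24 / 2e30)       # Beta: square root of the earth/sun active mass ratio

ei = beta / THETA**2                # charge intrinsic energy, intrinsic Volt
print(ei, 1 / math.sqrt(27e38))     # both ≈ 1.9245e-20

Ee = beta**2 / THETA**3             # electron intrinsic energy, Einstein
print(Ee, PSI**2 / THETA**2, ei**2 * THETA)   # three equal forms ≈ 1.1111e-31

flux_q = math.sqrt(Ee)              # intrinsic electron magnetic flux quantum
Kj = 1 / flux_q                     # intrinsic Josephson constant
print(Kj, THETA / PSI)              # both 3e15 Hz per intrinsic Volt

lam = PSI * beta / THETA**3         # Lambda-bar, the quantum of particle intrinsic energy
print(lam, PSI**3 / (beta * THETA**2), ei * flux_q, ei / Kj)  # all ≈ 6.415e-36
```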

CODATA physical constants that are defined as exact have an uncertainty of 10 to 12 decimal places, therefore the exactness of Newtonian infinitesimal calculations is of a similar order of magnitude. We assert that Lambda-bar and proportional physical constants are discretely exact (equivalent to Newtonian infinitesimal calculations) because discretely exact physical properties can be exactly expressed to greater accuracy than can be measured in the laboratory.

All intrinsic physical constants and intrinsic properties are discretely rational. The ratio of two positive integers is a discretely rational number.

  • The ratio of two discretely rational numbers is discretely rational.
  • The rational power or rational root of a discretely rational number is discretely rational.
  • The difference or sum of discretely rational numbers is discretely rational. This property is important in the derivation of atomic spectra where it serves the same purpose as a Fourier transform in infinitesimal mathematics.

The intrinsic electron gyromagnetic ratio, equal to the ratio of the cube of the charge intrinsic energy divided by Lambda-bar square, is denominated in units of Hertz per Tesla.

The intrinsic proton gyromagnetic ratio, equal to the ratio of the intrinsic electron gyromagnetic ratio divided by the square root of the cube of the proton amplitude divided by two and the ratio of eight times the photon velocity divided by nine, is denominated in units of Hertz per Tesla.
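As a check, the following sketch assumes the proton amplitude of about 150 implied by the electron-proton deflection ratio discussed later in Part One (an inference of ours, not a value stated at this point in the text):

```python
import math

ei = 1 / math.sqrt(27e38)    # charge intrinsic energy
lam = 6.4150e-36             # Lambda-bar from the sketch above
gamma_e = ei**3 / lam**2     # intrinsic electron gyromagnetic ratio, Hz/T
print(gamma_e)               # ≈ 1.732e11

amplitude = 150.0            # assumed proton amplitude (see the deflection ratio below)
gamma_p = gamma_e / math.sqrt((amplitude / 2) ** 3)
print(gamma_p, 8 * 3e8 / 9)  # both ≈ 2.667e8, confirming the 8c/9 expression
```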

The intrinsic conductance quantum, equal to the product of the intrinsic Josephson constant and the discrete Coulomb, is denominated in units of intrinsic Siemens.

The kinetic conductance quantum, equal to the intrinsic conductance quantum divided by 2π, is denominated in units of kinetic Siemens.

The CODATA conductance quantum is equal to 7.748091E-5.

The intrinsic resistance quantum, equal to the inverse of the intrinsic conductance quantum, is denominated in units of Ohm.

The kinetic resistance quantum, equal to the inverse of the kinetic conductance quantum, is denominated in units of Ohm.

The CODATA resistance quantum is equal to 1.290640E4.

The intrinsic von Klitzing constant, equal to the ratio of the discrete Planck constant divided by the square of the charge intrinsic energy, is denominated in units of Ohm.

The kinetic von Klitzing constant, equal to the ratio of the discrete Planck constant divided by the square of the discrete Coulomb, is denominated in units of Ohm.

The CODATA von Klitzing constant is equal to 2.581280745E4.

In Newtonian physics the probability particles at a distance will interact is 100% but in discrete physics a certain granularity is needed for interactions to occur.

A particle G-axis is a single-ended hollow cylinder. The mechanism of the G-axis is analogous to a piston that moves up and down at a frequency proportional to particle intrinsic energy. At the end of the upstroke a single graviton is emitted and during a downstroke the absorption window is open until the end of the downstroke or the absorption of a single graviton.

The difference (the intrinsic granularity) between the inside diameter of the hollow cylindrical G-axis and the outside diameter of the graviton allows absorption of incoming gravitons at angles that can deviate from normal (straight down the center) by plus or minus 20 arcseconds.

There are three kinds of intrinsic granularity: the intrinsic granularity in phenomena mediated by the absorption of gravitons and quantons; the intrinsic granularity in phenomena mediated by the emission of gravitons and quantons; and the intrinsic granularity in certain electromagnetic phenomena.

  • The intrinsic granularity in phenomena mediated by the absorption of gravitons or quantons by particles in tangible objects (with kilogram mass greater than one microgram or 1E20 particles) is discretely infinite therefore the average value of 20 arcseconds is discretely exact.
  • The intrinsic granularity in phenomena mediated by the emission of gravitons or quantons by particles is 20 arcseconds because gravitons and quantons emitted in the direction in which the emitting axis is pointing have an intrinsic granularity of not more than plus or minus 10 arcseconds.
  • The intrinsic granularity of certain electromagnetic phenomena appears in particular in the Faraday disk generator, where the “Lorentz force” that causes the velocity of an electron to be at a right angle to the force also causes an additional directional change of 20 arcseconds in the azimuthal direction.

In the above diagram, the intrinsic granularity of graviton absorption is illustrated on the left.

Above center illustrates the aberration between the visible and the actual positions of the sun with respect to an observer on earth as the sun moves across the sky. Position A is the visible position of the sun, position B is the actual position of the sun, position B will be the visible position of the sun in 500 seconds, and position C will be the actual position of the sun in 500 seconds. The elapsed time between successive positions is proportional to the separation distance, but 20 arcseconds of aberration is independent of separation distance.

Above right illustrates the six directions within a Cartesian space and the six possible forms describing the six possible facing directions in which a vector can point. A vector pointing up the G-axis of particle A in the facing direction of particle B has one and only one of the six possible forms. The probability a gravitational interaction will occur, if the vector is facing in one of the other five facing directions, is zero. Therefore, a gravitational interaction involving a graviton emitted by a specific particle A and absorbed by a specific particle B is possible (not probable) in only one-sixth the total volume of Cartesian space.

We define the intrinsic steric factor equal to 6. The intrinsic steric factor is inversely proportional to the probability a specific gravitational intrinsic energy interaction can occur on a scale where the probability a Newtonian gravitational interaction will occur is 100%.

The intrinsic steric factor points outward from a specific particle located at the origin of a Cartesian space facing outward into the surrounding space. The intrinsic steric factor applies to action at a distance in phenomena mediated by gravitons and quantons.

To convert 20 arcseconds of intrinsic granularity into an inverse possibility, divide the 1,296,000 arcseconds in 360 degrees by the product of 20 arcseconds and the intrinsic steric factor.

A possibility is not the same as a probability. The possibility two particles can gravitationally interact (each with the other) is equal to 1 out of 10,800. The probability two particles will gravitationally interact (each with the other) is dependent on the geometry of the interaction.

Because Newtonian gravitational interactions are proportional to the quantum of kinetic energy, the discrete Planck constant, and discrete gravitational interactions are proportional to the quantum of intrinsic energy, Lambda-bar, the factor 10,800 is a conversion factor.

In a bidirectional gravitational interaction, the ratio of the square of the discrete Planck constant divided by the square of Lambda-bar is equal to 10,800.

In a one-directional gravitational interaction the ratio of the discrete Planck constant divided by Lambda-bar is equal to the square root of 10,800.

The discrete Planck constant is equal to Lambda-bar times the square root of 10,800 and denominated in units of Joule second.

The value of the discrete Planck constant, approximately 1.006 times larger than the 2018 CODATA value, is the correct value for the two-body earth-sun system and proportional to the intrinsic physical constants previously defined.
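A short numeric check of the conversion factor and the resulting discrete Planck constant:

```python
import math

lam = 6.4150e-36                  # Lambda-bar, from the sketch above
conversion = 1296000 / (20 * 6)   # arcseconds in 360 degrees / (granularity × steric factor)
print(conversion)                 # 10800.0

h_discrete = lam * math.sqrt(conversion)   # discrete Planck constant, J s
print(h_discrete)                          # ≈ 6.667e-34
print(h_discrete / 6.62607015e-34)         # ≈ 1.006 × the 2018 CODATA Planck constant
```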

The CODATA fine structure constant alpha is equal to the ratio of the square of the CODATA electron charge divided by the product of two times the CODATA Planck constant, the CODATA vacuum permittivity and the CODATA speed of light (2018 CODATA values).

The intrinsic constant Beta is a transformation of the CODATA expression.

By substitution of the charge intrinsic energy for the CODATA electron charge, Lambda-bar for two times the CODATA Planck constant, the intrinsic electric constant for the CODATA vacuum permittivity and the intrinsic photon velocity for the CODATA speed of light, the dimensionless CODATA fine structure constant alpha is transformed into the dimensionless intrinsic constant Beta.

The existence of the fine structure constant and its ubiquitous appearance in seemingly unrelated equations is due to the assumption that phenomena are governed by kinetic energy, consequently measured values of phenomena governed or partly governed by intrinsic energy do not agree with the theoretical expectations.

A gravitational phenomenon governed by intrinsic energy is the solar system Kepler constant, equal to the cube of the planet’s orbital distance divided by the square of the orbital period of the planet, to the product of the active gravitational mass of the sun and the Gravitational constant at the orbital distance of earth divided by 4π-square, and to the ratio of the product of the square of the planet’s velocity and the orbital distance of the planet divided by 4π-square.
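The constancy of this Kepler constant can be illustrated with representative NASA orbital data (the semi-major axes and periods below are our inputs, not values given in the text). With the text's rounded Gravitational constant and solar mass the ratio comes out near 0.995 for every planet; the constancy across the planets is the point:

```python
import math

G = 1 / 15e9                         # Gravitational constant at the orbital distance of earth
M_SUN = 2e30                         # active gravitational mass of the sun
K = G * M_SUN / (4 * math.pi**2)     # solar system Kepler constant ≈ 3.377e18

# Representative NASA semi-major axes (m) and sidereal periods (days)
orbits = {
    "mercury": (5.791e10, 87.969),
    "venus":   (1.0821e11, 224.701),
    "earth":   (1.496e11, 365.256),
    "mars":    (2.2792e11, 686.980),
}
for name, (d, T_days) in orbits.items():
    T = T_days * 86400
    print(name, d**3 / T**2 / K)     # ≈ 0.995 for every planet: the ratio is constant
```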

The intrinsic constant Beta-square, previously shown to be the ratio of the active gravitational mass of earth divided by the active gravitational mass of the sun, is also proportional to the key orbital properties of the sun, earth, and moon.

An electromagnetic phenomenon governed by intrinsic energy is the proton-electron mass ratio, here termed the electron-proton deflection ratio, equal to the square root of the cube of the proton intrinsic energy divided by the cube of the electron intrinsic energy, and to the square root of the cube of the proton amplitude divided by the cube of the unit electron amplitude.

The CODATA proton-electron mass ratio is a measure of electron deflection (1836.15267344) in units of proton deflection (equal to 1). Because the directions of proton and electron deflections are opposite, the electron-proton deflection ratio is approximately equal to the CODATA proton-electron mass ratio plus one.
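Reading this together with the definitions of the Coulomb, the Faraday, and the Avogadro constant given earlier, the deflection ratio fixes the proton amplitude numerically. The following sketch is our inference, not a computation given in the text:

```python
import math

deflection_ratio = 1836.15267344 + 1    # CODATA proton-electron mass ratio plus one
amplitude = deflection_ratio ** (2 / 3) # deflection ratio = sqrt(amplitude^3)
print(amplitude)                        # ≈ 150.0

ei = 1 / math.sqrt(27e38)               # charge intrinsic energy
e_c = ei * math.sqrt(amplitude / 2)     # the Coulomb, per the definition given earlier
print(e_c)                              # ≈ 1.667e-19, near the CODATA elementary charge

print(1e5 / e_c)                        # Avogadro constant = Faraday / Coulomb ≈ 6.0e23
```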

In this document, unless otherwise specified (as in CODATA constants denominated in units of Joule proportional to the CODATA Planck constant), units of Joule are proportional to the discrete Planck constant.

The ratio of the discrete Planck constant divided by Lambda-bar, equal to the product of the mass-energy factor delta and omega-2, is denominated in units of discrete Joule per Einstein.

In the above equation the denomination discrete Joule represents energy proportional to the discrete Planck constant and the denomination Einstein represents energy proportional to Lambda-bar. The mass-energy factor delta converts non-collisional energy (action at a distance) into collisional energy in units of intrinsic Joule. The factor omega-2 converts units of intrinsic Joule into units of discrete Joule.

Omega factors correspond to the geometry of graviton-mediated and quanton-mediated phenomena.

We will begin with a brief discussion of electrical (quanton-mediated) phenomena then exclusively focus on gravitational phenomena for the remainder of Part One.

Electrical phenomena

The discrete steric factor, equal to 8, is the number of octants defined by the orthogonal planes of a Cartesian space.

Each octant is one of eight signed triplets (---, -+-, -++, --+, +++, +-+, +--, ++-) which correspond to the directions of the x, y, and z Cartesian axes.

A large number of random molecules, each with a velocity coincident with its center of mass, are within a Cartesian space. If the origin is the center of mass of a specific molecule 1, then a random molecule 2 is within one of the eight signed octants, and because the same number of random molecules are within each octant, the specific molecule 1 is likewise within one of the eight signed octants with respect to molecule 2. The possibility (not probability) of a center-of-mass collisional interaction between molecule 2 and molecule 1 is therefore equal to the inverse of the discrete steric factor (one in eight).

The discrete and intrinsic steric factors correspond to the geometries of phenomena governed by discrete kinetic energy (proportional to the discrete Planck constant) and to phenomena governed by intrinsic energy:

  • The discrete steric factor points inward from a random molecule in the direction of a specific molecule and applies to phenomena mediated by collisional interactions.
  • The intrinsic steric factor points outward from a specific particle into the surrounding space and applies to phenomena mediated by gravitons and quantons (action at a distance).

The intrinsic molar gas constant, equal to the discrete steric factor, is denominated in units of intrinsic Joule per mole Kelvin.

The discrete molar gas constant, equal to the product of the intrinsic molar gas constant and omega-2, is denominated in units of discrete Joule per mole Kelvin. The discrete molar gas constant agrees with the CODATA value within 1 part in 13,000.

The ratio of the CODATA electron charge (the elementary charge in units of Coulomb) divided by the charge intrinsic energy (in units of intrinsic Volt) is nearly equal to the discrete molar gas constant.

The intrinsic Boltzmann constant, equal to the ratio of the intrinsic molar gas constant divided by the Avogadro constant, is denominated in units of Einstein per Kelvin.

The discrete Boltzmann constant, equal to the product of omega-2 and the intrinsic Boltzmann constant, and the ratio of the discrete molar gas constant divided by the Avogadro constant, is denominated in units of discrete Joule per Kelvin. The CODATA Boltzmann constant is equal to 1.380649E-23.
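A numeric check of the gas constant and Boltzmann constant claims (the Avogadro value is carried over from the Coulomb sketch in Part One):

```python
import math

omega2 = math.sqrt(1.08)
R_intrinsic = 8                    # intrinsic molar gas constant = discrete steric factor
R_discrete = R_intrinsic * omega2  # ≈ 8.3138
print(8.314462618 / R_discrete)    # ≈ 1.00007: within 1 part in ~13,000 of CODATA

ei = 1 / math.sqrt(27e38)          # charge intrinsic energy
print(1.602176634e-19 / ei)        # ≈ 8.325, nearly the discrete molar gas constant

avogadro = 6.0e23                  # from the Faraday/Coulomb sketch in Part One
print(R_intrinsic / avogadro)      # intrinsic Boltzmann constant, Einstein per Kelvin
print(R_discrete / avogadro)       # discrete Boltzmann ≈ 1.386e-23, cf. CODATA 1.380649E-23
```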

Gravitational phenomena

Omega-2, the square root of 1.08, corresponds to one-directional gravitational interactions between non-orbiting objects (objects not by themselves in orbit, that is, the object might be part of an orbiting body but the object itself is not the orbiting body), for example graviton emission by the large lead balls or absorption by the small lead balls in the Cavendish experiment.

Omega-4, 1.08, corresponds to two-directional gravitational interactions (emission and absorption) between non-orbiting objects, for example the acceleration of the large lead balls or the acceleration of the small lead balls in the Cavendish experiment.

Omega-6, the square root of the cube of 1.08, corresponds to gravitational interactions between a planet and moon in a Keplerian orbit where the square root of the cube of the orbital distance divided by the orbital period is equal to a constant.

Omega-8, the square of 1.08, corresponds to four-directional gravitational interactions by non-orbiting objects, for example the acceleration of the small lead balls and the acceleration of the large lead balls in the Cavendish experiment.

Omega-12, equal to the cube of 1.08, corresponds to gravitational interactions between two objects in orbit about each other, for example the sun and a planet in orbit about their mutual barycenter.

Except where previously defined (the Gravitational constant at the orbital distance of earth, the orbital distance of earth, the mass and volumetric radius of earth, the mass of the sun) the following equations use the NASA2 values for the Newtonian masses, orbital distances, and volumetric radii of the planets.

The local gravitational constant for any of the planets is equal to the product of the Gravitational constant of earth and the Newtonian mass (kilogram mass) of the planet divided by the square of the volumetric radius of the planet.

The v2d value of a planetary moon is equal to the product of the Gravitational constant at the orbital distance of earth and the Newtonian mass of the planet.

The active gravitational mass of a planet, denominated in units of Einstein, is equal to the product of the square of the volumetric radius of the planet and the orbital distance of the planet, divided by the square of the orbital distance of the planet in units of the orbital distance of earth.

The mass of a planet in a Newtonian orbit about the sun (the planet and sun orbit about their mutual barycenter) is a kinetic property. The active gravitational mass of such a planet, denominated in units of Joule, is equal to the product of the active gravitational mass of the planet in units of Einstein and omega-12.

The Gravitational constant at the orbital distance of the planet is equal to the product of the local gravitational constant of the planet and the square of the volumetric radius of the planet, divided by the active gravitational mass of the planet.

The v2d value of a planetary moon is equal to the product of the Gravitational constant at the orbital distance of the planet and the active gravitational mass of the planet.

The v2d value calculated using the NASA orbital parameters for the moon is larger than the above calculated value by 1.00374; the v2d calculations using the NASA orbital parameters for the major Jovian moons (Io, Europa, Ganymede and Callisto) are larger than the above calculated values by 1.0020, 1.0016, 1.00131, and 1.00133.
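These relations are easy to check with representative NASA values (our inputs; slightly different NASA figures will shift the ratios in the third or fourth decimal place):

```python
G = 1 / 15e9        # Gravitational constant at the orbital distance of earth
M_EARTH = 6e24      # Newtonian mass of earth (defined in the text)

# Local gravitational constant of a planet: G * M / r^2 (NASA mass and radius for mars)
M_MARS, R_MARS = 6.4171e23, 3.3895e6
print(G * M_MARS / R_MARS**2)                # ≈ 3.72 m/s^2, near the NASA surface gravity

# v2d check for the moon: v^2 * d from NASA orbital parameters vs. G * M_planet
v_moon, d_moon = 1.022e3, 3.844e8            # NASA mean orbital velocity (m/s), distance (m)
print((v_moon**2 * d_moon) / (G * M_EARTH))  # ≈ 1.0038, cf. the 1.00374 quoted above
```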

Newtonian gravitational calculations are extremely accurate for most gravitational phenomena but there are a number of anomalies for which the Newtonian calculations are inaccurate. The first of these anomalies to come to the attention of scientists in 1859 was the precession rate of the perihelion of mercury for which the observed rate was about 43 arcseconds per century larger than the Newtonian calculated rate.3

According to Gerald Clemence, one of the twentieth century’s leading authorities on the subject of planetary orbital calculations, the most accurate method for calculating planetary orbits, the method of Gauss, was derived for calculating planetary orbits within the solar system with distance expressed in astronomical units, orbital period in days and mass in solar masses.4

The Gaussian method was used by Eric Doolittle in what Clemence believed to be the most reliable theoretical calculation of the perihelion precession rate of mercury.5

With modifications by Clemence including newer values for planetary masses, newer measurements of the precession of the equinoxes and a careful analysis of the error terms, the calculated rate was determined to be 531.534 arc-seconds per century compared to the observed rate of 574.095 arc-seconds per century, leaving an unaccounted deficit of 42.561 arcseconds per century.

The below calculations are based on the method of Price and Rush.6 This method determines a Newtonian rate of precession due to the gravitational influences on mercury by the sun and five outer planets external to the orbit of mercury (venus, earth, mars, jupiter and saturn). The solar and planetary masses are treated as Newtonian objects and in calculations of planetary gravitational influences the outer planets are treated as circular mass rings.

The Newtonian gravitational force on mercury due to the mass of the sun is equal to the ratio of the product of the negative Gravitational constant at the orbital distance of earth, the mass of the sun and the mass of mercury divided by the square of the orbital distance of mercury.

The Newtonian gravitational force on mercury due to the mass of the five outer planets is equal to the sum of the gravitational force contributions of the five outer planets external to the orbit of mercury. The gravitational force contribution of each planet is equal to the ratio of the product of the Gravitational constant at the orbital distance of earth, the mass of the planet, the mass of mercury and the orbital distance of mercury, divided by the ratio of the product of twice the planet’s orbital distance and the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

The gravitational force ratio is equal to the gravitational force on mercury due to the mass of the five outer planets external to the orbit of mercury divided by the gravitational force on mercury due to the mass of the sun.

The gamma factor is equal to the sum of the gamma contributions of the five outer planets external to the orbit of mercury. The gamma contribution of each planet is equal to the ratio of the product of the mass of the planet, the orbital distance of mercury, and the sum of the square of the planet’s orbital distance and the square of the orbital distance of mercury, divided by the product of 2π, the planet’s orbital distance and the square of the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

Psi-mercury is equal to the product of π and the sum of one plus the difference between the negative of the gravitational force ratio and the ratio of the product of the Gravitational constant at the orbital distance of earth, π, the mass of mercury and the gamma factor divided by twice the gravitational force on mercury due to the mass of the sun.

The number of arc-seconds in one revolution is equal to 360 degrees times sixty minutes times sixty seconds.

The number of days in a Julian century is equal to 100 times the length of a Julian year in days.

The perihelion precession rate of mercury is equal to the ratio of the product of the difference between 2ψ-mercury and 2π, the number of arc-seconds in one revolution and the number of days in a Julian century, divided by the product of 2π and the NASA sidereal orbital period of mercury in units of day (87.969).

The Newtonian perihelion precession rate of mercury determined above is 0.139 arc-seconds per century less than the Clemence calculated rate of 531.534 arc-seconds per century.
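The procedure described above can be written out directly. The following Python sketch implements the Newtonian calculation as stated, with the text's defined constants and representative NASA masses, distances, and period (our inputs); the printed rate lands within a few arc-seconds of the Clemence figure, with the residual depending on the exact NASA values adopted. Substituting the active gravitational masses in units of Joule, as described below, yields the non-Newtonian variant.

```python
import math

# Constants defined in the text
G = 1 / 15e9           # Gravitational constant at the orbital distance of earth
M_SUN = 2e30           # Newtonian mass of the sun (kg)

# Representative NASA values (our inputs) for mercury and the five outer planets
M_MERC, D_MERC = 3.301e23, 5.791e10
planets = [                   # (Newtonian mass kg, orbital distance m)
    (4.8675e24, 1.0821e11),   # venus
    (6e24,      15e10),       # earth (mass and distance as defined in the text)
    (6.4171e23, 2.2792e11),   # mars
    (1.8982e27, 7.7857e11),   # jupiter
    (5.6834e26, 1.4335e12),   # saturn
]

# Force on mercury from the sun (negative Gravitational constant: attractive)
F_sun = -G * M_SUN * M_MERC / D_MERC**2

# Ring-force contributions of the five outer planets
F_planets = sum(G * m * M_MERC * D_MERC / (2 * d * (d**2 - D_MERC**2))
                for m, d in planets)

force_ratio = F_planets / F_sun

# Gamma factor, summed per planet as described
gamma = sum(m * D_MERC * (d**2 + D_MERC**2) /
            (2 * math.pi * d * (d**2 - D_MERC**2)**2)
            for m, d in planets)

# Apsidal angle psi-mercury
psi = math.pi * (1 + (-force_ratio
                      - G * math.pi * M_MERC * gamma / (2 * F_sun)))

ARCSEC_PER_REV = 360 * 60 * 60      # 1,296,000 arc-seconds in one revolution
DAYS_PER_CENTURY = 100 * 365.25     # days in a Julian century
T_MERC = 87.969                     # NASA sidereal orbital period of mercury (days)

rate = ((2 * psi - 2 * math.pi) * ARCSEC_PER_REV * DAYS_PER_CENTURY
        / (2 * math.pi * T_MERC))
print(f"{rate:.1f} arcsec/century")  # cf. the Clemence calculated rate of 531.534
```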

The following equations, the same format as the Newtonian equations, derive the non-Newtonian values (when different).

The Newtonian gravitational force on mercury due to the mass of the sun is equal to the ratio of the product of the negative Gravitational constant at the orbital distance of earth, the mass of the sun and the mass of mercury divided by the square of the orbital distance of mercury.

The non-Newtonian gravitational force on mercury due to the mass of the five outer planets is equal to the sum of the gravitational force contributions of the five outer planets external to the orbit of mercury. The gravitational force contribution of each planet is equal to the product of the ratio of the product of the Gravitational constant at the orbital distance of earth, the active gravitational mass (in units of Joule) of the planet, the Newtonian mass of mercury and the orbital distance of mercury, divided by the ratio of the product of twice the planet’s orbital distance and the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

The non-Newtonian gravitational force ratio is equal to the gravitational force on mercury due to the mass of the five outer planets external to the orbit of mercury divided by the gravitational force on mercury due to the mass of the sun.

The gamma factor is equal to the sum of the gamma contributions of the five outer planets external to the orbit of mercury. The gamma contribution of each planet is equal to the ratio of the product of the mass of the planet, the orbital distance of mercury, and the sum of the square of the planet’s orbital distance and the square of the orbital distance of mercury, divided by the product of 2π, the planet’s orbital distance and the square of the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

The non-Newtonian value for Psi-mercury is equal to the product of π and the sum of one plus the difference between the negative of the gravitational force ratio and the ratio of the product of the Gravitational constant at the orbital distance of earth, π, the mass of mercury and the gamma factor divided by twice the gravitational force on mercury due to the mass of the sun.

The non-Newtonian perihelion precession rate of mercury is equal to the ratio of the product of the difference between 2ψ-mercury and 2π, the number of arc-seconds in one revolution and the number of days in a Julian century, divided by the product of 2π and the NASA sidereal orbital period of mercury in units of day (87.969).

The non-Newtonian perihelion precession rate of mercury is 6.128 arc-seconds per century greater than the Clemence observed rate of 574.095 arc-seconds per century.

We have built a model of gravitation proportional to the dimensions of the earth-sun system. A different model, with different values for the physical constants, would be equally valid if it were proportional to the dimensions of a different planet in our solar system or a planet in some other star system in our galaxy.

Our sun and the stars in our galaxy, in addition to graviton flux, emit large quantities of neutral flux that establish Stable Balance orbits for planets that emit relatively small quantities of neutral flux.

Our galactic center emits huge quantities of gravitons and neutral flux, and its dimensional relationship with our sun is dependent on the neutral flux emissions of our sun. If the intrinsic energy of our sun was less, its orbit would be further out from the galactic center, and if it was greater, its orbit would be closer in.

  • Of two stars at the same distance from the galactic center with different velocities, the star with higher velocity has a higher graviton absorption rate (higher stellar internal energy) and the star with lower velocity has a lower graviton absorption rate (lower stellar internal energy).
  • Of two stars with the same velocity at different distances from the galactic center, the star closer in will have a higher graviton absorption rate (higher stellar internal energy) and the star further out will have a lower graviton absorption rate (lower stellar internal energy).

The active gravitational mass of the Galactic Center is equal to the active gravitational mass of the sun divided by Beta-fourth, and to the cube of the active gravitational mass of the sun divided by the square of the active gravitational mass of earth.

The second expression of the above equation, generalized and reformatted, asserts the square root of the cube of the active gravitational mass of any star in the Milky Way divided by the active gravitational mass of any planet in orbit about the star is equal to a constant.
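Numerically, with the defined masses of the sun and earth, the two expressions agree, and the generalized constant follows:

```python
import math

M_SUN, M_EARTH = 2e30, 6e24
beta4 = (M_EARTH / M_SUN) ** 2        # Beta-fourth

print(M_SUN / beta4)                  # active gravitational mass of the Galactic Center ≈ 2.22e41
print(M_SUN**3 / M_EARTH**2)          # the same value via the second expression

# Generalized form: sqrt(M_star^3) / M_planet is asserted constant across the Milky Way
print(math.sqrt(M_SUN**3) / M_EARTH)  # ≈ 4.71e20 for the sun-earth pair
```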

The above equation, combined with the detailed explanation of the chirality meshing interactions that mediate gravitational action at a distance, the derivation of solar system non-Newtonian orbital parameters, the derivation of the non-Newtonian rate of precession of the perihelion of mercury, and the detailed explanation of non-Newtonian stellar rotation curves, disproves the theory of dark matter.

Part Two

Structure and chirality

A particle has the property of chirality because its axes are orthogonal and directed, pointing in three perpendicular directions and, like the fingers of a human hand, the directed axes are either left-handed (LH) or right-handed (RH). The electron and antiproton exhibit LH structural chirality and the proton and positron exhibit RH structural chirality. The two chiralities are mirror images.

The electron G-axis (black, index finger) points into the paper, the electron Q-axis (blue, thumb) points up in the plane of the paper, and the north pole of the electron P-axis (red, middle finger) points right in the plane of the paper.

The orientation of the axes of an RH proton are the mirror image: the proton G-axis (black, index finger) points into the paper, the proton Q-axis (blue, thumb) points up in the plane of the paper, and the north pole of the proton P-axis (red, middle finger) points left in the plane of the paper.

Above, to visualize orientations, models are easier to manipulate than human hands.

When Michael Faraday invented the disk generator in 1831, he discovered the conversion of rotational force, in the presence of a magnetic field, into electric current. The apparatus creates a magnetic field perpendicular to a hand-cranked rotating conductive disk and, provided the circuit is completed through a path external to the disk, produces an electric current flowing from axle to rim (electron flow, not conventional current), photograph below.7

Above left, the electron Q-axis points in the CCW direction of motion. The inertial force within a rotating conductive disk aligns conduction electron G-axes to point in the direction of the rim. The alignment of the Q-axes and G-axes causes the orthogonal P-axes to point down.

Above right, the electron Q-axis points in the CW direction of motion. The inertial force within a rotating conductive disk aligns conduction electron G-axes to point in the direction of the rim. The alignment of the Q-axes and G-axes causes the orthogonal P-axes to point up.

In generally accepted physics (GAP), the transverse alignment of electron velocity with respect to magnetic field direction is attributed to the Lorentz force but, as explained above, it is a consequence of electron chirality.

In addition to the transverse alignment of the electron direction with respect to the direction of the magnetic field, the electron experiences an additional directional change of 20 arcseconds in the azimuthal direction, which causes the electron to spiral in the direction of the rim. Thus, in both a CCW rotating conductive disk and a CW rotating conductive disk, the current (electron flow, not conventional current) flows from the axle to the rim.

The geometries of the Faraday disk generator apply to the orientation of conduction electrons in the windings of solenoids and transformers. CCW and CW windings advance in the same direction, below into the plane of the paper. In contrast to the rotating conductor in the disk generator, the windings are stationary, and the conduction electrons spiral through in the direction of the positive voltage supply (which continually reverses in transformers and AC solenoids).

Above left, the electron Q-axes point down in the direction of current flow through the CCW winding. The inertial force on conduction electrons moving through the CCW winding aligns the direction of the electron G-axes to the left. The electron P-axes, perpendicular to both the Q-axes and G-axes, point S→N out of the paper.

Above right, the electron Q-axes point up in the direction of current flow through the CW winding. The inertial force on conduction electrons moving through the CW winding aligns the direction of the electron G-axes to the left. The electron P-axes, perpendicular to both the Q-axes and G-axes, point S→N into the paper.

Above is a turnbuckle composed of a metal frame tapped at each end. On the left end an LH bolt passes through an LH thread and on the right end an RH bolt passes through an RH thread. If the LH bolt is turned CCW (facing right into the turnbuckle frame), the bolt moves to the right and the frame moves to the left; if the LH bolt is turned CW, the bolt moves to the left and the frame moves to the right. If the RH bolt is turned CW (facing left into the turnbuckle frame), the bolt moves to the left and the frame moves to the right; if the RH bolt is turned CCW, the bolt moves to the right and the frame moves to the left.

In the language of this analogy, a graviton or quanton emitted by the emitting particle is a moving spinning bolt, and the absorbing particle is a turnbuckle frame with a G-axis, Q-axis or P-axis passing through.

In a chirality meshing interaction, absorption of a graviton or quanton by the LH or RH G-axis, Q-axis or P-axis of a particle, causes an attractive or repulsive acceleration proportional to the difference between the graviton or quanton velocity and the velocity of the absorbing particle.

An electron G-axis has a RH inside thread and a proton G-axis has a LH inside thread. An electron G-axis emits CW gravitons and a proton G-axis emits CCW gravitons.

In the bolt-turnbuckle analogy, a graviton is a moving spinning bolt, and the absorbing particle through which the G-axis passes is a turnbuckle frame:

  • If a CCW graviton emitted by a proton is absorbed into a proton LH G-axis, the absorbing proton is attracted, accelerated in the direction of the emitting proton.
  • If a CW graviton emitted by an electron is absorbed into an electron RH G-axis, the absorbing electron is attracted, accelerated in the direction of the emitting electron.

Protons and electrons do not gravitationally interact with each other because a proton is larger than an electron, a graviton emitted by a proton is larger than a graviton emitted by an electron, and the inside thread of a proton G-axis is larger than the inside thread of an electron G-axis; these size differences prevent a graviton emitted by an electron from meshing with a proton G-axis, and a graviton emitted by a proton from meshing with an electron G-axis.

Tangible objects are composed of atoms which are composed of protons, electrons and neutrons.

In gravitational interactions between tangible objects (with mass greater than one microgram, about 1E20 particles) the total intensity of the interaction is the sum of the contributions of the electrons and protons of which the object is composed (note that neutrons themselves do not gravitationally interact, but each neutron is composed of one electron and one proton, both of which do gravitationally interact).

A particle Q-axis is a single-ended hollow cylinder. The mechanism of the Q-axis is analogous to a piston which moves up and down at a frequency proportional to charge intrinsic energy. At the end of each up-stroke a single quanton is emitted. The absorption window opens at the beginning of the up-stroke and remains open until the beginning of the downstroke or the absorption of a single quanton.

The difference (the intrinsic granularity) between the inside diameter of the hollow cylindrical Q-axis and the outside diameter of the quanton allows absorption of incoming quantons at angles that can deviate from normal (straight down the center) by plus or minus 20 arcseconds.

An electron Q-axis has a RH inside thread and a proton Q-axis has a LH inside thread. An electron Q-axis emits CCW quantons and a proton Q-axis emits CW quantons.

In the bolt-turnbuckle analogy, a quanton is a moving spinning bolt, and the absorbing particle through which the Q-axis passes is a turnbuckle frame:

  • If a CCW p-quanton emitted by a proton is absorbed into an electron RH Q-axis, the absorbing electron is attracted, accelerated in the direction of the emitting proton.
  • If a CCW p-quanton emitted by a proton (or the anode plate in a CRT) is absorbed into a proton LH Q-axis, the absorbing proton is repulsed, accelerated in the direction of the cathode plate (opposite the direction of the emitting proton).
  • If a CW e-quanton emitted by an electron is absorbed into an electron RH Q-axis, the absorbing electron is repulsed, accelerated in the direction opposite the emitting electron.
  • If a CW e-quanton emitted by an electron (or the cathode plate in a CRT) is absorbed into a proton LH Q-axis, the absorbing proton is repulsed, accelerated in the direction of the cathode plate (the direction opposite the emitting electron).

In a CRT, the Q-axis of an accelerated electron is oriented in the linear direction of travel and its P- and G-axes are oriented transverse to the linear direction of travel. After the electron is linearly accelerated, the electron passes between oppositely charged parallel plates that emit quantons perpendicular to the linear direction of travel, and these quantons are absorbed into the electron P-axes. The chirality meshing interactions between an electron with a linear direction of travel and the quantons emitted by either plate result in a transverse acceleration in the direction of the anode plate:

  • An incoming CCW p-quanton approaching an electron RH P-axis within less than 20 arcseconds deviation from normal (straight down the center) is absorbed in an attractive chirality meshing interaction in which the electron is deflected in the direction of the anode plate.
  • An incoming CW e-quanton approaching an electron RH P-axis within less than 20 arcseconds deviation from normal (straight down the center) is absorbed in a repulsive chirality meshing interaction in which the electron is deflected in the direction of the anode plate.

This is the mechanism of the experimental determination of the electron-proton deflection ratio.

The magnitude of the ratio between these masses is not equal to the ratio of the measured gravitational deflections but rather to the inverse of the ratio of the measured electric deflections. It would not matter which of these measurable quantities were used in the experimental determination if Newton’s laws of motion applied. However, in order for Newton’s laws to apply, the assumptions behind Newton’s laws, specifically the 100% probability that particles gravitationally and electrically interact, must also apply. But this is not the case for action at a distance.

The electron orientation below top left, rotated 90 degrees CCW, is identical to the electron orientations previously illustrated for a CW disk generator or a CW-wound transformer or solenoid; and the electron orientation bottom left is a 180 degree rotation of top left.

Above are reversals in Q-axis orientation due to reversals in direction of incoming quantons

Above top right and bottom right are the left-side electron orientations with the electron Q-axis directed into the plane of the paper (confirmation of the perspective transformation is easier to visualize with a model). These are the orientations of conduction electrons in an AC current.

In the top row, CW quantons emitted by the positive voltage source are absorbed in chirality meshing interactions by the electron RH Q-axis, attracting the absorbing electron. In the bottom row, CCW quantons emitted by the negative voltage source are absorbed in chirality meshing interactions into the electron RH Q-axis, repelling the absorbing electron.

In either case the direction of current is into the paper.

In an AC current, a reversal in the direction of current is also a reversal in the rotational chirality of the quantons mediating the current.

  • In a current moving in the direction of a positive voltage source each linear chirality meshing absorption of a CW p-quanton into an electron RH Q-axis results in an attractive deflection.
  • In a current moving in the direction of a negative voltage source each linear chirality meshing absorption of a CCW e-quanton into an electron RH Q-axis results in a repulsive deflection.

In an AC current, each reversal in the direction of current, reverses the direction of the Q-axes of the conduction electrons. This reversal in direction is due to a complex rotation (two simultaneous 180 degree rotations) that results in photon emission.

During the shorter or longer period of time (the inverse of the AC frequency) during which the direction of current reverses, a correspondingly shorter or longer inductive pulse of electromagnetic energy flows into the electron Q and P axes, and the quantons of which the electromagnetic energy is composed are absorbed in rotational chirality meshing interactions.

Above left, the electron P and Q axes mesh together at their mutual orthogonal origin in a mechanism analogous to a right angle bevel gear linkage.8

Above center and right, an incoming CCW quanton induces an inward CCW rotation in the Q-axis and causes a CW outward (CCW inward) rotation of the P-axis. The rotation of the Q-axis reverses the orientation of the P-axis and G-axis, and the rotation of the P-axis reverses the orientation of the Q-axis and the orientation of the G-axis thereby restoring its orientation to the initial direction pointing left and perpendicular to a tangent to the cylindrical wire.

Above center and right, an incoming CW quanton induces an inward CW rotation in the Q-axis and causes a CCW outward (CW inward) rotation of the P-axis. The rotation of the Q-axis reverses the orientation of the P-axis and G-axis, and the rotation of the P-axis reverses the orientation of the Q-axis and the orientation of the G-axis thereby restoring its orientation to the initial direction pointing left and perpendicular to a tangent to the cylindrical wire.

In either case the electron orientations are identical, but CCW electron rotations cause the emission of CCW photons and CW electron rotations cause the emission of CW photons.

The absorption of CCW e-quantons by the Q-axis rotates the Q-axis CCW by the square root of 648,000 arcseconds (180 degrees) and the P-Q axis linkage simultaneously rotates the P-axis CW by the square root of 648,000 arcseconds (180 degrees).

If the orientation of the electron G-axis is into the paper in a plane defined by the direction of the Q-axis, the CCW rotation of the Q-axis tilts the plane of the G-axis down by the square root of 648,000 arcseconds and the CW rotation of the P-axis tilts the plane of the G-axis to the right by the square root of 648,000 arcseconds.

The net rotation of the electron G-axis is equal to the product of the square root of 648,000 arcseconds and the square root of 648,000 arcseconds.
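Numerically, each component rotation is √648,000 ≈ 805 arc-seconds, and the net rotation is their product:

    \sqrt{648{,}000''} \times \sqrt{648{,}000''} = 648{,}000'' = 180^\circ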

In the production of photons by an AC current, the photon wavelength and frequency are proportional to the current reversal time, and the photon energy is proportional to the voltage.

Above, an axial projection of the helical path of a photon traces the circumference of a circle and the sine and cosine are transverse orthogonal projections.9 The crest to crest distance of the transverse orthogonal projections, or the distance between alternate crossings of the horizontal axis, is the photon wavelength.

The helical path of photons explains diffraction by a single slit, by a double slit, by an opaque circular disk, or a sphere (Arago spot).

In a beam of photons with velocity perpendicular to a flat screen or sensor, each individual photon makes a separate impact that can be sensed or is visible somewhere on the circumference of one of many separate and non-overlapping circles corresponding to all of the photons in the beam. The divergence of the beam increases the spacing between circles and the diameter of each individual photon circle, which is proportional to the wavelength of that photon. The sensed or visible photon impacts form a region of constant intensity.

Below, the top image shows those photons, initially part of a photon beam illuminating a single slit, which passed through the single slit.10

Above, the bottom image shows those photons, initially part of a photon beam illuminating a double slit, that passed through a double slit.

Below, the image illustrating classical rays of light passing through a double slit is equally illustrative of a photon beam illuminating a double slit but, instead of constructive and destructive interference, the photons passing through the top slit diverge to the right and photons passing through the bottom slit diverge to the left. The spaces between divergent circles are dark and, due to coherence, the photon circles are brightest at the distance of maximum overlap, resulting in the characteristic double slit brighter-darker diffraction pattern.11

The mechanism of diffraction by an opaque circular disk or a sphere (Arago spot) is the same. In either case the opaque circular disk or sphere is illuminated by a photon beam of diameter larger than the diameter of the disk or sphere.

The photons passing close to the edge of the disk or sphere diverge inwards, and the spiraling helical path of an inwardly diverging CW photon passing one side of the disk will intersect, in a head-on collision, the spiraling helical path of an inwardly diverging CCW photon passing on the directly opposite side of the disk or sphere (if the opposite-chirality photons are equidistant from the center of the disk or sphere).

In the case of a sphere illuminated by a laser, the surface of the sphere must be smooth and the ratio of the square of the diameter of the sphere divided by the product of the distance from the center of the sphere to the screen and the laser wavelength must be greater than one (similar to the Fresnel number).
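Stated as an inequality (with d the sphere diameter, L the distance from the center of the sphere to the screen, and λ the laser wavelength; note that the conventional Fresnel number is defined with the aperture radius rather than the diameter):

    \frac{d^2}{L\,\lambda} > 1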

Photon velocity

Constant photon velocity is due to a resonance driven by the emission of photon intrinsic energy which results in an increase in wavelength and a proportional decrease in frequency. In a related phenomenon, Arthur Holly Compton demonstrated Compton scattering in which the loss of photon kinetic energy does not change velocity but increases wavelength and proportionally decreases frequency.12

The mechanism of constant photon velocity is the emission of quantons and gravitons.

Below top, looking down into the plane of the paper a photon G-axis points in the direction of photon velocity and the P and Q-axes are orthogonal. In the language of the turnbuckle analogy, the mechanism of the photon P and Q-axes are analogous to pistons which move up and down or back and forth and emit a single quanton or graviton at the end of each stroke.

Above middle, in column A of the P-axis row, the oscillation is at the position where the up-stroke has just completed, a single graviton has been emitted, and the current direction of the oscillation is now down. In column B of the P-axis row, the position of the oscillation is mid-way, and the direction of the oscillation is down. In column C of the P-axis row, the oscillation is at the position where the down-stroke has just completed, a single graviton has been emitted, and the current direction of the oscillation is up. In column D of the P-axis row, the position of the oscillation is mid-way, and the direction of the oscillation is up.

Above middle, in column A of the Q-axis row, the position of the oscillation is mid-way and the direction of oscillation is left. In column B of the Q-axis row, the oscillation is at the position where the left-stroke has just completed, a single quanton has been emitted, and the current direction of the oscillation is right. In column C of the Q-axis row, the position of the oscillation is mid-way and the direction of the oscillation is right. In column D of the Q-axis row, the oscillation is at the position where the right-stroke has just completed, a single quanton has been emitted, and the current direction of the oscillation is left.

Above left or right bottom, in each cycle of the photon frequency there are eight sequential CCW or CW alternating quanton/graviton emissions and the intrinsic energy of the photon is reduced by Lambda-bar on each emission.

This is the mechanism of intrinsic redshift.
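A minimal numerical sketch of this emission bookkeeping, in Python (the values of LAMBDA_BAR and the initial energy are placeholders; the text gives no numbers at this point):

    # Intrinsic-redshift bookkeeping as described above: eight alternating
    # quanton/graviton emissions per cycle of the photon frequency, each
    # reducing the photon's intrinsic energy by Lambda-bar.
    EMISSIONS_PER_CYCLE = 8
    LAMBDA_BAR = 1.0e-40   # placeholder energy quantum, arbitrary units

    def intrinsic_energy_after(e0: float, cycles: int) -> float:
        """Photon intrinsic energy after the given number of cycles."""
        return e0 - EMISSIONS_PER_CYCLE * cycles * LAMBDA_BAR

    # Wavelength rises and frequency falls in proportion as intrinsic
    # energy is emitted, while the photon velocity stays constant.
    print(intrinsic_energy_after(e0=3.0e-19, cycles=10**18))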

Part Three

Nuclear magnetic resonance

In the 1922 Stern-Gerlach experiment, a molecular beam of identical silver atoms passed through an inhomogeneous magnetic field. Contrary to classical expectations, the beam of atoms did not diverge into a cone with intensity highest at the center and lowest at the outside. Instead, atoms near the center of the beam were deflected with half the silver atoms deposited on a glass slide in an upper zone and half deposited in a lower zone, illustrating “space quantization.”

The Stern-Gerlach experiment, designed to test directional quantization in a magnetic field as predicted by old quantum theory (the Bohr-Sommerfeld hypothesis)13, was conducted two years before intrinsic spin was conceived by Wolfgang Pauli and six years before Paul Dirac formalized the concept. Intrinsic spin became part of the foundation of new quantum theory.

The concept of intrinsic spin, in which the property that causes the deflection of silver atoms in two opposite directions (“space quantization”) is inherent in the particle itself, is incorrect.

However, a molecular beam composed of atoms with magnetic moments passed through a Stern-Gerlach apparatus does exhibit the numerical property attributed to intrinsic spin, but this property, interactional spin, is not inherent in the atom; it is dependent on external factors.

The protons within a nucleus are the origin of spin, magnetic moment, Larmor frequency, and other nuclear gyromagnetic properties. A nucleus contains “ordinary protons” which, for clarity, will be termed Pprotons, and “protons within neutrons” will be termed Nprotons.

In nuclei with an even number of Pprotons, the Pproton magnetic flux is contained within the nucleus and does not contribute to the nuclear magnetic moment.

With neutrons the situation is quite different. A neutron is achiral: it is a composite particle composed of an Nproton-electron pair and binding energy; it has no G-axis and therefore does not gravitationally interact, and no Q-axis and therefore is electrically neutral.

Within a nucleus, a neutron does not have a magnetic moment (during its less than 15-minute mean lifetime after a neutron is emitted from its nucleus, a free neutron has a measurable magnetic moment, but there are no free neutrons within nuclei) but the Nproton and electron of which a neutron is composed do have magnetic moments.

The gyromagnetic properties of a nucleus, its magnetic moment, its spin, its Larmor frequency, and its gyromagnetic ratio are due to Pprotons and Nprotons.

A molecular beam (composed of nuclei, atoms and/or molecules) emerging from an oven into a vacuum will have a thermal distribution of velocities. Molecules within the beam are subject to collisions with faster or slower molecules that cause rotations and vibrations, and the orientations of unpaired Pprotons and unpaired Nprotons are constantly subject to change.

In a silver atom there is a single unpaired Pproton, and the orientation of its P-axis, with respect to its direction of motion through an inhomogeneous magnetic field, will be either leading or trailing. Out of a large number of unpaired Pprotons, the P-axes will be leading 50% of the time and trailing 50% of the time; a silver atom containing an unpaired Pproton with a leading P-axis can be deflected in the direction of the inhomogeneous magnetic north pole, while a silver atom containing an unpaired Pproton with a trailing P-axis can be deflected in the direction of the south pole.

If the magnetic field is strong enough for a sufficient percentage of unpaired Pprotons (the orientation of which is constantly changing) to encounter lines of magnetic flux within 20 arcseconds and be deflected up or down, the molecular beam of silver atoms deposited on a glass slide at the center of the magnetic field (where it is strongest) will be split into two zones and, consistent with the definition of spin as the number of zones minus one, divided by two (S = (z − 1)/2), a Stern-Gerlach experiment determines a spin equal to ½. This result is the only example of spin clearly determined by the position of atoms deposited on a glass slide.14
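As a worked instance of the zone formula: two zones on the slide give

    S = \frac{z - 1}{2} = \frac{2 - 1}{2} = \frac{1}{2}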

The above explanation is correct for silver atoms passed through the inhomogeneous magnetic fields of the Stern-Gerlach apparatus, but in the 1939 Rabi experimental apparatus15 (upon which modern molecular beam apparatus are modeled) the mechanism of deflection due to leading or trailing P-axes has nothing to do with the results achieved.

The 1939 Rabi experimental apparatus included back-to-back Stern-Gerlach inhomogeneous magnetic fields with opposite magnetic field orientations, but the result that dramatically changed physics, the accurate measurement of the Larmor frequency of nuclei, was done in a separate Rabi analyzer placed between the inhomogeneous magnetic fields. To Rabi, the importance of the Stern-Gerlach inhomogeneous magnets was for use in the alignment and tuning of the entire apparatus.

In a Rabi analyzer there is a strong constant magnetic field and a weaker transverse oscillating magnetic field. The purpose of the strong constant field is to decouple (increase the separation distance between) electrons and protons. The purpose of the transverse oscillating field is to stimulate the emission of photons by the decoupled protons.

When the Rabi apparatus is initially assembled, before installation of the Rabi analyzer the Stern-Gerlach apparatus is set up and tuned such that the intensity of the molecular beam leaving the apparatus is equal to its intensity upon entering.

After the unpowered Rabi analyzer is mounted between the Stern-Gerlach magnets, and the molecular beam exiting the first inhomogeneous magnetic field passes through the Rabi analyzer and enters the second inhomogeneous magnetic field, the intensity of the molecular beam leaving the apparatus decreases. In this state the entire Rabi apparatus is tuned and adjusted until the intensity of the entering molecular beam is equal to the intensity of the exiting beam.

When the crossed magnetic fields of the Rabi analyzer are switched on, for a second time the intensity of the exiting beam decreases. Then, by adjustment of the relative positions and orientations of the three magnetic fields (and also adjustment of the detector position to optimally align with decoupled protons in the nucleus of interest) the intensity of the exiting beam is returned to its initial value.

During an operational run, the transverse oscillating field stimulates the emission of photons at the same frequency as that of the transverse oscillating magnetic field. The photon frequency is equal to the Larmor frequency of the nucleus, and the Larmor frequency divided by the strong magnetic field strength is equal to the gyromagnetic ratio. The Larmor frequency has a very sharp resonant peak limited only by the accuracy of the two experimental measurables: the intensity of the strong magnetic field and the frequency of the oscillating weak magnetic field.
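For comparison, the conventional (GAP) Larmor relation is proportional in the field, so the gyromagnetic ratio in frequency units is recovered by dividing the Larmor frequency by the field strength, matching the second relation stated above:

    \nu_L = \bar{\gamma} B \quad \Longrightarrow \quad \bar{\gamma} = \frac{\nu_L}{B}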

The gyromagnetic ratios of Li6, Li7, and F19, experimentally determined by Rabi in 1939, agree with the 2014 INDC16 values to better than 1 part in 60,000. Importantly, measurements of the gyromagnetic ratios of Li6 and Li7 were made in three different lithium molecules (LiCl, LiF, and Li2) requiring three separate operational runs, thereby demonstrating the Rabi analyzer was adjusted to optimally detect the nucleus of interest.

Modern determinations of spin are based on various types of spectroscopy, the results of which stand out as peaks in the collected data.

The magnetic flux of nuclei with an even number of Pprotons and Nprotons circulates in flux loops between pairs of Pprotons and pairs of Nprotons, and such nuclei do not have magnetic moments. The flux loops within nuclei with an odd number of Pprotons and/or Nprotons do have magnetic moments. In order for all nuclei of the same isotope to have zero or non-zero magnetic moments of the same amplitude, it is necessary for the magnetic flux loops to be circulating in the same plane.

All of the 106 selected magnetic nuclear isotopes from Lithium to Uranium, including all stable isotopes with atomic number (Z) greater than 2, plus a number of important isotopes with relatively long half-lives, belong to one of twelve different Types. The Type is determined based on the spin of the isotope and the number of odd and even Pprotons and Nprotons.

An isotope contains an internal physical structure to which the property of magnetic moment correlates, but the magnetic moment is not entirely determined by the internal physical structure of a nucleus. The property of interactional spin is that portion of the magnetic moment due to factors external to the nucleus, including electromagnetic radiation, magnetic fields, electric fields and excitation energy.

Of significance to the present discussion, the detectable magnetic properties of 82 of the 106 selected isotopes (the relative spatial orientations of the flux loops associated with the Pprotons and Nprotons) can be manipulated by four different orientations of directed planar electric fields.

The magnetic signatures of the 106 selected isotopes can be sorted into twelve isotope Types with seven spin values.

Spin ½ isotopes with an odd number of Pprotons and even number of Nprotons are Type A-0. Of the 106 selected isotopes, 10 are Type A-0.

Spin ½ isotopes with an even number of Pprotons and odd number of Nprotons (odd/even Reversed) are Type RA-0. Of the 106 selected isotopes, 14 are Type RA-0.

Spin 1 isotopes with an odd number of Pprotons and an odd number of Nprotons are Type B-1. Of the 106 selected isotopes, 2 are Type B-1.

Spin 3/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-1. Of the 106 selected isotopes, 18 are Type C-1.

Spin 3/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-1. Of the 106 selected isotopes, 12 are Type RC-1.

Spin 5/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-2. Of the 106 selected isotopes, 13 are Type C-2.

Spin 5/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-2. Of the 106 selected isotopes, 11 are Type RC-2.

Spin 3 isotopes with an odd number of Pprotons and an odd number of Nprotons are Type B-3. Of the 106 selected isotopes, 2 are Type B-3.

Spin 7/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type A-3. Of the 106 selected isotopes, 9 are Type A-3.

Spin 7/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RA-3. Of the 106 selected isotopes, 8 are Type RA-3.

Spin 9/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-4. Of the 106 selected isotopes, 3 are Type C-4.

Spin 9/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-4. Of the 106 selected isotopes, 4 are Type RC-4.
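The twelve cases above reduce to a lookup on spin together with the parity of the Pproton and Nproton counts, with the peak count following as 2S + 1. A minimal Python sketch (the function names are illustrative, not from the source):

    # Type lookup from the twelve cases enumerated above.
    # Key: (spin, odd number of Pprotons?, odd number of Nprotons?)
    TYPE_TABLE = {
        (0.5, True,  False): "A-0",  (0.5, False, True): "RA-0",
        (1.0, True,  True):  "B-1",
        (1.5, True,  False): "C-1",  (1.5, False, True): "RC-1",
        (2.5, True,  False): "C-2",  (2.5, False, True): "RC-2",
        (3.0, True,  True):  "B-3",
        (3.5, True,  False): "A-3",  (3.5, False, True): "RA-3",
        (4.5, True,  False): "C-4",  (4.5, False, True): "RC-4",
    }

    def isotope_type(spin: float, pprotons: int, nprotons: int) -> str:
        return TYPE_TABLE[(spin, pprotons % 2 == 1, nprotons % 2 == 1)]

    def peaks(spin: float) -> int:
        # Spin is (peaks - 1)/2 throughout this Part, so peaks = 2S + 1.
        return int(2 * spin + 1)

    # Example: 3Li7 has 3 Pprotons, 4 Nprotons and spin 3/2.
    assert isotope_type(1.5, 3, 4) == "C-1" and peaks(1.5) == 4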

Above, the horizontal line is in the inspection plane. The vertical line, the photon path to the Rabi analyzer, is parallel to the constant magnetic field. The circle indicates the diameter of the molecular beam, and the crosshairs indicate the velocity of the beam is directed into the paper.

A molecular beam is not needed for the operation of a Rabi analyzer; all that is required is for an analytical sample (gas or liquid phase), comprising a large number of molecules containing a larger number of nuclei enclosing an even larger number of particles, to be located at the intersection of the cross hairs.

The position of the horizontal inspection plane is irrelevant to Rabi analysis, but it is crucial for spectroscopic analysis of flux loops.

Above left, the molecular beam (directed into the paper in the previous illustration) is directed from right to left, and the photon path to the Rabi analyzer is in the same location as in the previous illustration.

For spectroscopic analysis, the inspection plane is the plane defined by the direction the molecular beam formerly passed and the direction of the positive electric field when pointing up.

Above right, the inspection plane for spectroscopic analysis is labelled at each corner. The dashed line in place of the former position of the molecular beam is an orthogonal axis (OA) passing through the direction of the positive side of the electric field when pointing up (UP), and passing through the direction of the spectroscopic detectors (SD).

The intersection of OA, UP and SD is the location where the analytical sample (gas or liquid phase) is placed in the inspection plane. The electric field that orients particle Q-axes is in the inspection plane.

The detection of ten of the twelve Types of magnetic signatures (in the 106 selected isotopes) requires one of four alignments of directed electric fields: the positive side of the electric field pointing up, the positive side of the electric field pointing right, the positive side of the electric field pointing down, or the positive side of the electric field pointing left.

The four possible alignments of the electric field are illustrated on either side of the inspection plane (but in operation the entire breadth of the electric field points in the same direction) and the directed lines on the edges of the inspection plane represent the positions of thin wire cathodes that produce planar electric fields.

Prior to an operational run, the spectroscopic detectors are adjusted to optimally detect the magnetic properties of the isotope to be analyzed.

Above is a summary of isotope magnetic signatures.

Column 1 lists the twelve magnetic isotope Types.

In column 2, with the P-axes of particles oriented by a constant magnetic field directed up in the direction of the magnetic north pole and in the absence of a directed electric field, the magnetic signatures due to flipping odd Pproton P-axes (the arrow on the left of the vignette) and odd Nproton P-axes (the arrow on the right of the vignette) are illustrated.

See below, in the detailed discussion of Type B-1, for the reason there is a zero instead of an arrow in Types B-1 and B-3.

The magnetic signatures due to flux loops in the presence of the four orientations of an electric field, are given in columns 3, 4, 5 and 6 for electric fields directed up, directed down, directed to the right, or directed to the left.

In illustrations of flux loop magnetic signatures, if the arrows are oriented up and down, the arrow on the left of the vignette represents the direction of Pproton flux loops and the arrow on the right represents the direction of Nproton flux loops; if the arrows are oriented left and right, the arrow on the top of the vignette represents the direction of Pproton flux loops and the arrow on the bottom represents the direction of Nproton flux loops.

In total there are six directed orthogonal planes in Cartesian space but only four of these are represented in columns 3, 4, 5 and 6. This omission is due to the elliptical planar shape of magnetic flux loops: the missing orientations provide edge-on views without detectable magnetic signatures.

Type A-0

7N15, with 7 Pprotons and 8 Nprotons, is the lowest atomic number Type A-0 isotope. In Type A-0 isotopes the flux loops associated with Pprotons and Nprotons lie in a directed Cartesian plane without detectable flux loop signatures.

In an analytical sample, 50% of the odd (unpaired) Pproton P-axes will be oriented in one direction and 50% in the opposite direction. The orientation of the magnetic axes of the odd Pproton are flipped by the transverse oscillating magnetic field and the spectroscopic detectors sense two different magnetic signatures resulting in two peaks corresponding to a spin of ½.

Above is the magnetic signature of Type A-0. The left arrow pointing up is the direction of the odd Pproton P-axis after emission of a photon (previously the constant magnetic field aligned the Pproton P-axis in this orientation, then absorption of intrinsic energy from the transverse oscillating magnetic field flipped the axis to pointing down then, due to the 180 degree rotation of the P-Q axes with respect to the direction of the G-axis, the absorbed intrinsic energy was released as a photon when the axis was flipped back to pointing up). The arrow pointing down is the antiparallel direction of the P-axis of a paired Nproton (which does not emit a photon).

The experimental detection of Type A-0 isotopes requires a constant magnetic field oriented in the direction of magnetic north.

Type RA-0

6C13, with 6 Pprotons and 7 Nprotons, is the lowest atomic number Type RA-0 isotope. In Type RA-0 isotopes the flux loops associated with Pprotons and Nprotons lie in a directed Cartesian plane without detectable flux loop signatures.

In an analytical sample, 50% of the odd (unpaired) Nproton P-axes will be oriented in one direction and 50% in the opposite direction. The orientation of the magnetic axes of the odd Nproton are flipped by the transverse oscillating magnetic field and the spectroscopic detectors produce two different magnetic signatures resulting in two peaks corresponding to a spin of ½.

Above is the magnetic signature of Type RA-0. The left arrow pointing up is the direction of the P-axis of a paired Pproton (which does not emit a photon). The right arrow pointing down is the direction of the odd Nproton P-axis after emission of a photon (previously the constant magnetic field aligned the Nproton P-axis in this orientation, then absorption of intrinsic energy from the transverse oscillating magnetic field flipped the axis to pointing up then, due to the 180 degree rotation of the P-Q axes with respect to the direction of the G-axis, the absorbed intrinsic energy was released as a photon when the axis was flipped back to pointing down).

The experimental detection of Type RA-0 isotopes requires a constant magnetic field oriented in the direction of magnetic north.

Type B-1

3Li6, with 3 Pprotons and 3 Nprotons, is the lowest atomic number Type B-1 isotope. In isotopes with an odd number of Pprotons and Nprotons, the odd Pproton interacts with the electron in the odd Nproton, preventing electron-Nproton decoupling by the constant magnetic field, and the odd Nproton P-axis is unable to be flipped by the transverse oscillating magnetic field. However, the electron-Pproton pair is decoupled, the orientation of the odd Pproton magnetic axis is flipped by the transverse oscillating magnetic field, and the spectroscopic detectors, adjusted to optimally recognize the magnetic signatures of 3Li6, sense one distinctive magnetic signature, resulting in one peak.

In Type B-1, the odd Nproton P-axis is unable to be flipped thus there is no magnetic signature due to the Nproton itself, but both the Nproton and the Pproton have associated flux loops and spectroscopic detectors can sense the magnetic signatures of the flux loops in the presence of a directed electric field pointing up.

In the analysis of isotopes with detectable flux loop signatures there are four possible orientations of the directed electric fields. The magnetic flux loops associated with Type-1 isotopes are detectable if the directed electric field is pointing up. The magnetic flux loops associated with Type-2 isotopes are detectable if the directed electric field is pointing down. The magnetic flux loops associated with Type-3 isotopes are detectable if the directed electric field is pointing right. The magnetic flux loops associated with Type-4 isotopes are detectable if the directed electric field is pointing left.

Each of these directed electric field orientations requires a different experiment; therefore the results of five experiments (including one experiment without directed electric fields) are needed to fully establish the Type of an unknown isotope.

The flux loops circulating through particle P-axes can pass through all radial planes. The radial flux planes in the above diagram are in the plane of the paper, demonstrating that, when detected from opposite directions, flux loops will be CW (directed right-left) or CCW (directed left-right).

Since Pprotons and Nprotons are oppositely aligned, a CW Pproton signature is identical to an Nproton CCW signature, and a CCW Pproton signature is identical to an Nproton CW signature.

Because the magnetic signatures of the particles in the field of view of a detector are differently oriented, on average 50% of the flux loop magnetic signatures will be CW and 50% CCW. Of the 50% of the CW signatures 25% will be due to Pprotons and 25% due to Nprotons, and of the 50% of the CCW signatures 25% will be due to Pprotons and 25% due to Nprotons.

Thus, there will be two different magnetic signatures resulting in two peaks, but we are unable to distinguish which is due to CW Pproton flux loops or CCW Nproton flux loops, and which is due to CCW Pproton flux loops or CW Nproton flux loops.

In Type B-1, the magnetic signature due to the odd Pproton (experimentally determined in the absence of an electric field) has one peak, and the magnetic signature due to flux loops associated with Pprotons and Nprotons (experimentally determined in an electric field oriented parallel to the magnetic field) has two peaks, totaling three peaks corresponding to a spin of 1.

Here we come to a fundamental issue. Is the uncertainty in situations involving linked physical properties (complementarity) described by probability, or is it caused by probability? In 1925 Werner Heisenberg theorized this type of uncertainty was caused by probability, and that opinion became, along with intrinsic spin, an important part of the foundation of new quantum theory.

In nature, the orientation of the magnetic signatures of isotopes and the orientation of the nuclei containing the particles responsible for the magnetic signatures are random. The magnetic signatures due to a large number of randomly oriented particles are indistinguishable from background noise, but under the proper experimental conditions, the magnetic signatures are discernable.

The magnetic signatures of flux loops, imperceptible in nature, are perceptible when the Q-axes of the associated particles are aligned.

A constant magnetic field is not needed to detect the magnetic signatures of flux loops, but the inspection plane used to detect the magnetic signatures of flux loops is in the identical position as in the Rabi analyzer, and the directed orthogonal plane pointing up in the direction of magnetic north in the Rabi analyzer is identical to the directed orthogonal plane pointing up in the direction of the positive electric field in the flux loops analyzer; that is, the direction of the electric field is parallel to the magnetic field.

Therefore, even though the magnetic field is not needed to detect the magnetic signatures of flux loops, if the magnetic field is present in addition to the directed electric field, its presence would not alter the experimental results, but it might provide additional information.

Here is a prediction of the present theory. If the experiment detecting the magnetic signature of Type B-1 is conducted in the presence of a constant magnetic field and a directed electric field pointing up, that one experiment will determine the magnetic signatures shown above plus two additional signatures: (1) the magnetic signature due to CW Pproton flux loops and CCW Nproton flux loops and (2) the magnetic signature due to CW Nproton flux loops and CCW Pproton flux loops.

This result would demonstrate the uncertainty in at least one situation involving linked physical properties is described by probability but is not caused by probability. This and other experiments yet to be devised will overturn the concept of causation by probability, and validate Einstein’s intuition that God “does not play dice with the universe.”17

Type C-1

3Li7, with 3 Pprotons and 4 Nprotons, is the lowest atomic number Type C-1 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. In total, Type C-1 isotopes have four peaks corresponding to a spin of 3/2.

Type RC-1

4Be9, with 4 Pprotons and 5 Nprotons, is the lowest atomic number RC-1 isotope.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. In total, Type RC-1 isotopes have four peaks corresponding to a spin of 3/2.

Type C-2

13Al27, with 13 Pprotons and 14 Nprotons, is the lowest atomic number Type C-2 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks.

In the identification of Type C-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. In total, Type C-2 isotopes have six peaks corresponding to a spin of 5/2.

Type RC-2

8O17, with 8 Pprotons and 9 Nprotons, is the lowest atomic number Type RC-2 isotope. 8O17 has one odd Nproton and no odd Pprotons.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks.

In the identification of Type RC-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. In total, Type RC-2 isotopes have six peaks corresponding to a spin of 5/2.

Type B-3

5B10, with 5 Pprotons and 5 Nprotons, is the lowest atomic number Type B-3 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks.

In the identification of Type B-3, the odd Pproton flux loops, determined in an electric field pointing right, have two peaks. In total, Type B-3 isotopes have seven peaks corresponding to a spin of 3.

Type A-3

21Sc45, with 21 Pprotons and 24 Nprotons, is the lowest atomic number Type A-3 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In total, Type A-3 isotopes have eight peaks corresponding to a spin of 7/2.

Type RA-3

20Ca43, with 20 Pprotons and 23 Nprotons, is the lowest atomic number Type RA-3 isotope.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type RC-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In total, Type RA-3 isotopes have eight peaks corresponding to a spin of 7/2.

Type C-4

41Nb93, with 41 Pprotons and 52 Nprotons, is the lowest atomic number Type C-4 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In the identification of Type C-4, the odd Nproton flux loops, determined in an electric field pointing left, have two peaks. In total, Type C-4 isotopes have ten peaks corresponding to a spin of 9/2.

Type RC-4

32Ge73, with 32 Pprotons and 41 Nprotons, is the lowest atomic number Type RC-4 isotope.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type RC-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In the identification of Type RC-4, the odd Nproton flux loops, determined in an electric field pointing left, have two peaks. In total, Type RC-4 isotopes have ten peaks corresponding to a spin of 9/2.


The 106 selected isotopes, grouped by Type:

Isotope   Z   N   Z+N   Spin   Peaks   Type
7N15      7   8   15    0.5    2       A-0
9F19      9   10  19    0.5    2       A-0
15P31     15  16  31    0.5    2       A-0
39Y89     39  50  89    0.5    2       A-0
45Rh103   45  58  103   0.5    2       A-0
47Ag109   47  62  109   0.5    2       A-0
47Ag107   47  60  107   0.5    2       A-0
69Tm169   69  100 169   0.5    2       A-0
81Tl203   81  122 203   0.5    2       A-0
81Tl205   81  124 205   0.5    2       A-0

6C13      6   7   13    0.5    2       RA-0
14Si29    14  15  29    0.5    2       RA-0
26Fe57    26  31  57    0.5    2       RA-0
34Se77    34  43  77    0.5    2       RA-0
48Cd111   48  63  111   0.5    2       RA-0
50Sn117   50  67  117   0.5    2       RA-0
50Sn115   50  65  115   0.5    2       RA-0
52Te125   52  73  125   0.5    2       RA-0
54Xe129   54  75  129   0.5    2       RA-0
74W183    74  109 183   0.5    2       RA-0
76Os187   76  111 187   0.5    2       RA-0
78Pt195   78  117 195   0.5    2       RA-0
80Hg199   80  119 199   0.5    2       RA-0
82Pb207   82  125 207   0.5    2       RA-0

3Li6      3   3   6     1.0    3       B-1
7N14      7   7   14    1.0    3       B-1

3Li7      3   4   7     1.5    4       C-1
5B11      5   6   11    1.5    4       C-1
11Na23    11  12  23    1.5    4       C-1
17Cl35    17  18  35    1.5    4       C-1
17Cl37    17  20  37    1.5    4       C-1
19K39     19  20  39    1.5    4       C-1
19K41     19  22  41    1.5    4       C-1
29Cu63    29  34  63    1.5    4       C-1
29Cu65    29  36  65    1.5    4       C-1
31Ga69    31  38  69    1.5    4       C-1
31Ga71    31  40  71    1.5    4       C-1
33As75    33  42  75    1.5    4       C-1
35Br79    35  44  79    1.5    4       C-1
35Br81    35  46  81    1.5    4       C-1
65Tb159   65  94  159   1.5    4       C-1
77Ir193   77  116 193   1.5    4       C-1
77Ir191   77  114 191   1.5    4       C-1
79Au197   79  118 197   1.5    4       C-1

4Be9      4   5   9     1.5    4       RC-1
10Ne21    10  11  21    1.5    4       RC-1
16S33     16  17  33    1.5    4       RC-1
24Cr53    24  29  53    1.5    4       RC-1
28Ni61    28  33  61    1.5    4       RC-1
54Xe131   54  77  131   1.5    4       RC-1
56Ba135   56  79  135   1.5    4       RC-1
56Ba137   56  81  137   1.5    4       RC-1
64Gd155   64  91  155   1.5    4       RC-1
64Gd157   64  93  157   1.5    4       RC-1
76Os189   76  113 189   1.5    4       RC-1
80Hg201   80  121 201   1.5    4       RC-1

13Al27    13  14  27    2.5    6       C-2
25Mn51    25  26  51    2.5    6       C-2
25Mn55    25  30  55    2.5    6       C-2
37Rb85    37  48  85    2.5    6       C-2
51Sb121   51  70  121   2.5    6       C-2
53I127    53  74  127   2.5    6       C-2
59Pr141   59  82  141   2.5    6       C-2
61Pm145   61  84  145   2.5    6       C-2
63Eu151   63  88  151   2.5    6       C-2
63Eu153   63  90  153   2.5    6       C-2
75Re185   75  110 185   2.5    6       C-2

8O17      8   9   17    2.5    6       RC-2
12Mg25    12  13  25    2.5    6       RC-2
22Ti47    22  25  47    2.5    6       RC-2
30Zn67    30  37  67    2.5    6       RC-2
40Zr91    40  51  91    2.5    6       RC-2
42Mo95    42  53  95    2.5    6       RC-2
42Mo97    42  55  97    2.5    6       RC-2
44Ru101   44  57  101   2.5    6       RC-2
44Ru99    44  55  99    2.5    6       RC-2
46Pd105   46  59  105   2.5    6       RC-2
66Dy161   66  95  161   2.5    6       RC-2
66Dy163   66  97  163   2.5    6       RC-2
70Yb173   70  103 173   2.5    6       RC-2

5B10      5   5   10    3.0    7       B-3
11Na22    11  11  22    3.0    7       B-3

21Sc45    21  24  45    3.5    8       A-3
23V51     23  28  51    3.5    8       A-3
27Co59    27  32  59    3.5    8       A-3
51Sb123   51  72  123   3.5    8       A-3
55Cs133   55  78  133   3.5    8       A-3
57La139   57  82  139   3.5    8       A-3
67Ho165   67  98  165   3.5    8       A-3
71Lu175   71  104 175   3.5    8       A-3
73Ta181   73  108 181   3.5    8       A-3

20Ca43    20  23  43    3.5    8       RA-3
22Ti49    22  27  49    3.5    8       RA-3
60Nd143   60  83  143   3.5    8       RA-3
60Nd145   60  85  145   3.5    8       RA-3
62Sm149   62  87  149   3.5    8       RA-3
68Er167   68  99  167   3.5    8       RA-3
72Hf177   72  105 177   3.5    8       RA-3
92U235    92  143 235   3.5    8       RA-3

41Nb93    41  52  93    4.5    10      C-4
49In113   49  64  113   4.5    10      C-4
83Bi209   83  126 209   4.5    10      C-4

32Ge73    32  41  73    4.5    10      RC-4
36Kr83    36  47  83    4.5    10      RC-4
38Sr87    38  49  87    4.5    10      RC-4
72Hf179   72  107 179   4.5    10      RC-4

The same 106 isotopes, ordered by atomic number:

Isotope   Z   N   Z+N   Spin   Peaks   Type
3Li6      3   3   6     1.0    3       B-1
3Li7      3   4   7     1.5    4       C-1
4Be9      4   5   9     1.5    4       RC-1
5B10      5   5   10    3.0    7       B-3
5B11      5   6   11    1.5    4       C-1
6C13      6   7   13    0.5    2       RA-0
7N14      7   7   14    1.0    3       B-1
7N15      7   8   15    0.5    2       A-0
8O17      8   9   17    2.5    6       RC-2
9F19      9   10  19    0.5    2       A-0
10Ne21    10  11  21    1.5    4       RC-1
11Na23    11  12  23    1.5    4       C-1
11Na22    11  11  22    3.0    7       B-3
12Mg25    12  13  25    2.5    6       RC-2
13Al27    13  14  27    2.5    6       C-2
14Si29    14  15  29    0.5    2       RA-0
15P31     15  16  31    0.5    2       A-0
16S33     16  17  33    1.5    4       RC-1
17Cl35    17  18  35    1.5    4       C-1
17Cl37    17  20  37    1.5    4       C-1
19K39     19  20  39    1.5    4       C-1
19K41     19  22  41    1.5    4       C-1
20Ca43    20  23  43    3.5    8       RA-3
21Sc45    21  24  45    3.5    8       A-3
22Ti47    22  25  47    2.5    6       RC-2
22Ti49    22  27  49    3.5    8       RA-3
23V51     23  28  51    3.5    8       A-3
24Cr53    24  29  53    1.5    4       RC-1
25Mn51    25  26  51    2.5    6       C-2
25Mn55    25  30  55    2.5    6       C-2
26Fe57    26  31  57    0.5    2       RA-0
27Co59    27  32  59    3.5    8       A-3
28Ni61    28  33  61    1.5    4       RC-1
29Cu63    29  34  63    1.5    4       C-1
29Cu65    29  36  65    1.5    4       C-1
30Zn67    30  37  67    2.5    6       RC-2
31Ga69    31  38  69    1.5    4       C-1
31Ga71    31  40  71    1.5    4       C-1
32Ge73    32  41  73    4.5    10      RC-4
33As75    33  42  75    1.5    4       C-1
34Se77    34  43  77    0.5    2       RA-0
35Br79    35  44  79    1.5    4       C-1
35Br81    35  46  81    1.5    4       C-1
36Kr83    36  47  83    4.5    10      RC-4
37Rb85    37  48  85    2.5    6       C-2
38Sr87    38  49  87    4.5    10      RC-4
39Y89     39  50  89    0.5    2       A-0
40Zr91    40  51  91    2.5    6       RC-2
41Nb93    41  52  93    4.5    10      C-4
42Mo95    42  53  95    2.5    6       RC-2
42Mo97    42  55  97    2.5    6       RC-2
44Ru101   44  57  101   2.5    6       RC-2
44Ru99    44  55  99    2.5    6       RC-2
45Rh103   45  58  103   0.5    2       A-0
46Pd105   46  59  105   2.5    6       RC-2
47Ag107   47  60  107   0.5    2       A-0
47Ag109   47  62  109   0.5    2       A-0
48Cd111   48  63  111   0.5    2       RA-0
49In113   49  64  113   4.5    10      C-4
50Sn115   50  65  115   0.5    2       RA-0
50Sn117   50  67  117   0.5    2       RA-0
51Sb121   51  70  121   2.5    6       C-2
51Sb123   51  72  123   3.5    8       A-3
52Te125   52  73  125   0.5    2       RA-0
53I127    53  74  127   2.5    6       C-2
54Xe129   54  75  129   0.5    2       RA-0
54Xe131   54  77  131   1.5    4       RC-1
55Cs133   55  78  133   3.5    8       A-3
56Ba135   56  79  135   1.5    4       RC-1
56Ba137   56  81  137   1.5    4       RC-1
57La139   57  82  139   3.5    8       A-3
59Pr141   59  82  141   2.5    6       C-2
60Nd143   60  83  143   3.5    8       RA-3
60Nd145   60  85  145   3.5    8       RA-3
61Pm145   61  84  145   2.5    6       C-2
62Sm149   62  87  149   3.5    8       RA-3
63Eu151   63  88  151   2.5    6       C-2
63Eu153   63  90  153   2.5    6       C-2
64Gd155   64  91  155   1.5    4       RC-1
64Gd157   64  93  157   1.5    4       RC-1
65Tb159   65  94  159   1.5    4       C-1
66Dy161   66  95  161   2.5    6       RC-2
66Dy163   66  97  163   2.5    6       RC-2
67Ho165   67  98  165   3.5    8       A-3
68Er167   68  99  167   3.5    8       RA-3
69Tm169   69  100 169   0.5    2       A-0
70Yb173   70  103 173   2.5    6       RC-2
71Lu175   71  104 175   3.5    8       A-3
72Hf177   72  105 177   3.5    8       RA-3
72Hf179   72  107 179   4.5    10      RC-4
73Ta181   73  108 181   3.5    8       A-3
74W183    74  109 183   0.5    2       RA-0
75Re185   75  110 185   2.5    6       C-2
76Os187   76  111 187   0.5    2       RA-0
76Os189   76  113 189   1.5    4       RC-1
77Ir191   77  114 191   1.5    4       C-1
77Ir193   77  116 193   1.5    4       C-1
78Pt195   78  117 195   0.5    2       RA-0
79Au197   79  118 197   1.5    4       C-1
80Hg199   80  119 199   0.5    2       RA-0
80Hg201   80  121 201   1.5    4       RC-1
81Tl203   81  122 203   0.5    2       A-0
81Tl205   81  124 205   0.5    2       A-0
82Pb207   82  125 207   0.5    2       RA-0
83Bi209   83  126 209   4.5    10      C-4
92U235    92  143 235   3.5    8       RA-3

In GAP, the gyromagnetic ratio of a nucleus is equal to the product of the INDC isotope g-factor and the CODATA nuclear magneton divided by the product of the INDC intrinsic spin and the CODATA reduced Planck constant, and the magnetic moment of a nucleus is equal to the product of the INDC isotope g-factor and the CODATA nuclear magneton.

In discrete physics, the magnetic moment of a nucleus is equal to the product of two times the interactional spin (which converts spin to the number of odd Pprotons and/or odd Nprotons), the kinetic steric factor (which converts molecular beam thermal energy into Joules), Lambda-bar, and the GAP value for the gyromagnetic ratio (assumed correct).
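Rendered symbolically (g is the INDC g-factor, μ_N the CODATA nuclear magneton, S the INDC intrinsic spin, ħ the reduced Planck constant, S_i the interactional spin, k the kinetic steric factor, and λ̄ Lambda-bar; the symbols are supplied here for clarity):

    \gamma_{GAP} = \frac{g\,\mu_N}{S\,\hbar}, \qquad
    \mu_{GAP} = g\,\mu_N, \qquad
    \mu_{discrete} = 2\,S_i\,k\,\bar{\lambda}\,\gamma_{GAP}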

In the 106 isotopes tested, the ratio of the INDC isotope magnetic moment divided by the value denominated in discrete units is equal to 1.0288816.

The difference can be narrowed by adjustment but cannot be eliminated because CODATA constants are not exactly reconciled.

Part Four

Particle acceleration

Einstein believed mass was constant, and many of his revolutionary discoveries were based on that concept. Constancy of mass is an eminently reasonable assumption because Newtonian equations are also founded on mass conservation, and in the majority of situations Newton’s equations accurately predict the observables. But in fact, as Newton himself succinctly expressed in his letter to Richard Bentley, his equations do not correspond to physical reality.18

Einstein also believed the speed of light was constant and, since kinetic energy is proportional to mass and velocity, he concluded that the mass of a particle increases with velocity and approaches (but never reaches) a maximum value as the velocity approaches the speed of light. In special relativity he was able to derive, in a few simple equations, the relativistic momentum and energy (mass-energy) of a particle.

In general relativity, Einstein’s field equations described the curvature of space-time in intense gravitational fields in agreement with the measured value for the precession of the perihelion of Mercury. It seems likely the field equations were derived with that result in mind. Even so, this approach is eminently justifiable because measurables are valid assumptions for a physical theory.

Einstein’s prediction that the curvature of space-time in intense gravitational fields was not only responsible for the precession of the perihelion of Mercury but would also bend rays of light was verified in two astronomical expeditions led by Arthur Eddington and Andrew Crommelin. Their observations were acclaimed as verification of general relativity, and today the curvature of space-time is considered by most scientists to be undisputed.

Unfortunately, this undisputed theory cannot determine the velocity of a relativistically accelerated electron or proton and does not provide a mechanism for the increase in energy and mass (mass-energy).

The present theory derives the velocity and mass-energy of accelerated electrons and protons, and provides a mechanism.

In particle acceleration, charged particles are electrostatically formed into a linear beam and accelerated, then injected into a circular accelerator (or cyclotron) where they are magnetically formed into a circular beam and further accelerated by oscillating magnetic fields. Particle acceleration in linear and circular beams is mediated by chirality meshing interactions.

An electrostatic voltage is the emission of quantons:

  • In electrostatic acceleration of negatively charged particles between a negative cathode on the left emitting CCW quantons and a positive anode on the right emitting CW quantons, chirality meshing absorptions of CCW quantons result in repulsive deflections (voltage acceleration) to the right and chirality meshing absorptions of CW quantons result in attractive deflections (voltage acceleration) to the right.
  • If positively charged particles are between a negative cathode on the left emitting CCW quantons and a positive anode on the right emitting CW quantons, chirality meshing absorptions of CCW quantons result in attractive deflections (voltage acceleration) to the left and chirality meshing absorptions of CW quantons result in repulsive deflections (voltage acceleration) to the left.

Quantons are also produced transverse to a magnetic field with CCW quantons emitted by the magnetic North pole and CW quantons emitted by the magnetic South pole:

  • In acceleration by a transverse oscillating magnetic field, charged particles are alternately pushed (repulsively deflected) from one direction and pulled (attractively deflected) from the opposite direction.
  • Negatively charged particles are alternately pushed (deflected in the direction of the positive anode) due to the absorption of CCW quantons and pulled (deflected in the direction of the positive anode) due to the absorption of CW quantons.
  • Positively charged particles are alternately pulled (deflected in the direction of the negative cathode) due to the absorption of CCW quantons, and pushed (deflected in the direction of the negative cathode) due to the absorption of CW quantons.

In either case (electrostatic voltage or oscillating magnetic voltage) the energy of simultaneous acceleration by oppositely directed voltages is proportional to the square of the voltage.

A chirality meshing absorption of a quanton increases the intrinsic energy of a particle and produces an intrinsic deflection that increases the particle velocity. Like kinetic acceleration, an intrinsic deflection increases the velocity but does so without the dissipation of kinetic energy.

The number of particles and quantons is directly proportional to the intrinsic Josephson constant: 3.0000E15 quantons are absorbed by 3.0000E15 particles per second per Volt. At 400 Volts 1.2000E18 quantons are absorbed by 1.2000E18 particles per second; and at 250,000 Volts 7.5000E20 quantons are absorbed by 7.5000E20 particles per second.

Each quanton absorption produces a deflection (acceleration) equal to the square root of Lambda-bar divided by the particle amplitude. Quanton absorption by an electron produces a deflection of 2.5327E-18 meters, and quanton absorption by a proton produces a deflection of 2.0680E-19 meters.
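For illustration, the two stated deflections are consistent with reading the rule as d = sqrt(Lambda-bar / amplitude), with an electron amplitude of 1 and the proton amplitude of 150 used elsewhere in this document; the numeric value of Lambda-bar below is inferred from the stated electron deflection rather than quoted from the text:

    import math

    LAMBDA_BAR = 2.5327e-18 ** 2   # ~6.4146E-36, inferred from the electron deflection

    def deflection(amplitude):
        """Deflection (meters) of one chirality meshing interaction."""
        return math.sqrt(LAMBDA_BAR / amplitude)

    print(deflection(1))     # electron: 2.5327E-18 m
    print(deflection(150))   # proton:   ~2.0680E-19 m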

The number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar. The intrinsic energy absorbed by a particle in a chirality meshing interaction is equal to the product of the number of chirality meshing interactions and Lambda-bar, divided by the number of particles. The accelerated particle intrinsic energy is equal to the sum of the particle intrinsic energy plus the intrinsic energy absorbed by the particle in a chirality meshing interaction.

The kinetic mass-energy in units of Joule is equal to the product of the accelerated particle intrinsic energy, the square of the photon velocity, and the ratio of the discrete Planck constant divided by Lambda-bar.

Electron acceleration

Below left, the GAP equation for electron velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the CODATA elementary charge (units of Coulomb) and the voltage, divided by the CODATA electron mass (units of kilogram).

Above right, the discrete equation for electron velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the charge intrinsic energy and the voltage, divided by the electron intrinsic energy.
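The referenced equations are not reproduced in this excerpt; typeset in conventional notation (the symbols here are editorial: e and m_e are the CODATA elementary charge and electron mass, E_q the charge intrinsic energy, E_e the electron intrinsic energy, V the voltage) they read:

$$v_{\text{GAP}} = \sqrt{\frac{2\,e\,V}{m_e}} \qquad\qquad v_{\text{discrete}} = \sqrt{\frac{2\,E_q\,V}{E_e}}$$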

The velocity calculated by the GAP equation is higher than the discrete equation by a factor of 1.007697. The difference can be narrowed by adjustment but cannot be eliminated because CODATA constants are not reconciled.

The analysis of electron acceleration includes a range of ten voltages between a minimum voltage and a maximum voltage. The maximum voltage is equal to a few millivolts less than the theoretical voltage required to accelerate an electron to the photon velocity (an impossibility), which, if calculated to fifteen significant digits, is 259807.621135332 Volts.
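As a cross-check (assuming, as the discrete figures suggest, a photon velocity of exactly 3.0E8 m/s, a value not quoted in this excerpt), the GAP equation evaluated at this maximum voltage overshoots the photon velocity by roughly the 1.0077 factor noted above:

    import math

    e, m_e = 1.602176634e-19, 9.1093837015e-31   # CODATA values
    V_MAX = 259807.621135332
    v_gap = math.sqrt(2 * e * V_MAX / m_e)       # ~3.023E8 m/s
    print(v_gap / 3.0e8)                          # ~1.0077, cf. 1.007697 above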

Top row column 1, the voltages used in this example analysis are 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts. The highest voltage, calculated to thirteen significant digits, exactly converts to the photon velocity (an impossibility) to eleven significant digits but is less than the photon velocity (the correct result) at 12 significant digits (this is an excellent example of a discretely exact property).

The equations following, calculations for 100 Volts, are identical to the equations for any other of the nine voltages, or for any other range of ten voltages greater than zero and less than the theoretical maximum.

Top row column 2, the calculated electron velocity per the discrete equation.

Top row column 3, the number of accelerated (deflected) electrons is equal to the ratio of the voltage divided by the intrinsic electron magnetic flux quantum.

Top row column 4, the deflection per quanton is equal to the square root of Lambda-bar divided by the electron amplitude.

This is the deflection of a chirality meshing interaction between a quanton and an electron.

Bottom row column 1, the number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar.

Bottom row column 2, the increase in intrinsic energy per electron due to chirality meshing interactions, equal to the product of the number of chirality meshing interactions and Lambda-bar divided by the number of electrons, is denominated in units of Einstein.

Bottom row column 3, the accelerated electron energy is equal to the sum of the electron intrinsic energy and the increase in intrinsic energy per electron.

Bottom row column 4, the mass-energy in units of Joule is equal to the product of the accelerated electron intrinsic energy, the square of the photon velocity and the ratio of the discrete Planck constant divided by Lambda-bar.
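The walkthrough above can be condensed into a short sketch (shown for the electron at 100 Volts; the proton table in the next section follows the same steps with an amplitude of 150). The per-volt particle count of 3.0E15 restates the intrinsic Josephson constant, and sqrt(Lambda-bar) is taken from the stated electron deflection; the particle intrinsic energy is left as an input because its discrete value is defined elsewhere in the document, not in this excerpt:

    import math

    SQRT_LB = 2.5327e-18          # sqrt(Lambda-bar), meters
    LB = SQRT_LB ** 2

    def pipeline(voltage, intrinsic_energy, amplitude=1):
        n_particles = voltage * 3.0e15              # top row, column 3
        deflection = math.sqrt(LB / amplitude)      # top row, column 4
        n_meshing = voltage ** 2 / SQRT_LB          # bottom row, column 1
        delta = n_meshing * LB / n_particles        # bottom row, column 2 (Einstein)
        accelerated = intrinsic_energy + delta      # bottom row, column 3
        return deflection, n_meshing, delta, accelerated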

Proton acceleration

The analysis of proton acceleration includes a range of ten voltages between a minimum voltage and a maximum voltage. For purposes of comparison, we specify the same voltages as used for the electron.

The theoretical voltage required to accelerate a proton to the photon velocity (an impossibility) is 38971143.1702997 Volts. Any voltage less than this theoretical maximum will accelerate a proton to less than the photon velocity.


Below left, the GAP equation for proton velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the CODATA elementary charge (units of Coulomb) and the voltage, divided by the CODATA proton mass (units of kilogram).

Above right, the discrete equation for proton velocity, due to electrostatic or electromagnetic voltage, is equal to the square root of the ratio of the product of 2, the charge intrinsic energy (in units of intrinsic Volt) and the voltage, divided by the proton intrinsic energy (in units of Einstein).

The discrete proton velocity is lower than the discrete electron velocity by the square root of 150 (the square root of the proton amplitude).

Top row column 1, the voltages used in this example analysis are 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts. These are the same voltages used for the electron; all of them are far below the proton’s theoretical maximum, so the accelerated proton velocity remains well under the photon velocity.

The equations following, calculations for 100 Volts, are identical to the equations for any other of the nine voltages, or for any other range of ten voltages greater than zero and less than the theoretical maximum.

Top row column 2, the calculated proton velocity per the discrete equation.

Top row column 3, the number of accelerated (deflected) protons is equal to the ratio of the voltage divided by the intrinsic electron magnetic flux quantum.

Top row column 4, the deflection per quanton is equal to the square root of Lambda-bar divided by the proton amplitude.

This is the deflection of a chirality meshing interaction between a quanton and a proton.

Bottom row column 1, the number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar.

Bottom row column 2, the increase in intrinsic energy per proton due to chirality meshing interactions, equal to the product of the number of chirality meshing interactions and Lambda-bar divided by the number of protons, is denominated in units of Einstein.

Bottom row column 3, the accelerated proton energy is equal to the sum of the intrinsic proton energy and the increase in intrinsic energy per proton.

Bottom row column 4, the mass-energy in units of Joule is equal to the product of the accelerated proton intrinsic energy, the square of the photon velocity and the ratio of the discrete Planck constant divided by Lambda-bar.

Part Five

Atomic Spectra

The Rydberg equations correspond to high accuracy with the hydrogen spectral series and the Newtonian equations correspond to high accuracy with orbital motion but, despite many years of considerable effort, physicists have been unable to account for the spectrum of helium or for non-Newtonian stellar rotation curves.

Previously, we reformulated the Newtonian equations and explained stellar rotation curves. In this chapter we will reformulate the Rydberg equations for the spectral series of hydrogen and derive a general explanation for atomic spectra.

The equation formulated by Johann Balmer in 1885, in which the hydrogen spectrum wave numbers are proportional to the product of a constant and the difference between the inverse square of two integers, is correct, but the Bohr Model is not.

The Bohr Model fails on several counts:

  • The electron is not a point particle.
  • The electron does not orbit the proton.
  • The force conveyed by an electron is not transmitted an infinite distance, and at an infinitesimal distance the force is not infinite.
  • Electrons with lower energy and lower wave number are closer to the proton, and electrons with higher energy and higher wave number are further away from the proton (the Bohr distance-energy relationship must be reversed).

In hydrogen an electron and proton are engaged in a positional resonance. In atoms larger than hydrogen many electrons and protons are engaged in positional resonances. Each resonance is between one electron external to the nucleus and one proton internal to the nucleus, in which the electron and the nuclear proton are facing in opposite directions and each particle emits quantons that are absorbed by the other particle. On emission by the electron the quanton is CCW and on emission by the nuclear proton the quanton is CW. On emission the emitting particle recoils by a distance proportional to the particle intrinsic energy and on absorption the absorbing particle is attractively deflected (a chirality meshing interaction) by a distance proportional to the particle intrinsic energy. The result is a sustained positional resonance of a CCW quanton emitted in one direction by the electron and absorbed by the nuclear proton and a CW quanton emitted in the opposite direction by the nuclear proton and absorbed by the electron.

In the hydrogen atom, the resonance can be situated at any one of several quantized positions proportional to energy and corresponding to spectral emission and absorption lines. On emission of a photon the energy of the resonance decreases, and the electron drops to the adjacent lower energy level. On absorption of a photon the energy of the resonance increases, and the electron jumps to the adjacent higher energy level. The highest stable energy level, corresponding to an emission-only line, the maximum electron-proton separation distance beyond which the positional resonance no longer exists, is the hydrogen ionization energy.

The above paragraphs summarize the spectral mechanism which, for the time being, shall be considered a hypothesis.

The intrinsic to kinetic energy factor can be written in three equivalent forms: the ratio of the discrete Planck constant divided by the Coulomb to the ratio of Lambda-bar divided by the charge intrinsic energy; the ratio of the discrete Planck constant to the product of Lambda-bar and the square root of the proton amplitude divided by two; and two times the intrinsic steric factor.

The ionization energy of hydrogen (in larger atoms the ionization energy required to remove the last electron) is a discretely exact single value above which the atom no longer exists. The measured energy of hydrogen ionization is 1312 kJ/mol, and the corresponding CRC value is 13.59844 (units of kinetic electron Volts).19 Kinetic electron Volts divided by Omega-2 equals intrinsic Volts (units of Joule), which divided by 12 (the intrinsic to kinetic energy factor) equals intrinsic Volts (units of Einstein), which multiplied by the intrinsic electron charge equals intrinsic energy, which divided by Lambda-bar is equal to the photon frequency of hydrogen ionization.

Working backwards from the calculation sequences above, the discretely exact value of the photon ionization frequency is 3.28000000E15.

The intrinsic energy of hydrogen ionization, denominated in units of Einstein, is equal to the product of the photon frequency and Lambda-bar.

The intrinsic energy of hydrogen ionization, denominated in units of Joule, is equal to the product of the photon frequency and the discrete Planck constant.

The intrinsic voltage of hydrogen ionization, denominated in units of Einstein, is equal to the product of the photon frequency and Lambda-bar, divided by the charge intrinsic energy.

The ratio of the intrinsic voltage of hydrogen ionization divided by Psi is equal to the discrete Rydberg constant and denominated in units of inverse meter (spatial frequency).

The intrinsic voltage of hydrogen ionization, denominated in units of Joule, is equal to the product of 12 (the intrinsic to kinetic energy factor) and the discrete Rydberg constant, and the product of the photon frequency and the discrete Planck constant, divided by the Coulomb.

The kinetic voltage of hydrogen ionization, denominated in units of electron Volt, is equal to the product of the intrinsic voltage of hydrogen ionization and omega-2.

The difference between the above calculated energy of ionization and the CRC value is less than 0.30%. The poor accuracy is due to the performance standards of calorimeters.20 In the measurement of a sample against a calibration standard, a statistical analysis of the results will show the data lie within three standard deviations (sigma-3) of the mean (the expected value) and the accuracy will be 0.15% (99.85% of the measurements will lie in the range of higher than the calibration standard by no more than 0.15% or lower than the calibration standard by no more than 0.15%). If the identical procedure is used without prior knowledge of the expected result and whether the measurement is higher or lower than the actual value is unknown, the accuracy falls to no more than 0.30%.

The difference between the calculated kinetic voltage of hydrogen ionization and the measured CRC value, expressed as a percentage of the CRC value, is 0.2666%.

Spectral series consist of a number of emission-absorption lines with a lower limit on the left and an upper limit on the right. Both limits are asymptotes: the lower limit corresponds to minimum energy, minimum frequency, and maximum wavelength; and the upper limit corresponds to maximum energy, maximum frequency, and minimum wavelength.

The below diagram of the Lyman spectral series consists of seven black emission-absorption lines to the left and a red emission-only line on the right. From left to right these lines are the Lyman lower limit (Lyman-A), Lyman-B, Lyman-C, Lyman-D, Lyman-E, Lyman-F, Lyman-G, and the Lyman upper limit.

The Rydberg equation expresses the wave numbers of the hydrogen spectrum equal to the product of the discrete Rydberg constant and the difference between the inverse square of the m-index minus the inverse square of the n-index.
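Typeset, with $\tilde{\nu}$ the wave number and $R$ the discrete Rydberg constant, the equation reads:

$$\tilde{\nu} = R\left(\frac{1}{m^{2}} - \frac{1}{n^{2}}\right), \qquad n = m+1,\ m+2,\ \ldots$$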

The m-index has a constant value for each spectral series within the hydrogen spectrum. The six series ordered by highest energy (at the series upper limit) are Lyman, Balmer, Paschen, Brackett, Pfund and Humphreys.

Each line of a spectral series can be expressed in terms of energy, wave number, wavelength and photon frequency. The energy, wave number, and frequency increase from left to right, but the wavelength decreases from left to right.

For each spectral series the m-index increases from lowest to highest positional energy (Lyman = 1, Balmer = 2, Paschen = 3, Brackett = 4, Pfund = 5, Humphreys = 6). Each spectral series is composed of a sequence of lines (A, B, C, D, E, F, G) in which the n-index is equal to m+1, m+2, m+3, m+4, etc.

In the following analysis we will apply the Rydberg formula to calculate, based on the discretely exact value of the photon ionization frequency of 3.280000E15, the values for energy, wave number and frequency of the six spectral series of hydrogen.

The below calculations begin with the discretely exact values for the Lyman limit photon frequency and the hydrogen ionization energy (intrinsic voltage units of Joule), and the value of the discrete Rydberg constant.

The Lyman upper limit is an emission-only line because at any energy above the Lyman upper limit the hydrogen atom no longer exists. The calculation for the line prior to the Lyman upper limit is based on an n-index equal to 8, but there are additional discernible lines after Lyman-G because the Lyman upper limit is an asymptote. The identical situation holds for the limit of any spectral series.

The spectral series lower limit, the A-line (Lyman-A, Balmer-A, etc.) is also an asymptote and there are additional discernible lines between the C-line and the A-line. The number of lines included in a spectral series analysis is optional, but it is convenient to use the same number of lines in spectral series to be compared.

In this presentation, 8 Lyman and Balmer lines are included because these lines are specified in at least one of the easily available online sources. In the Paschen, Brackett, Pfund and Humphreys spectral series, 6 lines are included because these are also easily available.21

The ratio of the Lyman upper limit divided by the upper limit of another hydrogen spectral series is equal to the square of the m-index of the other series:

  • The Lyman upper limit divided by the Balmer upper limit is equal to 4.
  • The Lyman upper limit divided by the Paschen upper limit is equal to 9.
  • The Lyman upper limit divided by the Brackett upper limit is equal to 16.
  • The Lyman upper limit divided by the Pfund upper limit is equal to 25.
  • The Lyman upper limit divided by the Humphreys upper limit is equal to 36.
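These ratios follow directly from the Rydberg expression: the upper limit of a series is its n → infinity value, R/m², so the Lyman limit (m = 1) divided by any other series limit is simply m². A one-line check:

    for name, m in [("Balmer", 2), ("Paschen", 3), ("Brackett", 4),
                    ("Pfund", 5), ("Humphreys", 6)]:
        print(name, (1 / 1**2) / (1 / m**2))   # 4, 9, 16, 25, 36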

The ratio of the Lyman spectral series upper limit divided by the Lyman spectral series lower limit is equal to the ratio of the Rydberg wave number calculation for the upper limit divided by the Rydberg wave number calculation for the lower limit.

In all spectral series the Rydberg ratio is equal to the upper limit energy divided by the lower limit energy, the ratio of the upper limit structural frequency divided by the lower limit structural frequency, and the ratio of the lower limit wavelength divided by the upper limit wavelength.

The ratio of the Balmer spectral series upper limit divided by the Balmer spectral series lower limit is equal to the ratio of the Rydberg wave number calculation for the upper limit divided by the Rydberg wave number calculation for the lower limit.

The same calculation is used for the other four hydrogen spectral series:

  • The ratio of the Paschen spectral series upper limit divided by the Paschen lower limit is equal to 1312/574 = 16/7 (2.285714).
  • The ratio of the Brackett spectral series upper limit divided by the Brackett lower limit is equal to 25/9 (2.777777).
  • The ratio of the Pfund spectral series upper limit divided by the Pfund lower limit is equal to 36/11 (3.272727).
  • The ratio of the Humphreys spectral series upper limit divided by the Humphreys lower limit is equal to 49/13 (3.769230).
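The same Rydberg bookkeeping explains these values: the upper limit is R/m² and the lower limit (the A-line, n = m+1) is R(1/m² − 1/(m+1)²), so the ratio reduces to (m+1)²/(2m+1):

    for name, m in [("Lyman", 1), ("Balmer", 2), ("Paschen", 3),
                    ("Brackett", 4), ("Pfund", 5), ("Humphreys", 6)]:
        upper = 1 / m**2
        lower = 1 / m**2 - 1 / (m + 1)**2
        print(name, upper / lower)   # 4/3, 9/5, 16/7, 25/9, 36/11, 49/13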

Above, the frequencies under the A, B, C, D, E, F, G-lines and the series limit are the positional structural frequencies, and the transition frequencies between lines (B-A, C-B … F-E, G-F) are the photon emission-absorption frequencies.

The structural frequency of the G-line is equal to the product of the Rydberg calculated wave number and the photon velocity. The energy of the G-line (intrinsic Volts units of Joule) is equal to the product of the structural frequency of the G-line and the discrete Planck constant, divided by the Coulomb.

The structural frequency of the F-line is equal to the product of the Rydberg calculated wave number and the photon velocity. The energy of the F-line (intrinsic Volts units of Joule) is equal to the product of the structural frequency of the F-line and the discrete Planck constant, divided by the Coulomb.

The photon emission-absorption frequency of the G-F transition is equal to the structural frequency of the G-line minus the structural frequency of the F-line. The energy of the G-F transition (intrinsic Volts units of Joule) is equal to the energy of the G-line minus the energy of the F-line.

The identical process is used to calculate the emission-absorption frequencies and energies for all spectral series.

Note there is no transition frequency or energy between the G-line and the series limit because the series limit is emission-only.

Lyman series transition photons identical to Balmer series photons:

  • When a Lyman-C positional resonance drops down to Lyman-B, the Lyman-C energy is emitted as two photons: a 11.662222 Vi(J) Lyman-B photon frequency 2.915555E15 and a 0.637777 Vi(J) Lyman C-B photon frequency 1.594444E14. The frequency and wavelength of the transition photon is identical to the Balmer B-A transition photon.
  • When a Lyman-D positional resonance drops down to Lyman-C, the Lyman-D energy is emitted as two photons: a 12.300000 Vi(J) Lyman-C photon frequency 3.075000E15 and a 0.295200 Vi(J) Lyman D-C photon frequency 7.380000E13. The frequency and wavelength of the transition photon is identical to the Balmer C-B transition photon.
  • When a Lyman-E positional resonance drops down to Lyman-D, the Lyman-E energy is emitted as two photons: a 12.595200 Vi(J) Lyman-D photon frequency 3.148800E15 and a 0.160356 Vi(J) Lyman E-D photon frequency 4.008888E13. The frequency and wavelength of the transition photon is identical to the Balmer D-C transition photon.
  • When a Lyman-F positional resonance drops down to Lyman-E, the Lyman-F energy is emitted as two photons: a 12.755555 Vi(J) Lyman-E photon frequency 3.188888E15 and a 0.096689 Vi(J) Lyman F-E photon frequency 2.41723E13. The frequency and wavelength of the transition photon is identical to the Balmer E-D transition photon.
  • When a Lyman-G positional resonance drops down to Lyman-F, the Lyman-G energy is emitted as two photons: a 12.852245 Vi(J) Lyman-F photon frequency 3.21306E15 and a 0.062755 Vi(J) Lyman G-F photon frequency 1.568878E13. The frequency and wavelength of the transition photon is identical to the Balmer F-E transition photon.
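The bulleted equivalences can be verified from the discretely exact 3.28E15 Hz alone, since every structural frequency is 3.28E15 × (1/m² − 1/n²):

    F0 = 3.28e15   # discretely exact hydrogen resonance frequency, Hz

    def f(m, n):
        return F0 * (1 / m**2 - 1 / n**2)

    # Lyman C-B transition vs Balmer B-A transition (both 1.594444E14 Hz):
    print(f(1, 4) - f(1, 3), f(2, 4) - f(2, 3))
    # Lyman D-C transition vs Balmer C-B transition (both 7.380000E13 Hz):
    print(f(1, 5) - f(1, 4), f(2, 5) - f(2, 4))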

The equivalence of Balmer-A and Lyman series transitions can be extended to the Paschen, Brackett, Pfund and Humphreys series.

The Lyman C-B transition is equal to the energy and frequency of Paschen-A.

The Lyman D-C transition is equal to the energy and frequency of Brackett-A.

The Lyman E-D transition is equal to the energy and frequency of Pfund-A.

The Lyman F-E transition is equal to the energy and frequency of Humphreys-A.

An explanation of atomic spectra begins with the ionization energies.

In atoms with more than one proton, the discretely exact energy (in red) for elemental ionization, above which the atom no longer exists, is equal to the product of the square of the number of protons and the discretely exact value for the hydrogen ionization energy. The intermediate ionization energies (in blue) are equal to the CRC value divided by omega-2.

The ionization frequency is equal to the product of the ionization energy and the Coulomb divided by the discrete Planck constant.

The ionization wave number is equal to the ionization frequency divided by the photon velocity.

The photon wavelength is the inverse of the wave number.
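A minimal sketch of the four-step chain above, assuming round discrete values that are not quoted in this excerpt (a discrete Planck constant of 6.4E-34, a Coulomb of 1.6E-19, a photon velocity of 3.0E8 m/s, and a hydrogen ionization energy of 13.12 intrinsic Volts); these assumed choices reproduce the hydrogen figures used throughout this Part:

    H_DISCRETE, COULOMB, C_PHOTON = 6.4e-34, 1.6e-19, 3.0e8   # assumed values

    def ionization(z, e_hydrogen=13.12):        # intrinsic Volts (units of Joule)
        energy = z**2 * e_hydrogen              # discretely exact elemental energy
        freq = energy * COULOMB / H_DISCRETE    # ionization frequency, Hz
        wavenumber = freq / C_PHOTON            # inverse meters
        wavelength = 1 / wavenumber             # meters
        return freq, wavenumber, wavelength

    print(ionization(1))   # hydrogen: 3.28E15 Hz, ~1.0933E7 1/m, ~9.146E-8 m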

The difference between the calculated and measured value for the hydrogen ionization energy, divided by the difference between the measured wavelength and calculated wavelength for hydrogen ionization is very nearly equal to the difference between the photon velocity and the speed of light.

The difference between these two values, independent of how it is calculated, is a measurement error term of approximately 0.00468%.

The differences between the measured and calculated values for hydrogen are of no concern and, even though the Rydberg equations derive the measurable wavelengths to high accuracy, the explanation requiring the simultaneous emission of two photons is not consistent with the spectral mechanism hypothesis.

The Rydberg explanation for the emission of atomic spectra requires two frequencies:

  • One frequency is the structural frequency. Structural frequency is proportional to the energy of the positional resonance between an electron and proton (the energy required to hold the electron and proton in the positional resonance).
  • The photon frequency, equal to the difference between adjacent structural frequencies, is proportional to an ionization energy (the energy required to remove an electron from the positional resonance).

The photon frequency and wavelength are not directly proportional to structural energy and, in atoms larger than hydrogen, cannot be calculated by a Rydberg equation.

Proofs that wavelength and frequency are not directly proportional to energy:

  • Spectral wavelengths emitted by sources differing greatly in energy, by a discharge tube in the laboratory, by the sun or by the galactic center, are indistinguishable.
  • In 60 Hertz power transformers the energy of the emitted photons is proportional to the energy of the current (or the magnetic field).

A general explanation for atomic spectra requires an examination of the measured ionization energies and the measured wavelengths of the first four elements larger than hydrogen.

The number of CRC ionization energies (electron Volts in units of kinetic Joule) for each elemental atom larger than hydrogen is equal to the number of nuclear protons; and the number of atomic energies (intrinsic Volts in units of discrete Joule) is also equal to the number of nuclear protons.

While it is true that measured wavelengths are not directly proportional to energy, it is also true that shorter wavelengths are proportional to lower energies and longer wavelengths are proportional to higher energies. For example, ultraviolet photons have shorter wavelengths and lower energies, and visible photons have longer wavelengths and higher energies.

In any atomic spectrum, each measured wavelength corresponds to one specific energy and, in order for each measured wavelength to correspond to one specific energy, the number of wavelengths must either be equal to the number of energies or equal to an integer multiple of the number of energies.

For example, in helium there are two CRC ionization energies (electron Volts in units of kinetic Joule) corresponding to two atomic energies (intrinsic Volts in units of discrete Joule), fourteen measured wavelengths, and one transition between a wavelength proportional to a lower energy and a wavelength proportional to a higher energy.

In the below table, seven lower and seven higher helium atomic energies are in the first row, the measured wavelengths from shortest to longest are in the third row, and the second row is the ratio of the column wavelength divided by the adjacent lower wavelength. This is the definitive test for a transition from a wavelength corresponding to a lower energy to a wavelength corresponding to a higher energy. In the helium atom, the transition wavelength is also detectable by inspection of the previous wavelengths compared to the following wavelengths.

The transitions are less clear in lithium, beryllium, and boron.

In lithium, beryllium and boron the transition wavelengths are not definitively detectable by simple inspection. However, after the higher energy transitions are established by the ratios of the column wavelength divided by the adjacent lower wavelength, the first transition becomes apparent by inspection of the measured wavelengths.

The spectral mechanism hypothesis has been transformed into a general explanation for atomic spectra:

In hydrogen a single electron and proton are engaged in a positional resonance at a discretely exact frequency equal to 3.28E15 Hz. In atoms larger than hydrogen many electrons and protons are engaged in sustained positional resonances, equal to the product of the square of the number of nuclear protons and 3.28E15 Hz, in which CCW quantons are emitted in one direction by electrons and absorbed by nuclear protons, and CW quantons are emitted in the opposite direction by nuclear protons and absorbed by electrons. The positional resonances can be situated at any one of several quantized positions proportional to energy and corresponding to spectral emission and absorption lines. On emission of a photon the energy of the resonance decreases, and the electron drops to a lower energy level. On absorption of a photon the energy of the resonance increases and the electron jumps to a higher energy level.

Part Six

Cosmology

The purpose of this chapter is to disprove cosmic inflation:

  • The radiated intrinsic energy which drives the resonance of constant photon velocity is converted into units of intrinsic redshift per megaparsec.
  • A detailed general derivation of intrinsic redshift (applicable to any galaxy) is made.
  • The final results of the HST Key Project to measure the Hubble Constant are explained by intrinsic redshift.22

The only measurables in the determination of galactic redshifts are the photon wavelength emitted and received in the laboratory, the photon wavelength emitted by a galaxy and received by an observatory, and the ionization energies.

In the following equations Hydrogen-alpha (Balmer-A) wavelengths are used in calculations of intrinsic redshift.

Intrinsic redshift per megaparsec

The photon intrinsic energy radiated per second due to quanton/graviton emissions is equal to the product of 8 and the discrete Planck constant.

The 2015 IAU value for the megaparsec is proportional to the IAU exact SI definition of the astronomical unit (149,597,870,700 m).

The time of flight per megaparsec is equal to one mpc divided by the photon velocity.

The photon intrinsic energy radiated per megaparsec is equal to the product of time of flight per mpc and the photon intrinsic energy radiated per second due to quanton/graviton emissions.

The decrease in photon frequency due to the energy radiated is equal to the photon intrinsic energy radiated per megaparsec divided by the discrete Planck constant.

The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.

Note that wavelength and energy are independent thus wavelength cannot be directly determined from energy, but frequency is proportional to energy and the decrease in frequency is proportional to the increase in wavelength.

The intrinsic redshift per megaparsec is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.
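Because several of the discrete constants are defined earlier in the document rather than in this excerpt, the following is only a structural sketch of the per-megaparsec pipeline described above; the megaparsec-in-meters value and the Hydrogen-alpha wavelength are standard approximations, and the discrete Planck constant and photon velocity are left as inputs (the 3.0E8 m/s default is an assumption):

    MPC_M = 3.0857e22          # meters per megaparsec (approximate)
    H_ALPHA = 656.28e-9        # Hydrogen-alpha (Balmer-A) wavelength, meters

    def intrinsic_redshift(distance_mpc, h_discrete, c_photon=3.0e8):
        p_per_s = 8 * h_discrete               # intrinsic energy radiated per second
        t_flight = distance_mpc * MPC_M / c_photon
        e_radiated = t_flight * p_per_s        # intrinsic energy radiated in flight
        freq_drop = e_radiated / h_discrete    # decrease in photon frequency
        wavelength_gain = c_photon / freq_drop # increase in photon wavelength
        return H_ALPHA + wavelength_gain       # intrinsic redshifted wavelength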

General derivation of galactic intrinsic redshift

The distance of the galaxy in units of mpc is that determined by the Hubble Space Telescope Key Project.23 Below, the example calculations are for NGC0300.

The time of flight of photons emitted by NGC0300 is equal to the product of the time of flight per megaparsec and the Hubble Space Telescope Key Project distance of the galaxy.

The photon intrinsic energy radiated by NGC0300 is equal to the product of the time of flight at the distance of NGC0300 and the photon intrinsic energy radiated per second due to quanton/graviton emissions.

The decrease in photon frequency is equal to the photon intrinsic energy radiated by NGC0300 divided by the discrete Planck constant.

The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.

The intrinsic redshift at the distance of NGC0300 is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.

Results of the HST Key Project to measure the Hubble Constant

The goal of this massive international project, involving more than fifteen years of effort by hundreds of researchers, was to build an accurate distance scale for Cepheid variables and use this information to determine the Hubble constant to an accuracy of 10%.

The inputs to the HST key project were the observed redshifts and the theoretical relativistic expansion rate of cosmic inflation.

In column 2 below, the galactic distances of 22 galaxies in units of mpc are the values determined by the HST Key Project.24

In column 3 below, the galactic distances are expressed in units of meter.

In column 4 below, the time of flight of photons emitted by the galaxy is equal to the distance of the galaxy in meters divided by the photon velocity.

The photon intrinsic energy radiated due to quanton/graviton emissions at the distance of the galaxy is equal to the product of the time of flight of photons emitted by the galaxy and the photon intrinsic energy radiated per second.

The decrease in photon frequency is equal to the photon intrinsic energy radiated by the galaxy divided by the discrete Planck constant.

The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.

Above, in column 5, the intrinsic redshift at the distance of the galaxy is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.

The Hubble parameter for a galaxy, denominated in units of km/s per mpc, is equal to the product of two ratios: the ratio of 2 omega-2 (which converts intrinsic energy to kinetic energy) divided by the time of flight of photons received at the observatory that were emitted by the galaxy, and the ratio of the distance of the galaxy in units of kilometer divided by the distance of the galaxy in units of megaparsec.

The Hubble constant is equal to the sum of the Hubble parameters for the galaxies examined divided by the number of galaxies.

The theory of cosmic inflation has been disproved.

Part Seven

Magnetic levitation and suspension

This chapter was motivated by a video about quantum magnetic levitation and suspension in which superconducting disks containing thin films of YBCO are levitated and suspended on a track composed of neodymium magnet arrays; a unit array contains four neodymium magnets (two diagonal magnets oriented N→S and the other two S→N).25

An understanding of levitation and suspension by neodymium magnet arrays begins with consideration of the differences between the levitation of a superconducting disk containing thin films of metal oxides and the levitation of a thin slice of pyrolytic carbon.

Oxygen is paramagnetic. An oxygen atom is magnetized by the magnetic field of a permanent magnet in the direction of the external magnetic field (for example, a S→N external magnetic field induces a S→N internal field) and reverts to a demagnetized state when the field is removed. The levitation of a superconducting disk requires an array of neodymium magnets and cooling below the critical temperature. In quantum levitation or suspension, the position of the disk is established by holding (pinning) it in the desired location and orientation, and if a pinned disk is forced into a new location and orientation, it remains pinned in the new location.

Carbon is diamagnetic. A carbon atom is magnetized by a magnetic field in the direction opposite to the magnetic field (for example, a N→S external magnetic field induces a S→N internal field) and reverts to a demagnetized state when the field is removed. Magnetic levitation occurs at room temperature, a thin slice of pyrolytic carbon levitates at a fixed distance parallel to the surface of an array of neodymium magnets, and a levitated slice forced closer to the surface springs back to the fixed distance once the force is removed.

Above, levitation of pyrolytic carbon.26

In the levitation of pyrolytic carbon, CCW quantons are emitted by a magnetic North pole and CW quantons are emitted by a magnetic South pole (magnetic emission of quantons is discussed in Part Four).

The number of chirality meshing interactions required to exactly oppose the gravitational force on a thin slice of pyrolytic carbon (or any object) is equal to the local gravitational constant of earth divided by the product of the proton amplitude and the square root of Lambda-bar.

In the above equation, the local gravitational constant of earth (as derived in Part One) is equal to 10 meters per second per second and the proton amplitude (also derived in Part One) is equal to 150 and, (as derived in Part Four) the square root of Lambda-bar is the deflection distance (units of meter) of a single chirality meshing interaction between a quanton and an electron.

The above equation is proportional to energy: the higher the energy, the higher the number of chirality meshing interactions, and the higher the levitation distance; the lower the energy, the lower the number of chirality meshing interactions, and the lower the levitation distance.
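With the values quoted in the surrounding text (g = 10 m/s², proton amplitude = 150, and the single-interaction deflection of 2.5327E-18 m from Part Four), the interaction count works out to roughly 2.6E16 per second, consistent with the more-than-2E14-per-hundredth-of-a-second figure given later for YBCO pinning:

    # Interactions needed to oppose gravity, N = g / (amplitude * sqrt(Lambda-bar)).
    N = 10 / (150 * 2.5327e-18)
    print(N)   # ~2.63E16 chirality meshing interactions per second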

Pyrolytic carbon is composed of planar sheets of carbon atoms in which a unit cell is composed of a hexagon of carbon atoms joined by double bonds. Carbon atoms are bonded by either lower energy single bonds proportional to the first ionization energy or higher energy double bonds proportional to the second ionization energy. The measured first and second ionization energies of carbon are 1086.5 and 2352.0 (units of kJ/mol).27

Due to the discretely exact value of PE charge resonance, in carbon (or any elemental atom) the quanton emission-absorption frequency is equal to 3.28E15 Hz.

The quanton emission frequency of a unit cell of pyrolytic carbon is equal to the product of the discretely exact PE charge resonance frequency of 3.28E15 Hz and the ratio of the second ionization energy of carbon divided by the first ionization energy of carbon.

The levitation distance of a thin slice of pyrolytic carbon (in units of mm) is equal to the product of the ratio of quanton emission frequency of a pyrolytic carbon unit cell divided by six (the number of carbon atoms in a unit cell) times 1000 mm/m and the square root of Lambda-bar.
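Putting numbers to the preceding two paragraphs (using sqrt(Lambda-bar) = 2.5327E-18 m from Part Four; the ionization energies are the CRC values quoted above), the formula gives a levitation height of about 3 mm:

    # Levitation distance of pyrolytic carbon per the formula above.
    f_unit = 3.28e15 * (2352.0 / 1086.5)            # unit-cell quanton frequency, Hz
    distance_mm = (f_unit / 6) * 1000 * 2.5327e-18  # six atoms per hexagonal cell
    print(distance_mm)                              # ~3.0 mm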

The oxygen atoms in YBCO oxides are bonded by either lower energy single bonds proportional to the first ionization energy or higher energy double bonds proportional to the second ionization energy. The measured first and second ionization energies of oxygen are 1313.9 and 3388.3 (units of kJ/mol).

The three YBCO metallic oxides are composed of low energy single bonds, high energy double bonds, or single and double bonds. In yttrium oxide (Y2O3), a single bond connects each yttrium atom with the inside oxygen, and a double bond connects each yttrium atom with one of the two outside oxygens. In barium oxide (BaO) the two atoms are connected by a double bond. Copper oxide is a mixture of cuprous oxide (copper(I) oxide, Cu2O), in which a single bond connects each of two copper atoms with the oxygen atom, and cupric oxide (copper(II) oxide, CuO), in which a double bond connects the copper atom with the oxygen atom.

Voltage is the emission of quantons either directly by the Q-axis of an electron or proton or transversely by a magnetic field from which CCW quantons are emitted by the North pole and CW quantons by the South pole.

The mechanism of magnetic levitation or suspension of a superconducting disk is the absorption of quantons, emitted by a neodymium magnet array, in chirality meshing interactions by electrons in the oxygen atoms of superconducting YBCO oxides resulting in repulsive deflections due to CCW quantons (in quantum levitation) and attractive deflections due to CW quantons (in quantum suspension).

The levitation or suspension distance of a superconducting YBCO oxide is higher (the maximum distance) for double bonded oxides and lower (the minimum distance) for single bonded oxides. The initial position of the YBCO disk is established by momentarily holding (pinning) it in the desired location and orientation at some specific distance from the neodymium magnet array.

In each one-hundredth of a second more than 2E14 chirality meshing interactions establish the intrinsic energy of electrons within the superconducting oxides. At the same time, at any specific distance above or below the neodymium magnet array the number of quanton interactions, inversely proportional to the square of distance, establishes the availability of quantons to be absorbed at that specific distance. The result is an electrical Stable Balance of the electrons in superconducting oxides at specific distances from the neodymium magnet array, analogous to the gravitational Stable Balance of particles in planets at a specific orbital distance from the sun.

This is the mechanism of pinning in YBCO superconducting disks.

The levitation or suspension distance (units of mm) of a single bonded superconducting YBCO oxide is equal to the product of the ratio of the first ionization energy of oxygen divided by itself (a ratio of one), the discretely exact PE charge resonance of 3.28E15 Hz, the square root of Lambda-bar, the ratio of the discrete steric factor divided by 1 (single bond), and 1000 (to convert m to mm).

The levitation or suspension distance (units of mm) of a double bonded superconducting YBCO oxide is equal to the product of the ratio of the second ionization energy of oxygen divided by the first ionization energy of oxygen, the discretely exact PE charge resonance of 3.28E15 Hz, the square root of Lambda-bar, the ratio of the discrete steric factor divided by 2 (double bond), and 1000 (to convert m to mm).

1 Original letter from Isaac Newton to Richard Bentley, 189.R.4.47, ff. 7-8, Trinity College Library, Cambridge, UK http://www.newtonproject.ox.ac.uk

2 https://nssdc.gsfc.nasa.gov/planetary/planetfact.html, accessed Dec 24, 2021

3 Urbain Le Verrier, Reports to the Academy of Sciences (Paris), Vol 49 (1859)

4 Clemence G.M. The relativity effect in planetary motions. Reviews of Modern Physics, 1947, 19(4): 361-364.

5 Eric Doolittle, The secular variations of the elements of the orbits of the four inner planets computed for the epoch 1850 GMT, Trans. Am. Phil. Soc. 22, 37(1925).

6 Michael P. Price and William F. Rush, Nonrelativistic contribution to mercury’s perihelion precession. Am. J. Phys. 47(6), June 1979.

7 Wikimedia, by Daderot made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication, location National Museum of Nature and Science, Tokyo, Japan.

8 Illustration from 1908 Chambers’s Twentieth Century Dictionary. Public domain.

9 Wikimedia “Sine and Cosine fundamental relationship to Circle and Helix” author Tdadamemd.

10 By Jordgette – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=9529698

11 By Ebohr1.svg: en:User:Lacatosias, User:Stanneredderivative work: Epzcaw (talk) – Ebohr1.svg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15229922

12 https://www.nobelprize.org/prizes/physics/1927/summary/

13 O. Stern, Z. fur Physik, 7, 249 (1921), title in English: “A way to experimentally test the directional quantization in the magnetic field”.

14 Ronald G. J. Fraser, Molecular Rays, Cambridge University Press, 1931.

15 I.I. Rabi, S. Millman, P. Kusch, and J.R. Zacharias, “The Molecular Beam Resonance Method for Measuring Nuclear Magnetic Moments,” Physical Review, 1939.

16 INDC: N. J. Stone 2014. Nuclear Data Section, International Atomic Energy Agency, www-nds.iaea.org/publications

17 “Quantum theory yields much, but it hardly brings us close to the Old One’s secrets. I, in any case, am convinced He does not play dice with the universe.” Letter from Einstein to Max Born (1926).

18 “That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has … any competent faculty of thinking can ever fall into it.” Original letter from Isaac Newton to Richard Bentley, 189.R.4.47, ff. 7-8, Trinity College Library, Cambridge, UK http://www.newtonproject.ox.ac.uk

19 Ionization energies of the elements (data page), https://en.wikipedia.org/

20 How to determine the range of acceptable results for your calorimeter, Bulletin No. 100, Parr Instrument Company, www.parrinst.com.

21 See www.wikipedia.org, www.hyperphysics.com, www.shutterstock.com

22 Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.

23 Page 60, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.

24 Page 60, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.

25 “Dr. Boaz Almog: Quantum Levitation,” https://www.youtube.com/watch?v=4HHJv8lPERQ.

26 This image has been released into the public domain by its creator, Splarka. https://commons.wikimedia.org/wiki/File:Diamagnetic_graphite_levitation.jpg

27 Ionization energies of the elements (data page), https://en.wikipedia.org/

High Speed Toronto Quebec Rail Plan Underway

A special ‘Study in Brief’ via our friends at cdhowe.org

  • This study estimates the economic benefits of a new, dedicated passenger rail link in the Toronto-Québec City corridor, either with or without high-speed capabilities.
  • Cumulatively, in present value terms over 60 years, economic benefits are estimated to be $11-$17 billion under our modelled conventional rail scenarios, and $15-$27 billion under high-speed rail scenarios.
  • This study estimates economic benefits, rather than undertaking a full cost-benefit analysis. The analysis is subject to a range of assumptions, particularly passenger forecasts.

Introduction

Canada’s plans for faster, more frequent rail services in the Toronto-Québec City corridor are underway.

In 2021, the federal government announced plans for a new, high frequency, dedicated passenger rail link in the Toronto-Québec City corridor. More recently, the government has considered the potential for this passenger line to provide high-speed rail travel. These two options are scenarios within the current proposed rail project, which VIA-HFR has named “Rapid Train.” This paper analyzes the economic benefits of the proposed Rapid Train project, considering both scenarios, and by implication the costs of forgoing them.

The project offers substantial economic and social benefits to Canada. At a time when existing VIA Rail users must accept comparatively modest top speeds (by international standards) and regular delays, this project offers a dedicated passenger line to solve network capacity constraints. With Canada’s economy widely understood to be experiencing a productivity crisis (Bank of Canada 2024), combined with Canada seeking cost-effective approaches to reducing harmful CO2 emissions, the project offers both productivity gains and lower-emission transportation capacity. There are, in short, significant opportunity costs to postponing or not moving ahead with this investment and perpetuating the status quo in rail service.

The Toronto-Québec City corridor, home to more than 16 million people (Statistics Canada 2024) and generating approximately 41 percent of Canada’s GDP (Statistics Canada 2023), lacks the sort of fully modernized passenger rail service provided in comparable regions worldwide. For example, Canada is the only G7 country without high-speed rail (HSR) – defined by the International Union of Railways (UIC) as a train service having the capability to reach speeds of 250 km per hour. Congestion has resulted in reliability (on-time performance) far below typical industry standards. Discussion about enhancing rail service in this corridor has persisted for decades. But delays come with opportunity costs. This Commentary adds up those costs in the event that Canada continues to postpone, or even abandons, investment in enhanced rail services.

The existing rail infrastructure in the Toronto-Québec City corridor was developed several decades ago and continues to operate within parameters set during that time. However, significant changes have occurred since then, including higher population growth, economic development, and shifting transportation patterns. Rising demand for passenger and freight transportation – both by rail and other modes – has increased pressure on the region’s transportation network. There is increasing need to explore the various mechanisms through which enhancements to rail service could affect regional economic outcomes.

According to Statistics Canada (2024), the Toronto-Québec City corridor is the most densely populated and heavily industrialized region in Canada. This corridor is home to 42 percent of the country’s total population and comprises 43 percent of the national labour market. Transport Canada’s (2023) projections indicate that by 2043, an additional 5 million people will reside in Québec and Ontario, marking a 21 percent increase from 2020. This population growth will comprise more than half of Canada’s overall population increase over the period. As the population and economy continue to expand, the demand for all modes of transportation, including passenger rail, will rise. The growing strain on the transportation network highlights the need for infrastructure improvements within this corridor. In 2019, passenger rail travel accounted for only 2 percent of all trips in the corridor, with the vast majority of journeys (94 percent) undertaken by car (VIA-HFR website). This distribution is more skewed than in other countries with high-speed rail. For example, between London and Paris, aviation capacity has roughly halved since the construction of a high-speed rail link (the Eurostar) 25 years ago, which now has achieved approximately 80 percent modal share (Morgan et al. 2025, OAG website 2019). As such, there is potential for rail to have a greater modal share in Canada, particularly as the need for sustainable and efficient transportation solutions becomes more pressing in response to population growth and environmental challenges.

In practical terms, the cost of not proceeding with the Rapid Train project can be estimated as the loss of economic benefits that could have been realized if the project had moved forward. It should be noted that this study does not undertake a full cost-benefit analysis (CBA) of the proposed investment. Rather, it examines the various economic advantages associated with introducing the proposed Rapid Train service in the Toronto-Québec City corridor. Specifically, it analyzes five key dimensions of economic impact: rail-user benefits, road congestion reduction, road network safety improvements, agglomeration effects (explained below), and emission savings. The first three benefits primarily impact individuals who would have travelled regardless, or were induced to travel by rail or car. Agglomeration benefits extend to everyone living in the corridor, while emission savings contribute to both national and international efforts to combat climate change. In each of these ways, enhanced rail services can contribute to regional economic growth and sustainability. By evaluating these aspects, this study aims to develop quantitative estimates of the benefits that enhanced rail services could bring to the economy and society, and by doing so indicate the potential losses that could result from forgoing the proposed rail investment.

Rail user benefits constitute the most direct economic gains. Through faster rail transport with fewer delays, rail users experience reduced travel times, increased service reliability, and improved satisfaction. The Rapid Train project provides rail-user benefits because dedicated passenger tracks would remove the need to give way to freight transport, thus reducing delays. The Rapid Train project would see further benefits with faster routes reducing travel time.

Congestion effects extend beyond individual transportation choices to influence broader economic activity. This study considers how enhanced rail services might affect road congestion levels in key urban centres and along major highways within the corridor. Road network safety is a further aspect of the economic analysis in this study, as modal shift from road to rail could reduce road traffic accidents and their associated economic costs.

Agglomeration economies are positive externalities that arise from greater spatial concentration of industry and business, resulting in lower costs and higher productivity. Greater proximity results in improved opportunities for labour market pooling, knowledge interactions, specialization and the sharing of inputs and outputs (Graham et al. 2009). Improved transportation (both within and between urban areas) can support agglomeration economies by improving connectivity, lowering the cost of interactions and generating productivity gains.1 Supported by academic literature (Graham 2018), these wider economic benefits are included within international transportation appraisal guidance (Metrolinx 2021, UK Department for Transport 2024). Agglomeration effects from enhanced connectivity offer economic benefits distinct from (and additional to) benefits for rail users.

Environmental considerations, particularly emission savings, constitute a further economic benefit. This analysis examines potential reductions in transportation-related emissions and their associated economic value, including direct environmental costs. This examination includes consideration of how modal shifts might influence the corridor’s overall carbon footprint and its associated economic impacts.

The methodology employed in this analysis draws from established economic assessment frameworks while incorporating recent developments in transportation economics. The study utilizes data from VIA-HFR, Statistics Canada, and several other related studies and research papers. Where feasible, the analysis utilizes assumptions that are specific to the Toronto-Québec City corridor, recognizing its unique characteristics, economics, and demographic patterns.

The findings presented here may facilitate an understanding of how different aspects of rail service enhancement might influence economic outcomes across various timeframes and stakeholder groups. This analysis acknowledges that while some benefits may be readily quantifiable, others involve more complex, long-term economic relationships that require careful consideration within the specific context of the Toronto-Québec City corridor.

Based on our modelling and forecasts, the proposals for passenger rail infrastructure investment in the Toronto-Québec City corridor would present substantial economic, environmental, and social benefits (see Table 4 in the Appendix for a full breakdown, by scenario). Our scenario modelling is undertaken over a 60-year period, with new services coming on-stream from 2039, reported in 2024 present value terms. The estimated total of present value benefits ranges from $11 billion in the most conservative passenger growth scenario, to $27 billion in the most optimistic growth scenario. Cumulatively, in present value terms, economic benefits are estimated to be $11-$17 billion under our modelled conventional rail scenarios, and larger – $15-$27 billion – under high-speed rail scenarios. This is subject to a range of assumptions and inputs, including passenger forecasts.

These estimated benefits are built up from several components. User benefits – stemming from time savings, increased reliability, and satisfaction with punctuality – are the largest component, with an estimated value of $3.1-$9.2 billion. Economic benefits from agglomeration effects (leading to higher GDP) are estimated at $2.6-$3.9 billion, while environmental benefits from reduced greenhouse gas emissions are estimated at $2.6-$7.1 billion. Additional benefits include reduced road congestion, valued at $2.0-$5.9 billion, and enhanced road safety, which adds an estimated $0.3-$0.8 billion. In addition, further sensitivity analysis has been undertaken alongside the main passenger growth scenarios.

Overall, the findings in this study underscore the substantial economic benefits of rail investment in the Toronto-Québec City corridor, and the potential of that investment to transform the region through economic growth and sustainable development.

Finally, there are several qualifications and limitations to the analysis in this study. It considers the major areas of economic benefit rather than undertaking a full cost-benefit analysis or considering wider opportunity costs, such as any alternative potential investments not undertaken. It provides an economic analysis, largely building on VIA-HFR passenger forecasts, rather than a full bottom-up transport modelling exercise. Quantitative estimates are subject to degrees of uncertainty.

The Current State of Passenger Rail Services in Ontario and Québec

The Toronto-Québec City corridor is the most densely populated and economically active region of the country. Spanning major urban centres such as Toronto, Ottawa, Montreal, and Québec City, this corridor encompasses more than 42 percent of Canada’s population and is a vital artery for both passenger and freight transport. Despite the significance of the corridor and the economic potential it holds, passenger rail services in Ontario and Québec face numerous challenges, and their overall state remains a topic of debate.

Passenger rail services in the region are primarily provided by VIA Rail, the national rail operator, along with commuter rail services like GO Transit in Ontario and Exo in Québec. VIA Rail operates intercity passenger trains connecting major cities in the Toronto-Québec City corridor, offering an alternative to driving or flying. VIA Rail’s most popular routes include the Montreal-Toronto and Ottawa-Toronto services, which run multiple times per day and serve business travellers, tourists, and daily commuters.

In addition to VIA Rail’s existing medium-to-long-distance services, commuter rail services play a key role in daily transportation for residents of urban centres like Toronto and Montreal. GO Transit, operated by Metrolinx, is responsible for regional trains serving the Greater Toronto and Hamilton Area, while Exo operates commuter trains in the Montreal metropolitan area. These services provide essential links for suburban commuters travelling to and from major employment hubs.

One of the primary challenges facing passenger rail services in Ontario and Québec is that the vast majority of rail infrastructure used by VIA Rail is owned by freight rail companies and is largely shared with freight trains, which means that passenger trains are regularly required to yield to freight traffic. This leads to frequent delays and slower travel times, making passenger rail less attractive compared to other modes of transport, especially for travellers who prioritize frequency, speed and punctuality. The absence of dedicated tracks for passenger rail is a major obstacle to improving travel times and increasing the frequency of service. Without addressing this issue, it is difficult to envisage a significant modal shift towards passenger rail, with cars having greater flexibility, and planes offering faster travel speeds once airborne. Much of the rail network was constructed several decades ago and, despite periodic maintenance and upgrades, is increasingly outdated and unable to accommodate higher speeds.

Passenger rail has the potential for low emission intensity. However, some of the potential environmental benefits of rail services in Ontario and Québec have yet to be fully realized. Many existing VIA Rail trains operate on diesel fuel, contributing to greenhouse gas emissions and air pollution. The transition to electrified rail, which would significantly reduce emissions, has been slow, and there is currently no comprehensive plan for widespread electrification of existing VIA Rail passenger rail services in the region.

The current state of rail passenger services in Ontario and Québec – and the opportunities for improvement – have prompted the development of the Rapid Train project along the Toronto-Québec City corridor, which proposes to reduce travel times between major cities and provide a more competitive alternative to air and car travel. The project would also generate significant environmental benefits by reducing greenhouse gas emissions associated with road and air transport. Furthermore, continued investment in enhanced rail services would cut journey times further, generating additional time savings and associated economic benefits.

Current Government Commitment to Enhanced Rail Services

The Rapid Train project plans to introduce approximately 1,000 kilometres of new, mostly electrified, and dedicated passenger rail tracks connecting the major city centres of Toronto, Ottawa, Montreal, and Québec City. As such, it would be one of the largest infrastructure projects in Canadian history. It is led by VIA-HFR, a Crown corporation that collaborates with several governmental organizations, including Public Services and Procurement Canada; Housing, Infrastructure and Communities Canada; Transport Canada; and VIA Rail, all of which have distinct roles during the procurement phases. Subject to approval, a private firm or consortium is expected to be appointed to build and operate these new rail services, via a procurement exercise (see below).

This new rail infrastructure would improve the frequency, speed, and reliability of rail services, making it more convenient for Canadians to travel within the country’s most densely populated regions. The project has the potential to shift a significant portion of travel from cars (which currently account for 94 percent of trips in the Toronto-Québec City corridor) to rail (which represents just 2 percent of total trips).

The project also seeks to contribute to Canada’s climate goals by reducing greenhouse gas emissions. Electrified trains and the use of dual-powered technology (for segments of the route that may still require diesel) will significantly reduce the environmental footprint of intercity travel. The project is expected to improve the experience for VIA Rail users, as dedicated passenger tracks will reduce delays caused by freight traffic, offering passengers faster, more frequent departures, and shorter travel times.

Beyond environmental benefits, the project is expected to stimulate economic growth by creating new jobs in infrastructure development, supporting new economic centres, and enhancing connectivity between cities, major airports, and educational institutions.

The project is currently at the end of the procurement phase, following the issuance of a Request for Proposals (RFP) in October 2023. Through the procurement exercise, a private-sector partner will be selected to co-develop and execute the project. The design phase, which may last four or more years, will involve regulatory reviews, impact assessments, and the development of a final proposal to the government for approval. Once construction is complete, passenger operations are expected to commence by 2040.

The Rapid Train project also offers opportunities to improve services on existing freight-owned tracks. VIA Rail’s local services, which currently operate between these major cities, will benefit from integration with this project. Although final service levels are not yet determined, the introduction of a new dedicated passenger rail line is expected to enable VIA Rail to optimize operating frequencies and schedules, leading to more responsive and efficient service for passengers. In turn, this will mean that departure and arrival times can be adjusted to better suit travellers’ needs, reducing travel times and increasing the attractiveness of rail as a mode of transportation for both leisure and business. As many of VIA Rail’s existing passenger services switch onto dedicated tracks, there is potential to free up capacity on the existing freight networks. As such, freight rail traffic may benefit from reduced congestion, supporting broader economic growth by easing supply chains and by improving the efficiency of goods transportation across Canada.

The project design will enable faster travel compared to existing services, but as the co-development phase progresses, it will examine the possibility of achieving even higher speeds on certain segments of the dedicated tracks. Achieving higher speeds is not guaranteed, due to the extensive infrastructure changes required and their associated costs, e.g., full double-tracking and the closure of approximately 1,000 public and private crossings. However, the project design currently incorporates flexibility to explore higher speeds where there may be opportunities for operational and financial efficiencies and additional user benefits.

The current Rapid Train project proposal seeks to achieve wider social and government objectives. In the context of maintaining public ownership, private-sector development partners will be required to respect existing labour agreements. VIA Rail employees will retain their rights and protections, with continuity ensured under the Canada Labour Code and relevant contractual obligations.

International Precedent

High-Speed Rail (HSR) already exists in many countries, with notable examples of successful implementation in East Asia and Europe. As of the middle of 2024, China has developed the world’s largest HSR network, spanning over 40,000 kilometres, followed by Spain (3,661 km), Japan (3,081 km), and France (2,735 km) (Statista 2024). Among the G7 nations, Canada stands as the only country without HSR infrastructure, although the United States maintains relatively limited high-speed operations through the Acela Express in the Northeast Corridor. Recent significant HSR developments include China’s Beijing-Shanghai line (2,760 km), which is the world’s longest HSR route. In Europe, the UK’s High Speed 1 (HS1) connects London to mainland Europe via the Channel Tunnel. Italy has extended its Alta Velocità network with the completion of the Naples-Bari route in 2023, significantly reducing travel times between major southern cities (RFI 2023). Morocco recently became the first African nation to implement HSR with its Al Boraq service between Tangier and Casablanca (ONCF 2022). In Southeast Asia, Indonesia’s Jakarta-Bandung HSR, completed in 2023, is the region’s first HSR system (KCIC 2023). India is constructing the Mumbai-Ahmedabad HSR corridor, the country’s first bullet train project, which is scheduled to commence partial operations by 2024 (NHSRCL 2023).

The economic impacts of HSR have been extensively studied, particularly in Europe. In Germany, Ahlfeldt and Feddersen (2017) analyzed the economic performance of regions along the high-speed rail line between Cologne and Frankfurt: the study found that, on average, six years after the opening of the line, the GDP of regions along the route was 8.5 percent higher than their estimated counterfactual. In France, Blanquart and Koning (2017) found that the TGV network catalyzed business agglomeration near station areas, with property values increasing by 15-25 percent within a 5km radius of HSR stations. An evaluation of the UK’s HS1 project estimated cumulative benefits of $23-$30 billion (2024 prices, present value, converted from GBP) over the lifetime of the project, excluding wider economic benefits (Atkins 2014).

Modal shift and passenger growth are critical drivers of economic benefits. The Madrid-Barcelona corridor in Spain provides an example: HSR captured over 60 percent of the combined air-rail market within three years of operation, demonstrating that HSR can have a competitive advantage over medium-distance air travel (Albalate and Bel 2012). However, analysis by the European Court of Auditors (2018) suggests that HSR routes require certain volumes of passengers (estimated at nine million) to become net beneficial, and while some European HSR routes have achieved this level (including the Madrid-Barcelona route), others have not. In the US, the Amtrak Acela service between Boston and Washington D.C. is estimated to have 3-4 million passengers (Amtrak 2023). For some high-speed rail lines, passenger volumes are supported by government environmental policy. For example, Air France was asked directly by the government to reduce the frequency of short-haul flights on routes where a feasible rail option existed (Reiter et al. 2022). Overall, passenger growth constitutes a key assumption regarding the benefits derived from the Rapid Train project.

Regarding the environmental benefits of HSR, a detailed study by the European Environment Agency (2020) found that HSR generates approximately 14g of CO2 per passenger-kilometre, compared to 158g for air travel and 104g for private vehicles. In Japan, the Central Japan Railway Company reports that the Shinkansen HSR system consumes approximately one-sixth the energy per passenger-kilometre compared to air travel. The UIC’s Carbon Footprint Analysis (2019) demonstrated that HSR infrastructure, despite high initial carbon costs during construction, typically achieves carbon neutrality within 4-8 years of operation through reduced emissions from modal shift.
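To put these intensities in context, the back-of-envelope calculation below (a Python sketch) applies the EEA figures to a roughly 540 km Toronto-Montreal trip; the distance is an approximation chosen purely for illustration.

```python
# Per-passenger CO2 for an illustrative one-way trip, using the intensities
# cited above (EEA 2020); the 540 km distance is an assumed approximation.

GRAMS_PER_PASSENGER_KM = {"HSR": 14, "air travel": 158, "private car": 104}
DISTANCE_KM = 540  # approximate Toronto-Montreal trip length (assumption)

for mode, grams in GRAMS_PER_PASSENGER_KM.items():
    kg = grams * DISTANCE_KM / 1000
    print(f"{mode}: {kg:.1f} kg CO2 per passenger")
# -> HSR ~7.6 kg, versus ~85.3 kg by air and ~56.2 kg by car
```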

Socioeconomic benefits of HSR extend beyond direct impacts on rail users. In Spain, the Madrid-Barcelona high-speed rail line enhanced business interactions by allowing for more same-day return trips and improved business productivity (Garmendia et al. 2012). Research has found that Chinese cities connected by HSR experienced a 20 percent increase in cross-regional business collaboration, providing potential evidence of enhanced knowledge spillovers and innovation diffusion (Wang and Chen 2019).

However, the implementation of HSR is not without challenges. Flyvbjerg’s (2007) analysis of 258 transportation infrastructure projects found that rail projects consistently faced cost overruns averaging approximately 45 percent. For example, the costs of the California High-Speed Rail project in the United States rose from an initial estimate of $33 billion in 2008 to over $100 billion by 2022, highlighting the importance of realistic cost projections and robust project management.

Positive labour market impacts are also evident, although they vary by region. Studies in Japan by Kojima et al. (2015) found that cities served by Shinkansen experienced a 25 percent increase in business service employment over a 10-year period after connection. European studies, particularly in France and Spain, show more modest but still positive employment effects, with employment growth rates 2-3 percent higher in connected cities compared to similar unconnected ones (Crescenzi et al. 2021).

For developing HSR networks, international experience suggests several critical success factors. These include careful corridor selection based on population density and economic activity, integration with existing transportation networks, and sustainable funding mechanisms. The European Union’s experience, documented by Vickerman (2018), emphasizes the importance of network effects, finding that the value of HSR increases significantly when it connects multiple major economic centres.

Methodology

This study integrates data from VIA-HFR, Statistics Canada, prior reports on rail infrastructure proposals in Canada, and related studies, to build an economic assessment of potential benefits of the proposed Rapid Train project. Key assumptions throughout this analysis are rooted in published transportation models, modelling guidelines, and an extensive body of research. The methodology draws extensively from the Business Case Manual Volume 2: Guidance by Metrolinx, which itself draws upon the internationally recognized transportation appraisal guidelines set by the UK government’s Department for Transport (DfT). These established guidelines offer best practices and standards that provide a structured and reliable framework for estimating benefits. By aligning with proven methodologies in transportation and infrastructure project appraisal, this study ensures rigor and robustness within the economic modelling and analysis.

The proposed route includes four major stations: Toronto, Ottawa, Montréal, and Québec City. These major urban centres are expected to experience the most significant ridership impacts and related benefits. There are three further stations on the proposed route – Trois-Rivières, Laval, and Peterborough – although these are anticipated to have a more limited effect on the overall modelling results, due to their smaller populations. Based on forecast ridership data provided by VIA-HFR for travel between the four main stations, our model designates these areas as four separate zones to facilitate the benefit estimation. Figure 1 below illustrates the proposed route for the Rapid Train project and highlights the different zones modeled in this analysis.

According to current VIA-HFR projections, the routes are expected to be operational between 2039 and 2042. In line with typical transport appraisals, this paper estimates and monetizes economic and social benefits of the project over a 60-year period, summing the cumulative benefits from 2039 through to 2098, inclusive. To calculate the total present value (as of 2024) of these benefits, annual benefits are discounted at a 3.5 percent social discount rate, in line with Metrolinx guidance, and then aggregated across all benefit years.
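As a minimal illustration of this appraisal convention, the sketch below (Python, with a purely hypothetical flat benefit stream) shows how annual benefits over 2039-2098 are discounted at 3.5 percent and summed into a cumulative 2024 present value.

```python
# Minimal sketch of the present-value convention described above.
# The benefit stream is a placeholder, not the study's modelled figures.

DISCOUNT_RATE = 0.035               # social discount rate (Metrolinx guidance)
BASE_YEAR = 2024                    # present-value base year
FIRST_YEAR, LAST_YEAR = 2039, 2098  # 60-year benefit window, inclusive

def present_value(annual_benefits):
    """Discount each year's benefit to BASE_YEAR and sum the results."""
    return sum(
        benefit / (1 + DISCOUNT_RATE) ** (year - BASE_YEAR)
        for year, benefit in annual_benefits.items()
    )

# Hypothetical flat stream of $500M per year across the appraisal period.
stream = {year: 500e6 for year in range(FIRST_YEAR, LAST_YEAR + 1)}
print(f"Cumulative PV in 2024 terms: ${present_value(stream) / 1e9:.1f}B")
```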

Our model examines multiple scenarios to assess the range of potential benefits under various conditions. The primary scenarios within the Rapid Train project are for Conventional Rail (CR) and High-Speed Rail (HSR). These scenarios are distinguished by differences in average travel time, with HSR benefiting from significantly faster speeds than CR, and therefore lower travel times (see Table 2).

Within each of these scenarios, we consider three sub-scenarios from VIA-HFR’s modelled passenger projections – central, downside and upside – plus a further sub-scenario (referred to as the 2011 feasibility study in the Figures) based on previous modelled estimates of a dedicated passenger rail line in the corridor. The central sub-scenario provides VIA-HFR’s core forecast for passenger growth under CR and HSR. The upside sub-scenario reflects VIA-HFR’s most optimistic assumptions about passenger demand, while the downside represents the organization’s more cautious assumptions.

The use of VIA-HFR’s passenger projections is cross-checked in two ways: First, our analysis models an alternative passenger growth scenario (2011 feasibility study), which is based upon the projected growth rate for passenger trips as outlined in the Updated Feasibility Study of a High-Speed Rail Service in the Québec City – Windsor Corridor by Transport Canada (2011). The analysis in that study was undertaken by a consortium of external consultants. Second, we have reviewed passenger volumes in other jurisdictions (discussed above and below).

In the absence of investment in the Rapid Train project, VIA-HFR’s baseline scenario passenger demand projections indicate approximately 5.5 million trips annually by 2050 using existing VIA Rail services in the corridor. In contrast, with investment, annual projected demand for CR ranges from 8 to 15 million trips, and for HSR between 12 and 21 million trips by 2050, across all the sub-scenarios described above. Figures 2 and 3 illustrate these projected ridership figures under CR and HSR scenarios across each sub-scenario, as well as compared to the baseline scenario.

Under the CR and HSR scenarios, while the vast majority of rail users are expected to use the new dedicated passenger rail services, VIA-HFR passenger forecasts indicate that some rail users within the corridor will continue to use services on the existing VIA Rail line, for example, due to travelling between intermediate stations (Kingston-Ottawa). The chart below illustrates the breakdown of benefits under the central sub-scenario for high-speed rail.

User Benefits

User benefits in transportation projects such as CR/HSR can be broadly understood as the tangible and intangible advantages that rail passengers gain from improved services. These benefits encompass the value derived from time saved, enhanced reliability, reduced congestion, and improved overall travel experience. For public transit projects like CR/HSR, user benefits are often key factors in justifying the investment due to their broad social and economic impact.

Rail infrastructure projects can reduce the “generalized cost” of travel between areas, which directly benefits existing rail users, as well as newly induced riders. The concept of generalized cost in transportation economics refers to the total cost experienced by a traveller, considering not just monetary expenses (like ticket prices or fuel) but also non-monetary factors such as travel time, reliability, comfort, and accessibility.

Investments that improve transit may reduce generalized costs in several ways. Consistent, on-time service lowers the uncertainty, inconvenience and dissatisfaction associated with delays. More frequent services provide passengers with greater flexibility and reduced waiting times. Reduced crowding can offer more comfortable travel, reducing the disutility associated with congested services. Enhanced services like better seating, Wi-Fi, or improved station facilities may increase user satisfaction. Better access to transit stations or stops may allow for easier integration into daily commutes, increasing the convenience for existing and new travellers. Faster travel can reduce travel time, which is often valued highly by passengers.

In this paper, user benefits are estimated based on three core components: travel time savings based on faster planned journey times, enhanced reliability (lower average delays on top of the planned journey time), and the psychological benefit of more reliable travel. In our analysis, the pool of users comprises the existing users who are already VIA Rail passengers within this corridor, plus new users who are not prior rail passengers. Within this category of new users there are two sub-groups. First, new users include individuals who are forecast to switch to rail from other modes of transport, such as cars, buses, and airplanes – known as “switchers.” Second, new users also include individuals who are induced to begin to use CR/HSR as a result of the introduction of these new services – known as “induced” passengers. Overall, this approach captures the comprehensive user benefits of CR/HSR, recognizing that time efficiency, increased dependability, and greater customer satisfaction hold substantial value for both existing and new riders. The split of new users across switchers and induced users – including the split of induced users between existing transport modes, primarily road and air – is based on the federal government’s 2011 feasibility study, although the modelling in this Commentary also undertakes sensitivity analysis using VIA-HFR’s estimates for these proportions. The approach to estimating rail-user benefits is discussed below.
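For reference, the standard rule-of-half formulation used in transport appraisal guidance (e.g., Metrolinx 2021) can be written as:

\[
\text{User benefits} = \tfrac{1}{2}\,(C_0 - C_1)\,(Q_0 + Q_1)
\]

where C₀ and Q₀ denote the generalized cost and travel demand without the investment, and C₁ and Q₁ their values with it. Existing users are credited with the full cost reduction, while new users – switchers and induced passengers – are credited with half of it, reflecting the assumption that their willingness to pay lies somewhere between the old and new generalized costs.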

The modelling in this study incorporates projections of passenger numbers for both existing VIA Rail services (under a ‘no investment’ scenario) and for the proposed CR/HSR projects, sourced from VIA-HFR transport modelled forecasts. This enables the derivation of forecasts for both existing and new users.

In line with the formula (above) for user benefits, this study estimates the reduction in generalized costs (C₀ − C₁) arising from the new CR/HSR transportation service. Since the ticket price for the proposed CR/HSR is still undetermined, we have not assumed any changes versus current VIA Rail fares, although this is discussed as part of sensitivity analysis. The model reflects a reduction in generalized costs attributed to shorter travel times and enhanced service reliability under CR/HSR. Table 2 shows a comparison of the average scheduled journey times (as of 2023) for existing VIA Rail services, compared to forecast journey times under the proposed CR/HSR services, across different routes.

In addition to travel time savings based on scheduled journey times, an important feature of the CR/HSR project is that a new, dedicated passenger rail line can reduce the potential for delays. To estimate the reduction in travel delays under CR/HSR, we first calculated a lateness factor for both existing VIA Rail and the proposed CR/HSR, based on punctuality data and assumptions. Current data indicate that VIA Rail services are on time (reaching the destination within 15 minutes of the scheduled arrival time) for approximately 60 percent of journeys. Therefore, VIA Rail experiences delays (arriving more than 15 minutes later than scheduled) approximately 40 percent of the time. Data showing the average duration of delays are not available, and therefore we estimate that each delay is 30 minutes on average, based on research and discussions with stakeholders. CR/HSR would provide a dedicated passenger rail service, which would have a far lower lateness rate. Our model assumes CR/HSR would aim to achieve significantly improved on-time performance, with on-time arrivals (within 15 minutes) for 95 percent of journeys (Rungskunroch 2022), which equates to 5 percent (or fewer) of trains being delayed upon arrival.

Combined, there are time savings to users from both faster scheduled journeys and fewer delays. The estimated travel time savings are derived from the difference between the forecast travel times of CR/HSR and the average travel times currently experienced with VIA Rail. The value of time is monetized by applying a value of $21.45 per hour, calculated by adjusting the value of time recommended by Business Case Manual Volume 2: Guidance-Metrolinx ($18.79 per hour, in 2021 dollars) to 2024 dollars using the Consumer Price Index (CPI). This value remains constant (in real terms) over our modelling period.

There is an additional psychological cost of unreliability associated with delays. Transport appraisal guidelines and literature typically ascribe a multiplier to the value of time for unscheduled delays. The modelling in this study utilizes a multiplier of 3 for lateness, which is consistent with government transport appraisal guidance in the UK and Ireland (UK’s Department for Transport 2015, Ireland’s Department of Transport 2023). Some academic literature finds that multipliers may be even higher, although values vary according to the journey distance and purpose (Rossa et al. 2024). Overall, the lateness adjustment increases the value to rail users of CR/HSR due to its improved reliability and generates a small uplift to the total user benefits under CR/HSR.
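A worked example of this reliability adjustment, using the stated assumptions (40 percent versus 5 percent delay rates, 30-minute average delays, $21.45/hour, and a multiplier of 3), is sketched below on a per-passenger-trip basis; it is illustrative only, not the study’s full calculation.

```python
# Worked example of the lateness (reliability) adjustment, per passenger trip.
# Inputs mirror the assumptions stated above; the example is illustrative only.

VALUE_OF_TIME = 21.45      # $/hour, 2024 dollars
LATENESS_MULTIPLIER = 3.0  # psychological cost factor for unscheduled delay
AVG_DELAY_HOURS = 0.5      # assumed 30-minute average delay per late train

def expected_delay_cost(delay_probability):
    """Expected monetized delay cost per trip, including the multiplier."""
    return delay_probability * AVG_DELAY_HOURS * VALUE_OF_TIME * LATENESS_MULTIPLIER

via_rail = expected_delay_cost(0.40)  # ~40 percent of trips delayed today
cr_hsr = expected_delay_cost(0.05)    # assumed 95 percent on-time performance
print(f"Reliability benefit: ${via_rail - cr_hsr:.2f} per trip")  # ~$11.26
```

In the study’s full modelling, this reliability term is combined with scheduled travel time savings and then netted of indirect taxes, as described below.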

The modelling combines these user benefits and makes a final adjustment to net off indirect taxes, ensuring that economic benefits are calculated on a like-for-like basis with costs incurred by VIA-HFR (Metrolinx 2021). Individual users’ value of time implicitly takes into account indirect taxes paid, whereas VIA-HFR’s investments are not subject to indirect taxation. Ontario’s rate of indirect taxation (13 percent harmonized sales tax rate) has been used in the modelling (Metrolinx 2021).

The modelling does not assume any variation in ticket prices under the proposed CR/HSR services, relative to existing VIA Rail services. User benefits in the analysis are derived purely from the shorter journey times and improved reliability. This approach enables the estimation, in the first instance, of the potential benefits from time savings and reliability. While CR/HSR ticket prices are not yet determined, it is nevertheless possible to consider the impact of changes to ticket prices as a secondary adjustment, which is discussed in the sensitivity analysis further below.

Congestion and Safety on the Road Network

In addition to rail-user benefits, the proposed CR/HSR project would also provide benefits to road users via decongestion and a potential reduction in traffic accidents.

When new travel options become available, such as improved rail services, some travellers shift from driving to using transit, reducing the number of vehicles on the road. This reduction in vehicle-kilometres travelled (VKT) decreases road congestion, providing benefits to the remaining road users. Decreased congestion leads to faster travel times, and can also lower vehicle operating costs, particularly in terms of fuel efficiency and vehicle wear-and-tear.

Our research model includes a forecast of how improvements in rail travel could lead to decongestion benefits for auto travellers in congested corridors. Because CR/HSR would offer a faster and more reliable journey experience than existing VIA Rail services, VIA-HFR’s passenger modelling forecasts shifts in travel patterns, with a significant proportion of new rail users being switchers from roads. These shifts reduce road congestion and in turn generate welfare benefits for those continuing to use highways.

Analysis of Canadian road use data, cross-checked with more granular traffic data from the UK, suggests that the proportion of existing road VKT is 37 percent in peak hours and 63 percent in off-peak hours, based on Metrolinx’s daily timetable of peak versus non-peak hours (Metrolinx 2021, Statistics Canada 2014, Department for Transport 2024). Using this information, the estimated weighted-average congestion impact is approximately 0.004 hours saved by remaining road users per VKT removed from the network. Time savings are converted into monetary values (using $21.45/hour, in 2024 dollars) to estimate the economic benefits of reduced road congestion.

In practice, road networks are unlikely to decongest by the precise number of transport users who are forecast to switch from road to rail. First, the counterfactual level of road congestion (without CR/HSR) will change over time, as a function of population growth, investment in road networks (such as through highway expansion), developments in air transport options, and wider factors. Many of these factors are not known precisely (e.g., investment decisions regarding highways expansion across the coming decades), therefore the counterfactual is necessarily subject to uncertainty. Second, if some road users switch to rail due to investment in CR/HSR, the initial (direct) reduction in congestion would reduce the cost of road travel, inducing a subsequent (indirect) “bounce-back” of road users (known as a general equilibrium effect). The modelling of congestion impacts in this study is necessarily a simplification, focusing on the direct impacts of decongestion, based on the forecast number of switchers from road to rail.

In addition to decongestion, CR/HSR may also improve the overall safety of the road network through fewer vehicle collisions. Collisions not only cause physical harm but also cause economic and social costs. These include the emotional toll on victims and families, lost productivity from injuries or fatalities, and the costs associated with treating accident-related injuries. Road accidents can cause disruptions that delay other travellers, adding additional economic costs, and can also incur greater public expenditure through emergency responses.

With CR/HSR expected to shift some users from road to rail, this study models the forecast reduction in overall road VKT. This estimated reduction in road VKT is converted into a monetary value assuming $0.09/VKT in 2024 prices, a value that declines by 5.3 percent per annum in future years to account for general safety improvements on the road network over time (such as through improvements in technology) and fewer accidents per year (Metrolinx 2018, Metrolinx 2021).
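The sketch below illustrates how these two road-network benefits are monetized per vehicle-kilometre, using the unit values stated above; the volume of removed VKT is a hypothetical input, not the study’s forecast.

```python
# Illustrative per-VKT monetization of decongestion and safety benefits,
# using the unit values stated above; the removed-VKT figure is hypothetical.

VALUE_OF_TIME = 21.45             # $/hour, 2024 dollars
CONGESTION_HOURS_PER_VKT = 0.004  # weighted-average hours saved per VKT removed
SAFETY_VALUE_PER_VKT = 0.09       # $/VKT, 2024 prices
SAFETY_DECAY = 0.053              # 5.3% p.a. decline in the safety unit value

def annual_road_benefits(vkt_removed, years_after_2024):
    congestion = vkt_removed * CONGESTION_HOURS_PER_VKT * VALUE_OF_TIME
    safety = (vkt_removed * SAFETY_VALUE_PER_VKT
              * (1 - SAFETY_DECAY) ** years_after_2024)
    return congestion, safety

# Hypothetical: 1 billion VKT shifted off the roads in 2039 (15 years out).
congestion, safety = annual_road_benefits(1e9, 15)
print(f"Decongestion: ${congestion/1e6:.0f}M, safety: ${safety/1e6:.0f}M per year")
# -> roughly $86M of decongestion and $40M of safety benefits in that year
```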

Agglomeration

Agglomeration economies are the economic benefits that arise when firms and individuals are located closer to one another. This generates productivity gains which are additional to direct user benefits. These gains can stem from improved labour market matching, knowledge spillovers, and supply chain linkages, benefiting groups of firms within specific industries (localization economies) as well as across multiple industries (urbanization economies). Where businesses cluster more closely – such as within dense, urbanized environments – these businesses benefit from proximity to larger markets, varied suppliers, and accessible public services. For instance, if a manufacturing firm relocates to an urban hub such as Montreal, productivity benefits may ripple across industries as the economic density and activity scale of the area increases. Agglomeration can enable longer-term economic benefits, through collaboration across businesses, universities, and research hubs, stimulating research and development, supporting innovation and enabling new industries to develop and grow.

Transport investments generate economic benefits and increase productivity through urbanization and localization economies. Urbanization economies (Jacobs 1969) refer to benefits arising from a business being situated in a large urban area with a robust population and employment base. This type of agglomeration allows firms to leverage broader markets and infrastructure advantages, thus achieving economies of scale that are independent of industry. Conversely, localization economies (Marshall 1920) focus on productivity gains within a specific industry, where firms in close proximity can cluster together to benefit from a specialized labour pool and more efficient supply chains. For example, as multiple manufacturing firms cluster within an area, their proximity allows them to co-create a specialized workforce and share industry knowledge, creating productivity gains unique to that industry.

In practice, improved transportation can generate agglomeration effects in two ways. The first is “static clustering”, where improvements in connectivity facilitate greater movement between existing clusters of businesses and improved labour market access, without changing land use. For individuals and businesses in their existing locations, enhanced connectivity reduces the travel times and the costs of interactions, so people and businesses are effectively closer together and the affected areas have a higher effective density.

Second, “dynamic clustering” can occur when transport investments alter the location or actual density of economic activity. Dynamic clustering can lead to either increased or decreased density in certain areas, impacting the overall productivity levels across regions by altering labor and firm distributions. Conceptually, dynamic clustering’s benefits include the benefits from static clustering.

The analysis in this study is based on static clustering effects, focusing on productivity benefits arising from improved connectivity without modelling potential changes in land use or actual density. This approach estimates the direct economic gains of reduced travel times and enhanced accessibility within existing urban and industrial structures. Benefits arising from dynamic clustering are subject to greater uncertainty because dynamic clustering may involve displacement of economic activity between regions. In addition, variations in density across regions could be influenced by external factors – such as regional economic policies, housing availability, or industry-specific demands – that would require a much deeper and granular modelling exercise. Overall, focusing on static clustering provides a more conceptually conservative estimate of the benefits.

To estimate the agglomeration economies associated with the CR/HSR project, we utilize well-established transport appraisal methodology for agglomeration estimation (Metrolinx 2021). The analysis in this study applies one simplification to accommodate data availability, which is to undertake the analysis at an economy-wide level, rather than performing and aggregating a series of sector-specific analyses.

Overall, the three-step model estimates these agglomeration effects through changes in GDP. In the first step, the generalized journey cost (GJC) between each zone pair is calculated. This GJC serves as an average travel cost across various transportation modes (e.g., road, rail, air), taking account of journey times and ticket prices. The GJC is estimated for both the baseline (existing VIA Rail) and investment scenarios (CR/HSR), across multiple projection years. Due to the sensitivity of agglomeration calculations, in the baseline the GJC for CR/HSR, road and air are assumed to be equivalent, and in the investment scenario the GJC for road and rail are reduced by utilizing the rule of half principle (see Figure 5). The baseline utilizes Canada-wide vehicle kilometre data from Statistics Canada to estimate passenger modal shares (across existing VIA Rail, road, and air) for 2024, with the modal shares remaining constant over time in the baseline (Transport Canada 2021, Transport Canada 2018, Statistics Canada 2016). In the scenarios, the modal shares are adjusted for passengers moving from existing VIA Rail (and other transport modes) to CR/HSR, as well as induced passengers.

In the second step, the effective density of each of the four zones is calculated under all scenarios. Effective density increases in the investment scenarios because CR/HSR reduces the GJC and enhances connectivity between zones.

In the third step, changes in effective density between scenarios are converted into productivity gains measured as changes in GDP, utilizing a decay parameter of 1.8 and an agglomeration elasticity of 0.046 (Metrolinx 2021). The decay parameter (being greater than 1) diminishes the agglomeration benefits between regions that are further away from each other, such that the estimated productivity gains (arising from greater connectivity) are higher for areas that are closer together. The agglomeration elasticity is – based on academic literature – the assumed sensitivity of GDP to changes in agglomeration. Approximately, an elasticity of 0.046 assumes that a 1 percent increase in the calculated estimate for effective density (see step 2) would correspond to a 0.046 percent increase in GDP. Data on GDP and employment are sourced from Statistics Canada’s statistical tables, and forecast employment growth is assumed to align with Statistics Canada’s projected population growth rates.
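A compact sketch of this three-step calculation is shown below; the generalized journey costs, employment, and GDP figures are placeholders rather than the study’s actual inputs, but the decay parameter and elasticity match the Metrolinx values cited above.

```python
import numpy as np

# Minimal sketch of the three-step agglomeration calculation described above,
# for the four modelled zones; GJC, employment and GDP inputs are placeholders.

DECAY = 1.8         # distance-decay parameter (Metrolinx 2021)
ELASTICITY = 0.046  # GDP elasticity to effective density (Metrolinx 2021)

def effective_density(employment, gjc):
    """Step 2: density of zone i = sum over j of employment_j / GJC_ij**DECAY."""
    return (employment[None, :] / gjc ** DECAY).sum(axis=1)

# Step 1 (placeholder): average generalized journey costs between zone pairs,
# under the baseline and under CR/HSR (lower inter-zone costs with investment).
gjc_base = np.array([[1, 4, 6, 9], [4, 1, 3, 6], [6, 3, 1, 4], [9, 6, 4, 1]], float)
gjc_invest = gjc_base * 0.8        # hypothetical 20% reduction in GJC
np.fill_diagonal(gjc_invest, 1.0)  # within-zone cost left unchanged

employment = np.array([3.5e6, 0.8e6, 2.3e6, 0.5e6])  # hypothetical jobs per zone
gdp = np.array([450e9, 80e9, 250e9, 45e9])           # hypothetical GDP per zone

# Step 3: convert the change in effective density into a GDP (productivity)
# gain, applying the elasticity to the percentage density change per zone.
d0 = effective_density(employment, gjc_base)
d1 = effective_density(employment, gjc_invest)
gdp_gain = gdp * ELASTICITY * (d1 / d0 - 1)
print(f"Annual agglomeration gain: ${gdp_gain.sum()/1e9:.2f}B")
```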

Emissions

Environmental effects from transportation create a further source of economic impact. This study considers the main dimensions – greenhouse gas (GHG) emissions and air quality – each contributing to external welfare impacts that affect populations and ecosystems.

Transportation accounts for approximately 22 percent of Canada’s GHG emissions (Canada’s 2024 National Inventory Report), primarily through automobile, public transit, and freight operations. Emissions from GHGs, particularly carbon dioxide, significantly impact the global climate by contributing to phenomena such as rising sea levels, shifting precipitation patterns, and extreme weather events. The social cost of carbon (SCC) framework, published by Environment and Climate Change Canada, assigns a monetary value to these emissions, reflecting the global damage caused by an additional tonne of CO₂ released into the atmosphere. The federal government’s SCC values were published in 2023, more recently than the values recommended by Metrolinx’s 2021 guidance, and therefore the government’s values are used for the modelling in this study. For SCC, data from Environment and Climate Change Canada’s Greenhouse Gas Estimates Table are used, adjusted to 2024 values using CPI. Within the modelling, SCC values increase from $303.6 per tonne of CO₂ (in 2024) to $685.5 per tonne (in 2098). Using SCC in cost-benefit analyses enables more informed decisions on transportation investments by calculating the welfare costs and benefits associated with emissions under both investment and business-as-usual scenarios.
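As an illustration of how the SCC enters the calculation, the sketch below values a hypothetical quantity of avoided emissions in a single year; the linear interpolation between the stated 2024 and 2098 SCC values is a simplification for the example, not the schedule used in the study.

```python
# Illustrative application of the social cost of carbon described above.
# The SCC path is linearly interpolated between the stated endpoint values
# (an assumed simplification); the avoided-emissions figure is hypothetical.

SCC_2024, SCC_2098 = 303.6, 685.5  # $ per tonne of CO2, 2024 dollars

def scc(year):
    """Linearly interpolated SCC between 2024 and 2098 (assumption)."""
    return SCC_2024 + (SCC_2098 - SCC_2024) * (year - 2024) / (2098 - 2024)

# Hypothetical: 200,000 tonnes of CO2 avoided in 2040 through modal shift.
year, tonnes_avoided = 2040, 200_000
print(f"Emission benefit in {year}: ${tonnes_avoided * scc(year) / 1e6:.0f}M")
# This annual value would then be discounted to 2024, as described earlier.
```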

A wider set of pollutants emitted by vehicles – including CO, NOx, SO₂, VOCs, PM10, and PM2.5 – poses further health risks, causing respiratory issues, heart disease, and even cancer. These harmful compounds, classified as Criteria Air Contaminants (CACs), impact individuals living or working in the vicinity of transport infrastructure, leading to external societal costs that are not fully perceived by direct users of the transport network. Health Canada’s Air Quality Benefits Assessment Tool (AQBAT) quantifies the health impacts of CACs, evaluating the total economic burden of poor air quality through a combination of local pollution data and Concentration Response Functions (CRFs), linking pollutants to adverse health effects. Furthermore, AQBAT considers air pollution’s effects on agriculture and visibility, allowing analysts to estimate the overall benefits of reducing transport-related emissions for communities across Canada.

This study identifies that CR/HSR has the potential to reduce emissions across multiple fronts. First, as an electrified rail system, CR/HSR is capable of operating with zero emissions, providing a cleaner alternative to existing rail services. If, as its planning forecasts anticipate, VIA Rail discontinues some services on routes overlapping with CR/HSR, emissions from rail transport in those areas would decrease. Additionally, CR/HSR’s higher speeds and greater reliability are expected to attract more passengers over time, encouraging a modal shift from more carbon-intensive forms of transportation, such as cars and airplanes. This anticipated shift would lead to a reduction in overall emissions from private vehicle and regional air travel, contributing to CR/HSR’s positive environmental impact.

By incorporating SCC and AQBAT metrics, the analysis offers a holistic appraisal of the environmental and social benefits of reducing emissions and improving air quality through CR/HSR, capturing the external welfare consequences beyond direct user impacts. Unit costs of CACs (see Table 3 below) are sourced from Metrolinx (2021) and are also adjusted by CPI into 2024 prices.

Results and Analysis

This section sets out the potential benefits of CR/HSR across various scenarios and sub-scenarios, spanning the 60-year project implementation period (2039 to 2098, inclusive). Results are reported in 2024 present value terms, cumulated over the 60-year period, as per cost-benefit analysis (CBA) literature (e.g., Metrolinx 2021). This cumulative present value represents the total value of benefits to 2098, with benefits in future years discounted to 2024 values. Figure 6 below illustrates the total cumulative present value of benefits for the proposed CR/HSR project, under different scenarios and passenger growth sub-scenarios in our model.

Since the HSR upside is the most optimistic sub-scenario, with a higher speed and the highest projected growth rate for rail passengers, it yields the largest total economic benefit, estimated at approximately $27 billion. Conversely, the CR downside assumes a comparatively lower speed and a smaller growth rate for rail passengers, resulting in the lowest benefit among all sub-scenarios, estimated at around $11 billion. This range of outcomes highlights that economic benefits are sensitive to assumptions around speed and passenger growth, underscoring the importance of these factors in the overall project evaluation.

Figure 7 illustrates the breakdown of benefits from the proposed CR/HSR project across different sub-scenarios and categories of benefits (see Table 4 in the Appendix for numerical values). User benefits form the largest component, indicating that rail passengers are expected to gain approximately $3.1–$9.2 billion in value over the modelling period, in present-day terms. Road decongestion effects, agglomeration impacts and emissions reductions are also forecast to deliver economic benefits. This study’s modelling estimates that CR/HSR could generate agglomeration effects that boost GDP by around $2.6–$3.9 billion over the 60-year analysis period, through enhancing productivity in the Ontario-Québec corridor. CR/HSR could significantly reduce greenhouse gas emissions and improve air quality, valued at approximately $2.6–$7.1 billion when considering the social cost of carbon and other pollutants. Benefits from reduced congestion on roads are estimated at $2.0–$5.9 billion. Finally, improved road safety offers an additional $0.3–$0.8 billion (approximately) in present value. Together, these impacts illustrate the wide-ranging economic, environmental, and social benefits anticipated from the CR/HSR project.

Given the potential sensitivity of economic benefits to assumptions around passenger growth, the 2011 federal government feasibility study provides a useful point of comparison for rail passenger growth under CR/HSR. The current outlook for rail passenger forecasts is not the same as it was in 2011, but some of the changes will have offsetting impacts. On one hand, Canada’s population has both grown faster (between 2011 and 2024) and is expected to grow faster in the future, relative to expectations in 2011. On the other hand, remote working has increased significantly since the COVID-19 pandemic. Passenger forecasts are discussed in more detail below.

Modelled agglomeration benefits are at the upper end of expectations. For example, the value of agglomeration effects for the HSR central scenario in this study ($3.4 billion) is almost 50 percent of the value of rail user benefits ($7.2 billion). Within academic literature, economic benefits from agglomeration are typically estimated to be in the region of 20 percent of direct user benefits on average (Graham 2018). However, across a range of studies, agglomeration benefits of up to 56 percent of user benefits have been identified (Oxera 2018). Therefore, the modelled estimates appear high relative to prior expectations, but within a plausible range.

To note, our agglomeration modelling (based on the Metrolinx methodology) forecasts significant economic benefits for all four of the zones. Our modelled agglomeration estimates for each zone are a function of the distance between zones (higher distance reduces agglomeration benefits due to the decay parameter), forecast uptake of CR/HSR services, and GDP. For example, Toronto’s agglomeration effect (as a percentage of GDP) is forecast to be one-third less than that of Montreal, due to Toronto being slightly further away (from Ottawa, Montreal and Quebec City) than those cities are from each other. The agglomeration modelling is complex and sensitive to input assumptions, therefore it is important to recognize a degree of uncertainty around the precise value of agglomeration-related economic benefits.

Sensitivity Analysis

Ticket prices for CR/HSR impact the total benefits. For example, under the HSR central scenario, if HSR ticket prices were set 20 percent above existing VIA Rail ticket prices, the forecast present value of user benefits falls by around 40 percent. The present value of economic benefits would fall by $4.2 billion compared to the HSR central case (from $20.7 billion to $16.5 billion), the majority of the reduction being due to lower user benefits. However, recognizing cost of living concerns for Canadian households, it is also possible that median ticket prices could fall – such as through dynamic pricing – in which case economic benefits could also rise, by a similar amount.

The source of CR/HSR passengers will impact the estimated quantum of benefits, although relatively moderately. If proportions for “switchers” and “induced” passengers are sourced from VIA-HFR’s estimates, the level of economic benefits is $3.0 billion lower (falling from $20.7 billion to $17.7 billion). VIA-HFR’s forecasts assume a higher proportion of induced passengers, and also assume a greater share of switchers from air transport. As a result, the main impact of the VIA-HFR assumptions is to produce a smaller road decongestion effect, which reduces the potential benefits for road users.

The agglomeration calculation is relatively sensitive to the baseline assumption for passenger modal share. The modelling in this study is based on Canada-wide vehicle kilometre data, utilizing information from Transport Canada and Statistics Canada. Further analysis could be undertaken to refine this assumption across Ontario and Québec, while also ensuring that forecast agglomeration benefits align with wider estimates in existing transport literature.

Discussion and Qualifications

The analysis presented in this study is based on currently available information and projections, which are subject to certain limitations. Notably, there are uncertainties surrounding several key factors, including the precise routes and station locations, the design specifications (e.g., maximum achievable speed), ticket pricing, expected passenger numbers, the breakdown across “switchers” and “induced” passengers, and passenger modal shares more generally. These elements, if altered, could impact the economic outcomes considerably.

There are several important qualifications to the scope of this study. First, it provides an analysis of potential economic benefits from CR/HSR investment but does not seek to quantify or analyze the direct costs involved in procurement, financing, construction, operations, maintenance or renewals. As such, this study constitutes an analysis of economic benefits, rather than a full cost-benefit analysis exercise. Second, this study seeks to estimate national, aggregate-level impacts, rather than undertaking a full distributional analysis of the impacts across and between different population groups. Third, this study’s primary focus is an economic assessment, rather than a transportation modelling exercise. The economic analysis utilizes and relies upon detailed, bottom-up passenger forecasts developed by VIA-HFR (received directly), cross-checked against the 2011 federal government’s previous HSR study. All three of these scope issues are important inputs to a holistic transport investment appraisal and should be considered in detail as part of investment decision-making.

Specifically, regarding this final issue – passenger forecasts – it is relevant to consider the transport modelling assumptions in further detail. As noted above, this study has not developed a full transport model, nor does it seek to take a definitive view on VIA-HFR’s forecasts. We would recommend that independent technical forecasts are developed. However, there are several relevant observations.

On one hand, VIA-HFR’s estimates do not appear implausible. For example, HSR has achieved a 7-8 percent share of passenger travel within certain routes in the United States (New York-to-Boston and New York-to-Washington), which would appear to be broadly consistent with the level of ambition within VIA-HFR’s passenger growth forecasts for the HSR central scenario (LEK 2019). The Madrid-Barcelona high speed link is estimated to serve 14 million passengers per year (International Railway Journal 2024). Internationally, HSR has achieved high market shares in Europe and Asia, such as 36 percent modal share for Madrid-Barcelona and 37 percent for London-Manchester, albeit noting that Europe typically has lower road usage and a higher propensity to use public transport (LEK 2019).

On the other hand, it is important to recognize the historic tendency for optimism bias within transportation investment projects. For example, in the UK, the HS2 project was criticized as having “overstated the forecast demand for passengers using HS2 [and] overstated the financial benefits that arise from that demand” (Written evidence to the Economic Affairs Committee, UK 2014). A review of HS2 in 2020 revised downwards previous estimates of economic benefits (Lord Berkeley Review 2020). As noted further above, analysis by the European Court of Auditors (2018) posits that not all HSR projects induce sufficient passenger volumes to achieve net benefits over the project lifetime.

Overall, future passenger forecasts will depend upon a range of factors, including ticket prices, the availability and price of substitute modes (i.e., air), cultural preferences for private vehicle ownership, the impact of changing emission standards and the feasibility of construction plans.

This study applies some pragmatic, simplifying assumptions and approximations to best-practice transport appraisal methodology (Metrolinx 2021; Department for Transport, UK, 2024). Across these modelling assumptions, there is variation in the directional impact on our estimates for economic benefits.

On one hand, some of the modelled benefits are likely to be relatively high-end estimates. First, for rail-user benefits, the modelling assumes no differential in ticket prices between existing VIA Rail services and CR/HSR. It also assumes that CR/HSR can deliver VIA-HFR’s proposed journey times with 95 percent reliability, which is achievable but not guaranteed. Second, for road congestion benefits, the forecast (direct) reductions in road congestion assume no indirect “bounce-back” effect where reduced traffic encourages new or longer trips (as noted above). For example, analysis of US highway demand suggests that capacity expansion only results in temporary congestion relief, for up to five years, before congestion returns to pre-existing levels (Hymel 2019). Third, for agglomeration, the modelled estimates for economic benefits are approximately 50 percent of rail-user benefits, which is close to the upper end of estimates from other transportation studies. Fourth, for emissions, the estimated benefits from forecast emissions savings do not seek to make assumptions about future changes to fuel efficiency for road and air transport, the emissions associated with power generation for CR/HSR, or the anticipated growth in electric vehicle adoption. In the case of electric vehicle deployment, there is uncertainty regarding the level of uptake, as well as the carbon intensity of electricity generation (albeit Ontario and Québec have relatively “clean” grids by international standards). Fifth, for benefits overall, this study leverages the VIA-HFR forecasts for passenger growth which are likely to be ambitious, though they have been robustly developed.

On the other hand, by focusing on the most material economic benefits, this study may exclude some smaller additional benefits that could be considered in further detail. First, there may be specific impacts on the tourism and hospitality sector. By enhancing travel convenience, CR/HSR is likely to draw more visitors to the various cultural, entertainment, and natural attractions across the corridor. While this influx would benefit local businesses by stimulating economic growth and job creation, these impacts are likely to be captured within the estimate of agglomeration benefits.

Second, CR/HSR would improve national and global competitiveness, enhancing the appeal of Canadian cities to investors and environmentally conscious travellers while helping Canada align more closely with global standards for sustainable, modern infrastructure. Again, the economic benefits are likely to align with the agglomeration estimates.

Third, this study does not seek to quantify the potential gains to individual productivity from CR/HSR ridership, e.g., from individuals having time to work on the train. There is not expected to be a benefit for existing rail users, as they can already utilize Wi-Fi on existing VIA Rail services. For individuals switching to rail from road or air, potential benefits would only accrue to business users. Although switchers from road and air could have opportunities for improved individual productivity, Wi-Fi is increasingly available on airlines and individuals are able to dial into meetings remotely whilst driving.

Fourth, CR/HSR could generate wider economic benefits by increasing competition between businesses along the corridor. International transport appraisal literature suggests that enhanced transport connectivity can erode price markups (and therefore increase consumer surplus) by overcoming market imperfections (Metrolinx 2021; Department for Transport 2024). However, such impacts are likely to be relatively small, e.g., the Department for Transport (UK) estimates them at 10 percent of the benefits for rail business users only. Furthermore, sources of market power in Canada are legal in nature (e.g., interprovincial trade barriers) which rail investment alone is unlikely to overcome.

A further group of issues has been consciously excluded from the methodology in this study. First, impacts on rail crowding are not considered. Some transport appraisals (such as the UK’s economic appraisal of the High Speed 2 project) do estimate the user benefits from reduced crowding. However, this is not as applicable for CR/HSR: in the UK, users of existing rail services may be required to stand if the train is overbooked, whereas users of existing VIA Rail services are guaranteed a seat with their booking. Second, impacts on land and property values are not included within the economic benefits. With greater access to efficient transportation, properties near rail stations typically see increased demand and value, boosting local tax revenues and promoting urban revitalization. While CR/HSR could increase values in areas close to the proposed stations, such changes are not additional to other wider economic benefits, but rather reflect a capitalization of those benefits. To avoid the risk of double counting the economic benefits already estimated, these are excluded (Department for Transport 2024).

CR/HSR may improve social equity and accessibility by offering affordable, reliable travel options for those without cars, including low-income individuals, students, and seniors. This expanded access enables broader employment, education, and healthcare opportunities, contributing to a more inclusive society. Whilst this study does not include a distribution analysis, social benefits from greater inclusion and social equity would constitute a benefit of CR/HSR investment and merit further detailed analysis.

Finally, in addition to policy considerations, major investment decisions have a substantial political dimension. For example, Canada is the only G7 country without HSR infrastructure. While cognizant of the political context, the analysis in this study is purely an economic assessment and does not consider political factors.

Conclusion

Canada’s population and economy continue to expand, particularly within the Toronto-Québec corridor. Existing transportation routes can expect greater congestion over time, particularly capacity-constrained VIA Rail services. In this context, can Canada afford not to progress with faster, more frequent rail services? There are significant opportunity costs to postponing investment.

This study has developed quantified estimates of the economic benefits of investing in the proposed Rapid Train project in the Toronto-Québec City corridor. Cumulatively, in present value terms, these economic benefits are estimated to be $11-$17 billion under our modelled conventional rail (CR) scenarios, and larger – at $15-$27 billion – under high-speed rail (HSR) scenarios. Economic benefits arise from several areas, including rail user time savings and improved reliability, reduced congestion on the road network, productivity gains through enhanced connectivity, and environmental benefits through emission reductions. With many commentators highlighting that Canada is experiencing a “productivity crisis” and a “climate emergency,” the projected productivity gains and lower-emission transportation capacity from the Rapid Train project present particularly valuable opportunities.
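To make the present-value framing concrete: each future year’s benefits are discounted back to today at a chosen rate before being summed. Below is a minimal sketch of that calculation in Python; the benefit stream and the 3.5 percent discount rate are illustrative placeholders, not figures from this study.

```python
# Illustrative only: discounting a stream of annual benefits to present value.
# The benefit stream and discount rate below are hypothetical placeholders,
# not figures from this study.

def present_value(annual_benefits, discount_rate):
    """Sum of benefits B_t discounted at rate r: PV = sum of B_t / (1 + r)**t."""
    return sum(b / (1 + discount_rate) ** t
               for t, b in enumerate(annual_benefits, start=1))

# e.g., a flat $0.5B/year for 60 years at a 3.5% social discount rate
benefits = [0.5e9] * 60
print(f"PV: ${present_value(benefits, 0.035) / 1e9:.1f} billion")  # ~ $12.5 billion
```

The choice of discount rate matters a great deal over horizons this long: moving from 3.5 percent to 7 percent cuts the present value of a 60-year benefit stream by nearly half.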

This study has assessed major economic benefit categories as identified within mainstream transport appraisal guidance. Further research could include additional sensitivity analysis around key parameters, as well as consideration of potential dynamic clustering effects, and projections for housing and land values.

Clearly, there is a cost to investment in a new dedicated passenger rail service: upfront capital investment, ongoing operations and maintenance expenditure, and any financing costs. These costs are not assessed in this study and will need to be considered carefully by policymakers. However, inaction – by continuing with the status quo rail infrastructure – also has a significant opportunity cost. Canada would forgo billions of dollars’ worth of economic advantage if it fails to deal with current challenges, including congestion on the rail and road networks, stifled productivity, and environmental concerns.

This study identifies the multi-billion-dollar economic benefits from the proposed Rapid Train project. While these benefits will need to be weighed alongside the forecast project costs, this study provides a basis for subsequent project evaluation and highlights the significant opportunity costs that Canada is incurring in the absence of investment.

Appendix

For the Silo, Tasnim Fariha, David Jones. The authors thank Daniel Schwanen, Ben Dachis, Glen Hodgson and anonymous reviewers for comments on an earlier draft. The authors retain responsibility for any errors and the views expressed.

References

Ahlfeldt, G., and Feddersen, A. 2017. “From periphery to core: measuring agglomeration effects using high-speed rail.” Journal of Economic Geography.

Albalate, D., and Bel, G. 2012. “High‐Speed Rail: Lessons for Policy Makers from Experiences Abroad.” Public Administration Review 72(3): 336-349.

Amtrak. 2023. Amtrak fact sheet: Acela service.

Atkins, AECOM and Frontier Economics. 2014. First Interim Evaluation of the Impacts of High Speed 1, Final Report, Volume 1. Prepared for the Department for Transport, UK.

Blanquart, C., and Koning, M. 2017. “The local economic impacts of high-speed railways: theories and facts.” European Transport Research Review 9(2): 12-25.

Bonnafous, A. 1987. “The Regional Impact of the TGV.” Transportation 14(2): 127-137.

California High-Speed Rail Authority. 2022. “2022 Business Plan.” Sacramento: State of California.

Central Japan Railway Company. 2020. “Annual Environmental Report 2020.” Tokyo: JR Central.

Crescenzi, R., Di Cataldo, M., and Rodríguez‐Pose, A. 2021. “High‐speed rail and regional development.” Journal of Regional Science 61(2): 365-395.

Dachis, B., 2013. Cars, Congestion and Costs: A New Approach to Evaluating Government Infrastructure Investment. Commentary. Toronto: C.D. Howe Institute. July.

Dachis, B., 2015. Tackling Traffic: The Economic Cost of Congestion in Metro Vancouver. Commentary. Toronto: C.D. Howe Institute. March.

Department of Transport (Ireland). 2023. “Transport Appraisal Framework, Appraisal Guidelines for Capital Investments in Transport, Module 8 – Detailed Guidance on Appraisal Parameters.”

Department for Transport (UK). 2024. “National Road Traffic Survey, Table TRA0308: Traffic distribution by time of day and selected vehicle type.”

______________. 2024. “Road traffic estimates (TRA).”

______________. 2015. “Understanding and Valuing Impacts of Transport Investment.”

______________. 2024. Transport analysis guidance (various).

Economic Affairs Committee, UK government. 2014. Written evidence (Alan Andrews), “EHS0071 – Evidence on The Economic Case for HS2.”

European Court of Auditors. 2018. “Special Report: A European high-speed rail network: not a reality but an ineffective patchwork.”

European Environment Agency. 2020. “Transport and Environment Report 2020: Train or Plane?” EEA Report No 19/2020.

Flyvbjerg, B. 2007. “Cost Overruns and Demand Shortfalls in Urban Rail and Other Infrastructure.” Transportation Planning and Technology 30(1): 9-30.

Garmendia, M., Ribalaygua, C., and Ureña, J. M. 2012. “High speed rail: Implication for cities.” Cities 29(S2): S26-S31.

Graham, D. 2018. “Quantifying wider economic benefits within transport appraisal.”

Government of Canada, House of Commons. 2019. Vote No. 1366. 42nd Parliament, 1st Session.

Government of Canada. 2023. “Social Cost of Greenhouse Gas Estimates – Interim Updated Guidance for the Government of Canada.”

High Speed Rail Authority (HS2 Ltd). 2024. “HS2 Phase One: London to Birmingham Development Report.”

Hymel, K. 2019. “If you build it, they will drive: Measuring induced demand for vehicle travel in urban areas.” Transport Policy 76.

Indonesian-Chinese High-Speed Rail Consortium (KCIC). 2023. “Jakarta-Bandung High-Speed Railway Project Completion Report.”

International Railway Journal. 2024. “Spanish high-speed traffic up 37 percent in 2023.”

International Transport Forum-OECD. 2013. “High Speed Rail Performance in France: From Appraisal Methodologies to Ex-post Evaluation.”

International Union of Railways (UIC). 2022. “High-Speed Rail: World Implementation Report.” Paris: UIC Publications.

International Union of Railways (UIC). 2019. “Carbon Footprint of Railway Infrastructure.” Paris: UIC Publications.

Jacobs, J. 1969. The Economy of Cities. New York: Random House.

Kojima, Y., Matsunaga, T., and Yamaguchi, S. 2015. “Impact of High-Speed Rail on Regional Economic Productivity: Evidence from Japan.” Research Institute of Economy, Trade and Industry (RIETI) Discussion Paper Series 15-E-089.

Lawrence, M., Bullock, R. G., and Liu, Z. 2019. “China’s High-Speed Rail Development.” World Bank Publications.

LEK. 2019. New Routes to Profitability in High-Speed Rail.

Lord Berkeley Review. 2020. A Review of High Speed 2, Dissenting Report by Lord Tony Berkeley, House of Lords.

Marshall, A. 1920. Principles of Economics. London: Macmillan.

Metrolinx. 2018. GO Expansion Full Business Case.

________. 2021. Business Case Manual Volume 2: Guidance.

________. 2021. Traffic Impact Analysis Durham-Scarborough Bus Rapid Transit.

Morgan, M., Wadud, Z., and Cairns, S. 2025. “Can rail reduce British aviation emissions?” Transportation Research Part D 138.

National High Speed Rail Corporation Limited (NHSRCL). 2023. “Mumbai-Ahmedabad High Speed Rail Project Status Report.”

Office National des Chemins de Fer (ONCF). 2022. “Al Boraq High-Speed Rail Service: Five Year Performance Review.”

OAS. 2019. “High Speed Rail vs Air: Eurostar at 25, The Story So Far.”

Oxera. 2018. “Deep impact: assessing wider economic impacts in transport appraisal.”

Reiter, V., Voltes-Dorta, A., and Suau-Sanchez, P. 2022. “The substitution of short-haul flights with rail services in German air travel markets: A quantitative analysis.” Case Studies on Transport Policy.

Rete Ferroviaria Italiana (RFI). 2023. “Alta Velocità Network Expansion: Naples-Bari Route Completion Report.”

Rossa et al. 2024. “The valuation of delays in passenger rail using journey satisfaction data.” Transportation Research Part D 129.

Rungskunroch, P. 2022. “Benchmarking Operation Readiness of the High-Speed Rail (HSR) Network.”

Statistics Canada. 2023. Table 36-10-0468-01 Gross domestic product (GDP) at basic prices, by census metropolitan area (CMA) (x 1,000,000).

______________. 2024. Table 14-10-0420-01 Employment by occupation, economic regions, annual.

______________. 2024. Table 17-10-0057-01 Projected population, by projection scenario, age and gender, as of July 1 (x 1,000).

______________. 2016. Table 8-1: Domestic Passenger Travel by Mode, Canada.

______________. 2014. Canadian Vehicle Survey: passenger-kilometres, by type of vehicle, type of day and time of day, quarterly.

Transport Canada. 2021. Transportation in Canada 2020, Overview Report, Green Transportation.

_______________. 2018. RA16-Passenger and Passenger-Kms for VIA Rail Canada and Other Carriers.

Ministry of Transportation of Ontario & Transport Canada. 2011. “Updated feasibility study of a high-speed rail service in the Québec City – Windsor Corridor: Deliverable No. 13 – Final report.”

VIA-HFR website. 2024. Frequently Asked Questions.

Vickerman, R. 2018. “Can high-speed rail have a transformative effect on the economy?” Transport Policy 62: 31-37.

Wang, X., and Chen, X. 2019. “High-speed rail networks, economic integration and regional specialisation in China.” Journal of Transport Geography 74: 223-235.

Fake Photo? Manipulated Video? How to Spot Sham AI

Amid a surge of alarming headlines exposing the disruptive impact of deepfake photo and video scams on our digital, cultural, societal, financial, and political landscapes, game-changing, readily available free solutions are emerging that let you instantly identify and flag AI-generated imagery.

These tools aim to preserve the credibility of digital media and safeguard users from falling victim to scams. As synthetic media becomes more sophisticated, identifying AI-generated manipulations presents a unique challenge, but numerous free apps and tools are now readily available that allow users to validate photo and video authenticity with ease, a major step forward in safeguarding trust in a world increasingly influenced by AI-generated visuals.

One AI disruptor transforming the fight against AI fraud is BitMind—an AI deepfake detection authority that offers a suite of free apps and tools that instantly identify and flag AI-generated images before you fall victim.



Built by a team of AI engineers hailing from leading tech companies like Amazon, Poshmark, NEAR, and Ledgersafe, BitMind provides instant detection of deepfakes that helps uphold the credibility of the media, guaranteeing the authenticity of the information we use. Strong deepfake detection enhances digital interactions, supports better decision making and strengthens the integrity of the modern digital world—serving to protect reputations, shield finances and maintain trust for celebrities, politicians, public figures … and everyone else.


For both B2C and B2B use, these 5 BitMind tools are free and accessible to anyone: 

  • AI Detector App: A simple web page where users can drag-and-drop suspicious images for fast deepfake detection results;
  • Chrome Extension: Flags AI-created content in real time while browsing;
  • X Bot: Verifies if images on X/Twitter are real or AI-generated;
  • Discord Bot: Verifies if images are real or AI-generated via its Discord integration;
  • AI or Not Game: A fun Telegram bot that tests your ability to distinguish between AI-generated and human-created images.

“Recognizing the need to integrate deepfake detection into everyday technology use, our applications fit seamlessly into users’ lives,” notes Ken Miyachi, BitMind CEO. “For example, the BitMind Detection App is a user-friendly application that allows individuals to upload images and quickly assess the likelihood of them being real or synthetic. Additionally, the Browser Extension enhances online security by analyzing images on web pages in real time and providing immediate feedback on their authenticity through our subnet validators. These tools are designed to empower users, enabling them to navigate digital spaces with confidence and security.”

As the world’s first decentralized Deepfake Detection System, BitMind is an open-source technology that enables developers to easily integrate the technology into their existing platforms to provide accurate real-time detection of deepfakes.
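As an illustration of what such an integration might look like, here is a minimal, hypothetical sketch in Python. The endpoint URL, response field, and threshold below are invented placeholders for illustration only, not BitMind’s documented API.

```python
# Hypothetical sketch of integrating an image-authenticity check into an app.
# The endpoint URL and response field are illustrative placeholders,
# not BitMind's documented API.
import requests

def is_likely_ai_generated(image_path: str, threshold: float = 0.5) -> bool:
    """POST an image to a detection service and compare the returned
    AI-probability score against a caller-chosen threshold."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://detector.example.com/v1/detect-image",  # placeholder URL
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    score = resp.json()["ai_probability"]  # assumed response field
    return score >= threshold

if __name__ == "__main__":
    print(is_likely_ai_generated("suspicious.png"))
```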

“Deepfake technology has emerged as both a marvel and a menace,” continued Miyachi.  “With the capacity to create synthetic media that closely mimics reality, deepfakes present unprecedented challenges in privacy, security, and information integrity. Responding to these challenges, we introduced the BitMind Subnet, a breakthrough on the Bittensor network, dedicated to the detection and mitigation of deepfakes.”

According to Miyachi, here are key reasons why BitMind technology is a game changer:

  • The BitMind Subnet represents a pivotal advancement in the fight against AI-generated misinformation. Operating on a decentralized AI platform, this deepfake detection system employs sophisticated AI models to accurately distinguish between real and manipulated content. This not only enhances the security of digital media but also preserves the essential trust in digital interactions.
  • The BitMind Subnet is equipped with advanced detection algorithms that utilize both generative and discriminative AI technologies to provide a robust mechanism for identifying deepfakes.
  • BitMind employs cutting-edge techniques, including Neighborhood Pixel Relationships, ensuring competitive accuracy in detection. The operation of the subnet is decentralized, with miners across the network running binary classifiers. This setup ensures that the detection processes are widespread and not confined to any centralized repository, enhancing both the reliability and integrity of the detection results.
  • Community collaboration is a cornerstone of the BitMind Subnet, which actively encourages contributions to its evolving codebase; by engaging with developers and researchers, the subnet is continuously improved and updated with the latest advancements in AI.
  • BitMind combines its extensive industry expertise, cutting-edge academic research, and a deep passion for technology. The team has a proven track record in AI, blockchain, and systems architecture, successfully leading tech projects and founding innovative companies.

What truly sets BitMind apart is their commitment to creating a safer, more transparent digital world where AI benefits humanity, driven by their passion for innovation, security and community engagement. Their technologies are expressly designed to safeguard the integrity of digital media and foster a trustworthy digital ecosystem.

In the modern world of fake news and increasing cyber threats, BitMind’s innovations are paving the way for a future in which digital trust is not an option, but a necessity. As the threats increase, the global community must be equipped with the means to ingest digital information in a reliable and authentic manner in order to realize AI’s true potential safely and efficiently. For the Silo, Marsha Zorn.

How To Stop DNA-Sized Plastics From Entering Your Body

Plastics that break down into particles as tiny as our DNA—small enough to be absorbed through our skin—are released into our environment at a rate of 82 million metric tons a year. These plastics, and the mix of chemicals they are made with, are now major contributors to disease, affecting the risk of afflictions ranging from cancer to hormonal issues.

Plastic pollution threatens everything from sea animals to human beings, a problem scientists, activists, business groups, and politicians are debating as they draft a global treaty to end plastic pollution. These negotiations have only highlighted the complexity of a threat that seems to pit economic growth and jobs against catastrophic damage to people and the planet.

Rapid growth in plastics didn’t begin until the 1950s, and since then, annual production has increased nearly 230-fold, according to two data sets processed by Our World in Data. More than 20 percent of plastic waste is mismanaged—ending up in our air, water, and soil.

Inescapable Problem

While plastic doesn’t biodegrade—at least not in a reasonable time frame—it does break down into ever smaller particles. We may no longer see it, but plastic constantly accumulates in our environment. These microscopic bits, known as microplastics and nanoplastics, can enter our bodies through what we eat, drink, and breathe.

Microplastics measure five millimeters or less. Nanoplastics are an invisible fraction of that size, down to one billionth of a meter or around the size of DNA.


While microplastics can be as small as a hair, they remain visible. Nanoplastics, however, are impossible to see without a microscope. (Illustration by The Epoch Times, Shutterstock)

Plastic is a chemical remnant of petroleum, with other chemicals added in to change its durability, elasticity, and color. The PlastChem Project has cataloged more than 16,000 chemicals used in plastics—4,200 of them considered highly hazardous, according to the initiative’s report issued in March.

The astounding number and variety of plastics, many with unknown health effects, should be a wake-up call for everyone, says Erin Smith, vice president and head of plastic waste and business at the World Wildlife Fund (WWF).


“Plastic pollution is absolutely everywhere,” she said. “What’s hard right now is that the body of science, trying to understand what the presence of plastic inside us means from a human health perspective, is still new.”

Ms. Smith said we may be waiting for the science to reveal the full scope of plastic’s biological effects, but one thing is certain: “We know it’s not good.”

Reproductive and Neurological Issues

Newer human health studies have shown plastic has far-reaching effects.

“The research is clear: Plastics cause disease, disability, and death. They cause premature birth, low birth weight, and stillbirth as well as leukemia, lymphoma, brain cancer, liver cancer, heart disease and stroke. Infants, children, pregnant women, and plastics workers are the people at greatest risk of these harms. These diseases result in annual economic costs of $1.2 trillion,” said Dr. Phil Landrigan, pediatrician and environmental health expert, in a Beyond Plastics news release in March.

Beyond Plastics, an advocacy group for policy change, warns that new research indicates plastic could be leading to an increased risk of heart disease, stroke, and death.

Successive studies have found microscopic plastic particles affect every system of our bodies and at every age.

Nearly 3,600 studies indexed by the Minderoo Foundation have detailed the effects of polymers and additives like plasticizers, flame retardants, bisphenols, and per- and polyfluoroalkyl substances. The vast majority of studies indicate plastics affect endocrine and metabolic function and the reproductive system, and contribute to mental, behavioral, and neurodevelopmental issues.

One study published in Environmental Science & Technology looked at plastic food packaging from five countries and found hormone-disrupting chemicals were common.

“The prevalence of estrogenic compounds in plastics raises health concerns due to their potential to disrupt the endocrine system, which can, among others, result in developmental and reproductive issues, and an elevated risk of hormone-related cancers, such as breast and prostate cancer,” the authors noted.


Data mapped by Our World in Data shows national rates of per capita plastic pollution to the oceans. American individuals add about 0.01 kilograms (10 grams) of plastic waste to the world’s oceans each year. At 336,500,000 people today, that amounts to roughly 3,365 metric tonnes, or about 7.4 million pounds. (The Epoch Times)
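A quick sanity check on the caption’s arithmetic, using the population and per-capita figures from the caption itself:

```python
# Sanity-checking the caption's per-capita arithmetic.
population = 336_500_000   # US population figure used in the caption
per_capita_kg = 0.01       # 10 g of ocean-bound plastic waste per person per year

total_kg = population * per_capita_kg
print(f"{total_kg / 1000:,.0f} metric tonnes")   # ~3,365 tonnes
print(f"{total_kg * 2.20462:,.0f} pounds")       # ~7.4 million pounds
```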

The full scope of these chemical consequences is far from known. According to Minderoo, less than 30 percent of more than 1,500 plastics chemicals have been investigated for human health impacts. That includes the “substitution” chemicals used to replace additives that were restricted after being found problematic.

“All new plastic chemicals should be tested for safety before being introduced in consumer products, with ongoing post-introduction monitoring of their levels in human biospecimens and evaluation of health effects throughout the lives of individuals and across generations,” said professor Sarah Dunlop, Minderoo Foundation’s head of plastics and human health.

Absorbed Into Arteries and Skin

The relatively recent discovery that plastic particles can make their way into the human body through multiple routes has come with other unsettling insights. Microplastics and nanoplastics in human artery wall plaque were recently linked to a 350 percent increased risk of heart attack, stroke, and death.


Plastic pollution comes in all forms, from packaging and waste that clogs the Buckingham Canal in Chennai, India to plastic pellets from petrochemical companies that litter the ground in Ecaussinnes, Belgium. (R. SATISH BABU, Kenzo TRIBOUILLARD / AFP via Getty Images)

Published March 6 in the New England Journal of Medicine, the study followed 257 patients over 34 months. Among those involved in the study, 58.4 percent had polyethylene in carotid artery plaque and 12.1 percent had polyvinyl chloride.

Polyethylene is the most common plastic found in bottles and bags, including cereal box liners. Polyvinyl chloride, better known as PVC, is another common plastic, often used in medical and construction materials.

Besides finding entry through ingestion, polymers can also make their way into the bloodstream through our skin, according to another study published in April in Environment International. The findings, based on a human skin equivalent model, add to evidence that suggests that as plastics break down, it may be impossible for us to avoid absorbing them. Microscopic plastic has been found in our soil, water supply, air, and arctic ice.

Sweaty skin was found to be especially prone to absorbing the particles.

Once inside the body, plastic can mimic hormones, collect in arteries, and contribute to one of the most common disease pathologies today—an imbalance of free radicals and antioxidants known as oxidative stress.

Dr. Bradley Bale, a heart attack and stroke prevention specialist and co-author of “Beat The Heart Attack Gene,” says there’s plenty of evidence that plastic is causing oxidative stress.

“Plastics are ubiquitous on planet Earth,” Dr. Bale said. “You’re crazy to think you can eliminate your exposure to that. It would be next to impossible. But we can look at other issues that cause oxidative stress.”


Data processed by Our World in Data shows the increase in plastic production in metric tonnes. (Illustration by The Epoch Times, Shutterstock)

Those other issues, including poor diet and other toxic exposures, may be resolved through lifestyle approaches, supplements, or avoidance.

Dr. Bale suspects future nanoplastics research will reveal a relationship between plastics exposure and early death, dementia, cancer, diabetes, and any disease impacted by oxidative stress.

How to Stop the Plastic Onslaught

Since cleaning up plastic is nearly impossible once it breaks down, advocacy groups are pushing for legislation that would reduce single-use products such as food wrappers, bottles, takeout containers, and bags—some of the most prolific and problematic plastics.

The United Nations Environment Programme, a global environmental decision-making body with representatives from all UN member states, decided in March 2022 that the plastics issue needed a coordinated response. It committed to fast-tracking a treaty meant to address the world’s growing plastic problem.

However, after holding the fourth of five sessions in late April in Canada, the group still hasn’t decided whether to identify problematic plastics or call for new plastic to be phased out or scaled back. The final meeting begins in late November with a treaty expected in 2025.


(Left) The secretariat of the Intergovernmental Negotiating Committee (INC) to Develop an International Legally Binding Instrument on Plastic Pollution consults on the dais during the closing plenary in Ottawa on April 30, 2024; (Center) Members of Greenpeace hold up placards during the discussions in Ottawa, Canada, on April 23, 2024; (Right) Pro-plastic messaging was seen at hotels in Ottawa during the UN INC meetings. (IISD-ENB/Kiara Worth, DAVE CHAN/AFP via Getty Images)

Meanwhile, U.S. lawmakers are on a third attempt to gain Congressional consideration of the Break Free From Plastic Pollution Act. First introduced in 2020, it remains stuck in committee. Among the act’s proposals are reducing and banning some single-use plastics, creating grants for reusable and refillable products, requiring corporations to take responsibility for plastic pollution, and temporarily banning new plastic facilities until protections are established.

The Economics of Plastics

Plastics are important for many businesses and the plastics industry itself is significant and influential. However, plastics aren’t as profitable as one may expect. New plastic facilities often get subsidies and tax breaks that make plastics artificially cheap to produce. These financial supports have increased substantially in the past three years.

In addition to direct fossil fuel subsidies, the plastics and petrochemical industries benefit from grants, tax breaks, and incentives. Because of a lack of transparency, exact figures on subsidies are hard to come by, according to the Center for International Environmental Law. The group is urging the UN to ban certain subsidies, including any that would reduce the price of raw goods used to make plastic.

Some organizations question whether these incentives are beneficial to local economies and taxpayers as a whole.

The Environmental Integrity Project issued a report in March that found 64 percent of 50 plastic plants built or expanded in the United States since 2012 received nearly $9 billion in state and local subsidies. Unexpected events were common, including violations of air pollution permits among 42 plants and more than 1,200 accidents like fires and explosions. State-modified permits at 15 plants allowed for additional emissions that were often detected beyond the property line of the plants.

A case study report published June 2023 by the Ohio River Valley Institute examined the $6 billion Shell facility built in Beaver County, Pennsylvania to produce plastic pellets.

“Since the project’s inception, industry executives and government officials alike have argued that it would spur local economic growth and renewed business investment. Yet prosperity still has not arrived. Beaver County has seen a declining population, zero growth in GDP, zero growth in jobs, lackluster progress in reducing poverty, and zero growth in businesses—even when factoring in all the temporary construction workers at the site,” the report says.


The Shell Pennsylvania Petrochemicals Complex makes plastic from “cracking” natural gas in Beaver County, near Pittsburgh, PA. (Mark Dixon/Flickr)

Conflicted Solutions for a Plastic World

The Plastics Industry Association argues that plastic “makes the world a better place”—language it wants in the plastics treaty.

The association represents more than one million workers throughout the entire supply chain. A $468 billion industry, plastics manufacturing is the sixth-largest U.S. manufacturing sector, according to the association, which did not respond to media requests for an interview.

David Zaruk, a communications professor in Belgium with a doctorate in philosophy, said opposition to plastic is largely an attack on the fossil fuel industry—part of a larger “anti-capitalist political agenda.” The value of plastic on society, he said, is frequently understated.

He pointed to a 2024 study published in Environmental Science and Technology that concludes plastic is far more “sustainable” with lower greenhouse gas emissions than alternatives like paper, glass, and aluminum—many of which it was designed to replace. Arguments often overlook the environmental impact of alternatives, the study notes, and in some cases, there are no substitutions for plastic.

“This isn’t a recent revelation either. Academic scientists have said for years that plastic serves essential functions. Speaking specifically of short-lived plastic uses, a pair of supply chain experts argued in 2019 that ’some plastic packaging is necessary to prevent food waste and protect the environment.’ By the way, food waste produces roughly double the greenhouse emissions of plastic production,” Mr. Zaruk wrote recently on the Substack blog, Firebreak.

The Plastics Industry Association heavily promotes recycling and biodegradable plastics but critics say there are inherent problems with both.

Only 4 percent of plastic is recycled in the United States, while an equal amount ends up in rivers, oceans, and soil—breaking down into microplastics and nanoplastics that experts believe will persist for centuries.

The U.S. Plastics Pact—a collaboration of more than 100 businesses, nonprofit organizations, government agencies, and academic institutions initiated by The Recycling Partnership and World Wildlife Fund—identified 11 problematic plastics that its members aim to voluntarily eliminate by 2025. Members include major plastics users and the products are all finished items or components of plastics that either aren’t recycled or cause problems in the recycling system and could be eliminated or replaced.

While some major companies support the pact, the Plastics Industry Association has taken a dim view of it, describing it as an attempt to “tell others how to run their businesses by restricting their choices.”

The association says the best way to increase recycling is through education and innovation.

Recycled Mystery Chemicals

Unfortunately, recycling isn’t a perfect solution to the plastic problem. Recycled plastics present additional hazards because they are made from a blend of products and a more uncertain chemical makeup, according to Therese Karlsson, science advisor for the International Pollutants Elimination Network, a global consortium of public interest groups.


“We’ve looked a lot at recycled plastics. There you have a lot of different plastic materials that you don’t know what they contain and you combine that into a new plastic material that you have even less information about what it contains,” said Ms. Karlsson. “As a consumer, you can’t look at a piece of plastic to figure out if it’s safe or not. We just don’t know, but we know a lot of the chemicals used in plastic are toxic.”

An IPEN investigation in April found plastic pellets recovered from recycling facilities in 24 countries had hundreds of toxic chemicals—including pesticides, industrial chemicals, pharmaceuticals, dyes, and fragrances.

“For our recycling technology, it just doesn’t work, and a lot of that ends up in landfills anyway,” said Ms. Smith from the WWF. “It shouldn’t require a decoder ring to decide what goes in that blue bin because everything should be designed for that system.” For the Silo, Amy Denny.

Little Changes Make a Big Difference

In the absence of government intervention, Ms. Smith said there are some easy steps consumers can take to limit their own plastic exposure:

  • Shop with reusable shopping bags.
  • Don’t use plastic in the microwave or dishwasher because heat can release additional polymers.
  • Buy metal or glass snack containers to replace sealable plastic bags.
  • Use beeswax cloth in place of plastic wrap.
  • Replace dryer sheets with wool balls.
  • Carry a refillable cup for water and coffee.
  • Consider reusable trash bags.
  • Use and carry metal straws, stir sticks, and/or reusable cutlery.
  • Don’t litter, and pick up trash you find outdoors.

Science Behind the Post-Orgasmic Afterglow

Whether it’s a cozy post-coital cuddle or the serene satisfaction following a solo session, the afterglow is that unmistakable halo of happiness we carry long after the climax. Far from being a fleeting sensation, the afterglow is a scientifically grounded phenomenon driven by hormones and emotional connectivity.

Our friends at LELO share everything you need to know about the science of the afterglow and why it deserves a central place in the conversation about pleasure, intimacy, and well-being.


The afterglow is the warm, contented feeling that lingers after sexual activity or orgasm. It’s that magical moment when you feel deeply connected to your partner or yourself. This glowing sensation can last minutes, hours, or even days, influencing how you approach your relationships, work, and personal life with a rejuvenated sense of calm and joy.

The Science Behind the Glow


Orgasm triggers the release of a powerful cocktail of hormones. Oxytocin, often called the “love hormone,” strengthens trust and bonding, especially during partnered intimacy. Dopamine delivers an intense rush of pleasure, while serotonin enhances relaxation and happiness. These hormones are universal, playing the same role whether the experience is shared with a partner or savored solo.


The parasympathetic nervous system also kicks in post-orgasm, reducing stress and fostering a profound sense of well-being. This physiological response underscores that pleasure isn’t just about feeling good in the moment; it’s about nurturing your mind and body in meaningful ways.

The Benefits of the Afterglow


Numerous studies show that the effects of post-orgasmic bliss can persist for up to 48 hours.
During this time, the afterglow fosters emotional connection in relationships, boosts mood, and even strengthens immune function. It can also enhance self-esteem, helping you approach your day with confidence and optimism.

Partnered Pleasure


In relationships, the afterglow is an emotional glue, reinforcing bonds and increasing satisfaction. By prolonging this shared connection, couples can navigate conflicts more effectively and deepen their intimacy. The key is to be present and savor the moment together through touch, eye contact, or quiet conversation.

Solo Afterglow


Self-pleasure offers the same hormonal and emotional rewards as partnered sex, making it a powerful form of self-care. Beyond physical release, it’s an act of self-discovery and affirmation, promoting body positivity and emotional recharge. The afterglow from solo sessions is a reminder that connecting with yourself is just as vital as connecting with others.

Extending The Glow


To fully savor the magic of the afterglow, consider these tips:

Prioritize Connection: For couples, linger in the moment by sharing a cuddle, eye contact, or a few whispered words. For solo sessions, take time to appreciate your body and the joy it brings.

Set the Scene: Create an environment that invites relaxation. Soft lighting, calming music, or a warm blanket can extend the moment’s serenity.

Practice Gratitude: Whether with a partner or alone, reflect on the experience and express gratitude – for your body, your partner, or simply the pleasure itself.

Extend the Glow: Take a slow, mindful approach to aftercare. A shared bath, journaling about your experience, or meditative breathing can amplify the benefits.

The afterglow is more than a momentary sensation; it’s a testament to the beauty of connection, intimacy, and self-awareness. By leaning into these moments, you embrace the joy of pleasure and unlock a deeper understanding of your emotional and physical needs.

So, next time you bask in that warm, lingering glow, let it remind you of the transformative power of pleasure to nourish your body, mind, and soul. Stay glowing 🙂 For the Silo, Emilie Melloni Quemar/ Lelo.

About LELO
LELO is not “just a sex toy brand”; it’s a self-care movement aimed at those who know that satisfaction transcends gender, sexual orientation, race, and age. We’re offering the experience of ecstasy without shame and the pleasure of discovering all the wonders of one’s body, giving our customers the confidence that leads to a fulfilled intimate life. LELOi AB is the Swedish company behind LELO, with offices extending from Stockholm to San Jose, from Sydney to Shanghai.

The Crappy But Amazing Toy Sampling Keyboard That Influenced Legions

SPOTLIGHT: The Casio SK series of keyboards carries great nostalgia for many of us who grew up in the 80s and 90s.

My friend got one for his birthday, and after spending an entire day at his place sampling burps, coughs and farts, I knew I was destined to become an electronic musician. If this is your story too, let us know below in the comments section.

The SK line’s distinctive design and sound evoke memories of early bedroom-made electronic music and hip-hop production, making it a sought-after collector’s item. The SK sound quality is characterized by its lo-fi charm. Everything sampled into it sounds like crap, but in a good way!

There’s a thriving community of musicians and collectors who appreciate the SK-2, SK-5, SK-8 et al. for their historical significance and cultural impact.

Used prominently on my 2008 release AUDIOCOSM, the SK-8 served as a sampling beat box, and one or two of its famous presets were looped to create swirling matrices of sound on a number of tracks. I found this keyboard in a thrift store in 2007, buried in the children’s electronic toys. It was $2.99 CAD and even had the battery cover. This fortuitous discovery was the spark that led to the writing and recording (on a boat no less!) of AUDIOCOSM.

The rare white version of the SK-10.

At their absolute basic, the SK line are fun keyboards and a great introduction tool to lo-fi sampling. But there is much more.

After recording a short audio sample (either via the built-in monophonic microphone or by plugging in an audio source using the 1/8″ female jack), you can choose from various envelopes (settings that affect how the sound starts, how long it sounds, and how it ends) and also select whether to reverse or loop samples. These toy synthesizers are a treat when connected via guitar effects pedals: a good outboard effect can transform these keyboards into something much more powerful and believable. If that isn’t enough, you can explore the factory preset sounds, some of which sound more realistic than others, but all have usefulness in the correct setting.
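For the curious, the SK line’s grit comes from low-resolution sampling; the SK-1, for example, is commonly cited as sampling at roughly 8 bits and around 9.4 kHz. The sketch below is a rough Python/NumPy approximation under those assumed specs, not an exact emulation of the hardware.

```python
# Rough emulation of SK-style lo-fi sampling: downsample to ~9.4 kHz by
# sample-and-hold, then quantize to 8 bits. Exact specs vary by model;
# these numbers are illustrative.
import numpy as np

def sk_lofi(signal: np.ndarray, in_rate: int, out_rate: int = 9400,
            bits: int = 8) -> np.ndarray:
    """Crudely resample a [-1, 1] float signal, then coarsely quantize it."""
    step = max(1, int(round(in_rate / out_rate)))
    held = np.repeat(signal[::step], step)[: len(signal)]  # sample-and-hold
    levels = 2 ** (bits - 1)
    return np.round(held * levels) / levels                # coarse quantization

# e.g., crush one second of a 440 Hz sine recorded at 44.1 kHz
t = np.linspace(0, 1, 44100, endpoint=False)
crushed = sk_lofi(np.sin(2 * np.pi * 440 * t), in_rate=44100)
```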

Most SKs have decent enough piano, vibraphone, flute, trumpet, and clarinet sounds. There are also built-in speakers that are loud enough for mobile use such as camping or beach hangouts, and of course the SKs run on batteries. They also have a line-out feature which bypasses the onboard speakers, allowing you to record them into your favorite DAW or standalone recording machine; an analog four-track is a great bedfellow.

The Mod community and the SK (circuit bending)

Due to their relative affordability (used prices have continued to rise in the past 10 years), the SK series of keyboards has been embraced by electronic musician modifiers. Their basic construction and ease of internal access make modifying them a lot of fun. The creativity of modders seems to be endless: everything from MIDI control to individual audio outputs to real-time control of sound chips using DIP switches or rotary controls.

Join that thriving community by buying this one from our friend at Tone Tweakers today! For the Silo, Jarrod Barker.

Why National Radon Action?

The US Centers for Disease Control (CDC) has designated January as National Radon Action Month to draw attention to what it describes as “an invisible, silent home invader.” The CDC initiative seeks to unmask the dangers of radon, a colorless, odorless, and tasteless gas that is responsible for an estimated 21,000 lung cancer deaths in the US each year.

“Radon can build up in the air in any home or building, whether or not it has a basement, is sealed or drafty, or is new or old,” the CDC warns. It also explains that there is “no known safe level of radon,” encouraging every homeowner to test for radon and, when detected, implement effective mitigation systems.

The last week of January 2025 is the CDC’s Radon Awareness Week, which encourages people to explore their personal “Radon Story.” The following facts about radon can help anyone understand how they might come into contact with it, its potential health impacts, and how radon levels in a home or other building can be reduced.

Any home can be vulnerable to radon

Radon is a naturally occurring, radioactive element that is released when radium in rocks, soil, and water breaks down. It makes its way into buildings through cracks and other openings in foundations.

Outdoors, radon dissipates in the atmosphere to levels that are not harmful to humans. If trapped indoors, however, it can accumulate to dangerous levels. In the US, the Environmental Protection Agency estimates that 1 in 15 homes contains dangerous radon levels. The 2024 Cross-Canada Survey of Radon Exposure in Residential Buildings of Urban and Rural Communities found that 18 percent of Canadian homes contain unsafe levels of radon.

Certain people face higher risks of radon-related health issues

When radon accumulates indoors, it can be breathed in and trapped in lung tissue, where its radioactivity can lead to cancer. Radon exposure is estimated to cause 84,000 lung cancer deaths globally each year, making it second only to smoking as a cause of lung cancer deaths.

While radon can cause health impacts for anyone, certain people have been identified as being more vulnerable to its effects. According to the EPA, cigarette smokers face a higher risk of radon-induced lung cancer due to the synergistic effects of radon and smoking. Those with a faster breathing rate, including pregnant women and children, also face more of a risk of health impacts from radon.

Modern technology can provide real-time radon readings

Traditional tests determine radon levels by using charcoal canisters to capture a sample of indoor air that is then analyzed in a lab. The effectiveness of those tests is limited by the fact that they capture only a single snapshot of radon levels, which can fluctuate significantly between seasons and even throughout the day. In addition, obtaining test results from the lab requires waiting several days.

Modern radon monitors provide ongoing readings of radon levels, with initial readings available within minutes and reliable results determined within an hour. These monitors ensure that fluctuations in radon levels are identified, and they can also be easily moved around within a home or building to identify radon hot spots. Continuous readings from the monitors can also be accessed wirelessly through a mobile app for in-depth analysis, capable of alerting the residents to potential radon issues even when they are not at home.
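To illustrate how continuous readings translate into alerts, here is a minimal sketch of one plausible approach: flagging whenever a rolling average of hourly readings crosses the action level. The readings, window size, and alert logic are invented for illustration and do not describe any particular monitor’s firmware.

```python
# Illustrative sketch: flag a radon problem when the rolling average of
# hourly readings (pCi/L) exceeds the EPA action level. The readings and
# window size are invented for illustration.
from collections import deque

ACTION_LEVEL_PCI_L = 4.0

def rolling_alerts(readings, window=24):
    """Yield (hour, avg) whenever the trailing-window average exceeds the action level."""
    buf = deque(maxlen=window)
    for hour, value in enumerate(readings):
        buf.append(value)
        avg = sum(buf) / len(buf)
        if avg >= ACTION_LEVEL_PCI_L:
            yield hour, round(avg, 2)

hourly = [2.1, 3.8, 4.9, 5.2, 6.0, 3.9, 4.4]  # hypothetical pCi/L readings
for hour, avg in rolling_alerts(hourly, window=3):
    print(f"hour {hour}: 3-hr avg {avg} pCi/L exceeds 4.0")
```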

High levels of accumulation require radon mitigation

Mitigation is essential for homes where high levels of radon accumulation are detected. The EPA has set the radon action level at 4 pCi/L. Canadian authorities have set a level of 200 Bq/m³, which is approximately 5.4 pCi/L.
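The two action levels describe the same quantity in different units; the standard conversion is 1 pCi/L = 37 Bq/m³. A quick sketch of the conversion:

```python
# Unit conversion behind the action levels: 1 pCi/L = 37 Bq/m^3.
BQ_PER_M3_PER_PCI_PER_L = 37.0

def bq_to_pci(bq_per_m3: float) -> float:
    return bq_per_m3 / BQ_PER_M3_PER_PCI_PER_L

def pci_to_bq(pci_per_l: float) -> float:
    return pci_per_l * BQ_PER_M3_PER_PCI_PER_L

print(f"Canada's 200 Bq/m^3 = {bq_to_pci(200):.1f} pCi/L")  # ~5.4
print(f"EPA's 4 pCi/L = {pci_to_bq(4):.0f} Bq/m^3")         # 148
```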

Radon mitigation systems utilize fans and suction pipes to carry out a process known as active soil depressurization. The process removes radon from beneath foundations before it can make its way into structures. The systems typically require little maintenance and can be run for as little as $10 per month in operating costs. They also prevent other soil gases from entering the home.

While radon poses serious health risks, these risks can be easily prevented. Homeowners can stay safe from the dangerous effects of the gas by taking two steps: 1) continuously monitor for radon accumulation using a modern radon monitor that provides ongoing readings of radon levels, and 2) when necessary, leverage the mitigation tools available for reducing radon levels, or seek the help of radon professionals to eliminate the threat of toxic gas from the inside of their homes. For the Silo, Insoo Park, Founder and CEO of Ecosense.

What Happens To Cars That Sit For Long Periods Of Time

The trouble is that automobiles, like everything else, are subject to the law of entropy. “Preservation” is about more than just keeping the odometer reading low. “Like-new” means something different after one, two, or three decades, even if the car still has plastic wrap on the steering wheel. The paint, upholstery, and trim may look flawless—but what about the bits you can’t see, like the complex systems and different materials that make up the driveline? Just because a car is like-new doesn’t mean it actually is new, or that you can just hop in and drive it home. We decided to call up some experts across the industry to answer a big question: What exactly is happening to a car when it sits?

The 1967 Le Mans-winning Ford Mark IV at The Henry Ford Museum. (The Henry Ford Museum/Wes Duenkel)

First off, what’s happening to it while it sits depends on where it sits. Imagine a car in a museum—perhaps the Le Mans–winning Ford Mark IV at the Henry Ford Museum in Dearborn, Michigan. Now, think of that old pickup you once saw sitting in a field. Technically, they’re both decaying. One is just decaying far more slowly than the other. 

The race car lives in a perfectly curated world. The temperature in the museum is consistent and the humidity is just so: Low enough to deter moisture-loving insects and mold, high enough to prevent the tires and other rubber seals from drying out. A museum car’s tires may barely touch the ground, because the chassis sits on jack stands. The fluids in the car—fuel, coolant, oil—have either been drained or supplemented with stabilizing agents. The upholstery is regularly vacuumed to eliminate pests. Dust barely gathers on the body before someone gently sweeps it off.

The wheel of The Henry Ford’s 1967 Ford Mark IV race car, with its original tire. (The Henry Ford Museum)

The pickup, meanwhile, has been at the mercy of the weather for who knows how long. The tires have cracked and rotted. Salty air might be corroding metal. Insects and/or rodents might be living inside the cabin and engine bay. The engine’s cylinders may be dry, the gas in its rusty fuel tank a kind of goo, the oil gray instead of honey-colored. Its paint may be bubbling, its carpets mildewing. 

Those are two extreme examples, of course, but when it comes to the condition of a car, the storage (or display) environment makes all the difference, whether the car is Henry Ford’s original Quadricycle from 1896 or a brain scientist’s sporty Sentra from 1992. To keep a “like-new” car living up to its descriptor, the temperature must be consistent; otherwise, even the most immaculate car will bake, sweat, and/or freeze. The moisture in the air needs to be high enough to slow the decay of organic materials like tires but low enough to protect from rust. The room itself needs to be well-sealed to deter pests. The vehicle also needs a barrier (or two) between the paint and the dust, dirt, and grime that will accumulate. And that’s only the parts of the car you can see …

The Odometer Doesn’t Tell the Whole Story

Tom’s 16,000-mile (25,750-km) Porsche, where he found it. (YouTube/Hagerty)

No one is more familiar with finding automotive diamonds in rough storage situations than Tom Cotter, known as The Barn Find Hunter. When I called him to discuss this story, the consequences of bad storage were especially fresh on his mind: He had just bought a barn-find car (a 1986 Porsche 930 Turbo) with 16,000 miles. “That’s the good news,” he said. “The bad news is that it has not been driven since 1996, so nearly 30 years. And even though it had a plastic sheet on it, somehow it got filthy. Filthy. My heart breaks.” Even worse, the windows were open, and the car was infested with mice. It needs a thorough recommissioning: brakes, gas tank, fuel lines, fuel injection unit, fuel injector, fuel pump—and those are just the major areas, says Tom. He’s still in the process of figuring out how much the car needs, but if everything needs to be replaced, the work could cost as much as $40,000 usd/ $58,000 cad. Oh, and he’ll need a new set of tires—the car was parked on its original set from 1986. 

“Just because a car has low miles doesn’t mean it was well cared for,” says Cotter. “Cars go bad when they sit.” A perfect storage environment and a sedentary life don’t guarantee stasis, either: “There are things that happen inside the systems of a car that break down, like the rubber in a brake system or the rubber in our fuel system. It doesn’t matter if the car is hot or cold or clean or dirty, those things are going to break down.” One interesting system that is especially prone to degrading when a car sits is the exhaust, he says. “For every gallon (3.785 liters) of fuel that’s burned in a car, a gallon of water comes out the tailpipe. It’s just part of the combustion process. And so if you run the car and then turn it off and park it for 20 years, you’ve got at least a gallon of water (3.785 liters) sitting in the exhaust system—most of it, in the muffler. Unless it’s made of stainless steel or something, it’s going to just rot right out. There’s really nothing you can do about that.” 
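Cotter’s gallon-for-gallon rule is roughly what combustion chemistry predicts. Here is a back-of-envelope check, treating gasoline as pure octane (an approximation) and using typical density figures:

```python
# Back-of-envelope check on "a gallon of water per gallon of fuel",
# treating gasoline as pure octane: C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O.
# The density figures and the octane approximation are rough assumptions.
GAL_L = 3.785                            # liters per US gallon
fuel_kg = 0.74 * GAL_L                   # gasoline density ~0.74 kg/L
water_per_fuel = (9 * 18.015) / 114.23   # kg water per kg octane burned (~1.42)
water_l = fuel_kg * water_per_fuel       # water density ~1 kg/L
print(f"~{water_l / GAL_L:.2f} gallons of water per gallon of fuel")  # ~1.05
```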

The fluids and the metals in a car are often conspiring against each other. “One of the biggest challenges you have managing large collections—and with cars that sit, too—is coolant system corrosion,” says Scott George, curator of collections at the Revs Institute in Naples, Florida, who knows a thing or two about keeping old cars in peak health. “You’ve got brass, copper, aluminum, iron, steel, all coming in contact with water, and it can create a battery of sorts. It can almost create its own internal energy, which can attack certain metals that are most vulnerable,” like the vanes in a water pump, which are often made of a different metal than the pump itself. Using antifreeze doesn’t eliminate the problem: Those systems can corrode, too, damaging hose connections and water chambers in cylinder heads. “Corrosion in radiators, and things that attack solder and solder seams, are also a big challenge for anybody with large collections.”

Proper storage requires understanding of the car’s construction, because certain materials require special attention and/or precautions. Wool and horsehair, materials that are especially common in the upholstery of cars built before World War II, can attract cloth moths and carpet beetles. Cuong Nguyen, a senior conservator at The Henry Ford, who is heavily involved in the care of the museum’s 300-car collection, suggests vacuuming such cars each season. He also warns that some more modern wiring harnesses are made with soy-based materials that, while eco-friendly, attract mice. Sticky traps, he says, especially those without pheromones, can be good preventive measures for furry pests. 

Understanding how a car is built also helps set expectations for how it ages, even in the best conditions. For instance, different sorts of paints wear differently: Lacquer-based paint, used on most cars built before the late 1980s or early ‘90s, doesn’t hold up as well as the more modern, urethane-based version. Another notoriously finicky modern material covers the soft-touch buttons found in some Italian exotics from the 1990s or early 2000s. The black material gets sticky over time.

Best-Case Storage Scenario

Cotter, who owns a storage facility called Auto Barn in North Carolina, encourages enthusiasts to store their vehicles thoughtfully because they’re protecting their financial investment. “It might take you a half-day to get a car ready to lock up, but put a little bit of effort into it. You are maintaining your investment. It’s a mechanical portfolio. A car that’s parked haphazardly will more than likely go down in value.”

The best place to store a car—with any odometer reading—is in a clean, dry place with temperature and humidity control. To avoid flat spots on the tires, which can develop within a year, the car should be elevated, just slightly, on jack stands (as mentioned above, a trick used by museums) or lowered onto a set of tire cradles. If the fuel isn’t drained, it should be ethanol-free; the regular stuff turns into a gummy, gooey mess when it sits. If the fuel in the tank does contain ethanol, it should be supplemented with a fuel stabilizer. If the car was driven regularly before storage, the carpets in the driver’s side footwell should either be completely dry or propped up, away from the floorboards. Cotter explains why: moisture from the driver’s shoes may get onto and under the carpets, and it may mold the carpets or, worse, become trapped between the rubber backing and the sheet metal underneath, which may begin to rust.

Some sort of rodent protection, even a Bounce sheet, should be taken. (This nifty device, called Mouse Blocker, uses sonic pulses to keep the critters at bay.) One moisture-absorbing trick that Cotter recommends is cheap, and readily found at your local hardware store: charcoal, which absorbs moisture and odors. Ideally, the paint should be waxed and the car put under a cover. Feeling fancy? Look into a Car Capsule, the “bubbles” that the Detroit Historical Society uses to store its cars.

detroit historical society storage bubble car capsule
YouTube / Hagerty

While in Storage

Of course, not all low-mile cars are barn finds like Tom’s Porsche. Many of them present amazingly well. Scott George weighs in. There’s an excitement, he says, about buying a car that appears locked in time and cosmetically perfect—free of nicks, scrapes, bumps, wrinkles. But some people, he says, may not think about what they’re getting into at a mechanical level: “Every time I see a later-model car sell with low mileage, what often goes through my mind is ‘cha-ching, cha-ching, cha-ching.’” He’s seen what can happen when cars sit for 25 or 30 years: “Everything functioning part of the automobile, maybe except for a total engine rebuild, has to be redone.”

Not all buyers may want to drive their pristine, low-mile prize, he admits—some may simply want to be the next owner, to park the car in their climate-controlled showroom as a trophy. There is nothing wrong with that, of course, but down the road, it may be a very costly one—if not for them, for the next person who buys it and wants to drive it. “Cars are operating machines,” George says. “They like to drive.”

At the very least, a car should be started once in a while, and run for more than 5 or 10 minutes—half an hour or so, at least, so that the engine and oil can come up to temperature and cooling fluids can fully circulate. Starting a car and quickly turning it off, says Cotter, “does more damage than if you just leave it alone because the cylinders are dry—there’s not enough oil in the system.”

Acids and moisture can build up, warns George, if a car doesn’t run long enough, “and exhaust systems can corrode from the inside out, and so forth.” He practices what he preaches: The Revs Institute has an unusually high commitment to keeping most of its 120-something collection running, and that means driving the cars—on a circuit loop, for the road cars, or on track, for the race cars, whether that’s at a historic racing event or during a test day where Revs rents out a facility.

Where a car is stored may make the most difference in preserving its condition, but how it is maintained during that period is a close second. “I have witnessed actually cars that 25 or 30 years old that literally sat,” says George, “and I’ve seen it firsthand: every functioning part of the automobile, maybe except for a total engine rebuild, has to be redone. The fuel systems, the fuel injectors, all of that stuff.” Maintaining a low-mile car in driving condition requires a balance of commitment and restraint: “There are some people that have just had these wonderful low-mileage cars,” says George, “and they have done annual maintenance and they have cared for the mechanical systems. They’ve just been cautious about how many mile miles they’ve put on.”

In short, the best way to keep a car in driving condition is to, well, drive it.

Barn Find Hunter Episode 172 Porsche 930 911 Turbo covered in dust in barn

“Just because a car has low miles doesn’t mean it was well cared for,” says Cotter. “Cars go bad when they sit.” A perfect storage environment and a sedentary life don’t guarantee stasis, either: “There are things that happen inside the systems of a car that break down, like the rubber in a brake system or the rubber in our fuel system. It doesn’t matter if the car is hot or cold or clean or dirty, those things are going to break down.” One interesting system that is especially prone to degrading when a car sits is the exhaust, he says. “For every gallon of fuel that’s burned in a car, a gallon of water comes out the tailpipe. It’s just part of the combustion process. And so if you run the car and then turn it off and park it for 20 years, you’ve got at least a gallon of water sitting in the exhaust system—most of it, in the muffler. Unless it’s made of stainless steel or something, it’s going to just rot right out. There’s really nothing you can do about that.” 

The fluids and the metals in a car are often conspiring against each other. “One of the biggest challenges you have managing large collections—and with cars that sit, too—is coolant system corrosion,” says Scott George, curator of collections at the Revs Institute in Naples, Florida, who knows a thing or two about keeping old cars in peak health. “You’ve got brass, copper, aluminum, iron, steel, all coming in contact with water, and it can create a battery of sorts. It can almost create its own internal energy, which can attack certain metals that are most vulnerable,” like the vanes in a water pump, which are often made of a different metal than the pump itself. Using antifreeze doesn’t eliminate the problem: Those systems can corrode, too, damaging hose connections and water chambers in cylinder heads. “Corrosion in radiators, and things that attack solder and solder seams, are also a big challenge for anybody with large collections.”

Proper storage requires understanding of the car’s construction, because certain materials require special attention and/or precautions. Wool and horsehair, materials that are especially common in the upholstery of cars built before World War II, can attract clothes moths and carpet beetles. Cuong Nguyen, a senior conservator at The Henry Ford, who is heavily involved in the care of the museum’s 300-car collection, suggests vacuuming such cars each season. He also warns that some more modern wiring harnesses are made with soy-based materials that, while eco-friendly, attract mice. Sticky traps, he says, especially those without pheromones, can be good preventive measures for furry pests.

Understanding how a car is built also helps set expectations for how it ages, even in the best conditions. For instance, different sorts of paints wear differently: Lacquer-based paint, used on most cars built before the late 1980s or early ‘90s, doesn’t hold up as well as the more modern, urethane-based version. Another notoriously finicky modern material covers the soft-touch buttons found in some Italian exotics from the 1990s or early 2000s. The black material gets sticky over time.

Best-Case Storage Scenario

Cotter, who owns a storage facility called Auto Barn in North Carolina, encourages enthusiasts to store their vehicles thoughtfully because they’re protecting their financial investment. “It might take you a half-day to get a car ready to lock up, but put a little bit of effort into it. You are maintaining your investment. It’s a mechanical portfolio. A car that’s parked haphazardly will more than likely go down in value.”

The best place to store a car—with any odometer reading—is in a clean, dry place with temperature and humidity control. To avoid flat spots on the tires, which can develop within a year, the car should be elevated, just slightly, on jack stands (as mentioned above, a trick used by museums) or lowered onto a set of tire cradles. If the fuel isn’t drained, it should be ethanol-free; the regular stuff turns into a gummy, gooey mess when it sits. If the fuel in the tank does contain ethanol, it should be supplemented with a fuel stabilizer. If the car was driven regularly before storage, the carpets in the driver’s side footwell should either be completely dry or propped up, away from the floorboards. Cotter explains why: moisture from the driver’s shoes may get onto and under the carpets, and it may mold the carpets or, worse, become trapped between the rubber backing and the sheet metal underneath, which may begin to rust.

Some sort of rodent protection, even a Bounce dryer sheet, should be used. (One nifty device, the Mouse Blocker, uses sonic pulses to keep the critters at bay.) One moisture-absorbing trick that Cotter recommends is cheap and readily found at your local hardware store: charcoal, which absorbs both moisture and odors. Ideally, the paint should be waxed and the car put under a cover. Feeling fancy? Look into a Car Capsule, one of the “bubbles” that the Detroit Historical Society uses to store its cars.

detroit historical society storage bubble car capsule
YouTube / Hagerty

While in Storage

Of course, not all low-mile cars are barn finds like Tom’s Porsche. Many of them present amazingly well. Scott George weighs in. There’s an excitement, he says, about buying a car that appears locked in time and cosmetically perfect—free of nicks, scrapes, bumps, wrinkles. But some people, he says, may not think about what they’re getting into at a mechanical level: “Every time I see a later-model car sell with low mileage, what often goes through my mind is ‘cha-ching, cha-ching, cha-ching.’” He’s seen what can happen when cars sit for 25 or 30 years: “Every functioning part of the automobile, maybe except for a total engine rebuild, has to be redone.”

Not all buyers may want to drive their pristine, low-mile prize, he admits—some may simply want to be the next owner, to park the car in their climate-controlled showroom as a trophy. There is nothing wrong with that, of course, but down the road that trophy may prove very costly—if not for them, then for the next person who buys the car and wants to drive it. “Cars are operating machines,” George says. “They like to drive.”

At the very least, a car should be started once in a while, and run for more than 5 or 10 minutes—half an hour or so, at least, so that the engine and oil can come up to temperature and cooling fluids can fully circulate. Starting a car and quickly turning it off, says Cotter, “does more damage than if you just leave it alone because the cylinders are dry—there’s not enough oil in the system.”

Acids and moisture can build up, warns George, if a car doesn’t run long enough, “and exhaust systems can corrode from the inside out, and so forth.” He practices what he preaches: The Revs Institute has an unusually high commitment to keeping most of its 120-something collection running, and that means driving the cars—on a 40-, 50-, or 60-mile (approx. 64-, 80-, or 97-kilometer) loop for the road cars, or on track for the race cars, whether that’s at a historic racing event or during a test day where Revs rents out a facility.

Where a car is stored may make the most difference in preserving its condition, but how it is maintained during that period is a close second. “I have actually witnessed cars that are 25 or 30 years old that literally sat,” says George, “and I’ve seen it firsthand: every functioning part of the automobile, maybe except for a total engine rebuild, has to be redone. The fuel systems, the fuel injectors, all of that stuff.” Maintaining a low-mile car in driving condition requires a balance of commitment and restraint: “There are some people that have just had these wonderful low-mileage cars,” says George, “and they have done annual maintenance and they have cared for the mechanical systems. They’ve just been cautious about how many miles they’ve put on.”

In short, the best way to keep a car in driving condition is to, well, drive it. For the Silo, Grace Houghton.

New Roofing Technology Trends to Watch in 2025

Technological advances make our lives easier than we once thought possible. In the field of real estate especially, technology has brought us more effective construction methods. Let’s consider, for a moment, how some of that new technology could be put to work in our homes.

The roof in particular needs more attention than other parts of a house, because it takes the brunt of rough weather and gets damaged slowly but surely. Protecting your roof and extending its life span is very important, and recent roofing trends use technology to reduce the hassle (and cost) of replacement. These trends are sustainable and built for a long service life.

Roofing technology trends

Solar Roofs:

Can you guess the type of this advanced roof?

Solar roofs are trending because they are a cost-effective roofing technology for homeowners. Solar power is sustainable and works as an alternative energy source, so the roof protects your house while doubling as a power plant. Solar roofs have been growing in popularity for the last few years, and people are adopting the technology for many reasons: solar tiles and shingles keep getting physically stronger, and a solar roof trims electricity expenses while drawing power directly from the sun.

Green Roofs:

Green roofs are a kind of living pleasure: they give you the feeling of living in a jungle, backed by the latest technology. A green roof shields the building directly from the sun’s intense heat, and it absorbs rainwater so the roof avoids flooding. Moreover, a green roof is eco-friendly and a pleasant place to relax.

Green roofs have lots of benefits. This type of roofing is more durable than many alternatives, looks natural to the eye, and turns the roof into a perfect hangout spot. It also absorbs heat before it reaches the building, so the whole structure stays cooler in the warm season.

Drones:

Drones are a surprisingly useful innovation of science and technology. These remotely controlled flying robots are handy devices that help us in many ways. In real estate, drones offer the capability of aerial observation: you can inspect your entire home, roof included, without climbing up and down a ladder or physically moving to every single spot. The drone does the inspection for you while you watch the whole process on your mobile phone screen.

In the roofing industry, drone technology has become a must-have tool, now widely used for its benefits. Roofers rely on drones for all kinds of roof-related work, and the high-resolution cameras give a clear, detailed view of any problems. As mentioned earlier, since the roofer doesn’t have to climb onto the roof for further inspection, the whole project benefits from shorter work time and an extra level of safety.

Drone photos and videos also help the project manager and roof repair company build their project portfolio. Nowadays, many real estate companies use drones on their projects because the footage helps them plan precisely for repairing a damaged roof or installing a new one.

Mixed material roofs:

We all want a long-lasting roof, because installing a new roof or replacing shingles every few years wastes a lot of money. Why spend time and money on something that may only last five seasons when a more stable, durable roof can be installed at a reasonable price?

Nowadays, new and innovative technology is taking roofing ideas into far more advanced territory. Technological improvements have introduced sustainable roofing options never dreamed of a few decades ago: cool roofs, green roofs, solar roofs, and more.

We all know that metal roofs are popular for their durability.

Tech improvements have also added various style and color options so that modern roofs can better match the building’s structure. Because of the improvement in aesthetics, metal roofing technology is successfully gaining attention from homeowners: read more about eco-friendly metal roofs.

Recently, an architectural trend called mixed materials has been gaining popularity. A roof that combines several materials can be stronger than any single one of them, something like the proverb “unity makes us strong.” That’s why composite shingles paired with metal make a strong, sustainable assembly that increases the roof’s lifespan.

Mobile Apps:

Using a mobile phone every day is a habit for all of us; most of us can’t get through a single day without one. Among the many reasons is that a phone, through its apps, can make our day more efficient.

Mobile apps play an important role in the roofing business. Using mobile roofing applications, we can measure a whole building and its roof remotely, create a complete report, and send it directly to the workers.

That saves time and increases the project success rate. Roofing and building apps also help estimate the time to project completion, and contractors affirm that all kinds of paperwork and invoices are easier to handle with mobile roofing software. The basic geometry behind such an estimate is sketched below.
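
As a concrete illustration of the kind of arithmetic such an app performs, here is a minimal sketch in Python that estimates the sloped roof surface from a flat footprint and the roof pitch, then converts that area into shingle bundles. The function names, the 10% waste factor, and the sample numbers are illustrative assumptions, not taken from any particular app.

```python
import math

def roof_area(footprint_sq_ft: float, pitch_rise: float, pitch_run: float = 12.0) -> float:
    """Estimate sloped roof area from the flat footprint and the pitch.

    A pitch of 6/12 means the roof rises 6 inches for every 12 inches of run.
    The sloped area is the footprint divided by the cosine of the slope angle.
    """
    angle = math.atan2(pitch_rise, pitch_run)      # slope angle in radians
    return footprint_sq_ft / math.cos(angle)       # flat area stretched onto the slope

def shingle_bundles(area_sq_ft: float, waste_factor: float = 1.10) -> int:
    """Rough bundle count: 3 bundles typically cover one 100 sq ft 'square' of roof."""
    squares = area_sq_ft * waste_factor / 100.0
    return math.ceil(squares * 3)

# Example: a 1,500 sq ft footprint under a 6/12-pitch roof
area = roof_area(1500, 6)                          # ~1,677 sq ft of actual roof surface
print(round(area), shingle_bundles(area))          # -> 1677 56
```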

Last word

We all know the saying “old is gold,” but that proverb doesn’t suit every sector of human life. In the real estate business in particular, implementing new technology in the roofing system will increase durability. That’s why new roofing technology trends are gaining popularity day by day.

Next Month MP3 Creators Debut Flexible Rendering Technology For New Immersive Audio Experience

Fraunhofer will be at CES 2025 from January 6-January 8, 2025 at the Fraunhofer suite in the Venetian Hotel.


December 31, 2024, Erlangen, Germany — Fraunhofer Institute for Integrated Circuits (Fraunhofer IIS), a leading developer of advanced audio technologies, including mp3 and AAC, will present at CES 2025 upHear Flexible Rendering, a ground-breaking audio technology that greatly simplifies high-quality immersive sound experiences on consumer audio reproduction devices, including loudspeakers, soundbars, and TVs. upHear Flexible Rendering enhances and distributes sound flexibly to all available speakers for a spectacular enveloping immersive audio effect at all times.

upHear Flexible Rendering for Effortless Flexibility: Expand Setups from Stereo to Immersive 

upHear Flexible Rendering belongs to Fraunhofer’s upHear range of audio processing technologies. They enhance the sound quality of content from voice signals to immersive audio masterpieces and provide pristine audio pickup as well as faithful reproduction from classic stereo setups to complex scenarios with multiple devices. The upHear Flexible Rendering technology automatically combines wireless speakers for the best possible experience, supporting new immersive audio formats as well as providing first-class upmixing for legacy content.

What Makes This Technology Impressively Different

With upHear Flexible Rendering, everything is adjustable: adding or removing a speaker from the system is no trouble, thanks to automatic adaptation to the current playback situation. That inherent flexibility allows effortless system growth, making it very budget-friendly, and systems upgraded with upHear Flexible Rendering achieve enhanced immersion for an extraordinary sound experience. “Currently, users can only connect to one smart speaker at a time or to a fixed setup per room. With the upHear technology, you now have the flexibility to conveniently combine any number of speakers into an immersive speaker cluster wherever you want. Party music in the kitchen? Transform your living room into a movie theatre? Operas in the bathroom? It is all possible without a fuss,” said Sebastian Meyer, Product Manager upHear, Fraunhofer IIS.
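
Fraunhofer has not published upHear’s internals, but the core idea of rendering sound to whatever speakers happen to be present can be illustrated with classic pairwise constant-power amplitude panning. The sketch below is that generic textbook technique in Python, not upHear itself; the function name and speaker layout are illustrative.

```python
import math

def pan_to_speakers(source_az: float, speaker_azimuths: list[float]) -> dict[float, float]:
    """Distribute a mono source at azimuth `source_az` (degrees) across an
    arbitrary ring of speakers using pairwise constant-power panning.

    Only the two speakers enclosing the source receive signal; their gains
    follow a sine/cosine law so total power stays constant.
    """
    spk = sorted(a % 360 for a in speaker_azimuths)
    src = source_az % 360
    # Find the adjacent pair (with wraparound) that encloses the source.
    for i in range(len(spk)):
        left, right = spk[i], spk[(i + 1) % len(spk)]
        span = (right - left) % 360 or 360
        offset = (src - left) % 360
        if offset <= span:
            t = offset / span                       # 0 at left speaker, 1 at right
            gains = {a: 0.0 for a in spk}
            gains[left] += math.cos(t * math.pi / 2)
            gains[right] += math.sin(t * math.pi / 2)
            return gains
    raise ValueError("no enclosing speaker pair found")

# A source at 20 degrees rendered to a 4-speaker setup. Remove a speaker and
# call the function again, and the gains adapt to the new layout automatically.
print(pan_to_speakers(20, [0, 90, 180, 270]))
```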

Fraunhofer Demos will be available at CES 2025 –

  • Speaker Repositioning: Smart speakers will be distributed in the room playing immersive music. The upHear Flexible Rendering technology will make it possible to add/remove speakers and to reposition them. The upHear algorithm optimizes the sound image based on the current setup. Visitors will be able to try out repositioning for themselves. The wireless speaker setup also uses the Fraunhofer Communication Codec LC3plus for ultra-low-latency wireless audio transmission.
  • upHear Microphone Processing Technologies: Through live calls, Fraunhofer will show how they make life easier: An “office” in the suite will show advanced AI-based technologies such as Echo Control, Noise Reduction, and the latest feature: Voice Isolation, which is the highlight of the Microphone Processing area. Users can create a “fingerprint” of their voice in seconds, which makes it possible to remove all sounds but their voice from a call for a truly personalized live call experience for better conferencing. 
  • Demo Audio Samples: Check out upHear demos at: https://comm-demos.iis.fraunhofer.de/upHear_20240808/index.html
  • upHear Mobile Audio: Fraunhofer’s virtualization platform opens the door to true spatial playback experiences on headphones. It works from a multitude of sources, from legacy stereo to current immersive formats. Thanks to the minimal resource requirements of its low-complexity rendering, it can run on DSPs in any audio device class, be it headphones, mobile devices, home audio systems, or even low-power XR devices.

upHear is available immediately. For more information, see: www.uphear.com, and https://www.audioblog.iis.fraunhofer.com/tag/uphear.

Fraunhofer Audio Codecs to be presented at CES 2025 –

  • xHE-AAC: The stereo codec for broadcast and streaming provides DASH/HLS streaming at 12-320+ kbit/s, a new anchor loudness feature, mandatory loudness and dynamic range control, improved speech quality, and stereo imaging. The technology is already used by Netflix, Facebook stories, and Instagram Reels. xHE-AAC is natively supported on the latest Amazon, Android, Microsoft, and Apple products and operating systems.
  • LC3plus Lossless: Video calls, music, and gaming are best enjoyed with wireless headphones and microphones that deliver perfect audio quality and long battery life. With LC3plus, sound reaches users in perfect quality and the new LC3plus Lossless switches seamlessly between lossless and lossy modes of operation. It works with all features of LC3plus, including low-delay and superior robustness thanks to Advanced Packet Loss Concealment. LC3plus is on the Japan Audio Society’s list of codecs whose implementation opens the door for device manufacturers to use the prestigious High-Res Audio Wireless logo. Fraunhofer IIS will demonstrate integrations into the AKG N5 Hybrid Earbuds, a HyperX gaming headset, a Sony streaming microphone, and a headphone prototype by BEStechnic.
  • MPEG-H Audio: Next Generation Audio system for streaming and broadcast applications with advanced personalization and accessibility options. Fraunhofer will showcase devices by globally leading CE manufacturers with MPEG-H Audio support, which is required for Brazil’s upcoming broadcast standard DTV+. Here, the technology helps Globo, TV Cultura, and other major providers to deliver customizable immersive sound experiences with a high degree of accessibility. Viewers can choose between different languages and commentators or select a version with enhanced dialogue.

For the Silo, Karen Thomas/ Eva Yutani.


About Fraunhofer IIS Audio & Media Technologies
Fraunhofer IIS is part of Fraunhofer-Gesellschaft, Europe’s leading applied research organization, and is the global leader in advanced technologies for audio coding and moving picture production. Almost all computers, mobile phones, and consumer electronic devices available today are equipped with technologies from Erlangen, used by billions of people around the world every day. The creation of mp3 and the co-development of AAC and HE-AAC show how Fraunhofer IIS has been innovating in the audio sector for over 30 years. The current generation of compelling audio technologies includes Fraunhofer Symphoria for automotive 3D audio, EVS for phone calls with crystal-clear audio quality, and xHE-AAC, which is used by major streaming services such as Netflix, by Facebook stories, and by Instagram Reels. MPEG-H Audio delivers personalized immersive sound for broadcast and streaming, letting viewers adjust dialogue volume to suit their personal preferences.

7 Most Expensive Electric Cars In The World Include Batmobile Inspired Dark Knight

While EVs are known mainly as environmentally friendly offerings, this list proves not all things with electric motors on four wheels are created equal.

1a. Automobili Pininfarina B95: $4.7 Million usd/ $6.67 Million cad

Topping the list is the Pininfarina B95, the world’s most expensive electric car at $4.7 million usd. Limited to just 10 units, the B95 blends breathtaking performance with unmatched luxury. With a top speed of 186 mph and acceleration from 0 to 62 mph in under two seconds, it’s as fast as it is exclusive. Crafted for collectors, the B95 epitomizes automotive luxury in the EV era.

1b. Automobili Pininfarina Battista B95 Dark Knight $4.2 Million usd/ $5.94 Million cad

Celebrating 85 years of Batman, this hypercar is meticulously crafted as the ultimate machine for Bruce Wayne’s crusade against darkness. The Battista Dark Knight blends superhero mystique with high-performance luxury, transforming the elegant, pure-electric Battista into its most formidable version yet, Furiosa. Featuring never-before-seen bespoke enhancements and aggressive styling, it showcases the pinnacle of Automobili Pininfarina’s dynamic design and craftsmanship.


2. Aspark Owl: $3.1 Million usd/ $4.4 Million cad

Hailing from the Land of the Rising Sun, the Aspark Owl takes electric speed to another level with a claimed top speed of 260 mph and a 0-60 mph time of 1.72 seconds. Its all-carbon-fiber body minimizes weight while maximizing aerodynamics. Limited to 50 units, the Owl’s exclusivity matches its $3.1 million usd price tag. A recent evolution of the model reached a record-breaking 272 mph, solidifying its place as one of the fastest EVs ever.


3. NIO EP9: $3 Million usd/ $4.25 Million cad

China’s NIO EP9 stands out with its focus on aerodynamics and track performance. With an active rear wing and 5,395 pounds of downforce at 150 mph, the EP9 excels on the racetrack. Its four motors enable a 0-124 mph sprint in just 7.1 seconds, and its innovative battery-swapping system adds convenience. Limited to 50 units, the EP9 costs $3 million usd and showcases NIO’s technical expertise.


4. Lotus Evija: $2.3 Million usd/ $3.25 Million cad

The Lotus Evija aims to redefine what an electric hypercar can achieve, delivering 1,973 horsepower from its four motors. Its lightweight design, with a curb weight of just 3,704 pounds, emphasizes performance, while a range of 250 miles ensures practicality. A special Fittipaldi edition pays homage to Lotus’s racing legacy, featuring even greater power and exclusivity. At $2.3 million, the Evija remains a pinnacle of British engineering.


5. Pininfarina Battista: $2.25 Million usd/ $3.18 Million cad

We already mentioned the Batman edition above, but the ‘base model’ Automobili Pininfarina Battista is an electrified masterpiece, blending exquisite design with awe-inspiring performance. With a combined output of 1,900 horsepower from four motors, the Battista rockets from 0 to 62 mph in just 1.86 seconds. Its 120 kWh battery allows fast charging to 80% in 25 minutes, and its carbon fiber construction optimizes agility. Priced at $2.25 million usd, this Italian creation is limited to 150 units.


6. Rimac Nevera: $2.2 Million usd/ $3.11 Million cad

Croatia’s Rimac Nevera has rewritten the record books, claiming the title of the world’s fastest EV with a top speed of 258 mph. Its four motors generate 1,813 horsepower, enabling blistering acceleration and exceptional handling. With only 150 units produced, each priced at $2.2 million, the Nevera is a true collector’s item. A special Time Attack variant, priced at over $3 million usd , adds even more exclusivity to an already rare hypercar.


7. Deus Vayanne: $2 Million usd/ $2.83 Million cad

The Deus Vayanne debuted at the 2022 New York Auto Show, boasting a staggering 2,243 horsepower thanks to its tri-motor setup. Designed in Austria, produced in Italy, and electrified in the UK, this hypercar achieves a balance of power and elegance. Its unique infinity-loop-inspired grille complements an interior lined with sustainable materials. With a range of 310 miles and a limited production run of 99 units, the Vayanne offers exclusivity at $2 million.


For the Silo, Verdad Gallardo.

Lockheed Martin Space Fence Tracks 25,000 Orbiting Objects

Lockheed Martin continues refining its technology solution for Space Fence, a program that revamps the way the U.S. Air Force and U.S. Space Force identify and track objects in space. The U.S. Air Force selected Lockheed Martin in 2015 to build the $914 million usd / $1.285 billion cad Space Fence radar to safeguard space resources.

Lockheed Martin’s Space Fence solution, an advanced ground-based radar system, enhances the way the U.S. detects, catalogs, and measures more than 200,000 orbiting objects and tracks over 25,000 of them. With better timeliness and improved surveillance coverage, the system protects space assets against potential collisions that could intensify the debris problem in space.

“Space Fence locates and tracks space objects with more precision than ever before to help the Air Force transform space situational awareness from being reactive to predictive.”

Lockheed Martin delivered up to two advanced S-Band phased array radars for the Space Fence program. The Space Fence radar system greatly improves Space Situational Awareness of the existing Space Surveillance Network.
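
Some context on the scale of the tracking task: how often a given object passes within view of a fixed radar site is governed by basic orbital mechanics. The sketch below is a standard application of Kepler’s third law in Python, not anything from Lockheed Martin’s software; the constants are well-known physical values and the 550 km example altitude is illustrative.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_M = 6_371_000  # mean Earth radius, m

def orbital_period_minutes(altitude_km: float) -> float:
    """Period of a circular orbit at the given altitude, via Kepler's third law:
    T = 2*pi*sqrt(a^3 / mu), where a is the orbital radius."""
    a = EARTH_RADIUS_M + altitude_km * 1_000
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# A piece of debris at 550 km circles the Earth in roughly 95-96 minutes,
# about 15 times a day; each pass gives a radar a chance to update its track.
print(round(orbital_period_minutes(550), 1))
```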

There is a lot to track, and the amount of space debris grows every year.
Construction of the new Space Fence system on Kwajalein Atoll in the Marshall Islands began in February 2015 to meet the program’s 2018 initial operational capability goal. With more than 400 operational S-band arrays deployed worldwide, Lockheed Martin is a leader in S-band radar operation. The Lockheed Martin-led team, which includes General Dynamics and AMEC, has decades of collective experience in space-related programs.

Headquartered in Bethesda, Maryland, Lockheed Martin is a global security and aerospace company that employs approximately 113,000 people worldwide and is principally engaged in the research, design, development, manufacture, integration and sustainment of advanced technology systems, products and services.

On 16 December 2002, US President George W. Bush signed a National Security Presidential Directive that outlined a plan to begin deployment of operational ballistic missile defense systems by 2004.

The following day, the US formally requested from the UK and Denmark the use of facilities at RAF Fylingdales, England, and Thule, Greenland, respectively, as part of the NMD program.

The administration continued to push the program, despite highly publicized but not unexpected trial-and-error technical failures during development, and over the objections of some scientists who opposed it. The projected cost of the program for the years 2004 to 2009 was $53 billion usd / $74.55 billion cad, making it the largest single line item in the Pentagon’s budget. For the Silo, George Filer.

When Buick And Oldsmobile Promoted Cars With Space Themed Musicals

General Motors’ affinity for using entertainment to promote its products reached a fever pitch in 1955, as an estimated two million people attended Motorama in New York City, Boston, Miami, San Francisco, and Los Angeles. It was followed that same year by Powerama in Chicago, a show that highlighted GM’s non-automotive businesses and featured a musical dubbed “More Power to You.” It included French acrobats atop a 70-foot crane, 35-ton bulldozers dancing the mambo, and a battle of strength between a top-hatted elephant and a bulldozer in which the pachyderm is sent packing. The show ran for 26 days and attracted two million visitors. 

But that wasn’t the end of it, as GM produced musicals—yes musicals—to help move the metal. The result would be Buick’s Spacerama (so many -ramas) and Oldsmobile’s The Merry Oh-h-h.

Oldsmobile in 1955

1955 Oldsmobile black white
Flickr/Chad Horwedel

Having reached record sales of 583,179 units for the 1955 model year, Oldsmobile hoped to continue the sales boom for 1956, even though its lineup was mostly carryover. The biggest news was the Jetaway Hydra-matic automatic transmission, redesigned for the first time since its introduction in 1940. It now offered a Park position, like modern automatics, and featured two fluid couplings to smooth shifts between its four gears. The Jetaway was standard on the 98 and Super 88.

J.F. Wolfram, Oldsmobile general manager, confidently predicted Oldsmobile would sell 750,000 cars for the 1956 model year as Oldsmobile employment reached a record high of 19,170 employees.

To stoke enthusiasm, the company created a musical dubbed “The Merry Oh-h-h”, which debuted in New York City at the Ziegfeld Theatre. The show starred Chita Rivera, who had appeared in “Call Me Madam” and “Can Can.” Here she plays Miss Jetaway Drive alongside singer Mildred Hughes and Billy Skipper, who danced in “Annie Get Your Gun.” Other notable names include Joe Flynn, Frank Gorshin, Charles Cooper and Bern Hoffman. It was directed by Max Hodge, who would go on to work on the TV shows “Mission: Impossible” and “Mannix.”

General Motors Merry Oh h h
GM

The musical, which cost GM $150,000 usd / $210,000 cad to produce at the time, espoused the glories of power steering, automatic transmissions, and Rocket V8 engines. Songs included “Tops in Transmission,” “Advancing on Lansing” and “The Car is the Star.”

After its New York debut, the musical and its 34-member cast went on tour to San Francisco, Fort Worth and Chicago before arriving in Lansing, Michigan, Oldsmobile’s hometown, which included an appearance by pop star Patti Page.

But the show generated unintended notoriety when its piano player, Robert Orpin, was found dead in his room at the Hilton Hotel in Fort Worth. Orpin, who hailed from Forest Hills, Long Island, was found in a filled bathtub with the hot water running; he was discovered by a maid who heard the water running. His death was later ruled accidental.

“The Merry Oh-h-h” would play to 30,000 Oldsmobile employees and their families nationwide. But it did little for Oldsmobile sales, as demand fell to 485,492 units for the model year.

Buick heads for Spacerama

General Motors Spacerama
GM

No doubt using a stage show to promote new models was hardly an isolated idea at GM in 1955. In fact, Buick arrived at the idea before Oldsmobile, thanks to their ad agency at the time, the Kudner Agency and its vice president, Myron Kirk.

Kirk had attended GM’s 1954 Motorama during its nine-day stand in Boston, where he ran into Ivan Wiles, vice president and general manager of Buick, and Al Belfie, Buick’s general sales manager. While watching the theatrics, Kirk told the executives of the impressive dancing he had seen in the then-new movie, “Seven Brides for Seven Brothers.” Kirk arranged a private viewing of the film for them, and afterwards, Kirk received approval to bring in the movie’s choreographer, Michael Kidd, to produce a show to promote the 1956 Buick lineup.

General Motors Spacerama
GM

He tapped Alan Lipscott and Robert Fisher to write the show. The duo was well-known for writing scripts for such TV shows as “Make Room For Daddy,” “The Donna Reed Show” and “Bachelor Father,” along with many others. The plot concerned mankind’s quest for transportation, from the Stone Age to the present day, culminating in a trip to Mars that reveals a depressed population. The Martians overcome their depression when they are brought to Earth to see the 1956 Buick lineup. The show starred Mark Dawson and comedian Jack E. Leonard.

For the music, Kirk’s agency chose Bernie Wayne, who is best known for such songs as “Blue Velvet,” “The Magic Touch,” the Miss America theme, and the commercial jingle “Chock Full O’Nuts Is the Heavenly Coffee.” For Buick’s musical, Wayne composed such songs as “Just Like Coming Home Again,” “Switch the Pitch,” and “The Peak of Civilization.”

The show started in Flint, Michigan before heading to Los Angeles, Houston, Chicago, Atlanta, Detroit, and wrapping up in New York City. In all, 50,000 Buick dealers, employees and their families saw the show.

Still, you have to wonder why GM went to so much trouble. “We have about 12,000 dealers and their salesmen,” a Buick spokesman told the Detroit Free Press in September 1955. “Many of them will sell as much as $150,000 usd of our products next year. You surely can afford to spend $100 or more to entertain them.”

Of course, GM could afford such largesse; they were on their way to their first billion-dollar annual profit. Now that’s a lot of spacebucks. For the Silo, Larry Printz/ Hagerty. Featured image- GM’s Spacerama 2 promo.

Moon Rover Driver & French Astronaut Join Monaco Prince For Visit Of New Moon Rover Lab

PRINCE ALBERT II OF MONACO, APOLLO 15 COMMANDER DAVID SCOTT AND ASTRONAUT JEAN-FRANCOIS CLERVOY VISITING VENTURI SPACE 
Monaco, November 2024 – Gildo Pastor, President of the Monegasque company Venturi Space, welcomed HSH Prince Albert II of Monaco, General David Scott – the first person to have driven a rover on the Moon and Commander of the Apollo 15 mission – and astronaut Jean-François Clervoy.

As a prelude to the lunar missions in which Venturi Space will participate in 2025 and 2026, its President, Gildo Pastor, invited Prince Albert II of Monaco and former astronauts David Scott and Jean-François Clervoy to learn more about the upcoming programme and the advanced technologies developed by Venturi Space’s European bases (Monaco, Switzerland, and France) as part of their collaboration with the North American strategic partner, Venturi Astrolab, Inc. This US-based entity is developing multi-purpose rovers optimised for the needs of the lunar South Pole: FLIP, which will become operational in 2025, and FLEX, scheduled for launch with SpaceX in 2026 at the earliest. FLEX is also one of three mobility solutions shortlisted by NASA for the Artemis V mission in 2030.

In the presence of Isabelle Berro-Amadeï, Minister for External Relations and Cooperation; Pierre-André Chiappori, Minister for Finance and the Economy; HE Maguy Maccario-Doyle, Monaco’s Ambassador to the United States of America; and Frédéric Genta, Interministerial Delegate for Attractiveness and Digital Transition, the visit consisted of five main stages:

-A presentation of the FLIP and FLEX rovers,
-An overview of Venturi Space Monaco’s lunar battery manufacturing technologies and techniques,
-A discussion of the upcoming missions of Venturi Astrolab and Venturi Space, featuring insights from David Scott and Jean-François Clervoy,
-A presentation of Venturi Space Switzerland’s hyper-deformable lunar wheel technology,
-An exhibition by Philippe Tondeur dedicated to the helmets and suits of aerospace history.

‘Venturi Space is taking on a very serious challenge! The FLEX rover is very different from the one I drove; it’s much bigger and will have an enormous operating life. It seems to me that the teams are doing a good job, and I wish them good luck.’ – General David Scott.

‘I’m passionate about space exploration and wheeled vehicles. Welcoming the first person to have driven a rover on the Moon, in the presence of the Sovereign and Jean-François Clervoy, brings me immense pleasure’ – Gildo Pastor, President of Venturi Space.

Hundreds of New UFO Sightings Reported to Pentagon

The new findings bring the total number of UAP cases under review to more than 1,600 as of June 2024.

A photo from the Department of Defense shows an “unidentified aerial phenomenon.” Department of Defense

There were 757 reports of unidentified anomalous phenomena (UAP) between May 2023 and June 2024, according to an unclassified Department of Defense (DOD) report released on Nov. 14.

Congress mandated the annual report by the DOD’s All-Domain Anomaly Resolution Office (AARO), which is tasked with studying and cataloging reports of UAPs, formerly referred to as UFOs.

The report said that AARO received 757 UAP reports from May 1, 2023, to June 1, 2024, and “485 of these reports featured UAP incidents that occurred during the reporting period.”

“The remaining 272 reports featured UAP incidents that occurred between 2021 and 2022 but were not reported to AARO until this reporting period and consequently were not included in previous annual UAP reports,” the report reads.

The new findings bring the total number of UAP cases under AARO review to more than 1,600 as of June.

AARO Director Jon Kosloski said at a Nov. 14 media briefing that the findings have left investigators puzzled.


“There are interesting cases that I, with my physics and engineering background and time in the [intelligence community], do not understand,” Kosloski said. “And I don’t know anybody else who understands them either.”

Some cases were later resolved, with 49 determined to be sightings of common objects such as balloons, birds, and unmanned aerial systems. Another 243, also found to be sightings of ordinary objects, were recommended for closure by June. However, 444 were deemed inexplicable and lacking sufficient data, so they were archived for future investigation.

Notably, 21 cases were considered to “merit further analysis” because of anomalous characteristics and behaviors.

Despite the unexplained incidents, the office noted that it “has discovered no evidence of extraterrestrial beings, activity, or technology.”

The report said UAP cases often showed consistent patterns: unidentified lights, or orb-shaped and otherwise round objects with distinct visual traits.

Of the new cases, 81 were reported in U.S. military operating areas, and three reports from military air crews described “pilots being trailed or shadowed by UAP.”

The Federal Aviation Administration reported 392 unexplained sightings among the 757 reports made since 2021.

In one such case, the AARO resolved a commercial pilot’s sighting of white flashing lights as a Starlink satellite launched from Cape Canaveral, Florida.

“AARO is investigating if other unresolved cases may be attributed to the expansion of the Starlink and other mega-constellations in low earth orbit,” the report states.

The AARO report maintains that none of the resolved cases has substantiated “advanced foreign adversarial capabilities or breakthrough aerospace technologies.” The document also states that the AARO will immediately notify Congress if any cases indicate such characteristics, which could suggest extraterrestrial involvement.

The report emphasized the AARO’s “rigorous scientific framework and a data-driven approach” and safety measures while investigating these phenomena.

UAP Hearing

The report was released a day after a House Oversight Committee hearing titled “Unidentified Anomalous Phenomena: Exposing the Truth,” during which witnesses alleged government secrecy surrounding the phenomena.

During the hearing, a former DOD official, Luis Elizondo, said, “Advanced technologies not made by our government or any other government are monitoring sensitive military installations around the globe.”

He testified that the government has operated secret programs to retrieve UAP crash materials to identify and reverse-engineer alien technology.

“Furthermore, the U.S. is in possession of UAP technologies, as are some of our adversaries. I believe we are in the midst of a multi-decade secretive arms race, one funded by misallocated taxpayer dollars and hidden from our elected representatives and oversight bodies,” Elizondo said.

“Although much of my government work on the UAP subject still remains classified, excessive secrecy has led to grave misdeeds against loyal civil servants, military personnel, and the public, all to hide the fact that we are not alone in the cosmos.

“A small cadre within our own government involved in the UAP topic has created a culture of suppression and intimidation that I have personally been victim to, along with many of my former colleagues.” For The Silo, Rudy Blalock/NTD.

A Pathway To Trusted AI

Artificial Intelligence (AI) has infiltrated our lives for decades, but since the public launch of ChatGPT showcasing generative AI in 2022, society has faced unprecedented technological evolution. 

With digital technology already a constant part of our lives, AI has the potential to alter the way we live, work, and play – but exponentially faster than conventional computers have. With AI comes staggering possibilities for both advancement and threat.

The AI industry creates unique and dangerous opportunities and challenges. AI can do amazing things humans can’t, but in many situations, referred to as the black box problem, experts cannot explain why a particular decision was made or where a piece of information came from. These outcomes can sometimes be inaccurate because of flawed data, bad decisions or infamous AI hallucinations. There is little regulation or guidance for software, and effectively no regulations or guidelines for AI.

How do researchers find a way to build and deploy valuable, trusted AI when there are so many concerns about the technology’s reliability, accuracy and security?

That was the subject of a recent C.D. Howe Institute conference. In my keynote address, I commented that it all comes down to software. Software is already deeply intertwined in our lives, from health, banking, and communications to transportation and entertainment. Along with its benefits, there is huge potential for the disruption and tampering of societal structures: Power grids, airports, hospital systems, private data, trusted sources of information, and more.  

Consumers might not incur great consequences if a shopping application goes awry, but our transportation, financial or medical transactions demand rock-solid technology.

The good news is that experts have the knowledge and expertise to build reliable, secure, high-quality software, as demonstrated across Class A medical devices, airplanes, surgical robots, and more. The bad news is this is rarely standard practice. 

As a society, we have often tolerated compromised software for the sake of convenience. We trade privacy, security, and reliability for ease of use and corporate profitability. We have come to view software crashes, identity theft, cybersecurity breaches and the spread of misinformation as everyday occurrences. We are so used to these trade-offs with software that most users don’t even realize that reliable, secure solutions are possible.

With the expected potential of AI, creating trusted technology becomes ever more crucial. Allowing unverifiable AI in our frameworks is akin to building skyscrapers on silt. Security and functionality by design trump whack-a-mole retrofitting. Data must be accurate, protected, and used in the way it’s intended.

Striking a balance between security, quality, functionality, and profit is a complex dance. The BlackBerry phone, for example, set a standard for secure, trusted devices. Data was kept private, activities and information were secure, and operations were never hacked. Devices were used and trusted by prime ministers, CEOs and presidents worldwide. The security features it pioneered live on and are widely used in the devices that outcompeted BlackBerry.

Innovators have the know-how and expertise to create quality products. But often the drive for profits takes precedence over painstaking design. In the AI universe, however, where issues of data privacy, inaccuracies, generation of harmful content and exposure of vulnerabilities have far-reaching effects, trust is easily lost.

So, how do we build and maintain trust? Educating end-users and leaders is an excellent place to start. They need to be informed enough to demand better, and corporations need to strike a balance between caution and innovation.

Companies can build trust through a strong adherence to safe software practices, education in AI evolution and adherence to evolving regulations. Governments and corporate leaders can keep abreast of how other organizations and countries are enacting policies that support technological evolution, institute accreditation, and financial incentives that support best practices. Across the globe, countries and regions are already developing strategies and laws to encourage responsible use of AI. 

Recent years have seen the creation of codes of conduct and regulatory initiatives such as:

  • Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, September 2023, signed by AI powerhouses such as the Vector Institute, Mila-Quebec Artificial Intelligence Institute and the Alberta Machine Intelligence Institute;
  • The Bletchley Declaration, Nov. 2023, an international agreement to cooperate on the development of safe AI, has been signed by 28 countries;
  • US President Biden’s 2023 executive order on the safe, secure and trustworthy development and use of AI; and
  • Governing AI for Humanity, UN Advisory Body Report, September 2024.

We have the expertise to build solid foundations for AI. It’s now up to leaders and corporations to ensure that much-needed practices, guidelines, policies and regulations are in place and followed. It is also up to end-users to demand quality and accountability. 

Now is the time to take steps to mitigate AI’s potential perils so we can build the trust that is needed to harness AI’s extraordinary potential. For the Silo, Charles Eagan. Charles Eagan is the former CTO of Blackberry and a technical advisor to AIE Inc.

In The Future Cyberwar Will Be Primary Theater For Superpowers

Cybersecurity expert explains how virtual wars are fought

With the Russia-Ukraine war in full swing, cybersecurity experts point to a cyber front that had been forming online long before Russian troops crossed the border. Even in the months leading up to the outbreak of war, Ukrainian websites were attacked and altered to display threatening messages about the coming invasion.

“In response to Russian warfare actions, the hacking collective Anonymous launched a series of attacks against Russia, with the country’s state media being the main target. So we can see cyber warfare in action with new types of malware flooding both countries, thousands of sites crashing under DDoS (distributed denial-of-service) attacks, and hacktivism thriving on both sides of barricades,” Daniel Markuson, a cybersecurity expert at NordVPN, says.

The methods of cyberwarfare

In the past decade, the amount of time people spend online has risen drastically. Research by NordVPN has shown that Americans spend around 21 years of their lives online. With our life so dependent on the internet, cyber wars can cause very real damage. Some of the goals online “soldiers” are trying to pursue include:

  • Sabotage and terrorism

The intent of many cyber warfare actions is to sabotage and cause indiscriminate damage. From taking a site offline with a DDoS attack to defacing webpages with political messages, cyber terrorists launch multiple operations every year. One event with especially wide impact happened in Turkey, when Iranian hackers managed to knock out the power grid for around twelve hours, affecting more than 40 million people. (A minimal sketch of one common defensive countermeasure, rate limiting, appears after this list.)

  • Espionage

While cyber espionage also occurs between corporations, with competitors vying for patents and sensitive information, it’s an essential strategy for governments engaging in covert warfare. Chinese intelligence services are regularly named as the culprits in such operations, although they consistently deny the accusations.

  • Civilian activism (hacktivism)

The growing trend of hacktivism has seen civilian cyber activists take on governments and authorities around the world. One example of hacktivism is Anonymous, a group that has claimed responsibility for assaults on government agencies in the US. In 2022, Anonymous began a targeted cyber campaign against Russia after it invaded Ukraine in an attempt to disrupt government systems and combat Russian propaganda.

  • Propaganda and disinformation

In 2020, 81 countries were found to have used some form of social media manipulation. This type of manipulation was usually ordered by government agencies, political parties, or politicians. Such campaigns, which largely involve the spread of fake news, tended to focus on three key goals – distract or divert conversations away from important issues, increase polarization between religious, political, or social groups, and suppress fundamental human rights, such as the right to freedom of expression or freedom of information.
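
On the defensive side, a building block behind many DDoS mitigations is rate limiting. The sketch below is a generic token-bucket limiter written for illustration; it is not NordVPN code, and real mitigation stacks operate at network scale, but the principle is the same: a flood from a single source quickly drains that source’s budget and gets dropped, while normal visitors are unaffected.

```python
import time

class TokenBucket:
    """Allow bursts of up to `capacity` requests, refilling at `rate` per second.
    Requests arriving when the bucket is empty are rejected (or queued)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# In practice a server keeps one bucket per client IP.
bucket = TokenBucket(rate=5, capacity=10)
print([bucket.allow() for _ in range(12)].count(True))  # ~10 allowed in a burst
```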

The future of cyber warfare

“Governments, corporations, and the public need to understand this emerging landscape and protect themselves by taking care of their physical security as well as cybersecurity. From the mass cyberattacks of 2008’s Russo-Georgian War to the cyber onslaught faced by Ukraine today, this is the new battleground for both civil and international conflicts,” Daniel Markuson says.

Markuson predicts that in the future, cyber war will become the primary theater of war for global superpowers. He also thinks that terrorist cells may shift their focus to civilian infrastructure and other high-risk networks, where attackers would be even harder to detect and could launch strikes from anywhere in the world. Lastly, Markuson thinks that activism will become more virtual, allowing citizens to hold large governmental authorities to account.

A regular person can’t do much to fight in a cyber war or to protect themselves from the consequences.

However, educating yourself, paying attention to the reliability of your sources of information, and maintaining a critical attitude toward everything you read online can increase your awareness and leave you less affected by propaganda. For the Silo, Darija Grobova.

Great Tips For Winter Storing Your Classic

The trees are almost bare and the evening arrives sooner each day. We all know what that means: It’s time to tuck away our classics into storage.

Just when you thought you’d heard every suggestion and clever tip for properly storing your classic automobile, along comes another recommendation—or two, or three or twelve 😉

As you can imagine, I’ve heard plenty of ideas and advice about winter storage over the years. Some of those annual recommendations are repeated here. And some have been amended—for example, the fragrance of dryer sheets is way more pleasing to noses than the stench of mothballs, and the fresh smell actually does a superior job of repelling mice.

Wash and wax

ferrari 458 wax
Sabrina Hyde

It may seem fruitless to wash the car when it is about to be put away for months, but it is an easy step that shouldn’t be overlooked. Water stains or bird droppings left on the car can permanently damage the paint. Make sure to clean the wheels and undersides of the fenders to get rid of mud, grease and tar. For added protection, give the car a coat of wax and treat any interior leather with a good conditioner.

Car cover

Viper under a car cover (Photo: Don Rutt)

Even if your classic is stored in a garage in semi-stable temperatures and protected from the elements, a car cover will keep any spills or dust off of the paint. It can also protect from scratches while moving objects around the parked car.

Oil change

Checking the oil on a 1960 Plymouth Fury (Photo: Sabrina Hyde)

If you will be storing the vehicle for longer than 30 days, consider getting the oil changed. Used engine oil has contaminants that could damage the engine or lead to sludge buildup. (And if your transmission fluid is due for a change, do it now too. When spring rolls around, you’ll be happy you did.)

Fuel tank

Filling up a red Camaro (Photo: Sabrina Hyde)

Before any extended storage period, remember to fill the gas tank to prevent moisture from accumulating inside it and to keep the seals from drying out. You should also pour in fuel stabilizer to prevent buildup and protect the engine from gum, varnish, and rust. This is especially critical with modern gasoline blended with ethanol, which gums up more easily. The fuel stabilizer will keep the gas from deteriorating for up to 12 months.

Radiator

This is another area where fresh fluids will help prevent contaminants from slowly wearing down engine parts. If it’s time to flush the radiator fluid, doing it before winter storage is a good idea. Whether or not you put in new antifreeze, check your freezing point with a hydrometer or test strips to make sure you’re good for the lowest of winter temperatures.

Battery

Car battery (Photo: Optima)

An unattended battery will slowly lose its charge and eventually go bad, leaving you to purchase a new battery in the spring. The easiest, low-tech solution is to disconnect the battery cables—the negative (ground) first, then the positive. You’ll likely lose any stereo presets, the clock, and other settings. If you want to keep those settings and ensure that your battery starts the moment you return, purchase a trickle charger. This device hooks up to your car battery on one end and plugs into a wall outlet on the other, delivering just enough electrical power to keep the battery topped up. Warning: Do not use a trickle charger if you’re storing your car off property. In rare cases they’ve been known to spark a fire.

Parking brake

For general driving it is a good idea to use the parking brake, but don’t do it when you leave a car in storage long term; if the brake pads stay clamped against the rotors for an extended period of time, they could fuse together. Instead of risking your emergency brake, purchase a tire chock or two to prevent the car from moving.

Tire care

Ferrari tire care (Photo: Sabrina Hyde)

If a vehicle is left stationary for too long, the tires can develop flat spots from the weight of the vehicle pressing down on the treads. This occurs at a faster rate in colder temperatures, especially with high-performance or low-profile tires, and in severe cases a flat spot becomes a permanent part of the tire, requiring replacement. If your car will be in storage for more than 30 days, consider taking off the wheels and placing the car on jack stands at all four corners. With that said, some argue that this procedure isn’t good for the suspension, and there’s always this consideration: if there’s a fire, you’ll have no way to roll the car to safety.

If you don’t want to go through the hassle of jack stands, overinflate your tires slightly (2–5 psi) to account for any air loss while the car hibernates, and make sure the tires sit on plywood, not in direct contact with the floor.

Repel rodents

Buick in the barn (Photo: Gabe Augustine)

A solid garage will keep your car dry and relatively warm, conditions that can also attract unwanted rodents during the cold winter months. There are plenty of places in your car for critters to hide and even more things for them to destroy. Prevent them from entering your car by covering any gaps where a mouse could enter, such as the exhaust pipe or an air intake; steel wool works well for this. Next, spread scented dryer sheets or Irish Spring soap shavings inside the car and mothballs around the perimeter of the vehicle. For a more proactive approach, and if you’re the killing type, you can also lay down a few mouse traps (although you’ll need to check them regularly for casualties).

Maintain insurance

In order to save money, you might be tempted to cancel your auto insurance when your vehicle is in storage. Bad idea. If you remove coverage completely, you’ll be on your own if there’s a fire, the weight of snow collapses the roof, or your car is stolen. If you have classic car insurance, the policy covers a full year and takes winter storage into account in your annual premium.

A few more tips worth passing along:

  • “An ex-Ferrari race mechanic (Le Mans three times) recommends adding half a cup of automatic transmission fluid to the fuel tank before topping up, and then running the engine for 10 minutes. This applies ONLY to carbureted cars. The oil coats the fuel tank, lines, and carb bowls and helps avoid corrosion. It will easily burn off when you restart the car.”
  • A warning regarding car covers: “The only time I covered was years ago when stored in the shop side of my machine shed. No heat that year and the condensation from the concrete caused rust on my bumpers where the cover was tight. The next year I had it in the dirt floor shed and the mice used the cover ties as rope ladders to get in.”
  • “I use the right amount of Camguard in the oil to protect the engine from rust. It’s good stuff.”
  • “Your car’s biggest villain is rust. That’s why I clean the car inside and out, and wax it prior to putting it in storage. For extra protection, I generously wax the bumpers and other chrome surfaces, but I do not buff out the wax. Mildew can form on the interior; to prevent this I treat the vinyl, plastic, and rubber surfaces with a product such as Armor All.”
  • “Ideally, your car should be stored in a clean, dry garage. I prepare the floor of the storage area by laying down a layer of plastic drop cloth, followed by cardboard. The plastic drop cloth and cardboard act as a barrier to keep the moisture that is in the ground from seeping through the cement floor and attacking the underside of my car.”
  • “Fog out the engine. I do this once the car is parked where it is to be stored for the winter, and while it is still warm from its trip. Remove the air cleaner and spray engine fogging oil into the carburetor with the engine running at a high idle. Once I see smoke coming out of the exhaust, I shut off the engine and replace the air cleaner. Fogging out the engine coats many of the internal engine surfaces, as well as the inside of the exhaust with a coating of oil designed to prevent rust formation.”

Relax, rest, and be patient

Ford Model A roadster in a garage (Photo: Gabe Augustine)

For those of us who live in cold weather provinces or states, there’s actually a great sense of relief when you finally complete your winter prep and all of your summer toys are safely put to bed before the snow flies. Relax; you’ve properly protected your classic. It won’t be long before the snow is waist-high and you’re longing for summer—and that long wait may be the most difficult part of the entire storage process. Practice patience and find something auto-related to capture your attention and bide your time. You’ll be cruising again before you know it. (Keep telling yourself that, anyway.) For the Silo, Rob Siegel/Hagerty.

Is New Porsche 911 GT3 Touring All The Car You’ll Ever Need?

Top Gear UK, November 2024 – Not one but two new Porsche 911 GT3s are upon us: a regular be-winged car and the more subtle Touring model. And for once, the headline news isn’t the power, the peak revs, or the Nürburgring lap time, but how practical it is.

That’s right, because for the first time in the 25-year history of the GT3, it’s being offered with back seats.

It’s only for the Touring, but that addition alone will be enough to start The Internet chattering about whether this is ‘all the car you’ll ever need’.

However, if kids, or at least taking your kids with you, isn’t your thing, then worry not. The back seats are merely an option, and the non-Touring GT3 can’t be had with them at all. Plus, if you’re the sort of Porsche purist who hates weight, you can double down on that ethos with either a Weissach pack for the GT3 or a Leichtbau (aka Lightweight) pack for the Touring.

As for what else is new (and there are a lot of detailed, GT3 RS-inspired changes), join Top Gear’s Tom Ford for an in-depth walkaround of both new GT3s with Andreas Preuninger, Porsche’s Director of GT Cars…

The Dawn of Artificial Intelligence: A Journey Through Time

Artificial Intelligence (AI) has become an integral part of our daily lives, influencing everything from how we interact with technology to how businesses operate. But where did it all begin? Let’s take a journey through the early days of AI, exploring the key milestones that have shaped this fascinating field.

Early Concepts and Inspirations

The concept of artificial beings with intelligence dates back to ancient myths and legends. Stories of mechanical men and intelligent automata can be found in various cultures, reflecting humanity’s long-standing fascination with creating life-like machines [1]. However, the scientific pursuit of AI began much later, with the advent of modern computing.

The Birth of AI as a Discipline

The field of AI was officially founded in 1956 during the Dartmouth Conference, organized by computer science pioneers John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon [2]. This conference is often considered the birth of AI as an academic discipline. The attendees proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Early Milestones

One of the earliest successful AI programs was written in 1951 by Christopher Strachey, who later became the director of the Programming Research Group at the University of Oxford. Strachey’s checkers (draughts) program ran on the Ferranti Mark I computer at the University of Manchester, England [3]. This program demonstrated that machines could perform tasks that required a form of intelligence, such as playing games.

In 1956, Allen Newell and Herbert A. Simon developed the Logic Theorist, a program designed to mimic human problem-solving skills. This program was able to prove mathematical theorems, marking a significant step forward in AI research [4].

The Rise and Fall of AI Hype

The initial success of AI research led to a period of great optimism, often referred to as the “AI spring.” Researchers believed that human-level AI was just around the corner. However, progress was slower than expected, leading to periods of reduced funding and interest known as “AI winters” [4]. Despite these setbacks, significant advancements continued to be made.

The Advent of Machine Learning

The 1980s and 1990s saw the rise of machine learning, a subset of AI focused on developing algorithms that allow computers to learn from and make predictions based on data. This period also saw the development of neural networks, inspired by the structure and function of the human brain [4].

The Modern Era of AI

The 21st century has witnessed a resurgence of interest and investment in AI, driven by advances in computing power, the availability of large datasets, and breakthroughs in algorithms. The development of deep learning, a type of machine learning involving neural networks with many layers, has led to significant improvements in tasks such as image and speech recognition [4].

Today, AI is a rapidly evolving field with applications in various domains, including healthcare, finance, transportation, and entertainment. From virtual assistants like me, Microsoft Copilot, to autonomous vehicles and systems, AI continues to transform our world in profound ways.

A self-generated image produced by Copilot when queried, “Show me what you look like.” CP

Conclusion

The journey of AI from its early conceptual stages to its current state is a testament to human ingenuity and perseverance. While the field has faced numerous challenges and setbacks, the progress made over the past few decades has been remarkable. As we look to the future, the potential for AI to further revolutionize our lives remains immense.

References: 1: Wikipedia; 2: Timescale; 3: Encyclopedia Britannica; 4: Wikipedia


For the Silo, Microsoft Copilot AI. 😉

Inside The High Flying & Spying World Of Hot Air Balloons

Remember early last year when we were besieged by strange, large balloons in our airspace – the kind of balloons that certain nations are using to spy on us or possibly manipulate the weather?

Okay, let’s put those conspiracy theories aside for now. Hopefully everyone has seen a hot air balloon in flight at least once, because they are majestic and otherworldly and quite calming to watch drifting around up in the blue sky above.

But have you ever wondered when the first one was invented? Or how much hot air is required to get them safely off the ground and ready for a flight around the skies? Or where they are stored when they aren’t being used? Our friends at SpareFoot were wondering the same thing, and the data they shared with us below is quite astonishing.

Hot Air Balloon InfoGraphic

Supplemental: Baumgartner’s record-setting free fall event utilized a hot air balloon to reach the “edge of space”

Amidst Waves of Data Breaches, U.S. Gov Advised Agencies: Implement Zero Trust Architecture

Nearly two years have passed since the FAA outage of January 11th, 2023, which resulted in the complete closure of U.S. airspace and most of the airspace here in Canada, yet arguments and questions about it keep arising.

Although the FAA later confirmed that the outage was, in fact, caused by a contractor who unintentionally damaged a data file related to the Notice to Air Missions (NOTAM) system, that explanation is still widely debated.

The FAA initially urged airlines to ground domestic departures following the system glitch. (Credit: Reuters)

“The FAA said it was due to one corrupted file – who believes this? Are there no safeguards against one file being corrupted, bringing everything down? Billions of dollars are being spent on cybersecurity, yet this is going on – are there any other files that could be corrupted?” questions Walt Szablowski, Founder and Executive Chairman of Eracent, a company that specializes in providing IT and cybersecurity solutions to large organizations such as the USPS, Visa, the U.S. Air Force, the British Ministry of Defence — and dozens of Fortune 500 companies.

There has been a string of cybersecurity breaches across some high-profile organizations.

Last year, on January 19th, T-Mobile disclosed that a cyberattacker had stolen personal data pertaining to 37 million customers. In December 2022, a trove of data on over 200 million Twitter users circulated among hackers. And in November 2022, a hacker posted a dataset to BreachForums containing up-to-date personal information of 487 million WhatsApp users from 84 countries.

The Ponemon Institute, in its 2021 Cost of a Data Breach Report, analyzed data from 537 organizations around the world that had suffered a data breach. (All of the following figures are in US dollars.) It found that healthcare ($9.23 million), financial ($5.72 million), pharmaceutical ($5.04 million), technology ($4.88 million), and energy organizations ($4.65 million) suffered the costliest data breaches.

The average total cost of a data breach was estimated to be $3.86 million in 2020, while it increased to $4.24 million in 2021.

“In the software business, 90% of the money is thrown away on software that doesn’t work as intended or as promised,” argues Szablowski. “Due to the uncontrollable waves of costly network and data breaches, the U.S. Federal Government is mandating the implementation of the Zero Trust Architecture.”

Eracent’s ClearArmor Zero Trust Resource Planning (ZTRP) consolidates and transforms the concept of Zero Trust Architecture into a complete implementation within an organization.
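
Stripped of vendor specifics, Zero Trust boils down to “never trust, always verify”: no request is trusted because of where it comes from, every request must prove user identity, device health, and per-resource permission, and the default answer is deny. As a rough illustration, here is a minimal Python sketch of that deny-by-default flow; every name in it is hypothetical, and it is not a representation of Eracent’s ZTRP or any other real product.

  from dataclasses import dataclass

  @dataclass
  class Request:
      user: str          # who is asking
      device_id: str     # which device the request comes from
      resource: str      # what they want to reach
      mfa_passed: bool   # did they complete multi-factor authentication

  KNOWN_DEVICES = {"laptop-042"}             # device inventory / posture check
  PERMISSIONS = {("alice", "payroll-db")}    # least-privilege grants per resource

  def authorize(req: Request) -> bool:
      """Allow a request only if every check passes; the default is deny."""
      if not req.mfa_passed:                           # verify identity explicitly
          return False
      if req.device_id not in KNOWN_DEVICES:           # verify the device, not the network
          return False
      if (req.user, req.resource) not in PERMISSIONS:  # least privilege per resource
          return False
      return True

  print(authorize(Request("alice", "laptop-042", "payroll-db", True)))   # True
  print(authorize(Request("alice", "laptop-999", "payroll-db", True)))   # False: unknown device

The point of the sketch is the shape of the logic: there is no “inside the network, therefore trusted” branch, and a single failed check denies the request.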

“Relying on the latest technology will not work if organizations do not evolve their thinking. Tools and technology alone are not the answer. Organizations must design a cybersecurity system that fits and supports each organization’s unique requirements,” concludes Szablowski. For the Silo, Karla Jo Helms.

China Innovates Shenzhen Sea World With Robot Whale Shark

SHENZHEN, China (October 2024) — After five years of renovations, Xiaomeisha Sea World has taken the bold step of including forward-thinking robotic alternatives to using live animals to educate and entertain visitors.

“We are thrilled to see Xiaomeisha Sea World taking a step toward more compassionate entertainment with its animatronic whale shark, and we hope this move encourages people to reconsider why they feel entitled to see live marine animals in confinement — especially when it comes to species who are known to suffer extreme psychological and physical harm as a result of captivity — and that this aquarium will continue to lead the way with more exhibits that don’t use live animals,” said Hannah Williams, Cetacean Consultant for In Defense of Animals.

Xiaomeisha Sea World’s decision comes in the context of a broader global movement toward protecting marine life. In recent years, New Zealand made headlines for banning swimming with dolphins to prevent the disturbance of wild populations — a step in recognizing the importance of reducing stress on these sentient beings. In Mexico City, the ban on keeping dolphins and whales in captivity has been a landmark victory, specifically citing the former use of living dolphins in displays that landed the city’s aquarium on In Defense of Animals’ “10 Worst Tanks” list.

Developed by Shenyang Aerospace Xinguang Group under the Third Academy of China Aerospace Science and Industry Corporation Limited, this groundbreaking achievement marks a significant step forward in modern marine technology.

The nearly five-meter-long, 350-kilogram bionic marvel is capable of replicating the movements of a real whale shark with remarkable precision, including swimming, turning, floating, diving, and even movements of its mouth.

At Xiaomeisha Sea World, cutting-edge display technology is front and center.

Wild whale and dolphin populations are in global decline. Fishing has caused a severe decline of Indian Ocean dolphins and Pacific Ocean orcas — who also suffer from ship traffic and marine noise. The marine animal entertainment industry puts further pressure on wild animals, since it depends on continual top-ups of captive populations with wild captures of dolphins and small whales, such as Japan’s infamous Taiji Cove drive hunt. Each year, dolphins face traumatic experiences during live captures, either being killed or traumatically ripped from their pods and shipped off to a life of confinement.

In light of the inherent cruelty and conservation impacts of traditional aquarium captivity, Xiaomeisha Sea World’s animatronic whale shark represents a promising shift towards humane marine entertainment. We encourage Xiaomeisha to build on this achievement by becoming the world’s first fully animatronic aquarium. By adopting more “species” of advanced marine robots — which include manta rays, dolphins, and orcas — Xiaomeisha could address lingering concerns, such as new reports of fish with white spot disease, crowded tanks, “lots of excrement in the snow wolf garden,” and ongoing harmful beluga whale shows, and firmly put to rest the heartbreaking legacy of Pezoo, a zoochotic polar bear who suffered in extreme confinement for years. Transitioning away from outdated live-animal performances would position Xiaomeisha as a global leader in innovative, ethical marine exhibits.

Exciting developments in next-generation animal entertainment are taking place around the world. Time Magazine named Axiom Holographics’ animal-free Hologram Zoo in Brisbane among the best inventions of 2023.

Edge Innovations in California has created hyper-realistic animatronic animals, including dolphins that can swim, respond to questions, and engage closely with audiences — without any of the ethical concerns associated with real captive animals. These lifelike creations offer enhanced levels of interaction and can thrive in confined environments like theme parks, aquariums, and shopping malls, preventing real animals from suffering and premature death.

“A tidal wave of excitement is building for the future of animal-free entertainment, driven by cutting-edge technologies like animatronics, holograms, and virtual reality. Aquariums and zoos have a unique opportunity to captivate audiences with these immersive experiences — without capturing live animals. Modern technology can bring the wonders of animal life to people in ways that were never possible before. We urge Xiaomeisha Sea World to fully embrace animatronics and seize this chance to proudly and openly lead the way to a sustainable, cruelty-free model that respects marine animal lives,” said Fleur Dawes, Communications Director for In Defense of Animals.

For the Silo, Hannah Williams/IDA.

In Defense of Animals is an international animal protection organization with over 250,000 supporters and a 40-year history of defending animals, people, and the environment through education, campaigns, and hands-on rescue facilities in California, India, South Korea, and rural Mississippi. For more information, visit https://www.idausa.org/campaign/cetacean-advocacy

Over Half Of Canadians Opposed To Fed’s Unaffordable 2035 Ban On Gas-Powered Cars

An electric vehicle is seen being charged in Ottawa on July 13, 2022. (The Canadian Press/Sean Kilpatrick)

More than half of Canadians do not support the federal government’s mandate to require all new cars sold in Canada to be electric by 2035, a recent Ipsos poll finds.

Canadians across the country are “a lot more hesitant to ban conventional cars than their elected representatives in Ottawa are,” said Krystle Wittevrongel, research director at the Montreal Economic Institute (MEI), in a news release on Oct. 3.

“They have legitimate concerns, most notably with the cost of those cars, and federal and provincial politicians should take note.”

The online poll, conducted by Ipsos on behalf of the MEI, surveyed 1,190 Canadians aged 18 and over between Sept. 18 and 22. Among the participants overall, 55 percent said they disagree with Ottawa’s decision to ban the sale of conventional vehicles by 2035 and mandate all new cars be electric or zero-emissions.

“In every region surveyed, a larger number of respondents were against the ban than in favour of it,” MEI said in the news release. According to the poll, the proportion of those against the ban was noticeably higher in Western Canada, at 63 percent, followed by the Atlantic provinces at 58 percent. In Ontario, 51 percent were against, and in Quebec, 48 percent were against.

In all, only 40 percent nationwide agreed with the federal mandate.

‘Lukewarm Attitude’

Just 1 in 10 Canadians own an electric vehicle (EV), the poll said. Among those who don’t, less than one-quarter (24 percent) said their next car would be electric.

A research report released by Natural Resources Canada (NRCan) in March this year suggests a trend similar to that of the Ipsos poll’s findings. The report indicated that only 36 percent of Canadians had considered buying an EV in 2024—down from 51 percent in 2022.

“Survey results reveal that Canadians hold mixed views on ZEVs [Zero-Emission Vehicles] and continue to have a general lack of knowledge about these vehicles,” said the report by EKOS Research Associates, which was commissioned by NRCan to conduct the online survey of 3,459 Canadians from Jan. 17 to Feb. 7.

The MEI cited a number of key reasons for “this lukewarm attitude” in adopting EVs, including high cost (70 percent), lack of charging infrastructure (66 percent), and reduced performance in Canada’s cold climate (64 percent).

Canada’s shift from gas-powered vehicles to EVs is guided by federal and provincial policies aimed at zero-emission transportation. The federal mandate requires all new light-duty vehicles, which include passenger cars, SUVs, and light trucks, sold by 2035 to be zero-emission—with interim targets of 20 percent by 2026 and 60 percent by 2030.

Some provincial policies, such as those in Quebec, are even stricter, including a planned ban on all gas-powered vehicles and used gas engines by 2035.

‘Unrealistic’

The MEI survey indicated that two-thirds of respondents (66 percent) said the mandate’s timeline is “unrealistic,” with only 26 percent saying Ottawa’s plan is realistic.

In addition, 76 percent of Canadians say the federal government’s environmental impact assessment process used for energy projects takes too long, with only 9 percent taking the opposite view, according to the survey.

A study by the Fraser Institute in March said that achieving Ottawa’s EV goal could increase Canada’s demand for electricity by 15.3 percent and require the equivalent of 10 new mega hydro dams or 13 large natural gas plants to be built within the next 11 years.

“For context, once Canada’s vehicle fleet is fully electric, it will require 10 new mega hydro dams (capable of producing 1,100 megawatts) nationwide, which is the size of British Columbia’s new Site C dam. It took approximately 10 years to plan and pass environmental regulations, and an additional decade to build. To date, Site C is expected to cost $16 billion,” said the think tank in a March 14 news release.
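
As a back-of-the-envelope sanity check (our own arithmetic, not the Fraser Institute’s, and resting on one outside assumption: that Canada generates roughly 630 TWh of electricity per year), the 15.3 percent figure and the ten-dam figure are at least mutually consistent if the dams were to run at full output around the clock. A short Python sketch:

  # Rough consistency check. Assumption (not from the article):
  # Canada generates about 630 TWh of electricity per year.
  canada_generation_twh = 630.0

  extra_demand_twh = 0.153 * canada_generation_twh          # a 15.3% increase
  print(f"Extra demand: {extra_demand_twh:.0f} TWh/year")   # ~96 TWh

  # Ten Site C-sized dams at 1,100 MW each, running flat out all year:
  dams_output_twh = 10 * 1_100 * 8_760 / 1_000_000          # MW x hours -> TWh
  print(f"Ten 1,100 MW dams at full output: {dams_output_twh:.0f} TWh/year")  # ~96 TWh

In practice hydro stations run well below full nameplate output, so the real build-out would hinge on capacity-factor assumptions; the sketch only shows that the two headline numbers line up under the most generous reading.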

On April 25, Prime Minister Justin Trudeau announced that, since 2020, Canada has attracted more than $46 billion cad in investments for projects to manufacture EVs and EV batteries and battery components. A Parliamentary Budget Officer report published July 18 said Ottawa and the provinces have jointly promised $52.5 billion cad in government support from Oct. 8, 2020, to April 25, 2024, which included tax credits, production subsidies, and capital investment for construction and other support.

On July 26, a company slated to build a major rechargeable battery manufacturing plant in Ontario announced that it would halt the project due to declining demand for EVs.

In a news release at the time, Umicore Rechargeable Battery Materials Canada Inc. said it was taking “immediate action” to address a “recent significant slowdown in short- and medium-term EV growth projections affecting its activities.”

For The Silo, Isaac Teo with contribution from the Canadian Press.

Porsche Rarities Coming To Auction

Broad Arrow Auctions has released the complete digital catalog for its upcoming inaugural Chattanooga Auction, set for 12 October 2024 at the Chattanooga Convention Center in Tennessee, and we have it here for you to drool over (see below).

Among the 90+ collector cars on offer at the single-day sale are no fewer than 15 variations of the 911 model, including such rarities as the 1984 Porsche 911 SC RS Gruppe B “Evolutionsserie”, the veritable “missing link” in any Carrera RS collection.

Friday, October 11: 9:00 am – 5:00 pm ET
Saturday, October 12: 9:00 am – 1:00 pm ET
Auction: Saturday, October 12, 1:00 pm ET

Drool Time

1984 Porsche 911 SC RS Gruppe B “Evolutionsserie” – Lot 180
Estimate: $2,600,000 – $3,500,000 USD / $3,528,000 – $4,750,000 CAD

Looking for something less German? View all lots – click here.

Featured image:

2019 Porsche 911 Speedster Heritage Design Package – Lot 140
Estimate: $375,000 – $425,000 USD / $509,000 – $577,000 CAD

USB Juice Jacking Is A New Way Hackers Attack Travelers

How to avoid being hacked during this Fall’s travel season. 

According to a recent study by cybersecurity firm NordVPN, one in four travelers has been hacked when using public Wi-Fi while traveling abroad. However, unsecured Wi-Fi is not the only factor travelers should be worried about. 

Last year, the FBI published a tweet warning users against smartphone charging stations in public places (airports, hotels, and shopping malls). Hackers may have modified the charging cables with the aim of installing malware on phones to perform an attack called juice jacking.

“Digital information, although it exists virtually, can also be stolen using physical devices. So it is important to take a 360-degree approach and secure your device from both online and offline threats,” says Adrianus Warmenhoven, a cybersecurity advisor.

What is juice jacking?

Juice jacking is a cyberattack where a public USB charging port is used to steal data or install malware on a device. Juice jacking attacks allow hackers to steal users’ passwords, credit card information, addresses, names, and other data. Attackers can also install malware to track keystrokes, show ads, or add devices to a botnet.
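
The attack is possible because a single USB connector carries both power pins and data pins, and a public port you don’t control may have something listening on the data side. The toy Python model below is purely illustrative (the class and its names are hypothetical, not real USB tooling), but it captures the idea, and it shows why the data-blocker advice further down works: cut the data lines and the attack has no channel, while charging is unaffected.

  # Toy model: a USB hookup has power pins (VBUS/GND) and data pins (D+/D-).
  # Juice jacking needs the data pins; a data blocker passes power only.
  class UsbConnection:
      def __init__(self, power: bool, data: bool):
          self.power = power  # are the power pins wired through?
          self.data = data    # are the data pins wired through?

      def status(self) -> str:
          charge = "phone charges" if self.power else "no charge"
          risk = ("data pins live: malware install or data theft possible"
                  if self.data
                  else "data pins cut: attack has no channel")
          return f"{charge}; {risk}"

  rigged_kiosk = UsbConnection(power=True, data=True)    # tampered public port
  with_blocker = UsbConnection(power=True, data=False)   # USB data blocker in line
  print(rigged_kiosk.status())   # phone charges; data pins live: ...
  print(with_blocker.status())   # phone charges; data pins cut: ...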

Is juice jacking detectable?

Juice jacking attacks can be difficult to detect. If your device has already been compromised, you may notice some suspicious activity – but that won’t always be the case.

For example, you may notice something you don’t recognize on your phone — like purchases you didn’t make or calls that look suspicious.

Your phone may also start working unusually slowly or feel hotter than usual; chances are you’ve picked up malware.

How to protect yourself

Since no sign of juice jacking is 100% reliable, it is best to avoid falling victim to this attack by following the advice below:

  • Get a power bank. Power banks are a safe and convenient way to charge your device on the go. Getting a portable power bank means that you’ll never have to use public charging stations, where juice jacking attacks occur. Just make sure your power bank is fully charged before you head out.
     
  • Use a USB data blocker. A USB data blocker is a device that protects your phone from juice jacking when you’re using a public charging station. It plugs into the charging port on your phone and acts as a shield between the public charging station’s cord and your device.
     
  • Use a power socket instead. Juice jacking attacks only happen when you’re connected to a USB charger. If you absolutely need to charge your phone in public, avoid the risk of infected cables and USB ports and use a power outlet. This is typically a safe way to charge your mobile device and other devices in public.

For the Silo, Darija Grobova.