Category Archives: Sci-Tech

Hundreds of New UFO Sightings Reported to Pentagon

The new findings bring the total number of UAP cases under review to more than 1,600 as of June 2024.

A photo from the Department of Defense shows an “unidentified aerial phenomenon.” Department of Defense

There were 757 reports of unidentified anomalous phenomena (UAP) between May 2023 and June 2024, according to an unclassified Department of Defense (DOD) report released on Nov. 14.

Congress mandated the annual report by the DOD’s All-Domain Anomaly Resolution Office (AARO), which is tasked with studying and cataloging reports of UAPs, formerly referred to as UFOs.

The report said that AARO received 757 UAP reports from May 1, 2023, to June 1, 2024, and “485 of these reports featured UAP incidents that occurred during the reporting period.”

“The remaining 272 reports featured UAP incidents that occurred between 2021 and 2022 but were not reported to AARO until this reporting period and consequently were not included in previous annual UAP reports,” the report reads.

The new findings bring the total number of UAP cases under AARO review to more than 1,600 as of June.

AARO Director Jon Kosloski said at a Nov. 14 media briefing that the findings have left investigators puzzled.

“There are interesting cases that I, with my physics and engineering background and time in the [intelligence community], I do not understand,” Kosloski said. “And I don’t know anybody else who understands them either.”

Some cases were later resolved, with 49 determined to be sightings of common objects such as balloons, birds, and unmanned aerial systems. Another 243, also found to be sightings of ordinary objects, were recommended for closure by June. However, 444 were deemed inexplicable and lacking sufficient data, so they were archived for future investigation.

Notably, 21 cases were considered to “merit further analysis” because of anomalous characteristics and behaviors.

Despite the unexplained incidents, the office noted that it “has discovered no evidence of extraterrestrial beings, activity, or technology.”

The report said UAP cases often followed consistent patterns, typically describing unidentified lights or orb-shaped and otherwise round objects with distinct visual traits.

Of the new cases, 81 were reported in U.S. military operating areas, and three reports from military air crews described “pilots being trailed or shadowed by UAP.”

The Federal Aviation Administration reported 392 of the unexplained sightings among the 757 reports, which include incidents dating back to 2021.

In one such case, the AARO resolved a commercial pilot’s sighting of white flashing lights as a Starlink satellite launched from Cape Canaveral, Florida.

“AARO is investigating if other unresolved cases may be attributed to the expansion of the Starlink and other mega-constellations in low earth orbit,” the report states.

The AARO report maintains that none of the resolved cases has substantiated “advanced foreign adversarial capabilities or breakthrough aerospace technologies.” The document also states that the AARO will immediately notify Congress if any cases indicate such characteristics, which could suggest extraterrestrial involvement.

The report emphasized the AARO’s “rigorous scientific framework and a data-driven approach” and safety measures while investigating these phenomena.

UAP Hearing

The report was released a day after a House Oversight Committee hearing titled “Unidentified Anomalous Phenomena: Exposing the Truth,” during which witnesses alleged government secrecy surrounding the phenomena.

During the hearing, a former DOD official, Luis Elizondo, said, “Advanced technologies not made by our government or any other government are monitoring sensitive military installations around the globe.”

He testified that the government has operated secret programs to retrieve UAP crash materials to identify and reverse-engineer alien technology.

“Furthermore, the U.S. is in possession of UAP technologies, as are some of our adversaries. I believe we are in the midst of a multi-decade secretive arms race, one funded by misallocated taxpayer dollars and hidden from our elected representatives and oversight bodies,” Elizondo said.

“Although much of my government work on the UAP subject still remains classified, excessive secrecy has led to grave misdeeds against loyal civil servants, military personnel, and the public, all to hide the fact that we are not alone in the cosmos.

“A small cadre within our own government involved in the UAP topic has created a culture of suppression and intimidation that I have personally been victim to, along with many of my former colleagues.” For The Silo, Rudy Blalock/NTD.

A Pathway To Trusted AI

Artificial intelligence (AI) has been part of our lives for decades, but since the public launch of ChatGPT showcased generative AI in 2022, society has faced an unprecedented pace of technological change.

With digital technology already a constant part of our lives, AI has the potential to alter the way we live, work, and play – but exponentially faster than conventional computers have. With AI comes staggering possibilities for both advancement and threat.

The AI industry creates unique and dangerous opportunities and challenges. AI can do amazing things humans can’t, but in many situations, a gap referred to as the black box problem, experts cannot explain how particular decisions are reached or where the information comes from. The outcomes can sometimes be inaccurate because of flawed data, bad decisions, or the infamous AI hallucinations. There is little regulation or guidance for software in general, and effectively no regulations or guidelines for AI.

How do researchers find a way to build and deploy valuable, trusted AI when there are so many concerns about the technology’s reliability, accuracy and security?

That was the subject of a recent C.D. Howe Institute conference. In my keynote address, I commented that it all comes down to software. Software is already deeply intertwined in our lives, from health, banking, and communications to transportation and entertainment. Along with its benefits, there is huge potential for disruption of and tampering with societal structures: power grids, airports, hospital systems, private data, trusted sources of information, and more.

Consumers might not incur great consequences if a shopping application goes awry, but our transportation, financial or medical transactions demand rock-solid technology.

The good news is that experts have the knowledge and expertise to build reliable, secure, high-quality software, as demonstrated across Class A medical devices, airplanes, surgical robots, and more. The bad news is this is rarely standard practice. 

As a society, we have often tolerated compromised software for the sake of convenience. We trade privacy, security, and reliability for ease of use and corporate profitability. We have come to view software crashes, identity theft, cybersecurity breaches and the spread of misinformation as everyday occurrences. We are so used to these trade-offs with software that most users don’t even realize that reliable, secure solutions are possible.

With the expected potential of AI, creating trusted technology becomes ever more crucial. Allowing unverifiable AI in our frameworks is akin to building skyscrapers on silt. Security and functionality by design trump whack-a-mole retrofitting. Data must be accurate, protected, and used in the way it’s intended.

Striking a balance between security, quality, functionality, and profit is a complex dance. The BlackBerry phone, for example, set a standard for secure, trusted devices. Data was kept private, activities and information were secure, and operations were never hacked. Devices were used and trusted by prime ministers, CEOs and presidents worldwide. The security features it pioneered live on and are widely used in the devices that outcompeted BlackBerry.

Innovators have the know-how and expertise to create quality products. But often the drive for profits takes precedence over painstaking design. In the AI universe, however, where issues of data privacy, inaccuracies, generation of harmful content and exposure of vulnerabilities have far-reaching effects, trust is easily lost.

So, how do we build and maintain trust? Educating end-users and leaders is an excellent place to start. They need to be informed enough to demand better, and corporations need to strike a balance between caution and innovation.

Companies can build trust through a strong adherence to safe software practices, education in AI evolution and adherence to evolving regulations. Governments and corporate leaders can keep abreast of how other organizations and countries are enacting policies that support technological evolution, and can institute accreditation and financial incentives that support best practices. Across the globe, countries and regions are already developing strategies and laws to encourage responsible use of AI.

Recent years have seen the creation of codes of conduct and regulatory initiatives such as:

  • Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, September 2023, signed by AI powerhouses such as the Vector Institute, Mila-Quebec Artificial Intelligence Institute and the Alberta Machine Intelligence Institute;
  • The Bletchley Declaration, November 2023, an international agreement to cooperate on the development of safe AI, signed by 28 countries;
  • US President Biden’s 2023 executive order on the safe, secure and trustworthy development and use of AI; and
  • Governing AI for Humanity, UN Advisory Body Report, September 2024.

We have the expertise to build solid foundations for AI. It’s now up to leaders and corporations to ensure that much-needed practices, guidelines, policies and regulations are in place and followed. It is also up to end-users to demand quality and accountability. 

Now is the time to take steps to mitigate AI’s potential perils so we can build the trust that is needed to harness AI’s extraordinary potential. For the Silo, Charles Eagan. Charles Eagan is the former CTO of BlackBerry and a technical advisor to AIE Inc.

In The Future Cyberwar Will Be Primary Theater For Superpowers

Cybersecurity expert explains how virtual wars are fought

With the Russia-Ukraine war in full swing, cybersecurity experts point to a cyber front that had been forming online long before Russian troops crossed the border. Even in the months leading up to the outbreak of war, Ukrainian websites were attacked and altered to display threatening messages about the coming invasion.

“In response to Russian warfare actions, the hacking collective Anonymous launched a series of attacks against Russia, with the country’s state media being the main target. So we can see cyber warfare in action with new types of malware flooding both countries, thousands of sites crashing under DDoS (distributed denial-of-service) attacks, and hacktivism thriving on both sides of barricades,” Daniel Markuson, a cybersecurity expert at NordVPN, says.

The methods of cyberwarfare

In the past decade, the amount of time people spend online has risen drastically. Research by NordVPN has shown that Americans spend around 21 years of their lives online. With our life so dependent on the internet, cyber wars can cause very real damage. Some of the goals online “soldiers” are trying to pursue include:

  • Sabotage and terrorism

The intent of many cyber warfare actions is to sabotage and cause indiscriminate damage. From taking a site offline with a DDoS attack to defacing webpages with political messages, cyber terrorists launch multiple operations every year. One of the most impactful events happened in Turkey, where Iranian hackers managed to knock out the power grid for around twelve hours, affecting more than 40 million people.

  • Espionage

While cyber espionage also occurs between corporations, with competitors vying for patents and sensitive information, it’s an essential strategy for governments engaging in covert warfare. Chinese intelligence services are regularly named as the culprits in such operations, although they consistently deny the accusations.

  • Civilian activism (hacktivism)

The growing trend of hacktivism has seen civilian cyber activists take on governments and authorities around the world. One example of hacktivism is Anonymous, a group that has claimed responsibility for assaults on government agencies in the US. In 2022, Anonymous began a targeted cyber campaign against Russia after it invaded Ukraine in an attempt to disrupt government systems and combat Russian propaganda.

  • Propaganda and disinformation

In 2020, 81 countries were found to have used some form of social media manipulation. This type of manipulation was usually ordered by government agencies, political parties, or politicians. Such campaigns, which largely involve the spread of fake news, tend to focus on three key goals: distracting or diverting conversations away from important issues, increasing polarization between religious, political, or social groups, and suppressing fundamental human rights, such as the right to freedom of expression or freedom of information.

The future of cyber warfare

“Governments, corporations, and the public need to understand this emerging landscape and protect themselves by taking care of their physical security as well as cybersecurity. From the mass cyberattacks of 2008’s Russo-Georgian War to the cyber onslaught faced by Ukraine today, this is the new battleground for both civil and international conflicts,” Daniel Markuson says.

Markuson predicts that in the future, cyber war will become the primary theater of war for global superpowers. He also thinks that terrorist cells may focus their efforts on targeting civilian infrastructure and other high-risk networks, where attackers would be even harder to detect and could launch attacks anywhere in the world. Lastly, Markuson thinks that activism will become more virtual and allow citizens to hold large governmental authorities to account.

A regular person can’t do much to fight in a cyber war or to protect themselves from the consequences.

However, educating yourself, paying attention to the reliability of your sources of information, and maintaining a critical attitude to everything you read online can increase your awareness and make you less susceptible to propaganda. For the Silo, Darija Grobova.

Great Tips For Winter Storing Your Classic

The trees are almost bare and the evening arrives sooner each day. We all know what that means: It’s time to tuck away our classics into storage.

Just when you thought you’d heard every suggestion and clever tip for properly storing your classic automobile, along comes another recommendation—or two, or three or twelve 😉

As you can imagine, I’ve heard plenty of ideas and advice about winter storage over the years. Some of those annual recommendations are repeated here. And some have been amended—for example, the fragrance of dryer sheets is way more pleasing to noses than the stench of moth balls, and the fresh smell actually does a superior job of repelling mice.

Wash and wax

It may seem fruitless to wash the car when it is about to be put away for months, but it is an easy step that shouldn’t be overlooked. Water stains or bird droppings left on the car can permanently damage the paint. Make sure to clean the wheels and undersides of the fenders to get rid of mud, grease and tar. For added protection, give the car a coat of wax and treat any interior leather with a good conditioner.

Car cover

Even if your classic is stored in a garage in semi-stable temperatures and protected from the elements, a car cover will keep any spills or dust off of the paint. It can also protect the finish from scratches while objects are being moved around the parked car.

Oil change

If you will be storing the vehicle for longer than 30 days, consider getting the oil changed. Used engine oil has contaminants that could damage the engine or lead to sludge buildup. (And if your transmission fluid is due for a change, do it now too. When spring rolls around, you’ll be happy you did.)

Fuel tank

Before any extended storage period, remember to fill the gas tank to prevent moisture from accumulating inside the fuel tank and to keep the seals from drying out. You should also pour in fuel stabilizer to prevent buildup and protect the engine from gum, varnish, and rust. This is especially critical in modern gasoline blended with ethanol, which gums up more easily. The fuel stabilizer will prevent the gas from deteriorating for up to 12 months.

Radiator

This is another area where fresh fluids will help prevent contaminants from slowly wearing down engine parts. If it’s time to flush the radiator fluid, doing it before winter storage is a good idea. Whether or not you put in new antifreeze, check your freezing point with a hydrometer or test strips to make sure you’re good for the lowest of winter temperatures.

Battery

An unattended battery will slowly lose its charge and eventually go bad, resulting in having to purchase a new battery in the spring. The easiest, low-tech solution is to disconnect the battery cables—the negative (ground) first, then the positive. You’ll likely lose any stereo presets, time, and other settings. If you want to keep those settings and ensure that your battery starts the moment you return, purchase a trickle charger. This device hooks up to your car battery on one end, then plugs into a wall outlet on the other and delivers just enough electrical power to keep the battery topped up. Warning: Do not use a trickle charger if you’re storing your car off property. In rare cases they’ve been known to spark a fire.

Parking brake

For general driving use it is a good idea to use the parking brake, but don’t do it when you leave a car in storage long term; if the brake pads make contact with the rotors for an extended period of time, they could fuse together. Instead of risking your emergency brake, purchase a tire chock or two to prevent the car from moving.

Tire care

If a vehicle is left stationary for too long, the tires could develop flat spots from the weight of the vehicle pressing down on the tires’ treads. This occurs at a faster rate in colder temperatures, especially with high-performance or low-profile tires, and in severe cases a flat spot becomes a permanent part of the tire, causing a need for replacement. If your car will be in storage for more than 30 days, consider taking off the wheels and placing the car on jack stands at all four corners. With that said, some argue that this procedure isn’t good for the suspension, and there’s always this consideration: If there’s a fire, you have no way to save your car.

If you don’t want to go through the hassle of jack stands, overinflate your tires slightly (2–5 pounds) to account for any air loss while it hibernates, and make sure the tires are on plywood, not in direct contact with the floor.

Repel rodents

A solid garage will keep your car dry and relatively warm, conditions that can also attract unwanted rodents during the cold winter months. There are plenty of places in your car for critters to hide and even more things for them to destroy. Prevent them from entering your car by covering any gaps where a mouse could enter, such as the exhaust pipe or an air intake; steel wool works well for this. Next, spread scented dryer sheets or Irish Spring soap shavings inside the car and moth balls around the perimeter of the vehicle. For a more proactive approach and if you’re the killing type, you can also lay down a few mouse traps (although you’ll need to check them regularly for casualties).

Maintain insurance

In order to save money, you might be tempted to cancel your auto insurance when your vehicle is in storage. Bad idea. If you remove coverage completely, you’ll be on your own if there’s a fire, the weight of snow collapses the roof, or your car is stolen. If you have classic car insurance, the policy covers a full year and takes winter storage into account in your annual premium.

  • “An ex-Ferrari race mechanic (Le Mans three times) recommends adding half a cup of automatic transmission fluid to the fuel tank before topping up, and then running the engine for 10 minutes. This applies ONLY to carburetor cars. The oil coats the fuel tank, lines and carb bowls and helps avoid corrosion. It will easily burn off when you restart the car.”
  • A warning regarding car covers: “The only time I covered was years ago when stored in the shop side of my machine shed. No heat that year and the condensation from the concrete caused rust on my bumpers where the cover was tight. The next year I had it in the dirt floor shed and the mice used the cover ties as rope ladders to get in.”
  • “I use the right amount of Camguard in the oil to protect the engine from rust. It’s good stuff.”
  • “Your car’s biggest villain is rust. That’s why I clean the car inside and out, and wax it prior to putting it in storage. For extra protection, I generously wax the bumpers and other chrome surfaces, but I do not buff out the wax. Mildew can form on the interior; to prevent this I treat the vinyl, plastic, and rubber surfaces with a product such as Armor All.”
  • “Ideally, your car should be stored in a clean, dry garage. I prepare the floor of the storage area by laying down a layer of plastic drop cloth, followed by cardboard. The plastic drop cloth and cardboard act as a barrier to keep the moisture that is in the ground from seeping through the cement floor and attacking the underside of my car.”
  • “Fog out the engine. I do this once the car is parked where it is to be stored for the winter, and while it is still warm from its trip. Remove the air cleaner and spray engine fogging oil into the carburetor with the engine running at a high idle. Once I see smoke coming out of the exhaust, I shut off the engine and replace the air cleaner. Fogging out the engine coats many of the internal engine surfaces, as well as the inside of the exhaust with a coating of oil designed to prevent rust formation.”

Relax, rest, and be patient

For those of us who live in cold weather provinces or states, there’s actually a great sense of relief when you finally complete your winter prep and all of your summer toys are safely put to bed before the snow flies. Relax; you’ve properly protected your classic. It won’t be long before the snow is waist-high and you’re longing for summer—and that long wait may be the most difficult part of the entire storage process. Practice patience and find something auto-related to capture your attention and bide your time. You’ll be cruising again before you know it. (Keep telling yourself that, anyway.) For the Silo, Rob Siegel/Hagerty.

The Theory of Intrinsic Energy

Donald H. MacAdam

Abstract

Gravitational action at a distance is non-Newtonian and independent of mass, but is proportional to intrinsic energy, distance, and time. Electrical action at a distance is proportional to intrinsic energy, distance, and time.

The conventional assumption that all energy is kinetic and proportional to velocity and mass has resulted in an absence of mechanisms to explain important phenomena such as stellar rotation curves, mass increase with increase in velocity, constant photon velocity, and the levitation and suspension of superconducting disks.

In addition, there is no explanation for the existence of the fine structure constant, no explanation for the value of the proton-electron mass ratio, no method to derive the spectral series of atoms larger than hydrogen, and no definitive proof or disproof of cosmic inflation.

All of the above issues are resolved by the existence of intrinsic energy.

Table of contents

  • Part One “Gravitation and the fine structure constant” derives the fine structure constant, the proton-electron mass ratio, and the mechanisms of non-Newtonian gravitation including the precession rate of mercury’s perihelion and stellar rotation curves.
  • Part Two “Structure and chirality” describes the structure of particles and the chirality meshing interactions that mediate action at a distance between particles and gravitons (gravitation) and particles and quantons (electromagnetism) and describes the properties of photons (with the mechanism of diffraction and constant photon velocity).
  • Part Three “Nuclear magnetic resonance” is a general derivation of the gyromagnetic ratios and nuclear magnetic moments of isotopes.
  • Part Four “Particle acceleration” derives the mechanism for the increase in mass (and mass-energy) in particle acceleration.
  • Part Five “Atomic Spectra” reformulates the Rydberg equations for the spectral series of hydrogen, derives the spectral series of helium, lithium, beryllium, and boron, and explains the process to build a table of the spectral series for any elemental atom.
  • Part Six “Cosmology” disproves cosmic inflation.
  • Part Seven “Magnetic levitation and suspension” quantitatively explains the levitation of pyrolytic carbon, and the levitation, suspension and pinning of superconducting disks.

Part One

Gravitation and the fine structure constant

“That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has … any competent faculty of thinking can ever fall into it.”1

Intrinsic energy is independent of mass and velocity. Intrinsic energy is the inherent energy of particles such as the proton and electron. Neutrons are composite particles composed of protons, electrons, and binding energy. Atoms, composed of protons, neutrons, and electrons, are the substance of larger three-dimensional physical entities, from molecules to galaxies.

Gravitation, electromagnetism, and other action at a distance phenomena are mediated by gravitons, quantons and neutrinos. Gravitons, quantons and neutrinos are quanta that have a discrete amount of intrinsic energy and are emitted by particles in one direction at a time and absorbed by particles from one direction at a time. Emission-absorption events can be chirality meshing interactions that produce accelerations or achiral interactions that do not produce accelerations. Chirality meshing absorption of gravitons produces attractive accelerations, chirality meshing absorption of quantons produces either attractive or repulsive accelerations, and achiral absorption of neutrinos does not produce accelerations. The word neutrino is burdened with non-physical associations thus achiral quanta are henceforth called neutral flux.

A single chirality meshing interaction produces a deflection (a change in position), but a series of chirality meshing interactions produces acceleration (serial deflections). A single deflection in the direction of existing motion produces a small finite positive acceleration (and inertia) and a single deflection in the direction opposite existing motion produces a small finite negative acceleration (and inertia).

There are two fundamental differences between the mechanisms of Newtonian gravitation and discrete gravitation. The first is the Newtonian probability two particles will gravitationally interact is 100% but the discrete probability two particles will gravitationally interact is significantly less. The second difference is the treatment of force. In Newtonian physics a gravitational force between objects always exists, the force is infinitesimal and continuous, and the strength of the force is inversely proportional to the square of the separation distance. In discrete physics the existence of a gravitational force is dependent on the orientations of the particles of which objects are composed, the force is discrete and discontinuous, and the number of interactions is inversely proportional to the square of the separation distance. While there are considerable differences in mechanisms, in many phenomena the solutions of Newtonian and discrete gravitational equations are nearly identical.

There are similar fundamental differences between mechanisms of electromagnetic phenomena and in many cases the solutions of infinitesimal and discrete equations are nearly identical.

A particle emits gravitons and quantons at a rate proportional to particle intrinsic energy. A particle absorbs gravitons and quantons, subject to availability, at a maximum rate proportional to particle intrinsic energy. Each graviton or quanton emission event reduces the intrinsic energy of the particle and each graviton or quanton absorption event increases the intrinsic energy of the particle. Because graviton and quanton emission events continually occur but graviton and quanton absorption events are dependent on availability, these mechanisms collectively reduce the intrinsic energy of particles.

Only particles in nuclear reactions or undergoing radioactive disintegration emit neutral flux but in the solar system all particles absorb all available neutral flux.

In the solar system, discrete gravitational interactions mediate orbital phenomena and, for objects in a stable orbit the intrinsic energy loss due to the emission-absorption of gravitons is balanced by the absorption of intrinsic energy in the form of solar neutral flux.

Within the solar system, particle absorption of solar neutral flux (passing through a unit area of a spherical shell centered on the sun) adds intrinsic energy at a rate proportional to the inverse square of orbital distance, and over a relatively short period of time, the graviton, quanton, and neutral flux emission-absorption processes achieve Stable Balance resulting in constant intrinsic energy for particles of the same type at the same orbital distance, with particle intrinsic energies higher the closer to the sun and lower the further from the sun.

The process of Stable Balance is bidirectional.

If a high energy body consisting of high energy particles is captured by the solar gravitational field and enters into solar orbit at the orbital distance of earth, the higher particle intrinsic energies will result in an excess of intrinsic energy emissions compared to intrinsic energy absorptions at that orbital distance, and the intrinsic energy of the body will be reduced to bring it into Stable Balance.

If, on the other hand, a low energy body consisting of low energy particles is captured by the solar gravitational field and enters into solar orbit at the orbital distance of earth, the lower particle intrinsic energies will result in an excess of intrinsic energy absorptions at that orbital distance compared to the intrinsic energy emissions, and the intrinsic energy of the body will be increased to bring it into Stable Balance.

In an ideal two-body earth-sun system, a spherical and randomly symmetrical earth is in Stable Balance orbit about a spherical and randomly symmetrical sun. A randomly symmetrical body is composed of particles that collectively emit an equal intensity of gravitons (graviton flux) through a unit area on a spherical shell centered on the emitting body.

Unless otherwise stipulated, in this document references to the earth or sun assume they are part of an ideal two-body earth-sun system.

The gravitational intrinsic energy of earth is proportional to the gravitational intrinsic energy of the sun because total emissions of solar gravitons are proportional to the number of gravitons passing into or through earth as it continuously moves on a spherical shell centered on the sun (and also proportional to the volume of the spherical earth, to the cross-sectional area of the earth, to the diameter of the earth and to the radius of the earth).

Likewise, because the sun and the earth orbit about their mutual barycenter, the gravitational intrinsic energy of the sun is proportional to the gravitational intrinsic energy of the earth because total emissions of earthly gravitons are proportional to the number of gravitons passing into or through the sun as it continuously moves on a spherical shell centered on the earth (and also proportional to the volume of the spherical sun, to the cross-sectional area of the sun, to the diameter of the sun and to the radius of the sun).

We define the orbital distance of earth equal to 15E10 meters and note earth’s orbit in an ideal two-body system is circular. If additional planets are introduced, earth’s orbit will become elliptical and the diameter of earth’s former circular orbit will be equal to the semi-major axis of the elliptical orbit.

We define the intrinsic photon velocity c equal to 3E8 m/s and equal in amplitude to the intrinsic constant Theta which is non-denominated. We further define the elapsed time for a photon to travel 15E10 meters equal to 500 seconds.

The non-denominated intrinsic constant Psi, 1E-7, is equal in amplitude to the intrinsic magnetic constant denominated in units of Henry per meter.

Psi is also equal in amplitude to the 2014 CODATA vacuum magnetic permeability divided by 4π (after 2014 CODATA values for permittivity and permeability are defined and no longer reconciled to the speed of light); half the electromagnetic force (units of Newton) between two straight ideal (constant diameter and homogeneous composition) parallel conductors with center-to-center distance of one meter and each carrying a current of one Ampere; and to the intrinsic voltage of a magnetically induced minimum amplitude current loop (3E8 electrons per second).

The intrinsic electric constant, the inverse of the product of the intrinsic magnetic constant and the square of the intrinsic photon velocity, is equal to the inverse of 9E9 and denominated in units of Farad per meter.
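Written out with the values already defined (Psi = 1E-7, photon velocity c = Theta = 3E8; the symbol ε_i below is my shorthand for the intrinsic electric constant, not the author's notation):

\[
\varepsilon_i \;=\; \frac{1}{\Psi\,c^{2}} \;=\; \frac{1}{10^{-7}\times\bigl(3\times10^{8}\bigr)^{2}} \;=\; \frac{1}{9\times10^{9}}\ \text{Farad per meter.}
\]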

The Newtonian mass of earth, denominated in units of kilogram, is equal to 6E24, and equal in amplitude to the active gravitational mass of earth, denominated in units of Einstein (the unit of intrinsic energy).

The active gravitational mass is proportional to the number of gravitons emitted and the Newtonian mass is proportional to the number of gravitons absorbed. Every graviton absorbed contributes to the acceleration and inertia of the absorber, therefore the Newtonian mass is also the inertial mass.


We define the radius of earth, the square root of the ratio of the Newtonian inertial mass of earth divided by orbital distance, or the square root of the ratio of the active gravitational mass of earth divided by its orbital distance, equal to the square root of 4E13, 6.325E6, about 0.993 the NASA volumetric radius of 6.371E6. Our somewhat smaller earth has a slightly higher density and a local gravitational constant equal to 10 m/s2 at any point on its perfectly spherical surface.

We define the Gravitational constant at the orbital distance of earth, the ratio of the local gravitational constant of earth divided by its orbital distance, equal to the inverse of 15E9.
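As worked equations, using the values defined above (earth mass 6E24, orbital distance 15E10 m, local gravitational constant 10 m/s²; the symbols r_E, M_E, d_E, g_E and G_E are my shorthand):

\[
r_E = \sqrt{\frac{M_E}{d_E}} = \sqrt{\frac{6\times10^{24}}{15\times10^{10}}} = \sqrt{4\times10^{13}} \approx 6.325\times10^{6}\ \text{m},
\qquad
G_E = \frac{g_E}{d_E} = \frac{10}{15\times10^{10}} = \frac{1}{15\times10^{9}}.
\]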

The unit kilogram is equal to the mass of 6E26 protons at the orbital distance of earth, and the proton mass equal to the inverse of 6E26.

The proton intrinsic energy at the orbital distance of earth is equal to the inverse of the product of the proton mass and the mass-energy factor delta (equal to 100). Within the solar system, the proton intrinsic energy increases at orbital distances closer to the sun and decreases at orbital distances further from the sun. Changes in proton intrinsic energy are proportional to the inverse square of orbital distance.
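In numbers, with the mass-energy factor delta equal to 100 as stated above (m_p and E_p are my shorthand for the proton mass and proton intrinsic energy):

\[
m_p = \frac{1}{6\times10^{26}}\ \text{kg},
\qquad
E_p = \frac{1}{m_p\,\delta} = \frac{6\times10^{26}}{100} = 6\times10^{24}\ \text{(at the orbital distance of earth).}
\]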

The Newtonian mass of the sun, denominated in units of kilogram, is equal to 2E30, and equal in amplitude to the active gravitational mass of the sun, denominated in units of Einstein.

The active gravitational mass is proportional to the number of gravitons emitted and the Newtonian mass is proportional to the number of gravitons absorbed. Every graviton absorbed contributes to the acceleration and inertia of the absorber, therefore the Newtonian mass is also the inertial mass.

The active gravitational mass of earth divided by the active gravitational mass of the sun is equal to the intrinsic constant Beta-square and its square root is equal to the intrinsic constant Beta.

The charge intrinsic energy ei, denominated in units of intrinsic Volt, is proportional to the number of quantons emitted by an electron or proton. The charge intrinsic energy is equal to Beta divided by Theta-square, the inverse of the square root of 27E38.
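Using the masses defined above, these definitions evaluate as follows (β and e_i are my shorthand for Beta and the charge intrinsic energy):

\[
\beta^{2} = \frac{6\times10^{24}}{2\times10^{30}} = 3\times10^{-6},
\qquad
\beta \approx 1.732\times10^{-3},
\]
\[
e_i = \frac{\beta}{\Theta^{2}} = \frac{\sqrt{3\times10^{-6}}}{9\times10^{16}} = \frac{1}{\sqrt{27\times10^{38}}} \approx 1.92\times10^{-20}\ \text{intrinsic Volt.}
\]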

Intrinsic voltage does not dissipate kinetic energy.

The electron intrinsic energy Ee, equal to the ratio of Beta-square divided by Theta-cube, the ratio of Psi-square divided by Theta-square, the product of the square of the charge intrinsic energy and Theta, and the ratio of the intrinsic electron magnetic flux quantum divided by the intrinsic Josephson constant, is denominated in units of Einstein.
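The equivalent expressions agree numerically (symbols as above; E_e is my shorthand for the electron intrinsic energy):

\[
E_e = \frac{\beta^{2}}{\Theta^{3}} = \frac{3\times10^{-6}}{2.7\times10^{25}}
    = \frac{\Psi^{2}}{\Theta^{2}} = \frac{10^{-14}}{9\times10^{16}}
    = e_i^{2}\,\Theta \approx 1.11\times10^{-31}\ \text{Einstein.}
\]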

The intrinsic electron magnetic flux quantum, equal to the square root of the electron intrinsic energy, is denominated in units of intrinsic Volt second.

The intrinsic Josephson constant, equal to the inverse of the square root of the electron intrinsic energy, the ratio of Theta divided by Psi and the ratio of the photon velocity divided by the intrinsic sustaining voltage of a minimum amplitude superconducting current, is denominated in units of Hertz per intrinsic Volt.
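Numerically (Φ_e and K_J are my shorthand for the intrinsic electron magnetic flux quantum and the intrinsic Josephson constant):

\[
\Phi_e = \sqrt{E_e} \approx 3.33\times10^{-16}\ \text{intrinsic Volt second},
\qquad
K_J = \frac{1}{\Phi_e} = \frac{\Theta}{\Psi} = \frac{3\times10^{8}}{10^{-7}} = 3\times10^{15}\ \text{Hz per intrinsic Volt.}
\]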

The discrete (dissipative kinetic) electron magnetic flux quantum, equal to the product of 2π and the intrinsic electron magnetic flux quantum, is denominated in units of discrete Volt second, and the discrete rotational Josephson constant, equal to the intrinsic Josephson constant divided by 2π and the inverse of the discrete electron magnetic flux quantum, is denominated in units of Hertz per discrete Volt. These constants are expressions of rotational frequencies.

We define the electron amplitude equal to 1. The proton amplitude is equal to the ratio of the proton intrinsic energy divided by the electron intrinsic energy.

We define the Coulomb, ec, equal to the product of the charge intrinsic energy and the square root of the proton amplitude divided by two. The Coulomb denominates dissipative current.

We define the Faraday equal to 1E5, and the Avogadro constant equal to the Faraday divided by the Coulomb.

Lambda-bar, the quantum of particle intrinsic energy, equal to the intrinsic energy content of a graviton or quanton, is the ratio of the product of Psi and Beta divided by Theta-cube, the ratio of Psi-cube divided by the product of Beta and Theta-square, the product of the charge intrinsic energy and the intrinsic electron magnetic flux quantum, and the charge intrinsic energy divided by the intrinsic Josephson constant.
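The equivalent expressions evaluate to the same value (λ̄ is my shorthand for Lambda-bar):

\[
\bar{\Lambda} = \frac{\Psi\,\beta}{\Theta^{3}}
             = \frac{10^{-7}\times1.732\times10^{-3}}{2.7\times10^{25}}
             = \frac{\Psi^{3}}{\beta\,\Theta^{2}}
             = e_i\,\Phi_e
             = \frac{e_i}{K_J}
             \approx 6.42\times10^{-36}\ \text{Einstein.}
\]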

CODATA physical constants that are defined as exact have an uncertainty of 10 to 12 decimal places, therefore the exactness of Newtonian infinitesimal calculations is of a similar order of magnitude. We assert that Lambda-bar and proportional physical constants are discretely exact (equivalent to Newtonian infinitesimal calculations) because discretely exact physical properties can be exactly expressed to greater accuracy than can be measured in the laboratory.

All intrinsic physical constants and intrinsic properties are discretely rational. The ratio of two positive integers is a discretely rational number.

  • The ratio of two discretely rational numbers is discretely rational.
  • The rational power or rational root of a discretely rational number is discretely rational.
  • The difference or sum of discretely rational numbers is discretely rational. This property is important in the derivation of atomic spectra where it serves the same purpose as a Fourier transform in infinitesimal mathematics.

The intrinsic electron gyromagnetic ratio, equal to the ratio of the cube of the charge intrinsic energy divided by Lambda-bar square, is denominated in units of Hertz per Tesla.
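As a worked value using the constants derived above (γ_e is my shorthand for the intrinsic electron gyromagnetic ratio):

\[
\gamma_e = \frac{e_i^{3}}{\bar{\Lambda}^{2}}
         = \frac{\bigl(1.9245\times10^{-20}\bigr)^{3}}{\bigl(6.415\times10^{-36}\bigr)^{2}}
         \approx 1.73\times10^{11}\ \text{Hz/T.}
\]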

The intrinsic proton gyromagnetic ratio, equal to the ratio of the intrinsic electron gyromagnetic ratio divided by the square root of the cube of the proton amplitude divided by two, and to the ratio of eight times the photon velocity divided by nine, is denominated in units of Hertz per Tesla.

The intrinsic conductance quantum, equal to the product of the intrinsic Josephson constant and the discrete Coulomb, is denominated in units of intrinsic Siemen.

The kinetic conductance quantum, equal to the intrinsic conductance quantum divided by 2π, is denominated in units of kinetic Siemen.

The CODATA conductance quantum is equal to 7.748091E-5.

The intrinsic resistance quantum, equal to the inverse of the intrinsic conductance quantum, is denominated in units of Ohm.

The kinetic resistance quantum, equal to the inverse of the kinetic conductance quantum, is denominated in units of Ohm.

The CODATA resistance quantum is equal to 1.290640E4.

The intrinsic von Klitzing constant, equal to the ratio of the discrete Planck constant divided by the square of the intrinsic electric constant, is denominated in units of Ohm.

The kinetic von Klitzing constant, equal to the ratio of the discrete Planck constant divided by the square of the discrete Coulomb, is denominated in units of Ohm.

The CODATA von Klitzing constant is equal to 2.581280745E4.

In Newtonian physics the probability particles at a distance will interact is 100% but in discrete physics a certain granularity is needed for interactions to occur.

A particle G-axis is a single-ended hollow cylinder. The mechanism of the G-axis is analogous to a piston which moves up and down at a frequency proportional to particle intrinsic energy. At the end of the up-stroke a single graviton is emitted and during a down-stroke the absorption window is open until the end of the downstroke or the absorption of a single graviton.

The difference (the intrinsic granularity) between the inside diameter of the hollow cylindrical G-axis and the outside diameter of the graviton allows absorption of incoming gravitons at angles that can deviate from normal (straight down the center) by plus or minus 20 arcseconds.

There are three kinds of intrinsic granularity: the intrinsic granularity in phenomena mediated by the absorption of gravitons and quantons; the intrinsic granularity in phenomena mediated by the emission of gravitons and quantons; and the intrinsic granularity in certain electromagnetic phenomena.

  • The intrinsic granularity in phenomena mediated by the absorption of gravitons or quantons by particles in tangible objects (with kilogram mass greater than one microgram or 1E20 particles) is discretely infinite therefore the average value of 20 arcseconds is discretely exact.
  • The intrinsic granularity in phenomena mediated by the emission of gravitons or quantons by particles is 20 arcseconds because gravitons and quantons emitted in the direction in which the emitting axis is pointing have an intrinsic granularity of not more than plus or minus 10 arcseconds.
  • The intrinsic granularity of certain electromagnetic phenomena, in particular a Faraday disk generator, governed by a “Lorentz force” that causes the velocity of an electron to be at a right angle to the force also causes an additional directional change of 20 arcseconds in the azimuthal direction.

In the above diagram, the intrinsic granularity of graviton absorption is illustrated on the left.

Above center illustrates the aberration between the visible and the actual positions of the sun with respect to an observer on earth as the sun moves across the sky. Position A is the visible position of the sun, position B is the actual position of the sun, position B will be the visible position of the sun in 500 seconds, and position C will be the actual position of the sun in 500 seconds. The elapsed time between successive positions is proportional to the separation distance, but 20 arcseconds of aberration is independent of separation distance.

Above right illustrates the six directions within a Cartesian space and the six possible forms describing the six possible facing directions in which a vector can point. A vector pointing up the G-axis of particle A in the facing direction of particle B has one and only one of the six possible forms. The probability a gravitational interaction will occur, if the vector is facing in one of the other five facing directions, is zero. Therefore, a gravitational interaction involving a graviton emitted by a specific particle A and absorbed by a specific particle B is possible (not probable) in only one-sixth the total volume of Cartesian space.

We define the intrinsic steric factor equal to 6. The intrinsic steric factor is inversely proportional to the probability a specific gravitational intrinsic energy interaction can occur on a scale where the probability a Newtonian gravitational interaction will occur is 100%.

The intrinsic steric factor points outward from a specific particle located at the origin of a Cartesian space facing outward into the surrounding space. The intrinsic steric factor applies to action at a distance in phenomena mediated by gravitons and quantons.

To convert 20 arcseconds of intrinsic granularity into an inverse possibility, divide the 1,296,000 arcseconds in 360 degrees by the product of 20 arcseconds and the intrinsic steric factor.
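As a worked equation:

\[
\frac{1{,}296{,}000\ \text{arcseconds}}{20\ \text{arcseconds}\times 6} = 10{,}800.
\]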

A possibility is not the same as a probability. The possibility two particles can gravitationally interact (each with the other) is equal to 1 out of 10,800. The probability two particles will gravitationally interact (each with the other) is dependent on the geometry of the interaction.

Because Newtonian gravitational interactions are proportional to the quantum of kinetic energy, the discrete Planck constant, and discrete gravitational interactions are proportional to the quantum of intrinsic energy, Lambda-bar, the factor 10,800 is a conversion factor.

In a bidirectional gravitational interaction, the ratio of the square of the discrete Planck constant divided by the square of Lambda-bar is equal to 10,800.

In a one-directional gravitational interaction the ratio of the discrete Planck constant divided by Lambda-bar is equal to the square root of 10,800.

The discrete Planck constant is equal to Lambda-bar times the square root of 10,800 and denominated in units of Joule second.
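Numerically (h_d is my shorthand for the discrete Planck constant):

\[
h_d = \bar{\Lambda}\,\sqrt{10{,}800} \approx 6.42\times10^{-36}\times103.92 \approx 6.67\times10^{-34}\ \text{Joule second.}
\]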

The value of the discrete Planck constant, approximately 1.006 times larger than the 2018 CODATA value, is the correct value for the two-body earth-sun system and proportional to the intrinsic physical constants previously defined.

The CODATA fine structure constant alpha is equal to the ratio of the square of the CODATA electron charge divided by the product of two times the CODATA Planck constant, the CODATA vacuum permittivity and the CODATA speed of light (2018 CODATA values).

The intrinsic constant Beta is a transformation of the CODATA expression.

By substitution of the charge intrinsic energy for the CODATA electron charge, Lambda-bar for two times the CODATA Planck constant, the intrinsic electric constant for the CODATA vacuum permittivity and the intrinsic photon velocity for the CODATA speed of light, the dimensionless CODATA fine structure constant alpha is transformed into the dimensionless intrinsic constant Beta.
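The substitution described above can be written side by side (the left-hand expression uses CODATA symbols; the right-hand evaluation uses the intrinsic values defined earlier and my shorthand symbols):

\[
\alpha = \frac{e^{2}}{2h\,\varepsilon_{0}\,c}
\;\longrightarrow\;
\beta = \frac{e_i^{2}}{\bar{\Lambda}\,\varepsilon_i\,c}
      = \frac{1/(27\times10^{38})}{6.42\times10^{-36}\times\dfrac{1}{9\times10^{9}}\times3\times10^{8}}
      \approx 1.732\times10^{-3}.
\]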

The existence of the fine structure constant and its ubiquitous appearance in seemingly unrelated equations is due to the assumption that phenomena are governed by kinetic energy, consequently measured values of phenomena governed or partly governed by intrinsic energy do not agree with the theoretical expectations.

A gravitational phenomenon governed by intrinsic energy is the solar system Kepler constant equal to the square root of the cube of the planet’s orbital distance divided by 4π-square times the orbital period of the planet, the product of the active gravitational mass of the sun and the Gravitational constant at the orbital distance of earth divided by 4π-square, and the ratio of the product of the square of the planet’s velocity and the orbital distance of the planet divided by 4π-square.

The intrinsic constant Beta-square, previously shown to be the ratio of the active gravitational mass of earth divided by the active gravitational mass of the sun, is also proportional to the key orbital properties of the sun, earth, and moon.

An electromagnetic phenomenon governed by intrinsic energy is the proton-electron mass ratio, here termed the electron-proton deflection ratio, equal to the square root of the cube of the proton intrinsic energy divided by the cube of the electron intrinsic energy, and to the square root of the cube of the proton amplitude divided by the cube of the unit electron amplitude.

The CODATA proton-electron mass ratio is a measure of electron deflection (1836.15267344) in units of proton deflection (equal to 1). Because the directions of proton and electron deflections are opposite, the electron-proton deflection ratio is approximately equal to the CODATA proton-electron mass ratio plus one.

In this document, unless otherwise specified (as in CODATA constants denominated in units of Joule proportional to the CODATA Planck constant), units of Joule are proportional to the discrete Planck constant.

The ratio of the discrete Planck constant divided by Lambda-bar, equal to the product of the mass-energy factor delta and omega-2, is denominated in units of discrete Joule per Einstein.

In the above equation the denomination discrete Joule represents energy proportional to the discrete Planck constant and the denomination Einstein represents energy proportional to Lambda-bar. The mass-energy factor delta converts non-collisional energy (action at a distance) into collisional energy in units of intrinsic Joule. The factor omega-2 converts units of intrinsic Joule into units of discrete Joule.
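Numerically, with omega-2 equal to the square root of 1.08 as defined in the gravitational phenomena section below:

\[
\frac{h_d}{\bar{\Lambda}} = \sqrt{10{,}800} \approx 103.92 = \delta\,\omega_{2} = 100\times\sqrt{1.08}.
\]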

Omega factors correspond to the geometry of graviton-mediated and quanton-mediated phenomena.

We will begin with a brief discussion of electrical (quanton-mediated) phenomena then exclusively focus on gravitational phenomena for the remainder of Part One.

Electrical phenomena

The discrete steric factor, equal to 8, is the number of octants defined by the orthogonal planes of a Cartesian space.

Each octant is one of eight signed triplets (---, -+-, -++, --+, +++, +-+, +--, ++-) which correspond to the directions of the x, y, and z Cartesian axes.

A large number of random molecules, each with a velocity coincident with its center of mass, are within a Cartesian space. If the origin is the center of mass of specific molecule1, then random molecule2 is within one of the eight signed octants and, because the same number of random molecules are within each octant, then the specific molecule1 is within one of the eight signed octants with respect to random molecule2, and the possibility (not probability) of a center of mass collisional interaction between molecule2 and molecule1 is equal to the inverse of the discrete steric factor (one in eight).

The discrete and intrinsic steric factors correspond to the geometries of phenomena governed by discrete kinetic energy (proportional to the discrete Planck constant) and to phenomena governed by intrinsic energy:

  • The discrete steric factor points inward from a random molecule in the direction of a specific molecule and applies to phenomena mediated by collisional interactions.
  • The intrinsic steric factor points outward from a specific particle into the surrounding space and applies to phenomena mediated by gravitons and quantons (action at a distance).

The intrinsic molar gas constant, equal to the discrete steric factor, is the intrinsic energy (units of intrinsic Joule) divided by mole Kelvin.

The discrete molar gas constant, equal to the product of the intrinsic molar gas constant and omega-2, is the intrinsic energy (units of discrete Joule) divided by mole Kelvin. The discrete molar gas constant agrees with the CODATA value within 1 part in 13,000.
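As a worked check (R_d is my shorthand for the discrete molar gas constant; 8.31446 is the CODATA molar gas constant):

\[
R_d = 8\,\omega_{2} = 8\sqrt{1.08} \approx 8.3138,
\qquad
\frac{8.31446}{8.3138} \approx 1.00007,
\]

consistent with the stated agreement of about 1 part in 13,000.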

The ratio of the CODATA electron charge (the elementary charge in units of Coulomb) divided by the charge intrinsic energy (in units of intrinsic Volt) is nearly equal to the discrete molar gas constant.

The intrinsic Boltzmann constant, equal to the ratio of the intrinsic molar gas constant divided by the Avogadro constant, is denominated in units of Einstein per Kelvin.

The discrete Boltzmann constant, equal to the product of omega-2 and the intrinsic Boltzmann constant, and the ratio of the discrete molar gas constant divided by the Avogadro constant, is denominated in units of discrete Joule per Kelvin. The CODATA Boltzmann constant is equal to 1.380649E-23.

Gravitational phenomena

Omega-2, the square root of 1.08, corresponds to one-directional gravitational interactions between non-orbiting objects (objects not by themselves in orbit, that is, the object might be part of an orbiting body but the object itself is not the orbiting body), for example graviton emission by the large lead balls or absorption by the small lead balls in the Cavendish experiment.

Omega-4, 1.08, corresponds to two-directional gravitational interactions (emission and absorption) between non-orbiting objects, for example the acceleration of the large lead balls or the acceleration of the small lead balls in the Cavendish experiment.

Omega-6, the square root of the cube of 1.08, corresponds to gravitational interactions between a planet and moon in a Keplerian orbit where the square root of the cube of the orbital distance divided by the orbital period is equal to a constant.

Omega-8, the square of 1.08, corresponds to four-directional gravitational interactions by non-orbiting objects, for example the acceleration of the small lead balls and the acceleration of the large lead balls in the Cavendish experiment.

Omega-12, equal to the cube of 1.08, corresponds to gravitational interactions between two objects in orbit about each other, for example the sun and a planet in orbit about their mutual barycenter.
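For reference, the omega factors defined above are all powers of 1.08:

\[
\omega_{2} = \sqrt{1.08},\quad
\omega_{4} = 1.08,\quad
\omega_{6} = 1.08^{3/2},\quad
\omega_{8} = 1.08^{2},\quad
\omega_{12} = 1.08^{3}.
\]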

Except where previously defined (the Gravitational constant at the orbital distance of earth, the orbital distance of earth, the mass and volumetric radius of earth, the mass of the sun) the following equations use the NASA2 values for the Newtonian masses, orbital distances, and volumetric radii of the planets.

The local gravitational constant for any of the planets is equal to the product of the Gravitational constant of earth and the Newtonian mass (kilogram mass) of the planet divided by the square of the volumetric radius of the planet.
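As a consistency check against the values defined earlier (symbols are my shorthand), the formula reproduces earth's local gravitational constant:

\[
g_p = \frac{G_E\,M_p}{r_p^{2}};
\qquad
g_E = \frac{\bigl(6\times10^{24}\bigr)/\bigl(15\times10^{9}\bigr)}{4\times10^{13}}
    = \frac{4\times10^{14}}{4\times10^{13}} = 10\ \text{m/s}^{2}.
\]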

The v2d value of a planetary moon is equal to the product of the Gravitational constant at the orbital distance of earth and the Newtonian mass of the planet.

The active gravitational mass of a planet, denominated in units of Einstein, is equal to the product of the square of the volumetric radius of the planet and the orbital distance of the planet, divided by the square of the orbital distance of the planet in units of the orbital distance of earth.

The mass of a planet in a Newtonian orbit about the sun (the planet and sun orbit about their mutual barycenter) is a kinetic property. The active gravitational mass of such a planet, denominated in units of Joule, is equal to the product of the active gravitational mass of the planet in units of Einstein and omega-12.

The Gravitational constant at the orbital distance of the planet is equal to the product of the local gravitational constant of the planet and the square of the volumetric radius of the planet, divided by the active gravitational mass of the planet.

The v2d value of a planetary moon is equal to the product of the Gravitational constant at the orbital distance of the planet and the active gravitational mass of the planet.

The v2d value calculated using the NASA orbital parameters for the moon is larger than the above calculated value by a factor of 1.00374; the v2d values calculated using the NASA orbital parameters for the major Jovian moons (Io, Europa, Ganymede, and Callisto) are larger than the above calculated values by factors of 1.0020, 1.0016, 1.00131, and 1.00133.

Newtonian gravitational calculations are extremely accurate for most gravitational phenomena, but there are a number of anomalies for which the Newtonian calculations are inaccurate. The first of these anomalies to come to the attention of scientists, in 1859, was the precession rate of the perihelion of mercury, for which the observed rate is about 43 arcseconds per century larger than the Newtonian calculated rate.3

According to Gerald Clemence, one of the twentieth century’s leading authorities on planetary orbital calculations, the most accurate method for such calculations, the method of Gauss, was derived for orbits within the solar system with distance expressed in astronomical units, orbital period in days, and mass in solar masses.4

The Gaussian method was used by Eric Doolittle in what Clemence believed to be the most reliable theoretical calculation of the perihelion precession rate of mercury.5

With modifications by Clemence including newer values for planetary masses, newer measurements of the precession of the equinoxes and a careful analysis of the error terms, the calculated rate was determined to be 531.534 arc-seconds per century compared to the observed rate of 574.095 arc-seconds per century, leaving an unaccounted deficit of 42.561 arcseconds per century.

The below calculations are based on the method of Price and Rush.6 This method determines a Newtonian rate of precession due to the gravitational influences on mercury by the sun and the five outer planets external to the orbit of mercury (venus, earth, mars, jupiter and saturn). The solar and planetary masses are treated as Newtonian objects, and in calculations of planetary gravitational influences the outer planets are treated as circular mass rings.

The Newtonian gravitational force on mercury due to the mass of the sun is equal to the ratio of the product of the negative Gravitational constant at the orbital distance of earth, the mass of the sun and the mass of mercury, divided by the square of the orbital distance of mercury.

The Newtonian gravitational force on mercury due to the mass of the five outer planets is equal to the sum of the gravitational force contributions of the five outer planets external to the orbit of mercury. The gravitational force contribution of each planet is equal to the ratio of the product of the Gravitational constant at the orbital distance of earth, the mass of the planet, the mass of mercury and the orbital distance of mercury, divided by the product of twice the planet’s orbital distance and the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

The gravitational force ratio is equal to the gravitational force on mercury due to the mass of the five outer planets external to the orbit of mercury divided by the gravitational force on mercury due to the mass of the sun.

The gamma factor is equal to the sum of the gamma contributions of the five outer planets external to the orbit of mercury. The gamma contribution of each planet is equal to the ratio of the product of the mass of the planet, the orbital distance of mercury, and the sum of the square of the planet’s orbital distance and the square of the orbital distance of mercury, divided by the product of 2π, the planet’s orbital distance and the square of the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

Psi-mercury is equal to the product of π and the sum of one plus the difference between the negative of the gravitational force ratio and the ratio of the product of the Gravitational constant at the orbital distance of earth, π, the mass of mercury and the gamma factor divided by twice the gravitational force on mercury due to the mass of the sun.

The number of arc-seconds in one revolution is equal to 360 degrees times sixty minutes times sixty seconds.

The number of days in a Julian century is equal to 100 times the length of a Julian year in days.

The perihelion precession rate of mercury is equal to the ratio of the product of the difference between 2ψ-mercury and 2π, the number of arc-seconds in one revolution and the number of days in a Julian century, divided by the product of 2π and the NASA sidereal orbital period of mercury in units of day (87.969).
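As a check on the chain of verbal equations above, they can be transcribed directly into a short calculation. This is a minimal sketch under stated assumptions, not the document’s own computation: it substitutes standard NASA fact-sheet masses and mean orbital distances and the CODATA gravitational constant for the document’s own constants (for example, the Gravitational constant at the orbital distance of earth), so the output lands near, though not exactly on, the value quoted in the next paragraph.

    import math

    # Stand-in inputs: NASA fact-sheet masses and mean orbital distances, CODATA G.
    G = 6.674e-11                       # m^3 kg^-1 s^-2
    M_sun = 1.989e30                    # kg
    m_mercury = 3.301e23                # kg
    AU = 1.496e11                       # m
    r_merc = 0.3871 * AU                # orbital distance of mercury
    planets = [                         # the five planets external to mercury's orbit
        (4.8675e24, 0.7233 * AU),       # venus
        (5.9722e24, 1.0000 * AU),       # earth
        (6.4171e23, 1.5237 * AU),       # mars
        (1.8982e27, 5.2044 * AU),       # jupiter
        (5.6834e26, 9.5826 * AU),       # saturn
    ]

    # force on mercury due to the sun (negative gravitational constant convention)
    F_sun = -G * M_sun * m_mercury / r_merc**2

    # force on mercury due to the five outer planets, each treated as a circular mass ring
    F_planets = sum(G * m_p * m_mercury * r_merc / (2 * a_p * (a_p**2 - r_merc**2))
                    for m_p, a_p in planets)

    force_ratio = F_planets / F_sun

    # gamma factor
    gamma = sum(m_p * r_merc * (a_p**2 + r_merc**2) /
                (2 * math.pi * a_p * (a_p**2 - r_merc**2)**2)
                for m_p, a_p in planets)

    # psi-mercury
    psi = math.pi * (1 + (-force_ratio - G * math.pi * m_mercury * gamma / (2 * F_sun)))

    arcsec_per_rev = 360 * 60 * 60      # 1,296,000 arc-seconds in one revolution
    days_per_century = 100 * 365.25     # Julian century
    T_mercury = 87.969                  # NASA sidereal orbital period of mercury, days

    precession = ((2 * psi - 2 * math.pi) * arcsec_per_rev * days_per_century
                  / (2 * math.pi * T_mercury))
    print(precession)                   # roughly 531 arc-seconds per century with these inputs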

The Newtonian perihelion precession rate of mercury determined above is 0.139 arc-seconds per century less than the Clemence calculated rate of 531.534 arc-seconds per century.

The following equations, in the same format as the Newtonian equations, derive the non-Newtonian values (where they differ).

The Newtonian gravitational force on mercury due to the mass of the sun is equal to the ratio of the product of the negative Gravitational constant at the orbital distance of earth, the mass of the sun and the mass of mercury, divided by the square of the orbital distance of mercury.

The non-Newtonian gravitational force on mercury due to the mass of the five outer planets is equal to the sum of the gravitational force contributions of the five outer planets external to the orbit of mercury. The gravitational force contribution of each planet is equal to the ratio of the product of the Gravitational constant at the orbital distance of earth, the active gravitational mass (in units of Joule) of the planet, the Newtonian mass of mercury and the orbital distance of mercury, divided by the product of twice the planet’s orbital distance and the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

The non-Newtonian gravitational force ratio is equal to the gravitational force on mercury due to the mass of the five outer planets external to the orbit of mercury divided by the gravitational force on mercury due to the mass of the sun.

The gamma factor is equal to the sum of the gamma contributions of the five outer planets external to the orbit of mercury. The gamma contribution of each planet is equal to the ratio of the product of the mass of the planet, the orbital distance of mercury, and the sum of the square of the planet’s orbital distance and the square of the orbital distance of mercury, divided by the product of 2π, the planet’s orbital distance and the square of the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

The non-Newtonian value for Psi-mercury is equal to the product of π and the sum of one plus the difference between the negative of the gravitational force ratio and the ratio of the product of the Gravitational constant at the orbital distance of earth, π, the mass of mercury and the gamma factor divided by twice the gravitational force on mercury due to the mass of the sun.

The non-Newtonian perihelion precession rate of mercury is equal to the ratio of the product of the difference between 2ψ-mercury and 2π, the number of arc-seconds in one revolution and the number of days in a Julian century, divided by the product of 2π and the NASA sidereal orbital period of mercury in units of day (87.969).
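Relative to the Newtonian sketch above, the non-Newtonian calculation as described changes only the planetary ring-force term: each planet’s Newtonian mass is replaced by its active gravitational mass in units of Joule. A minimal sketch of that single substitution, with the active masses assumed to be supplied from the definitions given earlier:

    # Only the ring-force term differs from the Newtonian sketch: the Newtonian planet
    # mass m_p is replaced by the planet's active gravitational mass (units of Joule),
    # assumed here to be precomputed from the earlier definitions. The sun term, gamma
    # factor, psi-mercury and the conversion to arc-seconds per century are unchanged.
    def non_newtonian_planet_force(G, m_mercury, r_merc, active_masses_joule, distances):
        return sum(G * M_active * m_mercury * r_merc / (2 * a_p * (a_p**2 - r_merc**2))
                   for M_active, a_p in zip(active_masses_joule, distances))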

The non-Newtonian perihelion precession rate of mercury is 6.128 arc-seconds per century greater than the Clemence observed rate of 574.095 arc-seconds per century.

We have built a model of gravitation proportional to the dimensions of the earth-sun system. A different model, with different values for the physical constants, would be equally valid if it were proportional to the dimensions of a different planet in our solar system or a planet in some other star system in our galaxy.

Our sun and the stars in our galaxy, in addition to graviton flux, emit large quantities of neutral flux that establish Stable Balance orbits for planets that emit relatively small quantities of neutral flux.

Our galactic center emits huge quantities of gravitons and neutral flux, and its dimensional relationship with our sun depends on the neutral flux emissions of our sun. If the intrinsic energy of our sun were less, its orbit would be farther out from the galactic center, and if it were greater, its orbit would be closer in.

  • Of two stars at the same distance from the galactic center with different velocities, the star with higher velocity has a higher graviton absorption rate (higher stellar internal energy) and the star with lower velocity has a lower graviton absorption rate (lower stellar internal energy).
  • Of two stars with the same velocity at different distances from the galactic center, the star closer in will have a higher graviton absorption rate (higher stellar internal energy) and the star further out will have a lower graviton absorption rate (lower stellar internal energy).

The active gravitational mass of the Galactic Center is equal to the active gravitational mass of the sun divided by Beta-fourth, and also to the cube of the active gravitational mass of the sun divided by the square of the active gravitational mass of earth.

The second expression of the above equation, generalized and reformatted, asserts that the square root of the cube of the active gravitational mass of any star in the Milky Way, divided by the active gravitational mass of any planet in orbit about the star, is equal to a constant.
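In symbols, a compact restatement of the two statements above, writing M for active gravitational mass:

$$M_{GC} \;=\; \frac{M_{sun}}{\beta_{4}} \;=\; \frac{M_{sun}^{3}}{M_{earth}^{2}}, \qquad \frac{\sqrt{M_{star}^{\,3}}}{M_{planet}} \;=\; \text{constant}$$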

The above equation, combined with the detailed explanation of the chirality meshing interactions that mediate gravitational action at a distance, the derivation of solar system non-Newtonian orbital parameters, the derivation of the non-Newtonian rate of precession of the perihelion of mercury, and the detailed explanation of non-Newtonian stellar rotation curves, disproves the theory of dark matter.

Part Two

Structure and chirality

A particle has the property of chirality because its axes are orthogonal and directed, pointing in three perpendicular directions and, like the fingers of a human hand, the directed axes are either left-handed (LH) or right-handed (RH). The electron and antiproton exhibit LH structural chirality and the proton and positron exhibit RH structural chirality. The two chiralities are mirror images.

The electron G-axis (black, index finger) points into the paper, the electron Q-axis (blue, thumb) points up in the plane of the paper, and the north pole of the electron P-axis (red, middle finger) points right in the plane of the paper.

The orientation of the axes of an RH proton are the mirror image: the proton G-axis (black, index finger) points into the paper, the proton Q-axis (blue, thumb) points up in the plane of the paper, and the north pole of the proton P-axis (red, middle finger) points left in the plane of the paper.

Above, to visualize orientations, models are easier to manipulate than human hands.

When Michael Faraday invented the disk generator in 1831, he discovered the conversion of rotational force, in the presence of a magnetic field, into electric current. The apparatus creates a magnetic field perpendicular to a hand-cranked rotating conductive disk and, providing the circuit is completed through a path external to the disk, produces an electric current flowing inward from axle to rim (electron flow not conventional current), photograph below.7

Above left, the electron Q-axis points in the CCW direction of motion. The inertial force within a rotating conductive disk aligns conduction electron G-axes to point in the direction of the rim. The alignment of the Q-axes and G-axes causes the orthogonal P-axes to point down.

Above right, the electron Q-axis points in the CW direction of motion. The inertial force within a rotating conductive disk aligns conduction electron G-axes to point in the direction of the rim. The alignment of the Q-axes and G-axes causes the orthogonal P-axes to point up.

In generally accepted physics (GAP), the transverse alignment of electron velocity with respect to magnetic field direction is attributed to the Lorentz force but, as explained above, it is a consequence of electron chirality.

In addition to the transverse alignment of the electron direction with respect to the direction of the magnetic field, the electron experiences an additional directional change of 20 arcseconds in the azimuthal direction which causes the electron to spiral in the direction of the axle. Thus, in both a CCW rotating conductive disk and a CW rotating conductive disk, the current (electron flow not conventional current) flows from the axle to the rim.

The geometries of the Faraday disk generator apply to the orientation of conduction electrons in the windings of solenoids and transformers. CCW and CW windings advance in the same direction, below into the plane of the paper. In contrast to the rotating conductor in the disk generator, the windings are stationary, and the conduction electrons spiral through in the direction of the positive voltage supply (which continually reverses in transformers and AC solenoids).

Above left, the electron Q-axes point down in the direction of current flow through the CCW winding. The inertial force on conduction electrons moving through the CCW winding aligns the direction of the electron G-axes to the left. The electron P-axes, perpendicular to both the Q-axes and G-axes, point S→N out of the paper.

Above right, the electron Q-axes point up in the direction of current flow through the CW winding. The inertial force on conduction electrons moving through the CW winding aligns the direction of the electron G-axes to the left. The electron P-axes, perpendicular to both the Q-axes and G-axes, point S→N into the paper.

Above is a turnbuckle composed of a metal frame tapped at each end. On the left end an LH bolt passes through an LH thread and on the right end an RH bolt passes through an RH thread. If the LH bolt is turned CCW (facing right into the turnbuckle frame) the bolt moves to the right and the frame moves to the left and if the LH bolt is turned CW the bolt moves to the left and the frame moves to the right. If the RH bolt is turned CW (facing left into the turnbuckle frame) the bolt moves to the left and the frame moves to the right and if the RH bolt is turned CCW the bolt moves to the right and the frame moves to the left.

In the language of this analogy, a graviton or quanton emitted by the emitting particle is a moving spinning bolt, and the absorbing particle is a turnbuckle frame with a G-axis, Q-axis or P-axis passing through.

In a chirality meshing interaction, absorption of a graviton or quanton by the LH or RH G-axis, Q-axis or P-axis of a particle, causes an attractive or repulsive acceleration proportional to the difference between the graviton or quanton velocity and the velocity of the absorbing particle.

An electron G-axis has a RH inside thread and a proton G-axis has a LH inside thread. An electron G-axis emits CW gravitons and a proton G-axis emits CCW gravitons.

In the bolt-turnbuckle analogy, a graviton is a moving spinning bolt, and the absorbing particle through which the G-axis passes is a turnbuckle frame:

  • If a CCW graviton emitted by a proton is absorbed into a proton LH G-axis, the absorbing proton is attracted, accelerated in the direction of the emitting proton.
  • If a CW graviton emitted by an electron is absorbed into an electron RH G-axis, the absorbing electron is attracted, accelerated in the direction of the emitting electron.

Protons and electrons do not gravitationally interact with each other: a proton is larger than an electron, a graviton emitted by a proton is larger than a graviton emitted by an electron, and the inside thread of a proton G-axis is larger than the inside thread of an electron G-axis. These size differences prevent a graviton emitted by an electron from meshing with a proton G-axis, or a graviton emitted by a proton from meshing with an electron G-axis.

Tangible objects are composed of atoms which are composed of protons, electrons and neutrons.

In gravitational interactions between tangible objects (with kilogram mass greater than one microgram or 1E20 particles) the total intensity of the interaction is the sum of the contributions of the electrons and protons of which the object is composed (note that neutrons themselves do not gravitationally interact but each neutron is composed of one electron and one proton both of which do gravitationally interact).

A particle Q-axis is a single-ended hollow cylinder. The mechanism of the Q-axis is analogous to a piston which moves up and down at a frequency proportional to charge intrinsic energy. At the end of each up-stroke a single quanton is emitted. The absorption window opens at the beginning of the up-stroke and remains open until the beginning of the downstroke or the absorption of a single quanton.

The difference (the intrinsic granularity) between the inside diameter of the hollow cylindrical Q-axis and the outside diameter of the quanton allows absorption of incoming quantons at angles that can deviate from normal (straight down the center) by plus or minus 20 arcseconds.

An electron Q-axis has a RH inside thread and a proton Q-axis has a LH inside thread. An electron Q-axis emits CCW quantons and a proton Q-axis emits CW quantons.

In the bolt-turnbuckle analogy, a quanton is a moving spinning bolt, and the absorbing particle through which the Q-axis passes is a turnbuckle frame:

  • If a CCW p-quanton emitted by a proton is absorbed into an electron RH Q-axis, the absorbing electron is attracted, accelerated in the direction of the emitting proton.
  • If a CCW p-quanton emitted by a proton (or the anode plate in a CRT) is absorbed into a proton LH Q-axis, the absorbing proton is repulsed, accelerated in the direction of the cathode plate (opposite the direction of the emitting proton).
  • If a CW e-quanton emitted by an electron is absorbed into an electron RH Q-axis, the absorbing electron is repulsed, accelerated in the direction opposite the emitting electron.
  • If a CW e-quanton emitted by an electron (or the cathode plate in a CRT) is absorbed into a proton LH Q-axis, the absorbing proton is repulsed, accelerated in the direction of the cathode plate (the direction opposite the emitting electron).

In a CRT, the Q-axis of an accelerated electron is oriented in the linear direction of travel and its P-axis and G-axis are oriented transverse to the linear direction of travel. After the electron is linearly accelerated, the electron passes between oppositely charged parallel plates that emit quantons perpendicular to the linear direction of travel, and these quantons are absorbed into the electron P-axes. The chirality meshing interactions between an electron with a linear direction of travel and quantons emitted by either plate result in a transverse acceleration in the direction of the anode plate:

  • An incoming CCW p-quanton approaching an electron RH P-axis within less than 20 arcseconds deviation from normal (straight down the center) is absorbed in an attractive chirality meshing interaction in which the electron is deflected in the direction of the anode plate.
  • An incoming CW e-quanton approaching an electron RH P-axis within less than 20 arcseconds deviation from normal (straight down the center) is absorbed in a repulsive chirality meshing interaction in which the electron is deflected in the direction of the anode plate.

This is the mechanism of the experimental determination of the electron-proton deflection ratio.

The magnitude of the ratio between these masses is not equal to the ratio of the measured gravitational deflections but rather to the inverse of the ratio of the measured electric deflections. It would not matter which of these measurable quantities were used in the experimental determination if Newton’s laws of motion applied. However, in order for Newton’s laws to apply, the assumptions behind Newton’s laws, specifically the 100 percent probability that particles gravitationally and electrically interact, must also apply. But this is not the case for action at a distance.

The electron orientation below top left, rotated 90 degrees CCW, is identical to the electron orientations previously illustrated for a CW disk generator or a CW-wound transformer or solenoid; and the electron orientation bottom left is a 180 degree rotation of top left.

Above are reversals in Q-axis orientation due to reversals in the direction of incoming quantons.

Above top right and bottom right are the left-side electron orientations with the electron Q-axis directed into the plane of the paper (confirmation of the perspective transformation is easier to visualize with a model). These are the orientations of conduction electrons in an AC current.

In the top row, CW quantons emitted by the positive voltage source are absorbed in chirality meshing interactions by the electron RH Q-axis, attracting the absorbing electron. In the bottom row, CCW quantons emitted by the negative voltage source are absorbed in chirality meshing interactions into the electron RH Q-axis, repelling the absorbing electron.

In either case the direction of current is into the paper.

In an AC current, a reversal in the direction of current is also a reversal in the rotational chirality of the quantons mediating the current.

  • In a current moving in the direction of a positive voltage source each linear chirality meshing absorption of a CW p-quanton into an electron RH Q-axis results in an attractive deflection.
  • In a current moving in the direction of a negative voltage source each linear chirality meshing absorption of a CCW e-quanton into an electron RH Q-axis results in a repulsive deflection.

In an AC current, each reversal in the direction of current, reverses the direction of the Q-axes of the conduction electrons. This reversal in direction is due to a complex rotation (two simultaneous 180 degree rotations) that results in photon emission.

During the shorter or longer period of time (the inverse of the AC frequency) in which the direction of current reverses, a shorter or longer inductive pulse of electromagnetic energy flows into the electron Q and P axes, and the quantons of which the electromagnetic energy is composed are absorbed in rotational chirality meshing interactions.

Above left, the electron P and Q axes mesh together at their mutual orthogonal origin in a mechanism analogous to a right angle bevel gear linkage.8

Above center and right, an incoming CCW quanton induces an inward CCW rotation in the Q-axis and causes a CW outward (CCW inward) rotation of the P-axis. The rotation of the Q-axis reverses the orientation of the P-axis and G-axis, and the rotation of the P-axis reverses the orientation of the Q-axis and the orientation of the G-axis thereby restoring its orientation to the initial direction pointing left and perpendicular to a tangent to the cylindrical wire.

Above center and right, an incoming CW quanton induces an inward CW rotation in the Q-axis and causes a CCW outward (CW inward) rotation of the P-axis. The rotation of the Q-axis reverses the orientation of the P-axis and G-axis, and the rotation of the P-axis reverses the orientation of the Q-axis and the orientation of the G-axis thereby restoring its orientation to the initial direction pointing left and perpendicular to a tangent to the cylindrical wire.

In either case the electron orientations are identical, but CCW electron rotations cause the emission of CCW photons and CW electron rotations cause the emission of CW photons.

The absorption of CCW e-quantons by the Q-axis rotates the Q-axis CCW by the square root of 648,000 arcseconds (180 degrees) and the P-Q axis linkage simultaneously rotates the P-axis CW by the square root of 648,000 arcseconds (180 degrees).

If the orientation of the electron G-axis is into the paper in a plane defined by the direction of the Q-axis, the CCW rotation of the Q-axis tilts the plane of the G-axis down by the square root of 648,000 arcseconds and the CW rotation of the P-axis tilts the plane of the G-axis to the right by the square root of 648,000 arcseconds.

The net rotation of the electron G-axis is equal to the product of the square root of 648,000 arcseconds and the square root of 648,000 arcseconds, that is, 648,000 arcseconds or 180 degrees.
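The arithmetic of the preceding three paragraphs is quickly verified; a trivial check:

    import math
    component = math.sqrt(648_000)          # each component rotation, in arcseconds (≈ 804.98)
    net = component * component             # ≈ 648,000 arcseconds
    print(round(net), round(net / 3600))    # 648000 arcseconds -> 180 degrees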

In the production of photons by an AC current, the photon wavelength and frequency are proportional to the current reversal time, and the photon energy is proportional to the voltage.

Above, an axial projection of the helical path of a photon traces the circumference of a circle and the sine and cosine are transverse orthogonal projections.9 The crest to crest distance of the transverse orthogonal projections, or the distance between alternate crossings of the horizontal axis, is the photon wavelength.
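A minimal parametrization consistent with this description; the wavelength and radius values are illustrative assumptions, not quantities from the text:

    import numpy as np

    wavelength, radius = 1.0, 0.2                       # assumed illustrative values
    z = np.linspace(0.0, 3 * wavelength, 601)           # distance along the direction of travel
    x = radius * np.cos(2 * np.pi * z / wavelength)     # transverse orthogonal projection (cosine)
    y = radius * np.sin(2 * np.pi * z / wavelength)     # transverse orthogonal projection (sine)
    # Viewed along z (the axial projection), the points (x, y) trace the circumference
    # of a circle; the crest-to-crest distance of either transverse projection equals
    # the wavelength.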

The helical path of photons explains diffraction by a single slit, by a double slit, by an opaque circular disk, or a sphere (Arago spot).

In a beam of photons with velocity perpendicular to a flat screen or sensor, each individual photon makes a separate impact that can be sensed or is visible somewhere on the circumference of one of many separate and non-overlapping circles corresponding to all of the photons in the beam. The divergence of the beam increases the spacing between circles and the diameter of each individual photon circle, which is proportional to the wavelength of that photon. The sensed or visible photon impacts form a region of constant intensity.

Below, the top image shows those photons, initially part of a photon beam illuminating a single slit, which passed through the single slit.10

Above, the bottom image shows those photons, initially part of a photon beam illuminating a double slit, that passed through a double slit.

Below, the image illustrating classical rays of light passing through a double slit is equally illustrative of a photon beam illuminating a double slit but, instead of constructive and destructive interference, the photons passing through the top slit diverge to the right and photons passing through the bottom slit diverge to the left. The spaces between divergent circles are dark and, due to coherence, the photon circles are brightest at the distance of maximum overlap, resulting in the characteristic double slit brighter-darker diffraction pattern.11

The mechanism of diffraction by an opaque circular disk or a sphere (Arago spot) is the same. In either case the opaque circular disk or sphere is illuminated by a photon beam of diameter larger than the diameter of the disk or sphere.

The photons passing close to the edge of the disk or sphere diverge inwards, and the spiraling helical path of an inwardly diverging CW photon passing one side of the disk will intersect, in a head-on collision, the spiraling helical path of an inwardly diverging CCW photon passing on the directly opposite side of the disk or sphere (if the opposite-chirality photons are equidistant from the center of the disk or sphere).

In the case of a sphere illuminated by a laser, the surface of the sphere must be smooth and the ratio of the square of the diameter of the sphere divided by the product of the distance from the center of the sphere to the screen and the laser wavelength must be greater than one (similar to the Fresnel number).
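A minimal check of that condition, with illustrative values assumed for the sphere diameter, screen distance, and laser wavelength:

    def arago_ratio(sphere_diameter_m, screen_distance_m, wavelength_m):
        # ratio of the squared sphere diameter to (screen distance x wavelength);
        # the text requires this Fresnel-number-like ratio to exceed one
        return sphere_diameter_m**2 / (screen_distance_m * wavelength_m)

    arago_ratio(0.004, 1.0, 633e-9)   # ≈ 25.3 for a 4 mm sphere, 1 m away, 633 nm light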

Photon velocity

Constant photon velocity is due to a resonance driven by the emission of photon intrinsic energy which results in an increase in wavelength and a proportional decrease in frequency. In a related phenomenon, Arthur Holly Compton demonstrated Compton scattering in which the loss of photon kinetic energy does not change velocity but increases wavelength and proportionally decreases frequency.12

The mechanism of constant photon velocity is the emission of quantons and gravitons.

Below top, looking down into the plane of the paper a photon G-axis points in the direction of photon velocity and the P and Q-axes are orthogonal. In the language of the turnbuckle analogy, the mechanism of the photon P and Q-axes are analogous to pistons which move up and down or back and forth and emit a single quanton or graviton at the end of each stroke.

Above middle, in column A of the P-axis row, at the position of the oscillation the up-stroke has just completed, a single graviton has been emitted, and the current direction of the oscillation is now down. In column B of the P-axis row, the position of the oscillation is mid-way, and the direction of the oscillation is down. In column C of the P-axis row, at the position of the oscillation the downstroke has just completed, a single graviton has been emitted, and the current direction of the oscillation is up. In column D of the P-axis row, the position of the oscillation is mid-way, and the direction of the oscillation is up.

Above middle, in column A of the Q-axis row, the position of the oscillation is mid-way and the direction of oscillation is left. In column B of the Q-axis row, at the position of the oscillation the left-stroke has just completed, a single quanton has been emitted, and the current direction of the oscillation is right. In column C of the Q-axis row, the position of the oscillation is mid-way and the direction of the oscillation is right. In column D of the Q-axis row, at the position of the oscillation the right-stroke has just completed, a single quanton has been emitted, and the current direction of the oscillation is left.

Above left or right bottom, in each cycle of the photon frequency there are eight sequential CCW or CW alternating quanton/graviton emissions and the intrinsic energy of the photon is reduced by Lambda-bar on each emission.

This is the mechanism of intrinsic redshift.

Part Three

Nuclear magnetic resonance

In the 1922 Stern-Gerlach experiment, a molecular beam of identical silver atoms passed through an inhomogeneous magnetic field. Contrary to classical expectations, the beam of atoms did not diverge into a cone with intensity highest at the center and lowest at the outside. Instead, atoms near the center of the beam were deflected with half the silver atoms deposited on a glass slide in an upper zone and half deposited in a lower zone, illustrating “space quantization.”

The Stern-Gerlach experiment, designed to test directional quantization in a magnetic field as predicted by old quantum theory (the Bohr-Sommerfeld hypothesis)13, was conducted two years before intrinsic spin was conceived by Wolfgang Pauli and six years before Paul Dirac formalized the concept. Intrinsic spin became part of the foundation of new quantum theory.

The concept of intrinsic spin, in which the property that causes the deflection of silver atoms in two opposite directions (“space quantization”) is inherent in the particle itself, is incorrect.

However, a molecular beam composed of atoms with magnetic moments passed through a Stern-Gerlach apparatus does exhibit the numerical property attributed to intrinsic spin; this property, interactional spin, is not inherent in the atom but depends on external factors.

The protons within a nucleus are the origin of spin, magnetic moment, Larmor frequency, and other nuclear gyromagnetic properties. A nucleus contains “ordinary protons,” which, for clarity, will be termed Pprotons, and “protons within neutrons,” which will be termed Nprotons.

In nuclei with an even number of Pprotons, the Pproton magnetic flux is contained within the nucleus and does not contribute to the nuclear magnetic moment.

With neutrons the situation is quite different. A neutron is achiral: it is a composite particle composed of an Nproton-electron pair and binding energy; it has no G-axis and therefore does not gravitationally interact, and no Q-axis and therefore is electrically neutral.

Within a nucleus, a neutron does not have a magnetic moment (a free neutron, during its mean lifetime of less than 15 minutes after being emitted from its nucleus, has a measurable magnetic moment, but there are no free neutrons within nuclei); however, the Nproton and electron of which a neutron is composed do have magnetic moments.

The gyromagnetic properties of a nucleus, its magnetic moment, its spin, its Larmor frequency, and its gyromagnetic ratio are due to Pprotons and Nprotons.

A molecular beam (composed of nuclei, atoms and/or molecules) emerging from an oven into a vacuum will have a thermal distribution of velocities. Molecules within the beam are subject to collisions with faster or slower molecules that cause rotations and vibrations, and the orientations of unpaired Pprotons and unpaired Nprotons are constantly subject to change.

In a silver atom there is a single unpaired Pproton and the orientation of its P-axis, with respect to its direction of motion through an inhomogeneous magnetic field, will be either leading or trailing. Out of a large number of unpaired Pprotons, the P-axes will be leading 50% of the time and trailing 50% of the time, and a silver atom containing an unpaired Pproton with a leading P-axis can be deflected in the direction of the inhomogeneous magnetic north pole while a silver atom containing an unpaired Pproton with a trailing P-axis can be deflected in the direction of the south pole.

If the magnetic field is strong enough for a sufficient percentage of unpaired Pprotons (the orientation of which is constantly changing) to encounter lines of magnetic flux within 20 arcseconds and be deflected up or down, the molecular beam of silver atoms deposited on a glass slide at the center of the magnetic field (where it is strongest) will be split into two zones. Consistent with the definition of spin as the number of zones minus one, divided by 2 (S = (z−1)/2), a Stern-Gerlach experiment therefore determines a spin equal to ½. This result is the only example of spin clearly determined by the position of atoms deposited on a glass slide.14

The above explanation is correct for silver atoms passed through the inhomogeneous magnetic fields of the Stern-Gerlach apparatus, but in the 1939 Rabi experimental apparatus15 (upon which modern molecular beam apparatus are modeled) the mechanism of deflection due to leading or trailing P-axes has nothing to do with the results achieved.

The 1939 Rabi experimental apparatus included back-to-back Stern-Gerlach inhomogeneous magnetic fields with opposite magnetic field orientations, but the result that dramatically changed physics, the accurate measurement of the Larmor frequency of nuclei, was done in a separate Rabi analyzer placed between the inhomogeneous magnetic fields. To Rabi, the importance of the Stern-Gerlach inhomogeneous magnets was for use in the alignment and tuning of the entire apparatus.

In a Rabi analyzer there is a strong constant magnetic field and a weaker transverse oscillating magnetic field. The purpose of the strong constant field is to decouple (increase the separation distance between) electrons and protons. The purpose of the transverse oscillating field is to stimulate the emission of photons by the decoupled protons.

When the Rabi apparatus is initially assembled, before installation of the Rabi analyzer, the Stern-Gerlach apparatus is set up and tuned such that the intensity of the molecular beam leaving the apparatus is equal to its intensity upon entering.

After the unpowered Rabi analyzer is mounted between the Stern-Gerlach magnets, and the molecular beam exiting the first inhomogeneous magnetic field passes through the Rabi analyzer and enters the second inhomogeneous magnetic field, the intensity of the molecular beam leaving the apparatus decreases. In this state the entire Rabi apparatus is tuned and adjusted until the intensity of the entering molecular beam is equal to the intensity of the exiting beam.

When the crossed magnetic fields of the Rabi analyzer are switched on, for a second time the intensity of the exiting beam decreases. Then, by adjustment of the relative positions and orientations of the three magnetic fields (and also adjustment of the detector position to optimally align with decoupled protons in the nucleus of interest) the intensity of the exiting beam is returned to its initial value.

During an operational run, the transverse oscillating field stimulates the emission of photons at the same frequency as that of the transverse oscillating magnetic field. The photon frequency at resonance is the Larmor frequency of the nucleus, and the Larmor frequency divided by the strong magnetic field strength is equal to the gyromagnetic ratio. The Larmor frequency has a very sharp resonant peak, limited only by the accuracy of the two experimental measurables: the intensity of the strong magnetic field and the frequency of the oscillating weak magnetic field.
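A minimal sketch of that reduction of the two measurables, with purely hypothetical numbers standing in for an actual resonance peak and field strength:

    B_strong = 0.40                      # tesla, assumed strong-field strength
    f_resonance = 7.0e6                  # Hz, assumed frequency of the sharp resonant peak
    larmor_frequency = f_resonance       # the photon frequency at resonance
    gyromagnetic_ratio = larmor_frequency / B_strong   # per the relation above
    print(gyromagnetic_ratio)            # 1.75e7 Hz per tesla for these assumed inputs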

The gyromagnetic ratios of Li6, Li7, and F19, experimentally determined by Rabi in 1939, agree with the 2014 INDC16 values to better than 1 part in 60,000. Importantly, measurements of the gyromagnetic ratios of Li6 and Li7 were made in three different lithium molecules (LiCl, LiF, and Li2) requiring three separate operational runs, thereby demonstrating the Rabi analyzer was adjusted to optimally detect the nucleus of interest.

Modern determinations of spin are based on various types of spectroscopy, the results of which stand out as peaks in the collected data.

The magnetic flux of nuclei with an even number of Pprotons and Nprotons circulates in flux loops between pairs of Pprotons and pairs of Nprotons, and such nuclei do not have magnetic moments. The flux loops within nuclei with an odd number of Pprotons and/or Nprotons do have magnetic moments. In order for all nuclei of the same isotope to have zero or non-zero magnetic moments of the same amplitude, it is necessary for the magnetic flux loops to be circulating in the same plane.

All of the 106 selected magnetic nuclear isotopes from Lithium to Uranium, including all stable isotopes with atomic number (Z) greater than 2, plus a number of important isotopes with relatively long half-lives, belong to one of twelve different Types. The Type is determined based on the spin of the isotope and the number of odd and even Pprotons and Nprotons.

An isotope contains an internal physical structure to which the property of magnetic moment correlates, but the magnetic moment is not entirely determined by the internal physical structure of a nucleus. The property of interactional spin is that portion of the magnetic moment due to factors external to the nucleus, including electromagnetic radiation, magnetic fields, electric fields and excitation energy.

Of significance to the present discussion, the detectable magnetic properties of 82 of the 106 selected isotopes (the relative spatial orientations of the flux loops associated with the Pprotons and Nprotons) can be manipulated by four different orientations of directed planar electric fields.

The magnetic signatures of the 106 selected isotopes can be sorted into twelve isotope Types with seven spin values.

Spin ½ isotopes with an odd number of Pprotons and even number of Nprotons are Type A-0. Of the 106 selected isotopes, 10 are Type A-0.

Spin ½ isotopes with an even number of Pprotons and odd number of Nprotons (odd/even Reversed) are Type RA-0. Of the 106 selected isotopes, 14 are Type RA-0.

Spin 1 isotopes with an odd number of Pprotons and an odd number of Nprotons are Type B-1. Of the 106 selected isotopes, 2 are Type B-1.

Spin 3/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-1. Of the 106 selected isotopes, 18 are Type C-1.

Spin 3/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-1. Of the 106 selected isotopes, 12 are Type RC-1.

Spin 5/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-2. Of the 106 selected isotopes, 13 are Type C-2.

Spin 5/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-2. Of the 106 selected isotopes, 11 are Type RC-2.

Spin 3 isotopes with an odd number of Pprotons and an odd number of Nprotons are Type B-3. Of the 106 selected isotopes, 2 are Type B-3.

Spin 7/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type A-3. Of the 106 selected isotopes, 9 are Type A-3.

Spin 7/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RA-3. Of the 106 selected isotopes, 8 are Type RA-3.

Spin 9/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-4. Of the 106 selected isotopes, 3 are Type C-4.

Spin 9/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-4. Of the 106 selected isotopes, 4 are Type RC-4.
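The classification just listed, together with the peak-counting rule used throughout (spin equals the number of peaks minus one, divided by 2), can be summarized in a small lookup; a minimal sketch:

    # Transcription of the twelve-Type classification above: the Type follows from
    # the spin and from which particle count (Pprotons, Nprotons) is odd.
    TYPE_TABLE = {
        (0.5, 'odd_P'): 'A-0',   (0.5, 'odd_N'): 'RA-0',
        (1.0, 'odd_both'): 'B-1',
        (1.5, 'odd_P'): 'C-1',   (1.5, 'odd_N'): 'RC-1',
        (2.5, 'odd_P'): 'C-2',   (2.5, 'odd_N'): 'RC-2',
        (3.0, 'odd_both'): 'B-3',
        (3.5, 'odd_P'): 'A-3',   (3.5, 'odd_N'): 'RA-3',
        (4.5, 'odd_P'): 'C-4',   (4.5, 'odd_N'): 'RC-4',
    }

    def isotope_type(p_protons, n_protons, spin):
        if p_protons % 2 and n_protons % 2:
            parity = 'odd_both'
        elif p_protons % 2:
            parity = 'odd_P'
        else:
            parity = 'odd_N'
        return TYPE_TABLE[(spin, parity)]

    def peaks(spin):
        # the document's counting rule: spin = (peaks - 1) / 2
        return int(2 * spin + 1)

    isotope_type(3, 4, 1.5), peaks(1.5)      # 3Li7 -> ('C-1', 4)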

Above, the horizontal line is in the inspection plane. The vertical line, the photon path to the Rabi analyzer, is parallel to the constant magnetic field. The circle indicates the diameter of the molecular beam, and the crosshairs indicate the velocity of the beam is directed into the paper.

A molecular beam is not needed for the operation of a Rabi analyzer; all that is required is for an analytical sample (gas or liquid phase), comprising a large number of molecules containing a larger number of nuclei enclosing an even larger number of particles, to be located at the intersection of the crosshairs.

The position of the horizontal inspection plane is irrelevant to Rabi analysis but it is crucial for spectroscopic analysis of flux loops.

Above left, the molecular beam (directed into the paper in the previous illustration) is directed from right to left, and the photon path to the Rabi analyzer is in the same location as in the previous illustration.

For spectroscopic analysis, the inspection plane is the plane defined by the direction the molecular beam formerly passed and the direction of the positive electric field when pointing up.

Above right, the inspection plane for spectroscopic analysis is labelled at each corner. The dashed line in place of the former position of the molecular beam is an orthogonal axis (OA) passing through the direction of the positive side of the electric field when pointing up (UP), and passing through the direction of the spectroscopic detectors (SD).

The intersection of OA, UP and SD is the location where the analytical sample (gas or liquid phase) is placed in the inspection plane. The electric field that orients particle Q-axes is in the inspection plane.

The detection of ten of the twelve Types of magnetic signatures (in the 106 selected isotopes) requires one of four alignments of directed electric fields: the positive side of the electric field pointing up, the positive side of the electric field pointing right, the positive side of the electric field pointing down, or the positive side of the electric field pointing left.

The four possible alignments of the electric field are illustrated on either side of the inspection plane (but in operation the entire breadth of the electric field points in the same direction) and the directed lines on the edges of the inspection plane represent the positions of thin wire cathodes that produce planar electric fields.

Prior to an operational run, the spectroscopic detectors are adjusted to optimally detect the magnetic properties of the isotope to be analyzed.

Above is a summary of isotope magnetic signatures.

Column 1 lists the twelve magnetic isotope Types.

In column 2, with the P-axes of particles oriented by a constant magnetic field directed up in the direction of the magnetic north pole and in the absence of a directed electric field, the magnetic signatures due to flipping odd Pproton P-axes (the arrow on the left of the vignette) and odd Nproton P-axes (the arrow on the right of the vignette) are illustrated.

See below, in the detailed discussion of Type B-1, for the reason there is a zero instead of an arrow in Types B-1 and B-3.

The magnetic signatures due to flux loops in the presence of the four orientations of an electric field are given in columns 3, 4, 5 and 6 for electric fields directed up, directed down, directed to the right, or directed to the left.

In illustrations of flux loop magnetic signatures, if the arrows are oriented up and down, the arrow on the left of the vignette represents the direction of Pproton flux loops and the arrow on the right represents the direction of Nproton flux loops; if the arrows are oriented left and right, the arrow on the top of the vignette represents the direction of Pproton flux loops and the arrow on the bottom represents the direction of Nproton flux loops.

In total there are six directed orthogonal planes in Cartesian space but only four of these are represented in columns 3, 4, 5 and 6. This omission is due to the elliptical planar shape of magnetic flux loops: the missing orientations provide edge-on views without detectable magnetic signatures.

Type A-0

7N15, with 7 Pprotons and 8 Nprotons, is the lowest atomic number Type A-0 isotope. In Type A-0 isotopes the flux loops associated with Pprotons and Nprotons lie in a directed Cartesian plane without detectable flux loop signatures.

In an analytical sample, 50% of the odd (unpaired) Pproton P-axes will be oriented in one direction and 50% in the opposite direction. The orientation of the magnetic axes of the odd Pproton are flipped by the transverse oscillating magnetic field and the spectroscopic detectors sense two different magnetic signatures resulting in two peaks corresponding to a spin of ½.

Above is the magnetic signature of Type A-0. The left arrow pointing up is the direction of the odd Pproton P-axis after emission of a photon (previously the constant magnetic field aligned the Pproton P-axis in this orientation, then absorption of intrinsic energy from the transverse oscillating magnetic field flipped the axis to pointing down then, due to the 180 degree rotation of the P-Q axes with respect to the direction of the G-axis, the absorbed intrinsic energy was released as a photon when the axis was flipped back to pointing up). The arrow pointing down is the antiparallel direction of the P-axis of a paired Nproton (which does not emit a photon).

The experimental detection of Type A-0 isotopes requires a constant magnetic field oriented in the direction of magnetic north.

Type RA-0

6C13, with 6 Pprotons and 7 Nprotons, is the lowest atomic number Type RA-0 isotope. In Type RA-0 isotopes the flux loops associated with Pprotons and Nprotons lie in a directed Cartesian plane without detectable flux loop signatures.

In an analytical sample, 50% of the odd (unpaired) Nproton P-axes will be oriented in one direction and 50% in the opposite direction. The orientation of the magnetic axes of the odd Nproton are flipped by the transverse oscillating magnetic field and the spectroscopic detectors produce two different magnetic signatures resulting in two peaks corresponding to a spin of ½.

Above is the magnetic signature of Type RA-0. The left arrow pointing up is the direction of the P-axis of a paired Pproton (which does not emit a photon). The right arrow pointing down is the direction of the odd Nproton P-axis after emission of a photon (previously the constant magnetic field aligned the Nproton P-axis in this orientation, then absorption of intrinsic energy from the transverse oscillating magnetic field flipped the axis to pointing up then, due to the 180 degree rotation of the P-Q axes with respect to the direction of the G-axis, the absorbed intrinsic energy was released as a photon when the axis was flipped back to pointing down).

The experimental detection of Type RA-0 isotopes requires a constant magnetic field oriented in the direction of magnetic north.

Type B-1

3Li6, with 3 Pprotons and 3 Nprotons, is the lowest atomic number Type B-1 isotope. In isotopes with an odd number of Pprotons and an odd number of Nprotons, the odd Pproton interacts with the electron in the odd Nproton, preventing electron-Nproton decoupling by the constant magnetic field, and the odd Nproton P-axis is unable to be flipped by the transverse oscillating magnetic field. The electron-Pproton pair, however, is decoupled, the orientation of the odd Pproton magnetic axis is flipped by the transverse oscillating magnetic field, and the spectroscopic detectors, adjusted to optimally recognize the magnetic signatures of 3Li6, sense one distinctive magnetic signature, resulting in one peak.

In Type B-1, the odd Nproton P-axis is unable to be flipped, and thus there is no magnetic signature due to the Nproton itself; but both the Nproton and the Pproton have associated flux loops, and spectroscopic detectors can sense the magnetic signatures of the flux loops in the presence of a directed electric field pointing up.

In the analysis of isotopes with detectable flux loop signatures there are four possible orientations of the directed electric fields. The magnetic flux loops associated with Type-1 isotopes are detectable if the directed electric field is pointing up. The magnetic flux loops associated with Type-2 isotopes are detectable if the directed electric field is pointing down. The magnetic flux loops associated with Type-3 isotopes are detectable if the directed electric field is pointing right. The magnetic flux loops associated with Type-4 isotopes are detectable if the directed electric field is pointing left.

Each of these directed electric field orientations requires a different experiment; therefore the results of five experiments (including one experiment without directed electric fields) are needed to fully establish the Type of an unknown isotope.

The flux loops circulating through particle P-axes can pass through all radial planes. The radial flux planes in the above diagram are in the plane of the paper, demonstrating that, when detected from opposite directions, flux loops will be CW (directed right-left) or CCW (directed left-right).

Since Pprotons and Nprotons are oppositely aligned, a CW Pproton signature is identical to an Nproton CCW signature, and a CCW Pproton signature is identical to an Nproton CW signature.

Because the magnetic signatures of the particles in the field of view of a detector are differently oriented, on average 50% of the flux loop magnetic signatures will be CW and 50% CCW. Of the 50% that are CW, half (25% of the total) will be due to Pprotons and half (25%) due to Nprotons; likewise, of the 50% that are CCW, 25% will be due to Pprotons and 25% due to Nprotons.

Thus, there will be two different magnetic signatures resulting in two peaks, but we are unable to distinguish which is due to CW Pproton flux loops or CCW Nproton flux loops, and which is due to CCW Pproton flux loops or CW Nproton flux loops.

In Type B-1, the magnetic signature due to the odd Pproton (experimentally determined in the absence of an electric field) has one peak, and the magnetic signature due to flux loops associated with Pprotons and Nprotons (experimentally determined in an electric field oriented parallel to the magnetic field) has two peaks, totaling three peaks corresponding to a spin of 1.

Here we come to a fundamental issue. Is the uncertainty in situations involving linked physical properties (complementarity) described by probability, or is it caused by probability? In 1925 Werner Heisenberg theorized that this type of uncertainty was caused by probability, and that opinion became, along with intrinsic spin, an important part of the foundation of new quantum theory.

In nature, the orientation of the magnetic signatures of isotopes and the orientation of the nuclei containing the particles responsible for the magnetic signatures are random. The magnetic signatures due to a large number of randomly oriented particles are indistinguishable from background noise, but under the proper experimental conditions, the magnetic signatures are discernable.

The magnetic signatures of flux loops, imperceptible in nature, are perceptible when the Q-axes of the associated particles are aligned.

A constant magnetic field is not needed to detect the magnetic signatures of flux loops, but the inspection plane used to detect them is in the identical position as in the Rabi analyzer, and the directed orthogonal plane pointing up in the direction of magnetic north in the Rabi analyzer is identical to the directed orthogonal plane pointing up in the direction of the positive electric field in the flux loop analyzer; that is, the direction of the electric field is parallel to the magnetic field.

Therefore, even though the magnetic field is not needed to detect the magnetic signatures of flux loops, if the magnetic field is present in addition to the directed electric field, its presence would not alter the experimental results, but it might provide additional information.

Here is a prediction of the present theory. If the experiment detecting the magnetic signature of Type B-1 is conducted in the presence of a constant magnetic field and a directed electric field pointing up, that one experiment will determine the magnetic signatures shown above plus two additional signatures: (1) the magnetic signature due to CW Pproton flux loops and CCW Nproton flux loops and (2) the magnetic signature due to CW Nproton flux loops and CCW Pproton flux loops.

This result would demonstrate that the uncertainty in at least one situation involving linked physical properties is described by probability but is not caused by probability. This experiment, and others yet to be devised, will overturn the concept of causation by probability and validate Einstein’s intuition that God “does not play dice with the universe.”17

Type C-1

3Li7, with 3 Pprotons and 4 Nprotons, is the lowest atomic number Type C-1 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. In total, Type C-1 isotopes have four peaks corresponding to a spin of 3/2.

Type RC-1

4Be9, with 4 Pprotons and 5 Nprotons, is the lowest atomic number Type RC-1 isotope.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. In total, Type RC-1 isotopes have four peaks corresponding to a spin of 3/2.

Type C-2

13Al27, with 13 Pprotons and 14 Nprotons, is the lowest atomic number Type C-2 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks.

In the identification of Type C-2, the flux-loop signature of an odd particle, determined in an electric field pointing down, has two peaks. In total, Type C-2 isotopes have six peaks corresponding to a spin of 5/2.

Type RC-2

8O17, with 8 Pprotons and 9 Nprotons, is the lowest atomic number Type RC-2 isotope. 8O17 has one odd Nproton and no odd Pprotons.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks.

In the identification of Type RC-2, the flux-loop signature of an odd particle, determined in an electric field pointing down, has two peaks. In total, Type RC-2 isotopes have six peaks corresponding to a spin of 5/2.

Type B-3

5B10, with 5 Pprotons and 5 Nprotons, is the lowest atomic number Type B-3 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the flux-loop signature of an odd particle, determined in an electric field pointing down, has two peaks.

In the identification of Type B-3, the odd Pproton flux-loop signature, determined in an electric field pointing right, has two peaks. In total, Type B-3 isotopes have seven peaks corresponding to a spin of 3.

Type A-3

21Sc45, with 21 Pprotons and 24 Nprotons, is the lowest atomic number Type A-3 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the flux-loop signature of an odd particle, determined in an electric field pointing down, has two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In total, Type A-3 isotopes have eight peaks corresponding to a spin of 7/2.

Type RA-3

20Ca43, with 20 Pprotons and 23 Nprotons, is the lowest atomic number Type RA-3 isotope.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type RC-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In total, Type RA-3 isotopes have eight peaks corresponding to a spin of 7/2.

Type C-4

41Nb93, with 41 Pprotons and 52 Nprotons, is the lowest atomic number Type C-4 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In the identification of Type C-4, the odd Nproton flux loops, determined in an electric field pointing left, have two peaks. In total, Type C-4 isotopes have ten peaks corresponding to a spin of 9/2.

Type RC-4

32Ge73, with 32 Pprotons and 41 Nprotons, is the lowest atomic number Type RC-4 isotope.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type RC-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In the identification of Type RC-4, the odd Nproton flux loops, determined in an electric field pointing left, have two peaks. In total, Type RC-4 isotopes have ten peaks corresponding to a spin of 9/2.


Isotope | Z | N | Z+N | Spin | Peaks | Type
7N15 | 7 | 8 | 15 | 0.5 | 2 | A-0
9F19 | 9 | 10 | 19 | 0.5 | 2 | A-0
15P31 | 15 | 16 | 31 | 0.5 | 2 | A-0
39Y89 | 39 | 50 | 89 | 0.5 | 2 | A-0
45Rh103 | 45 | 58 | 103 | 0.5 | 2 | A-0
47Ag109 | 47 | 62 | 109 | 0.5 | 2 | A-0
47Ag107 | 47 | 60 | 107 | 0.5 | 2 | A-0
69Tm169 | 69 | 100 | 169 | 0.5 | 2 | A-0
81Tl203 | 81 | 122 | 203 | 0.5 | 2 | A-0
81Tl205 | 81 | 124 | 205 | 0.5 | 2 | A-0
6C13 | 6 | 7 | 13 | 0.5 | 2 | RA-0
14Si29 | 14 | 15 | 29 | 0.5 | 2 | RA-0
26Fe57 | 26 | 31 | 57 | 0.5 | 2 | RA-0
34Se77 | 34 | 43 | 77 | 0.5 | 2 | RA-0
48Cd111 | 48 | 63 | 111 | 0.5 | 2 | RA-0
50Sn117 | 50 | 67 | 117 | 0.5 | 2 | RA-0
50Sn115 | 50 | 65 | 115 | 0.5 | 2 | RA-0
52Te125 | 52 | 73 | 125 | 0.5 | 2 | RA-0
54Xe129 | 54 | 75 | 129 | 0.5 | 2 | RA-0
74W183 | 74 | 109 | 183 | 0.5 | 2 | RA-0
76Os187 | 76 | 111 | 187 | 0.5 | 2 | RA-0
78Pt195 | 78 | 117 | 195 | 0.5 | 2 | RA-0
80Hg199 | 80 | 119 | 199 | 0.5 | 2 | RA-0
82Pb207 | 82 | 125 | 207 | 0.5 | 2 | RA-0

3Li6 | 3 | 3 | 6 | 1.0 | 3 | B-1
7N14 | 7 | 7 | 14 | 1.0 | 3 | B-1

3Li7 | 3 | 4 | 7 | 1.5 | 4 | C-1
5B11 | 5 | 6 | 11 | 1.5 | 4 | C-1
11Na23 | 11 | 12 | 23 | 1.5 | 4 | C-1
17Cl35 | 17 | 18 | 35 | 1.5 | 4 | C-1
17Cl37 | 17 | 20 | 37 | 1.5 | 4 | C-1
19K39 | 19 | 20 | 39 | 1.5 | 4 | C-1
19K41 | 19 | 22 | 41 | 1.5 | 4 | C-1
29Cu63 | 29 | 34 | 63 | 1.5 | 4 | C-1
29Cu65 | 29 | 36 | 65 | 1.5 | 4 | C-1
31Ga69 | 31 | 38 | 69 | 1.5 | 4 | C-1
31Ga71 | 31 | 40 | 71 | 1.5 | 4 | C-1
33As75 | 33 | 42 | 75 | 1.5 | 4 | C-1
35Br79 | 35 | 44 | 79 | 1.5 | 4 | C-1
35Br81 | 35 | 46 | 81 | 1.5 | 4 | C-1
65Tb159 | 65 | 94 | 159 | 1.5 | 4 | C-1
77Ir193 | 77 | 116 | 193 | 1.5 | 4 | C-1
77Ir191 | 77 | 114 | 191 | 1.5 | 4 | C-1
79Au197 | 79 | 118 | 197 | 1.5 | 4 | C-1
4Be9 | 4 | 5 | 9 | 1.5 | 4 | RC-1
10Ne21 | 10 | 11 | 21 | 1.5 | 4 | RC-1
16S33 | 16 | 17 | 33 | 1.5 | 4 | RC-1
24Cr53 | 24 | 29 | 53 | 1.5 | 4 | RC-1
28Ni61 | 28 | 33 | 61 | 1.5 | 4 | RC-1
54Xe131 | 54 | 77 | 131 | 1.5 | 4 | RC-1
56Ba135 | 56 | 79 | 135 | 1.5 | 4 | RC-1
56Ba137 | 56 | 81 | 137 | 1.5 | 4 | RC-1
64Gd155 | 64 | 91 | 155 | 1.5 | 4 | RC-1
64Gd157 | 64 | 93 | 157 | 1.5 | 4 | RC-1
76Os189 | 76 | 113 | 189 | 1.5 | 4 | RC-1
80Hg201 | 80 | 121 | 201 | 1.5 | 4 | RC-1

13Al27 | 13 | 14 | 27 | 2.5 | 6 | C-2
25Mn51 | 25 | 26 | 51 | 2.5 | 6 | C-2
25Mn55 | 25 | 30 | 55 | 2.5 | 6 | C-2
37Rb85 | 37 | 48 | 85 | 2.5 | 6 | C-2
51Sb121 | 51 | 70 | 121 | 2.5 | 6 | C-2
53I127 | 53 | 74 | 127 | 2.5 | 6 | C-2
59Pr141 | 59 | 82 | 141 | 2.5 | 6 | C-2
61Pm145 | 61 | 84 | 145 | 2.5 | 6 | C-2
63Eu151 | 63 | 88 | 151 | 2.5 | 6 | C-2
63Eu153 | 63 | 90 | 153 | 2.5 | 6 | C-2
75Re185 | 75 | 110 | 185 | 2.5 | 6 | C-2
8O17 | 8 | 9 | 17 | 2.5 | 6 | RC-2
12Mg25 | 12 | 13 | 25 | 2.5 | 6 | RC-2
22Ti47 | 22 | 25 | 47 | 2.5 | 6 | RC-2
30Zn67 | 30 | 37 | 67 | 2.5 | 6 | RC-2
40Zr91 | 40 | 51 | 91 | 2.5 | 6 | RC-2
42Mo95 | 42 | 53 | 95 | 2.5 | 6 | RC-2
42Mo97 | 42 | 55 | 97 | 2.5 | 6 | RC-2
44Ru101 | 44 | 57 | 101 | 2.5 | 6 | RC-2
44Ru99 | 44 | 55 | 99 | 2.5 | 6 | RC-2
46Pd105 | 46 | 59 | 105 | 2.5 | 6 | RC-2
66Dy161 | 66 | 95 | 161 | 2.5 | 6 | RC-2
66Dy163 | 66 | 97 | 163 | 2.5 | 6 | RC-2
70Yb173 | 70 | 103 | 173 | 2.5 | 6 | RC-2

5B10 | 5 | 5 | 10 | 3.0 | 7 | B-3
11Na22 | 11 | 11 | 22 | 3.0 | 7 | B-3

21Sc45 | 21 | 24 | 45 | 3.5 | 8 | A-3
23V51 | 23 | 28 | 51 | 3.5 | 8 | A-3
27Co59 | 27 | 32 | 59 | 3.5 | 8 | A-3
51Sb123 | 51 | 72 | 123 | 3.5 | 8 | A-3
55Cs133 | 55 | 78 | 133 | 3.5 | 8 | A-3
57La139 | 57 | 82 | 139 | 3.5 | 8 | A-3
67Ho165 | 67 | 98 | 165 | 3.5 | 8 | A-3
71Lu175 | 71 | 104 | 175 | 3.5 | 8 | A-3
73Ta181 | 73 | 108 | 181 | 3.5 | 8 | A-3
20Ca43 | 20 | 23 | 43 | 3.5 | 8 | RA-3
22Ti49 | 22 | 27 | 49 | 3.5 | 8 | RA-3
60Nd143 | 60 | 83 | 143 | 3.5 | 8 | RA-3
60Nd145 | 60 | 85 | 145 | 3.5 | 8 | RA-3
62Sm149 | 62 | 87 | 149 | 3.5 | 8 | RA-3
68Er167 | 68 | 99 | 167 | 3.5 | 8 | RA-3
72Hf177 | 72 | 105 | 177 | 3.5 | 8 | RA-3
92U235 | 92 | 143 | 235 | 3.5 | 8 | RA-3

41Nb93 | 41 | 52 | 93 | 4.5 | 10 | C-4
49In113 | 49 | 64 | 113 | 4.5 | 10 | C-4
83Bi209 | 83 | 126 | 209 | 4.5 | 10 | C-4
32Ge73 | 32 | 41 | 73 | 4.5 | 10 | RC-4
36Kr83 | 36 | 47 | 83 | 4.5 | 10 | RC-4
38Sr87 | 38 | 49 | 87 | 4.5 | 10 | RC-4
72Hf179 | 72 | 107 | 179 | 4.5 | 10 | RC-4

Isotope | Z | N | Z+N | Spin | Peaks | Type
3Li6 | 3 | 3 | 6 | 1.0 | 3 | B-1
3Li7 | 3 | 4 | 7 | 1.5 | 4 | C-1
4Be9 | 4 | 5 | 9 | 1.5 | 4 | RC-1
5B10 | 5 | 5 | 10 | 3.0 | 7 | B-3
5B11 | 5 | 6 | 11 | 1.5 | 4 | C-1
6C13 | 6 | 7 | 13 | 0.5 | 2 | RA-0
7N14 | 7 | 7 | 14 | 1.0 | 3 | B-1
7N15 | 7 | 8 | 15 | 0.5 | 2 | A-0
8O17 | 8 | 9 | 17 | 2.5 | 6 | RC-2
9F19 | 9 | 10 | 19 | 0.5 | 2 | A-0
10Ne21 | 10 | 11 | 21 | 1.5 | 4 | RC-1
11Na23 | 11 | 12 | 23 | 1.5 | 4 | C-1
11Na22 | 11 | 11 | 22 | 3.0 | 7 | B-3
12Mg25 | 12 | 13 | 25 | 2.5 | 6 | RC-2
13Al27 | 13 | 14 | 27 | 2.5 | 6 | C-2
14Si29 | 14 | 15 | 29 | 0.5 | 2 | RA-0
15P31 | 15 | 16 | 31 | 0.5 | 2 | A-0
16S33 | 16 | 17 | 33 | 1.5 | 4 | RC-1
17Cl35 | 17 | 18 | 35 | 1.5 | 4 | C-1
17Cl37 | 17 | 20 | 37 | 1.5 | 4 | C-1
19K39 | 19 | 20 | 39 | 1.5 | 4 | C-1
19K41 | 19 | 22 | 41 | 1.5 | 4 | C-1
20Ca43 | 20 | 23 | 43 | 3.5 | 8 | RA-3
21Sc45 | 21 | 24 | 45 | 3.5 | 8 | A-3
22Ti47 | 22 | 25 | 47 | 2.5 | 6 | RC-2
22Ti49 | 22 | 27 | 49 | 3.5 | 8 | RA-3
23V51 | 23 | 28 | 51 | 3.5 | 8 | A-3
24Cr53 | 24 | 29 | 53 | 1.5 | 4 | RC-1
25Mn51 | 25 | 26 | 51 | 2.5 | 6 | C-2
25Mn55 | 25 | 30 | 55 | 2.5 | 6 | C-2
26Fe57 | 26 | 31 | 57 | 0.5 | 2 | RA-0
27Co59 | 27 | 32 | 59 | 3.5 | 8 | A-3
28Ni61 | 28 | 33 | 61 | 1.5 | 4 | RC-1
29Cu63 | 29 | 34 | 63 | 1.5 | 4 | C-1
29Cu65 | 29 | 36 | 65 | 1.5 | 4 | C-1
30Zn67 | 30 | 37 | 67 | 2.5 | 6 | RC-2
31Ga69 | 31 | 38 | 69 | 1.5 | 4 | C-1
31Ga71 | 31 | 40 | 71 | 1.5 | 4 | C-1
32Ge73 | 32 | 41 | 73 | 4.5 | 10 | RC-4
33As75 | 33 | 42 | 75 | 1.5 | 4 | C-1
34Se77 | 34 | 43 | 77 | 0.5 | 2 | RA-0
35Br79 | 35 | 44 | 79 | 1.5 | 4 | C-1
35Br81 | 35 | 46 | 81 | 1.5 | 4 | C-1
36Kr83 | 36 | 47 | 83 | 4.5 | 10 | RC-4
37Rb85 | 37 | 48 | 85 | 2.5 | 6 | C-2
38Sr87 | 38 | 49 | 87 | 4.5 | 10 | RC-4
39Y89 | 39 | 50 | 89 | 0.5 | 2 | A-0
40Zr91 | 40 | 51 | 91 | 2.5 | 6 | RC-2
41Nb93 | 41 | 52 | 93 | 4.5 | 10 | C-4
42Mo95 | 42 | 53 | 95 | 2.5 | 6 | RC-2
42Mo97 | 42 | 55 | 97 | 2.5 | 6 | RC-2
44Ru101 | 44 | 57 | 101 | 2.5 | 6 | RC-2
44Ru99 | 44 | 55 | 99 | 2.5 | 6 | RC-2
45Rh103 | 45 | 58 | 103 | 0.5 | 2 | A-0
46Pd105 | 46 | 59 | 105 | 2.5 | 6 | RC-2
47Ag107 | 47 | 60 | 107 | 0.5 | 2 | A-0
47Ag109 | 47 | 62 | 109 | 0.5 | 2 | A-0
48Cd111 | 48 | 63 | 111 | 0.5 | 2 | RA-0
49In113 | 49 | 64 | 113 | 4.5 | 10 | C-4
50Sn115 | 50 | 65 | 115 | 0.5 | 2 | RA-0
50Sn117 | 50 | 67 | 117 | 0.5 | 2 | RA-0
51Sb121 | 51 | 70 | 121 | 2.5 | 6 | C-2
51Sb123 | 51 | 72 | 123 | 3.5 | 8 | A-3
52Te125 | 52 | 73 | 125 | 0.5 | 2 | RA-0
53I127 | 53 | 74 | 127 | 2.5 | 6 | C-2
54Xe129 | 54 | 75 | 129 | 0.5 | 2 | RA-0
54Xe131 | 54 | 77 | 131 | 1.5 | 4 | RC-1
55Cs133 | 55 | 78 | 133 | 3.5 | 8 | A-3
56Ba135 | 56 | 79 | 135 | 1.5 | 4 | RC-1
56Ba137 | 56 | 81 | 137 | 1.5 | 4 | RC-1
57La139 | 57 | 82 | 139 | 3.5 | 8 | A-3
59Pr141 | 59 | 82 | 141 | 2.5 | 6 | C-2
60Nd143 | 60 | 83 | 143 | 3.5 | 8 | RA-3
60Nd145 | 60 | 85 | 145 | 3.5 | 8 | RA-3
61Pm145 | 61 | 84 | 145 | 2.5 | 6 | C-2
62Sm149 | 62 | 87 | 149 | 3.5 | 8 | RA-3
63Eu151 | 63 | 88 | 151 | 2.5 | 6 | C-2
63Eu153 | 63 | 90 | 153 | 2.5 | 6 | C-2
64Gd155 | 64 | 91 | 155 | 1.5 | 4 | RC-1
64Gd157 | 64 | 93 | 157 | 1.5 | 4 | RC-1
65Tb159 | 65 | 94 | 159 | 1.5 | 4 | C-1
66Dy161 | 66 | 95 | 161 | 2.5 | 6 | RC-2
66Dy163 | 66 | 97 | 163 | 2.5 | 6 | RC-2
67Ho165 | 67 | 98 | 165 | 3.5 | 8 | A-3
68Er167 | 68 | 99 | 167 | 3.5 | 8 | RA-3
69Tm169 | 69 | 100 | 169 | 0.5 | 2 | A-0
70Yb173 | 70 | 103 | 173 | 2.5 | 6 | RC-2
71Lu175 | 71 | 104 | 175 | 3.5 | 8 | A-3
72Hf177 | 72 | 105 | 177 | 3.5 | 8 | RA-3
72Hf179 | 72 | 107 | 179 | 4.5 | 10 | RC-4
73Ta181 | 73 | 108 | 181 | 3.5 | 8 | A-3
74W183 | 74 | 109 | 183 | 0.5 | 2 | RA-0
75Re185 | 75 | 110 | 185 | 2.5 | 6 | C-2
76Os187 | 76 | 111 | 187 | 0.5 | 2 | RA-0
76Os189 | 76 | 113 | 189 | 1.5 | 4 | RC-1
77Ir191 | 77 | 114 | 191 | 1.5 | 4 | C-1
77Ir193 | 77 | 116 | 193 | 1.5 | 4 | C-1
78Pt195 | 78 | 117 | 195 | 0.5 | 2 | RA-0
79Au197 | 79 | 118 | 197 | 1.5 | 4 | C-1
80Hg199 | 80 | 119 | 199 | 0.5 | 2 | RA-0
80Hg201 | 80 | 121 | 201 | 1.5 | 4 | RC-1
81Tl203 | 81 | 122 | 203 | 0.5 | 2 | A-0
81Tl205 | 81 | 124 | 205 | 0.5 | 2 | A-0
82Pb207 | 82 | 125 | 207 | 0.5 | 2 | RA-0
83Bi209 | 83 | 126 | 209 | 4.5 | 10 | C-4
92U235 | 92 | 143 | 235 | 3.5 | 8 | RA-3

In GAP, the gyromagnetic ratio of a nucleus is equal to the product of the INDC isotope g-factor and the CODATA nuclear magneton divided by the product of the INDC intrinsic spin and the CODATA reduced Planck constant, and the magnetic moment of a nucleus is equal to the product of the INDC isotope g-factor and the CODATA nuclear magneton.
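
Written symbolically, with g the INDC isotope g-factor, mu_N the CODATA nuclear magneton, I the INDC intrinsic spin and hbar the reduced Planck constant, the two GAP relations just stated are:

    \gamma = \frac{g\,\mu_N}{I\,\hbar}, \qquad \mu = g\,\mu_N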

In discrete physics, the magnetic moment of a nucleus is equal to the product of two times the interactional spin (converts spin to number of odd Pprotons and/or odd Nprotons), the kinetic steric factor (converts molecular beam thermal energy into Joules), Lambda-bar, and the GAP value for the gyromagnetic ratio (assumed correct).

In the 106 isotopes tested, the ratio of the INDC isotope magnetic moment divided by the value denominated in discrete units is equal to 1.0288816.

The difference can be narrowed by adjustment but cannot be eliminated because CODATA constants are not exactly reconciled.

Part Four

Particle acceleration

Einstein believed mass was constant, and many of his revolutionary discoveries were based on that concept. Constancy of mass is an eminently reasonable assumption because the Newtonian equations are also founded on mass conservation, and in the majority of situations those equations accurately predict the observables. But in fact, as Newton himself succinctly acknowledged in his letter to Richard Bentley, his equations do not correspond to physical reality.18

Einstein also believed the speed of light was constant and since kinetic energy is proportional to mass and velocity, he concluded that the mass of a particle increases with velocity and approaches (but never reaches) a maximum value as the velocity approaches the speed of light. In special relativity he was able to derive, in a few simple equations, the relativistic momentum and energy (mass-energy) of a particle.

In general relativity, Einstein’s field equations described the curvature of space-time in intense gravitational fields in agreement with the measured value for the precession of the perihelion of Mercury. It seems likely the field equations were derived with that result in mind. Even so, this approach is eminently justifiable because measurables are valid assumptions for a physical theory.

Einstein’s prediction that the curvature of space-time in intense gravitational fields was not only responsible for the precession of the perihelion of Mercury but would also bend rays of light was verified in two astronomical expeditions led by Arthur Eddington and Andrew Crommelin. Their observations were acclaimed as verification of general relativity and today the curvature of space-time is considered by most scientists to be undisputed.

Unfortunately, this undisputed theory cannot determine the velocity of a relativistically accelerated electron or proton and does not provide a mechanism for the increase in energy and mass (mass-energy).

The present theory derives the velocity and mass-energy of accelerated electrons and protons, and provides a mechanism.

In particle acceleration, charged particles are electrostatically formed into a linear beam and accelerated, then injected into a circular accelerator (or cyclotron) where they are magnetically formed into a circular beam and further accelerated by oscillating magnetic fields. Particle acceleration in linear and circular beams is mediated by chirality meshing interactions.

An electrostatic voltage is the emission of quantons:

  • In electrostatic acceleration of negatively charged particles between a negative cathode on the left emitting CCW quantons and a positive anode on the right emitting CW quantons, chirality meshing absorptions of CCW quantons result in repulsive deflections (voltage acceleration) to the right and chirality meshing absorptions of CW quantons result in attractive deflections (voltage acceleration) to the right.
  • If positively charged particles are between a negative cathode on the left emitting CCW quantons and a positive anode on the right emitting CW quantons, chirality meshing absorptions of CCW quantons result in attractive deflections (voltage acceleration) to the left and chirality meshing absorptions of CW quantons result in repulsive deflections (voltage acceleration) to the left.

Quantons are also produced transverse to a magnetic field with CCW quantons emitted by the magnetic North pole and CW quantons emitted by the magnetic South pole:

  • In acceleration by a transverse oscillating magnetic field, charged particles are alternately pushed (repulsively deflected) from one direction and pulled (attractively deflected) from the opposite direction.
  • Negatively charged particles are alternately pushed (deflected in the direction of the positive anode) due to the absorption of CCW quantons and pulled (deflected in the direction of the positive anode) due to the absorption of CW quantons.
  • Positively charged particles are alternately pulled (deflected in the direction of the negative cathode) due to the absorption of CCW quantons, and pushed (deflected in the direction of the negative cathode) due to the absorption of CW quantons.

In either case (electrostatic voltage or oscillating magnetic voltage) the energy of simultaneous acceleration by oppositely directed voltages is proportional to the square of the voltage.

A chirality meshing absorption of a quanton increases the intrinsic energy of a particle and produces an intrinsic deflection that increases the particle velocity. Like kinetic acceleration, an intrinsic deflection increases the velocity but does so without the dissipation of kinetic energy.

The number of particles and quantons is directly proportional to the intrinsic Josephson constant: 3.0000E15 quantons are absorbed by 3.0000E15 particles per second per Volt. At 400 Volts 1.2000E18 quantons are absorbed by 1.2000E18 particles per second; and at 250,000 Volts 7.5000E20 quantons are absorbed by 7.5000E20 particles per second.
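
As a quick arithmetic check of the quoted counts (a sketch using only the figures stated above):

    quantons_per_volt_per_second = 3.0000e15   # intrinsic Josephson constant, as stated above

    for volts in (400, 250_000):
        print(volts, quantons_per_volt_per_second * volts)
    # 400 V -> 1.2000e18 per second; 250,000 V -> 7.5000e20 per second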

Each quanton absorption produces a deflection (acceleration) equal to the square root of Lambda-bar divided by the particle amplitude. Quanton absorption by an electron produces a deflection of 2.5327E-18 meters, and quanton absorption by a proton produces a deflection of 2.0680E-19 meters.
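
A quick consistency check using only the two numbers quoted above: the proton deflection equals the electron deflection divided by the square root of the proton amplitude (150, as derived in Part One), matching the later statement that discrete proton quantities scale by the square root of 150.

    from math import sqrt

    electron_deflection = 2.5327e-18   # m, quoted above
    proton_amplitude = 150             # Part One value

    print(electron_deflection / sqrt(proton_amplitude))   # ~2.068e-19 m, the quoted proton deflection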

The number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar. The intrinsic energy absorbed by a particle in a chirality meshing interaction is equal to the product of the number of chirality meshing interactions and Lambda-bar, divided by the number of particles. The accelerated particle intrinsic energy is equal to the sum of the particle intrinsic energy plus the intrinsic energy absorbed by the particle in a chirality meshing interaction.

The kinetic mass-energy in units of Joule is equal to the product of the accelerated particle intrinsic energy, the square of the photon velocity, and the ratio of the discrete Planck constant divided by Lambda-bar.

Electron acceleration

Below left, the GAP equation for electron velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the CODATA elementary charge (units of Coulomb) and the voltage, divided by the CODATA electron mass (units of kilogram).

Above right, the discrete equation for electron velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the charge intrinsic energy and the voltage, divided by the electron intrinsic energy.

The velocity calculated by the GAP equation is higher than the discrete equation by a factor of 1.007697. The difference can be narrowed by adjustment but cannot be eliminated because CODATA constants are not reconciled.
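
For reference, the GAP expression described above can be evaluated directly with CODATA values; the sketch below uses a sample voltage of 100 V (the discrete equation is not evaluated numerically here because its inputs, the charge and electron intrinsic energies, are defined elsewhere in the text).

    from math import sqrt

    e = 1.602176634e-19       # CODATA elementary charge, C
    m_e = 9.1093837015e-31    # CODATA electron mass, kg

    def gap_electron_velocity(volts):
        # v = sqrt(2 e V / m_e)
        return sqrt(2 * e * volts / m_e)

    print(gap_electron_velocity(100))   # ~5.93e6 m/s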

The analysis of electron acceleration includes a range of ten voltages between a minimum voltage and a maximum voltage. The maximum voltage is equal to a few millivolts less than the theoretical voltage required to accelerate an electron to the photon velocity (an impossibility), which, if calculated to fifteen significant digits, is 259807.621135332 Volts.

Top row column 1, the voltages used in this example analysis are 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts. The highest voltage, calculated to thirteen significant digits, exactly converts to the photon velocity (an impossibility) to eleven significant digits but is less than the photon velocity (the correct result) at twelve significant digits (this is an excellent example of a discretely exact property).

The equations following, calculations for 100 Volts, are identical to the equations for any other of the nine voltages, or for any other range of ten voltages greater than zero and less than the theoretical maximum.

Top row column 2, the calculated electron velocity per the discrete equation.

Top row column 3, the number of accelerated (deflected) electrons is equal to the ratio of the voltage divided by the intrinsic electron magnetic flux quantum.

Top row column 4, the deflection per quanton is equal to the square root of Lambda-bar divided by the electron amplitude.

This is the deflection of a chirality meshing interaction between a quanton and an electron.

Bottom row column 1, the number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar.

Bottom row column 2, the increase in intrinsic energy per electron due to chirality meshing interactions, equal to the product of the number of chirality meshing interactions and Lambda-bar divided by the number of electrons, is denominated in units of Einstein.

Bottom row column 3, the accelerated electron energy is equal to the sum of the electron intrinsic energy and the increase in intrinsic energy per electron.

Bottom row column 4, the mass-energy in units of Joule is equal to the product of the accelerated electron intrinsic energy, the square of the photon velocity and the ratio of the discrete Planck constant divided by Lambda-bar.
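
The eight quantities just described can be collected into a single structural sketch. The function below only restates the steps above; every discrete constant (Lambda-bar, the intrinsic electron magnetic flux quantum, the charge and electron intrinsic energies, the photon velocity and the discrete Planck constant) is left as a caller-supplied parameter because the numerical values are defined elsewhere in the text, and the reading of the deflection follows the consistency check made earlier.

    from math import sqrt

    def electron_acceleration_row(voltage, lambda_bar, flux_quantum,
                                  charge_intrinsic_energy, electron_intrinsic_energy,
                                  photon_velocity, discrete_planck,
                                  electron_amplitude=1):
        # Structural sketch of the table row described above; all constants are
        # supplied by the caller because the discrete values are defined elsewhere.
        velocity = sqrt(2 * charge_intrinsic_energy * voltage / electron_intrinsic_energy)  # top row, column 2
        n_electrons = voltage / flux_quantum                        # top row, column 3
        deflection = sqrt(lambda_bar / electron_amplitude)          # top row, column 4
        n_interactions = voltage**2 / sqrt(lambda_bar)              # bottom row, column 1
        delta_energy = n_interactions * lambda_bar / n_electrons    # bottom row, column 2 (Einstein)
        accelerated_energy = electron_intrinsic_energy + delta_energy   # bottom row, column 3
        mass_energy_joule = accelerated_energy * photon_velocity**2 * discrete_planck / lambda_bar  # bottom row, column 4
        return (velocity, n_electrons, deflection, n_interactions,
                delta_energy, accelerated_energy, mass_energy_joule)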

Proton acceleration

The analysis of proton acceleration includes a range of ten voltages between a minimum voltage and a maximum voltage. For purposes of comparison, we specify the same voltages as used for the electron.

The theoretical voltage required to accelerate a proton to the photon velocity (an impossibility) is 38971143.1702997 Volts. Any voltage less than this theoretical maximum will accelerate a proton to less than the photon velocity.

The voltage range used in this example analysis is 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts. The equations below, the calculations for 100 Volts, are identical to the equations for any other accelerating voltage range greater than zero and less than the theoretical maximum.

Below left, the GAP equation for proton velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the CODATA elementary charge (units of Coulomb) and the voltage, divided by the CODATA proton mass (units of kilogram).

Above right, the discrete equation for proton velocity, due to electrostatic or electromagnetic voltage, is equal to the square root of the ratio of the product of 2, the charge intrinsic energy (in units of intrinsic Volt) and the voltage, divided by the proton intrinsic energy (in units of Einstein).

The discrete proton velocity is lower than the discrete electron velocity by the square root of 150 (the square root of the proton amplitude).

The equations below, calculations for 100 Volts, are identical to the equations for any other of the nine voltages, or for any other range of ten voltages greater than zero and less than the theoretical maximum.

Top row column 1, the voltages used in this example analysis are 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts, the same range used for the electron; each voltage is well below the proton theoretical maximum.

Top row column 2, the calculated proton velocity per the discrete equation.

Top row column 3, the number of accelerated (deflected) protons is equal to the ratio of the voltage divided by the intrinsic electron magnetic flux quantum.

Top row column 4, the deflection per quanton is equal to the square root of Lambda-bar divided by the proton amplitude.

This is the deflection of a chirality meshing interaction between a quanton and a proton.

Bottom row column 1, the number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar.

Bottom row column 2, the increase in intrinsic energy per proton due to chirality meshing interactions, equal to the product of the number of chirality meshing interactions and Lambda-bar divided by the number of protons, is denominated in units of Einstein.

Bottom row column 3, the accelerated proton energy is equal to the sum of the intrinsic proton energy and the increase in intrinsic energy per proton.

Bottom row column 4, the mass-energy in units of Joule is equal to the product of the accelerated proton intrinsic energy, the square of the photon velocity and the ratio of the discrete Planck constant divided by Lambda-bar.

Part Five

Atomic Spectra

The Rydberg equations correspond to high accuracy with the hydrogen spectral series and the Newtonian equations correspond to high accuracy with orbital motion but, despite many years of considerable effort, physicists have been unable to account for the spectrum of helium or for non-Newtonian stellar rotation curves.

Previously, we reformulated the Newtonian equations and explained stellar rotation curves. In this chapter we will reformulate the Rydberg equations for the spectral series of hydrogen and derive a general explanation for atomic spectra.

The equation formulated by Johann Balmer in 1885, in which the hydrogen spectrum wave numbers are proportional to the product of a constant and the difference between the inverse square of two integers, is correct, but the Bohr Model is not.

The electron is not a point particle; the electron does not orbit the proton; the force conveyed by an electron is not transmitted an infinite distance; at an infinitesimal distance the force is not infinite; electrons with lower energy and lower wave number are closer to the proton; and electrons with higher energy and higher wave number are further away from the proton (the Bohr distance-energy relationship must be reversed).

In hydrogen an electron and proton are engaged in a positional resonance. In atoms larger than hydrogen many electrons and protons are engaged in positional resonances. Each resonance is between one electron external to the nucleus and one proton internal to the nucleus, in which the electron and the nuclear proton are facing in opposite directions and each particle emits quantons that are absorbed by the other particle. On emission by the electron the quanton is CCW and on emission by the nuclear proton the quanton is CW. On emission the emitting particle recoils by a distance proportional to the particle intrinsic energy and on absorption the absorbing particle is attractively deflected (a chirality meshing interaction) by a distance proportional to the particle intrinsic energy. The result is a sustained positional resonance of a CCW quanton emitted in one direction by the electron and absorbed by the nuclear proton and a CW quanton emitted in the opposite direction by the nuclear proton and absorbed by the electron.

In the hydrogen atom, the resonance can be situated at any one of several quantized positions proportional to energy and corresponding to spectral emission and absorption lines. On emission of a photon the energy of the resonance decreases, and the electron drops to the adjacent lower energy level. On absorption of a photon the energy of the resonance increases, and the electron jumps to the adjacent higher energy level. The highest stable energy level, corresponding to an emission-only line, the maximum electron-proton separation distance beyond which the positional resonance no longer exists, is the hydrogen ionization energy.

The above paragraphs summarize the spectral mechanism which, for the time being, shall be considered a hypothesis.

The intrinsic to kinetic energy factor is equal to the ratio of the discrete Planck constant divided by the Coulomb divided by the ratio of Lambda-bar divided by the charge intrinsic energy, the ratio of the discrete Planck constant divided by the product of Lambda-bar and the square root of the proton amplitude divided by two, and two times the intrinsic steric factor.

The ionization energy of hydrogen (in larger atoms the ionization energy required to remove the last electron) is a discretely exact single value above which the atom no longer exists. The measured energy of hydrogen ionization is 1312 kJ/mol, and the corresponding CRC value is 13.59844 (units of kinetic electron Volts).19 Kinetic electron Volts divided by Omega-2 equals intrinsic Volts (units of Joule), which divided by 12 (the intrinsic to kinetic energy factor) equals intrinsic Volts (units of Einstein), which multiplied by the intrinsic electron charge equals intrinsic energy, which divided by Lambda-bar is equal to the photon frequency of hydrogen ionization.

Working backwards from the calculation sequences above, the discretely exact value of the photon ionization frequency is 3.28000000E15.
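
As a cross-check with conventional CODATA constants (an approximation, since they differ slightly from the discrete constants used in the text), the CRC value of 13.59844 electron Volts corresponds to a frequency of about 3.288E15 Hz, within roughly a quarter of a percent of the discretely exact 3.28E15 Hz:

    h = 6.62607015e-34    # CODATA Planck constant, J s
    e = 1.602176634e-19   # CODATA elementary charge, C

    freq = 13.59844 * e / h              # frequency corresponding to the CRC ionization energy
    print(freq)                          # ~3.288e15 Hz
    print((freq - 3.28e15) / 3.28e15)    # ~0.0025, i.e. about 0.25 %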

The intrinsic energy of hydrogen ionization, denominated in units of Einstein, is equal to the product of the photon frequency and Lambda-bar.

The intrinsic energy of hydrogen ionization, denominated in units of Joule, is equal to the product of the photon frequency and the discrete Planck constant.

The intrinsic voltage of hydrogen ionization, denominated in units of Einstein, is equal to the product of the photon frequency and Lambda-bar, divided by the charge intrinsic energy.

The ratio of the intrinsic voltage of hydrogen ionization divided by Psi is equal to the discrete Rydberg constant and denominated in units of inverse meter (spatial frequency).

The intrinsic voltage of hydrogen ionization, denominated in units of Joule, is equal to the product of 12 (the intrinsic to kinetic energy factor) and the discrete Rydberg constant, and the product of the photon frequency and the discrete Planck constant, divided by the Coulomb.

The kinetic voltage of hydrogen ionization, denominated in units of electron Volt, is equal to the product of the intrinsic voltage of hydrogen ionization and omega-2.

The difference between the above calculated energy of ionization and the CRC value is less than 0.30%. The poor accuracy is due to the performance standards of calorimeters.20 In the measurement of a sample against a calibration standard, a statistical analysis of the results will show the data lie within three standard deviations (sigma-3) of the mean (the expected value) and the accuracy will be 0.15% (99.85% of the measurements will lie in the range of higher than the calibration standard by no more than 0.15% or lower than the calibration standard by no more than 0.15%). If the identical procedure is used without prior knowledge of the expected result and whether the measurement is higher or lower than the actual value is unknown, the accuracy falls to no more than 0.30%.

The difference between the calculated kinetic voltage of hydrogen ionization and the measured CRC value, expressed as a percentage of the CRC value, is 0.2666%.

Spectral series consist of a number of emission-absorption lines with a lower limit on the left and an upper limit on the right. Both limits are asymptotes: the lower limit corresponds to minimum energy, minimum frequency, and maximum wavelength; and the upper limit corresponds to maximum energy, maximum frequency, and minimum wavelength.

The below diagram of the Lyman spectral series consists of seven black emission-absorption lines to the left and a red emission-only line on the right. From left to right these lines are the Lyman lower limit (Lyman-A), Lyman-B, Lyman-C, Lyman-D, Lyman-E, Lyman-F, Lyman-G, and the Lyman upper limit.

The Rydberg equation expresses the wave numbers of the hydrogen spectrum equal to the product of the discrete Rydberg constant and the difference between the inverse square of the m-index minus the inverse square of the n-index.

The m-index has a constant value for each spectral series within the hydrogen spectrum. The six series, ordered from highest to lowest energy at the series upper limit, are Lyman, Balmer, Paschen, Brackett, Pfund and Humphreys.

Each line of a spectral series can be expressed in terms of energy, wave number, wavelength and photon frequency. The energy, wave number, and frequency increase from left to right, but the wavelength decreases from left to right.

For each spectral series the m-index increases from lowest to highest positional energy (Lyman = 1, Balmer = 2, Paschen = 3, Brackett = 4, Pfund = 5, Humphreys = 6). Each spectral series is composed of a sequence of lines (A, B, C, D, E, F, G) in which the n-index is equal to m+1, m+2, m+3, m+4, etc.

In the following analysis we will apply the Rydberg formula to calculate, based on the discretely exact value of the photon ionization frequency of 3.280000E15, the values for energy, wave number and frequency of the six spectral series of hydrogen.
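
A minimal sketch of that calculation, expressing the Rydberg formula directly in frequency units by taking the Lyman upper-limit frequency to be the discretely exact photon ionization frequency of 3.28E15 Hz (as the text does below):

    LYMAN_LIMIT_HZ = 3.28e15   # discretely exact hydrogen ionization frequency, Hz

    def structural_frequency(m, n):
        # Rydberg formula expressed directly in frequency units
        return LYMAN_LIMIT_HZ * (1 / m**2 - 1 / n**2)

    def series_limit(m):
        # n -> infinity
        return LYMAN_LIMIT_HZ / m**2

    # Lyman series (m = 1): lines A through G correspond to n = 2 through 8
    for label, n in zip("ABCDEFG", range(2, 9)):
        print("Lyman-" + label, structural_frequency(1, n))
    print("Lyman limit", series_limit(1))

Wave numbers follow by dividing each frequency by the photon velocity, and energies by the conversions described in the preceding section.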

The below calculations begin with the discretely exact values for the Lyman limit photon frequency and the hydrogen ionization energy (intrinsic voltage units of Joule), and the value of the discrete Rydberg constant.

The Lyman upper limit is an emission-only line because at any energy above the Lyman upper limit the hydrogen atom no longer exists. The calculation for the line prior to the Lyman upper limit is based on an n-index equal to 8, but there are additional discernible lines after Lyman-G because the Lyman upper limit is an asymptote. The identical situation holds for the limit of any spectral series.

The spectral series lower limit, the A-line (Lyman-A, Balmer-A, etc.) is also an asymptote and there are additional discernible lines between the C-line and the A-line. The number of lines included in a spectral series analysis is optional, but it is convenient to use the same number of lines in spectral series to be compared.

In this presentation, 8 Lyman and Balmer lines are included because these lines are specified in at least one of the easily available online sources. In the Paschen, Brackett, Pfund and Humphreys spectral series, 6 lines are included because these are also easily available.21

The ratio of the Lyman upper limit divided by the upper limit of another hydrogen spectral series is equal to the square of the m-index of the other series:

  • The Lyman upper limit divided by the Balmer upper limit is equal to 4.
  • The Lyman upper limit divided by the Paschen upper limit is equal to 9.
  • The Lyman upper limit divided by the Brackett upper limit is equal to 16.
  • The Lyman upper limit divided by the Pfund upper limit is equal to 25.
  • The Lyman upper limit divided by the Humphreys upper limit is equal to 36.

The ratio of the Lyman spectral series upper limit divided by the Lyman spectral series lower limit is equal to the ratio of the Rydberg wave number calculation for the upper limit divided by the Rydberg wave number calculation for the lower limit.

In all spectral series the Rydberg ratio is equal to the upper limit energy divided by the lower limit energy, the ratio of the upper limit structural frequency divided by the lower limit structural frequency, and the ratio of the lower limit wavelength divided by the upper limit wavelength.

The ratio of the Balmer spectral series upper limit divided by the Balmer spectral series lower limit is equal to the ratio of the Rydberg wave number calculation for the upper limit divided by the Rydberg wave number calculation for the lower limit.

The same calculation is used for the other four hydrogen spectral series:

  • The ratio of the Paschen spectral series upper limit divided by the Paschen lower limit is equal to 1312/574, that is 16/7 (2.285714).
  • The ratio of the Brackett spectral series upper limit divided by the Brackett lower limit is equal to 25/9 (2.777777).
  • The ratio of the Pfund spectral series upper limit divided by the Pfund lower limit is equal to 36/11 (3.272727).
  • The ratio of the Humphreys spectral series upper limit divided by the Humphreys lower limit is equal to 49/13 (3.769230).
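
The ratios listed above all follow from the Rydberg formula: for a series with index m, whose lower limit is the line with n = m + 1, the ratio of the upper limit to the lower limit is

    \frac{1/m^2}{1/m^2 - 1/(m+1)^2} = \frac{(m+1)^2}{2m+1}

which gives 4/3, 9/5, 16/7, 25/9, 36/11 and 49/13 for the Lyman through Humphreys series.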

Above, the frequencies under the A, B, C, D, E, F, G-lines and the series limit are the positional structural frequencies, and the transition frequencies between lines (B-A, C-B … F-E, G-F) are the photon emission-absorption frequencies.

The structural frequency of the G-line is equal to the product of the Rydberg calculated wave number and the photon velocity. The energy of the G-line (intrinsic Volts units of Joule) is equal to the product of the structural frequency of the G-line and the discrete Planck constant, divided by the Coulomb.

The structural frequency of the F-line is equal to the product of the Rydberg calculated wave number and the photon velocity. The energy of the F-line (intrinsic Volts units of Joule) is equal to the product of the structural frequency of the F-line and the discrete Planck constant, divided by the Coulomb.

The photon emission-absorption frequency of the G-F transition is equal to the structural frequency of the G-line minus the structural frequency of the F-line. The energy of the G-F transition (intrinsic Volts units of Joule) is equal to the energy of the G-line minus the energy of the F-line.

The identical process is used to calculate the emission-absorption frequencies and energies for all spectral series.

Note there is no transition frequency or energy between the G-line and the series limit because the series limit is emission-only.

Lyman series transition photons are identical to Balmer series photons:

  • When a Lyman-C positional resonance drops down to Lyman-B, the Lyman-C energy is emitted as two photons: a 11.662222 Vi(J) Lyman-B photon frequency 2.915555E15 and a 0.637777 Vi(J) Lyman C-B photon frequency 1.594444E14. The frequency and wavelength of the transition photon is identical to the Balmer B-A transition photon.
  • When a Lyman-D positional resonance drops down to Lyman-C, the Lyman-D energy is emitted as two photons: a 12.300000 Vi(J) Lyman-C photon frequency 3.075000E15 and a 0.295200 Vi(J) Lyman D-C photon frequency 7.380000E13. The frequency and wavelength of the transition photon is identical to the Balmer C-B transition photon.
  • When a Lyman-E positional resonance drops down to Lyman-D, the Lyman-E energy is emitted as two photons: a 12.595200 Vi(J) Lyman-D photon frequency 3.148800E15 and a 0.160356 Vi(J) Lyman E-D photon frequency 4.008888E13. The frequency and wavelength of the transition photon is identical to the Balmer D-C transition photon.
  • When a Lyman-F positional resonance drops down to Lyman-E, the Lyman-F energy is emitted as two photons: a 12.755555 Vi(J) Lyman-E photon frequency 3.188888E15 and a 0.096689 Vi(J) Lyman F-E photon frequency 2.41723E13. The frequency and wavelength of the transition photon is identical to the Balmer E-D transition photon.
  • When a Lyman-G positional resonance drops down to Lyman-F, the Lyman-G energy is emitted as two photons: a 12.852245 Vi(J) Lyman-F photon frequency 3.21306E15 and a 0.062755 Vi(J) Lyman G-F photon frequency 1.568878E13. The frequency and wavelength of the transition photon is identical to the Balmer F-E transition photon.
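
As a check, the frequencies quoted in the list above can be reproduced from the discretely exact 3.28E15 Hz and the Rydberg differences (a sketch using only values stated in the text):

    F0 = 3.28e15   # discretely exact hydrogen ionization (Lyman limit) frequency, Hz

    def nu(m, n):
        return F0 * (1 / m**2 - 1 / n**2)

    print(nu(1, 3))             # Lyman-B structural frequency, ~2.915555e15
    print(nu(1, 4) - nu(1, 3))  # Lyman C-B transition, ~1.594444e14
    print(nu(2, 4) - nu(2, 3))  # Balmer B-A transition, the same value
    print(nu(3, 4))             # Paschen-A structural frequency, also the same value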

The equivalence of Balmer-A and Lyman series transitions can be extended to the Paschen, Brackett, Pfund and Humphreys series.

The Lyman C-B transition is equal to the energy and frequency of Paschen-A.

The Lyman D-C transition is equal to the energy and frequency of Brackett-A.

The Lyman E-D transition is equal to the energy and frequency of Pfund-A.

The Lyman F-E transition is equal to the energy and frequency of Humphreys-A.

An explanation of atomic spectra begins with the ionization energies.

In atoms with more than one proton, the discretely exact elemental ionization energy (in red) above which the atom no longer exists is equal to the product of the square of the number of protons and the discretely exact value for the hydrogen ionization energy. The intermediate ionization energies (in blue) are equal to the CRC value divided by omega-2.

The ionization frequency is equal to the product of the ionization energy and the Coulomb divided by the discrete Planck constant.

The ionization wave number is equal to the ionization frequency divided by the photon velocity.

The photon wavelength is the inverse of the wave number.

The difference between the calculated and measured value for the hydrogen ionization energy, divided by the difference between the measured wavelength and calculated wavelength for hydrogen ionization is very nearly equal to the difference between the photon velocity and the speed of light.

The difference between these two values, independent of how it is calculated, is a measurement error term of approximately 0.00468%.

The differences between the measured and calculated values for hydrogen are of no concern and, even though the Rydberg equations derive the measurable wavelengths to high accuracy, the explanation requiring the simultaneous emission of two photons is not consistent with the spectral mechanism hypothesis.

The Rydberg explanation for the emission of atomic spectra requires two frequencies:

  • One frequency is the structural frequency. Structural frequency is proportional to the energy of the positional resonance between an electron and proton (the energy required to hold the electron and proton in the positional resonance).
  • The photon frequency, equal to the difference between adjacent structural frequencies, is proportional to an ionization energy (the energy required to remove an electron from the positional resonance).

The photon frequency and wavelength are not directly proportional to structural energy and, in atoms larger than hydrogen, cannot be calculated by a Rydberg equation.

Proofs that wavelength and frequency are not directly proportional to energy:

  • Spectral wavelengths emitted by sources differing greatly in energy, by a discharge tube in the laboratory, by the sun or by the galactic center, are indistinguishable.
  • In 60 Hertz power transformers the energy of the emitted photons is proportional to the energy of the current (or the magnetic field).

A general explanation for atomic spectra requires an examination of the measured ionization energies and the measured wavelengths of the first four elements larger than hydrogen.

The number of CRC ionization energies (electron Volts in units of kinetic Joule) for each elemental atom larger than hydrogen is equal to the number of nuclear protons; and the number of atomic energies (intrinsic Volts in units of discrete Joule) is also equal to the number of nuclear protons.

While it is true that measured wavelengths are not directly proportional to energy, it is also true that shorter wavelengths are proportional to lower energies and longer wavelengths are proportional to higher energies. For example, ultraviolet photons have shorter wavelengths and lower energies, and visible photons have longer wavelengths and higher energies.

In any atomic spectrum, each measured wavelength corresponds to one specific energy and, in order for each measured wavelength to correspond to one specific energy, the number of wavelengths must either be equal to the number of energies or equal to an integer multiple of the number of energies.

For example, in helium there are two CRC ionization energies (electron Volts in units of kinetic Joule) corresponding to two atomic energies (intrinsic Volts in units of discrete Joule), fourteen measured wavelengths, and one transition between a wavelength proportional to a lower energy and a wavelength proportional to a higher energy.

In the below table, seven lower and seven higher helium atomic energies are in the first row, the measured wavelengths from shortest to longest are in the third row, and the second row is the ratio of the column wavelength divided by the adjacent lower wavelength. This is the definitive test for a transition from a wavelength corresponding to a lower energy to a wavelength corresponding to a higher energy. In the helium atom, the transition wavelength is also detectable by inspection of the previous wavelengths compared to the following wavelengths.

The transitions are less clear in lithium, beryllium, and boron.

In lithium, beryllium and boron the transition wavelengths are not definitively detectable by simple inspection. However, after the higher energy transitions are established by the ratios of the column wavelength divided by the adjacent lower wavelength, the first transition becomes apparent by inspection of the measured wavelengths.

The spectral mechanism hypothesis has been transformed into a general explanation for atomic spectra:

In hydrogen a single electron and proton are engaged in a positional resonance at a discretely exact frequency equal to 3.28E15 Hz. In atoms larger than hydrogen many electrons and protons are engaged in sustained positional resonances, equal to the product of the square of the number of nuclear protons and 3.28E15 Hz, in which CCW quantons are emitted in one direction by electrons and absorbed by nuclear protons, and CW quantons are emitted in the opposite direction by nuclear protons and absorbed by electrons. The positional resonances can be situated at any one of several quantized positions proportional to energy and corresponding to spectral emission and absorption lines. On emission of a photon the energy of the resonance decreases, and the electron drops to a lower energy level. On absorption of a photon the energy of the resonance increases and the electron jumps to a higher energy level.

Part Six

Cosmology

The purpose of this chapter is to disprove cosmic inflation:

  • The radiated intrinsic energy which drives the resonance of constant photon velocity is converted into units of intrinsic redshift per megaparsec.
  • A detailed general derivation of intrinsic redshift (applicable to any galaxy) is made.
  • The final results of the HST Key Project to measure the Hubble Constant are explained by intrinsic redshift.22

The only measurables in the determination of galactic redshifts are the photon wavelength emitted and received in the laboratory, the photon wavelength emitted by a galaxy and received by an observatory, and the ionization energies.

In the following equations Hydrogen-alpha (Balmer-A) wavelengths are used in calculations of intrinsic redshift.

Intrinsic redshift per megaparsec

The photon intrinsic energy radiated per second due to quanton/graviton emissions is equal to the product of 8 and the discrete Planck constant.

The 2015 IAU value for the megaparsec is proportional to the IAU exact SI definition of the astronomical unit (149,597,870,700 m).

The time of flight per megaparsec is equal to one mpc divided by the photon velocity.

The photon intrinsic energy radiated per megaparsec is equal to the product of time of flight per mpc and the photon intrinsic energy radiated per second due to quanton/graviton emissions.

The decrease in photon frequency due to the energy radiated is equal to the photon intrinsic energy radiated per megaparsec divided by the discrete Planck constant.

The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.

Note that wavelength and energy are independent thus wavelength cannot be directly determined from energy, but frequency is proportional to energy and the decrease in frequency is proportional to the increase in wavelength.

The intrinsic redshift per megaparsec is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.
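
Collected into symbols (with h_d the discrete Planck constant, v_p the photon velocity, t the time of flight, and lambda_H-alpha the Hydrogen-alpha emission wavelength), the steps above read:

    E_{rad} = 8\,h_d\,t, \qquad \Delta\nu = \frac{E_{rad}}{h_d} = 8\,t, \qquad \Delta\lambda = \frac{v_p}{\Delta\nu}, \qquad \lambda_{intrinsic} = \lambda_{H\alpha} + \Delta\lambda

The same relations, with t evaluated at the distance of a particular galaxy, are used in the general derivation that follows.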

General derivation of galactic intrinsic redshift

The distance of the galaxy in units of mpc is that determined by the Hubble Space Telescope Key Project.23 Below, the example calculations are for NGC0300.

The time of flight of photons emitted by NGC0300 is equal to the product of the time of flight per megaparsec and the Hubble Space Telescope Key Project distance of the galaxy.

The photon intrinsic energy radiated by NGC0300 is equal to the product of the time of flight at the distance of NGC0300 and the photon intrinsic energy radiated per second due to quanton/graviton emissions.

The decrease in photon frequency is equal to the photon intrinsic energy radiated by NGC0300 divided by the discrete Planck constant.

The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.

The intrinsic redshift at the distance of NGC0300 is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.

Results of the HST Key Project to measure the Hubble Constant

The goal of this massive international project, involving more than fifteen years of effort by hundreds of researchers, was to build an accurate distance scale for Cepheid variables and use this information to determine the Hubble constant to an accuracy of 10%.

The inputs to the HST Key Project were the observed redshifts and the theoretical relativistic expansion rate of cosmic inflation.

In column 2 below, the galactic distances of 22 galaxies in units of mpc are the values determined by the HST Key Project.24

In column 3 below, the galactic distances are expressed in units of meter.

In column 4 below, the time of flight of photons emitted by the galaxy is equal to the distance of the galaxy in meters divided by the photon velocity.

The photon intrinsic energy radiated due to quanton/graviton emissions at the distance of the galaxy is equal to the product of the time of flight of photons emitted by the galaxy and the photon intrinsic energy radiated per second.

The decrease in photon frequency is equal to the photon intrinsic energy radiated by the galaxy divided by the discrete Planck constant.

The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.

In column 5 above, the intrinsic redshift at the distance of the galaxy is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.

The Hubble parameter for a galaxy, equal to the product of the ratio of 2 omega-2 (converts intrinsic energy to kinetic energy) divided by the time of flight of photons received at the observatory that were emitted by the galaxy, and the ratio of the distance of the galaxy in units of kilometer divided by the distance of the galaxy in units of megaparsec, is denominated in units of km/s per mpc.

The Hubble constant is equal to the sum of the Hubble parameters for the galaxies examined divided by the number of galaxies.

The theory of cosmic inflation has been disproved.

Part Seven

Magnetic levitation and suspension

This chapter was motivated by a video about quantum magnetic levitation and suspension in which superconducting disks containing thin films of YBCO are levitated and suspended on a track composed of neodymium magnet arrays in which a unit array contains four neodymium magnets (two diagonal magnets oriented N→S and the other two S→N).25

An understanding of levitation and suspension by neodymium magnet arrays begins with consideration of the differences between the levitation of a superconducting disk containing thin films of metal oxides and the levitation of a thin slice of pyrolytic carbon.

Oxygen is paramagnetic. An oxygen atom is magnetized by the magnetic field of a permanent magnet in the direction of the external magnetic field (for example, a S→N external magnetic field induces a S→N internal field) and reverts to a demagnetized state when the field is removed. The levitation of a superconducting disk requires an array of neodymium magnets and cooling below the critical temperature. In quantum levitation or suspension, the position of the disk is established by holding (pinning) it in the desired location and orientation, and if a pinned disk is forced into a new location and orientation, it remains pinned in the new location.

Carbon is diamagnetic. A carbon atom is magnetized by a magnetic field in the direction opposite to the magnetic field (for example, a N→S external magnetic field induces a S→N internal field) and reverts to a demagnetized state when the field is removed. Magnetic levitation occurs at room temperature, a thin slice of pyrolytic carbon levitates at a fixed distance parallel to the surface of an array of neodymium magnets, and a levitated slice forced closer to the surface springs back to the fixed distance once the force is removed.

Above, levitation of pyrolytic carbon.26

In the levitation of pyrolytic carbon, CCW quantons are emitted by a magnetic North pole and CW quantons are emitted by a magnetic South pole (magnetic emission of quantons is discussed in Part Four).

The number of chirality meshing interactions required to exactly oppose the gravitational force on a thin slice of pyrolytic carbon (or any object) is equal to the local gravitational constant of earth divided by the product of the proton amplitude and the square root of Lambda-bar.

In the above equation, the local gravitational constant of earth (as derived in Part One) is equal to 10 meters per second per second and the proton amplitude (also derived in Part One) is equal to 150 and, (as derived in Part Four) the square root of Lambda-bar is the deflection distance (units of meter) of a single chirality meshing interaction between a quanton and an electron.
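
Using only the values quoted in this paragraph, and taking the square root of Lambda-bar to be the single-interaction electron deflection of 2.5327E-18 m quoted in Part Four, the count works out to roughly 2.6E16 interactions (a sketch, not a definitive value):

    g_local = 10                   # local gravitational constant of earth, m/s^2 (Part One)
    proton_amplitude = 150         # proton amplitude (Part One)
    sqrt_lambda_bar = 2.5327e-18   # single-interaction deflection, m (assumed from Part Four)

    n_interactions = g_local / (proton_amplitude * sqrt_lambda_bar)
    print(n_interactions)          # ~2.63e16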

The above equation is proportional to energy: the higher the energy, the higher the number of chirality meshing interactions, and the higher the levitation distance; the lower the energy, the lower the number of chirality meshing interactions, and the lower the levitation distance.

Pyrolytic carbon is composed of planar sheets of carbon atoms in which a unit cell is composed of a hexagon of carbon atoms joined by double bonds. Carbon atoms are bonded by either lower energy single bonds proportional to the first ionization energy or higher energy double bonds proportional to the second ionization energy. The measured first and second ionization energies of carbon are 1086.5 and 2352.0 (units of kJ/mol)27.

Due to the discretely exact value of PE charge resonance, in carbon (or any elemental atom) the quanton emission-absorption frequency is equal to 3.28E15 Hz.

The quanton emission frequency of a unit cell of pyrolytic carbon is equal to the product of the discretely exact PE charge resonance frequency of 3.28E15 Hz and the ratio of the second ionization energy of carbon divided by the first ionization energy of carbon.

The levitation distance of a thin slice of pyrolytic carbon (in units of mm) is equal to the product of the ratio of quanton emission frequency of a pyrolytic carbon unit cell divided by six (the number of carbon atoms in a unit cell) times 1000 mm/m and the square root of Lambda-bar.
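
Putting in the carbon ionization energies quoted above and, as an assumption, the same Part Four value of 2.5327E-18 m for the square root of Lambda-bar, the stated recipe gives a levitation distance of roughly 3 mm:

    first_ionization = 1086.5      # first ionization energy of carbon, kJ/mol
    second_ionization = 2352.0     # second ionization energy of carbon, kJ/mol
    pe_resonance = 3.28e15         # discretely exact PE charge resonance, Hz
    sqrt_lambda_bar = 2.5327e-18   # single-interaction deflection, m (assumed from Part Four)

    unit_cell_frequency = pe_resonance * second_ionization / first_ionization
    levitation_mm = (unit_cell_frequency / 6) * 1000 * sqrt_lambda_bar
    print(levitation_mm)           # ~3.0 mm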

The oxygen atoms in YBCO oxides are bonded by either lower energy single bonds proportional to the first ionization energy or higher energy double bonds proportional to the second ionization energy. The measured first and second ionization energies of oxygen are 1313.9 and 3388.3 (units of kJ/mol).

The three YBCO metallic oxides are composed of low energy single bonds, high energy double bonds, or single and double bonds. In yttrium oxide (Y2O3), a single bond connects each yttrium atom with the inside oxygen, and a double bond connects each yttrium atom with one of the two outside oxygens. In barium oxide (BaO) the two atoms are connected by a double bond. Copper oxide is a mixture of cuprous oxide (copper(I) oxide), in which a single bond connects each of two copper atoms with the oxygen atom, and cupric oxide (copper(II) oxide), in which a double bond connects the copper atom with the oxygen atom.

Voltage is the emission of quantons either directly by the Q-axis of an electron or proton or transversely by a magnetic field from which CCW quantons are emitted by the North pole and CW quantons by the South pole.

The mechanism of magnetic levitation or suspension of a superconducting disk is the absorption of quantons, emitted by a neodymium magnet array, in chirality meshing interactions by electrons in the oxygen atoms of superconducting YBCO oxides, resulting in repulsive deflections due to CCW quantons (in quantum levitation) and attractive deflections due to CW quantons (in quantum suspension).

The levitation or suspension distance of a superconducting YBCO oxide is higher (the maximum distance) for double bonded oxides and lower (the minimum distance) for single bonded oxides. The initial position of the YBCO disk is established by momentarily holding (pinning) it in the desired location and orientation at some specific distance from the neodymium magnet array.

In each one-hundredth of a second more than 2E14 chirality meshing interactions establish the intrinsic energy of electrons within the superconducting oxides. At the same time, at any specific distance above or below the neodymium magnet array the number of quanton interactions, inversely proportional to the square of distance, establishes the availability of quantons to be absorbed at that specific distance. The result is an electrical Stable Balance of the electrons in superconducting oxides at specific distances from the neodymium magnet array, analogous to the gravitational Stable Balance of particles in planets at a specific orbital distance from the sun.

This is the mechanism of pinning in YBCO superconducting disks.

The levitation or suspension distance (units of mm) of a single bonded superconducting YBCO oxide is equal to the product of the ratio of the first ionization energy of oxygen divided by itself, the discretely exact PE charge resonance of 3.28E15 Hz, the square root of Lambda-bar, the ratio of the discrete steric factor divided by 1 (single bond), and 1000 (to convert m to mm).

The levitation or suspension distance (units of mm) of a double bonded superconducting YBCO oxide is equal to the product of the ratio of the second ionization energy of oxygen divided by the first ionization energy of oxygen, the discretely exact PE charge resonance of 3.28E15 Hz, the square root of Lambda-bar, the ratio of the discrete steric factor divided by 2 (double bond), and 1000 (to convert m to mm).

1 Original letter from Isaac Newton to Richard Bentley, 189.R.4.47, ff. 7-8, Trinity College Library, Cambridge, UK http://www.newtonproject.ox.ac.uk

2 https://nssdc.gsfc.nasa.gov/planetary/planetfact.html, accessed Dec 24, 2021

3 Urbain Le Verrier, Reports to the Academy of Sciences (Paris), Vol 49 (1859)

4 Clemence G.M. The relativity effect in planetary motions. Reviews of Modern Physics, 1947, 19(4): 361-364.

5 Eric Doolittle, The secular variations of the elements of the orbits of the four inner planets computed for the epoch 1850 GMT, Trans. Am. Phil. Soc. 22, 37(1925).

6 Michael P. Price and William F. Rush, Nonrelativistic contribution to mercury’s perihelion precession. Am. J. Phys. 47(6), June 1979.

7 Wikimedia, by Daderot made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication, location National Museum of Nature and Science, Tokyo, Japan.

8 Illustration from 1908 Chambers’s Twentieth Century Dictionary. Public domain.

9 Wikimedia “Sine and Cosine fundamental relationship to Circle and Helix” author Tdadamemd.

10 By Jordgette – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=9529698

11 By Ebohr1.svg: en:User:Lacatosias, User:Stanneredderivative work: Epzcaw (talk) – Ebohr1.svg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15229922

12 https://www.nobelprize.org/prizes/physics/1927/summary/

13 O. Stern, Z. für Physik, 7, 249 (1921), title in English: “A way to experimentally test the directional quantization in the magnetic field”.

14 Ronald G. J. Fraser, Molecular Rays, Cambridge University Press, 1931.

15 I. I. Rabi, S. Millman, P. Kusch, and J. R. Zacharias, “The Molecular Beam Resonance Method for Measuring Nuclear Magnetic Moments,” Physical Review, 1939.

16 INDC: N. J. Stone 2014. Nuclear Data Section, International Atomic Energy Agency, www-nds.iaea.org/publications

17 “Quantum theory yields much, but it hardly brings us close to the Old One’s secrets. I, in any case, am convinced He does not play dice with the universe.” Letter from Einstein to Max Born (1926).

18 “That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has … any competent faculty of thinking can ever fall into it.” Original letter from Isaac Newton to Richard Bentley, 189.R.4.47, ff. 7-8, Trinity College Library, Cambridge, UK http://www.newtonproject.ox.ac.uk

19 Ionization energies of the elements (data page), https://en.wikipedia.org/

20 How to determine the range of acceptable results for your calorimeter, Bulletin No. 100, Parr Instrument Company, www.parrinst.com.

21 See www.wikipedia.org, www.hyperphysics.com, www.shutterstock.com

22 Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.

23 Page 60, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.

24 Page 60, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.

25 “Dr. Boaz Almog: Quantum Levitation” https://www.youtube.com/watch?v=4HHJv8lPERQ .

26 This image has been released into the public domain by its creator, Splarka. https://commons.wikimedia.org/wiki/File:Diamagnetic_graphite_levitation.jpg

27 Ionization energies of the elements (data page), https://en.wikipedia.org/

Is New Porsche 911 GT3 Touring All The Car You’ll Ever Need?

Top Gear UK, November 2024 – Not one but two new Porsche 911 GT3s are upon us: a regular be-winged car and the more subtle Touring model. And for once, the headline news isn’t the power, the peak revs or the Nürburgring lap time, but how practical it is.

That’s right, because for the first time in the 25-year history of the GT3, it’s being offered with back seats.

It’s only for the Touring, but that addition alone will be enough to start The Internet chattering about whether this is ‘all the car you’ll ever need’.

However, if kids, or at least taking your kids with you, isn’t your thing, then worry not. The back seats are merely an option, and the non-Touring GT3 can’t be had with them at all. Plus, if you’re the sort of Porsche purist who hates weight, you can double down on that ethos with either a Weissach pack for the GT3 or a Leichtbau (aka Lightweight) pack for the Touring.

As for what else is new (and there are a lot of detailed, GT3 RS-inspired changes), join Top Gear’s Tom Ford for an in-depth walkaround of both new GT3s with Andreas Preuninger, Porsche’s Director of GT Cars…

The Dawn of Artificial Intelligence: A Journey Through Time

AI

Artificial Intelligence (AI) has become an integral part of our daily lives, influencing everything from how we interact with technology to how businesses operate. But where did it all begin? Let’s take a journey through the early days of AI, exploring the key milestones that have shaped this fascinating field.

Early Concepts and Inspirations

The concept of artificial beings with intelligence dates back to ancient myths and legends. Stories of mechanical men and intelligent automata can be found in various cultures, reflecting humanity’s long-standing fascination with creating life-like machines1. However, the scientific pursuit of AI began much later, with the advent of modern computing.

The Birth of AI as a Discipline

The field of AI was officially founded in 1956 during the Dartmouth Conference, organized by computer science pioneers John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon2. This conference is often considered the birth of AI as an academic discipline. The attendees proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Early Milestones

One of the earliest successful AI programs was written in 1951 by Christopher Strachey, who later became the director of the Programming Research Group at the University of Oxford. Strachey’s checkers (draughts) program ran on the Ferranti Mark I computer at the University of Manchester, England3. This program demonstrated that machines could perform tasks that required a form of intelligence, such as playing games.

In 1956, Allen Newell and Herbert A. Simon developed the Logic Theorist, a program designed to mimic human problem-solving skills. This program was able to prove mathematical theorems, marking a significant step forward in AI research4.

The Rise and Fall of AI Hype

The initial success of AI research led to a period of great optimism, often referred to as the “AI spring.” Researchers believed that human-level AI was just around the corner. However, progress was slower than expected, leading to periods of reduced funding and interest known as “AI winters”4. Despite these setbacks, significant advancements continued to be made.

The Advent of Machine Learning

The 1980s and 1990s saw the rise of machine learning, a subset of AI focused on developing algorithms that allow computers to learn from and make predictions based on data. This period also saw the development of neural networks, inspired by the structure and function of the human brain4.
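As a minimal, generic illustration of what “learning from data” means (a sketch for this article, not a reconstruction of any historical system), a single perceptron can learn the logical OR function from four labelled examples:

```python
# A minimal perceptron: learns to predict OR from labelled examples.
# Illustrative only -- not a reconstruction of any historical program.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # inputs -> OR
w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

for epoch in range(20):                 # repeated passes over the data
    for x, target in data:
        error = target - predict(x)     # 0 if correct, +/-1 if wrong
        w[0] += lr * error * x[0]       # nudge weights toward the target
        w[1] += lr * error * x[1]
        b    += lr * error

print([predict(x) for x, _ in data])    # expected output: [0, 1, 1, 1]
```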

The Modern Era of AI

The 21st century has witnessed a resurgence of interest and investment in AI, driven by advances in computing power, the availability of large datasets, and breakthroughs in algorithms. The development of deep learning, a type of machine learning involving neural networks with many layers, has led to significant improvements in tasks such as image and speech recognition4.

Today, AI is a rapidly evolving field with applications in various domains, including healthcare, finance, transportation, and entertainment. From virtual assistants like me, Microsoft Copilot, to autonomous vehicles and systems, AI continues to transform our world in profound ways.

A Copilot self-generated image when queried “Show me what you look like.” CP

Conclusion

The journey of AI from its early conceptual stages to its current state is a testament to human ingenuity and perseverance. While the field has faced numerous challenges and setbacks, the progress made over the past few decades has been remarkable. As we look to the future, the potential for AI to further revolutionize our lives remains immense.

1: Wikipedia 2: Timescale 3: Encyclopedia Britannica 4: Wikipedia


For the Silo, Microsoft Copilot AI. 😉

Inside The High Flying & Spying World Of Hot Air Balloons

Remember early last year when we were besieged by strange, large balloons in our airspace- the kind of balloons that certain nations are using to spy on us or possibly manipulate the weather?

Okay let’s put those conspiracy theories aside for now. Hopefully everyone has seen a hot air balloon in flight at least once because they are majestic and otherworldly and quite calming to watch drifting around up in the blue sky above.

But have you ever wondered when the first one was invented? Or how much hot air is required to get them safely off the ground and ready for a flight around the skies? Or where they are stored when they aren’t being used? Our friends at SpareFoot were wondering the same thing, and the data they shared with us below is quite astonishing.

Hot Air Balloon InfoGraphic

Supplemental: Baumgartner’s record-setting free fall event utilized a hot air balloon to reach the “edge of space”

Amidst Waves of Data Breaches, U.S. Gov Advised Agencies: Implement Zero Trust Architecture

It’s been nearly two years since the FAA outage of January 11, 2023, which resulted in the complete closure of U.S. airspace and most of the airspace here in Canada, and the arguments and questions it raised have not gone away.

Although the FAA later confirmed that the outage was caused by a contractor who unintentionally damaged a data file related to the Notices to Air Missions (NOTAM) system, that explanation is still debated.

The FAA initially urged airlines to ground domestic departures following the system glitch. Credit: Reuters

“The FAA said it was due to one corrupted file – who believes this? Are there no safeguards against one file being corrupted, bringing everything down? Billions of dollars are being spent on cybersecurity, yet this is going on – are there any other files that could be corrupted?” questions Walt Szablowski, Founder and Executive Chairman of Eracent, a company that specializes in providing IT and cybersecurity solutions to large organizations such as the USPS, Visa, U.S. Air Force, British Ministry of Defense — and dozens of Fortune 500 companies.

There has been a string of cybersecurity breaches across some high-profile organizations.

Last year, on January 19, T-Mobile disclosed that a cyberattacker had stolen personal data pertaining to 37 million customers. December 2022 saw a trove of data on over 200 million Twitter users circulated among hackers, and in November 2022 a hacker posted a dataset to BreachForums containing up-to-date personal information of 487 million WhatsApp users from 84 countries.

The Ponemon Institute, in its 2021 Cost of a Data Breach Report, analyzed data from 537 organizations around the world that had suffered a data breach. Note that all of the following figures are in US dollars. It found that healthcare ($9.23 million), financial ($5.72 million), pharmaceutical ($5.04 million), technology ($4.88 million), and energy organizations ($4.65 million) suffered the costliest data breaches.

The average total cost of a data breach was estimated to be $3.86 million in 2020, while it increased to $4.24 million in 2021.

“In the software business, 90% of the money is thrown away on software that doesn’t work as intended or as promised,” argues Szablowski. “Due to the uncontrollable waves of costly network and data breaches, the U.S. Federal Government is mandating the implementation of the Zero Trust Architecture.”

Eracent’s ClearArmor Zero Trust Resource Planning (ZTRP) consolidates and transforms the concept of Zero Trust Architecture into a complete implementation within an organization.
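To illustrate the core zero-trust idea in the abstract (every request is verified on identity, device posture and explicit policy, never trusted simply because it originates “inside” the network), here is a hypothetical sketch in Python; the user, resource and checks are invented for illustration and do not describe Eracent’s ClearArmor product:

```python
# Hypothetical sketch of a zero-trust access decision.
# Every request is evaluated on identity, device posture and explicit policy,
# never on the fact that it originates from "inside" the network.

from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool   # e.g. patched OS, disk encryption verified
    mfa_passed: bool         # strong authentication completed
    resource: str
    action: str

# Least-privilege policy: which user may do what, on which resource.
POLICY = {
    ("alice", "hr-records", "read"): True,
    ("alice", "hr-records", "write"): False,
}

def authorize(req: Request) -> bool:
    """Allow only if identity, device and policy checks all pass."""
    if not req.mfa_passed or not req.device_compliant:
        return False                       # fail closed
    return POLICY.get((req.user, req.resource, req.action), False)

print(authorize(Request("alice", True, True, "hr-records", "read")))   # True
print(authorize(Request("alice", True, True, "hr-records", "write")))  # False
```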


“Relying on the latest technology will not work if organizations do not evolve their thinking. Tools and technology alone are not the answer. Organizations must design a cybersecurity system that fits and supports each organization’s unique requirements,” concludes Szablowski. For the Silo, Karla Jo Helms.

China Innovates Shenzhen Sea World With Robot Whale Shark

SHENZHEN, China (October 2024) — After five years of renovations, Xiaomeisha Sea World has taken the bold step of including forward-thinking robotic alternatives to using live animals to educate and entertain visitors.

“We are thrilled to see Xiaomeisha Sea World taking a step toward more compassionate entertainment with its animatronic whale shark, and we hope this move encourages people to reconsider why they feel entitled to see live marine animals in confinement — especially when it comes to species who are known to suffer extreme psychological and physical harm as a result of captivity — and that this aquarium will continue to lead the way with more exhibits that don’t use live animals.” — Hannah Williams, Cetacean Consultant for In Defense of Animals.

Xiaomeisha Sea World’s decision comes in the context of a broader global movement toward protecting marine life. In recent years, New Zealand made headlines for banning swimming with dolphins to prevent the disturbance of wild populations — a step in recognizing the importance of reducing stress on these sentient beings. In Mexico City, the ban on keeping dolphins and whales in captivity has been a landmark victory, specifically citing the former use of living dolphins in displays that landed the city’s aquarium on In Defense of Animals’ “10 Worst Tanks” list.

Developed by Shenyang Aerospace Xinguang Group under the Third Academy of China Aerospace Science and Industry Corporation Limited, this groundbreaking achievement marks a significant step forward in modern marine technology.

The nearly five-meter-long, 350-kilogram bionic marvel is capable of replicating the movements of a real whale shark with remarkable precision, including swimming, turning, floating, diving, and even movements of its mouth.

At Xiaomeisha Sea World, cutting-edge display technology is front and center.

Wild whale and dolphin populations are in global decline. Fishing has caused a severe decline of Indian Ocean dolphins and Pacific Ocean orcas — who also suffer from ship traffic and marine noise. The marine animal entertainment industry puts further pressure on wild animals since it depends on continual top-ups of captive populations with wild captures of dolphins and small whales, such as Japan’s infamous Taiji Cove drive hunt. Each year, dolphins face traumatic experiences during live captures, either being killed or ripped from their pods and shipped off to a life of confinement.

In light of the inherent cruelty and conservation impacts of traditional aquarium captivity, Xiaomeisha Sea World’s animatronic whale shark represents a promising shift towards humane marine entertainment. We encourage Xiaomeisha to build on this achievement by becoming the world’s first fully animatronic aquarium. By adopting more “species” of advanced marine robots — which include manta rays, dolphins, and orcas — Xiaomeisha could address lingering concerns, such as new reports of fish with white spot disease, crowded tanks, “lots of excrement in the snow wolf garden,” and ongoing harmful beluga whale shows, and firmly put to rest the heartbreaking legacy of Pezoo, a zoochotic polar bear who suffered in extreme confinement for years. Transitioning away from outdated live-animal performances would position Xiaomeisha as a global leader in innovative, ethical marine exhibits.

Exciting developments in next-generation animal entertainment are taking place around the world. Time Magazine named Axiom Holographics’ animal-free Hologram Zoo in Brisbane among the best inventions of 2023.

Edge Innovations in California has created hyper-realistic animatronic animals, including dolphins that can swim, respond to questions, and engage closely with audiences — without any of the ethical concerns associated with real captive animals. These lifelike creations offer enhanced levels of interaction and can thrive in confined environments like theme parks, aquariums, and shopping malls, preventing real animals from suffering and premature death.

“A tidal wave of excitement is building for the future of animal-free entertainment, driven by cutting-edge technologies like animatronics, holograms, and virtual reality. Aquariums and zoos have a unique opportunity to captivate audiences with these immersive experiences — without capturing live animals. Modern technology can bring the wonders of animal life to people in ways that were never possible before. We urge Xiaomeisha Sea World to fully embrace animatronics and seize this chance to proudly and openly lead the way to a sustainable, cruelty-free model that respects marine animal lives.” — Fleur Dawes, Communications Director for In Defense of Animals.

For the Silo, Hannah Williams/IDA.

In Defense of Animals is an international animal protection organization with over 250,000 supporters and a 40-year history of defending animals, people, and the environment through education, campaigns, and hands-on rescue facilities in California, India, South Korea, and rural Mississippi. For more information, visit https://www.idausa.org/campaign/cetacean-advocacy

Over Half Canadians Opposed To Fed’s Unaffordable 2035 Ban On Gas Powered Cars

Over Half of Canadians Oppose Fed’s Plan to Ban Sale of Conventional Vehicles by 2035: Poll
An electric vehicle is seen being charged in Ottawa on July 13, 2022. The Canadian Press/Sean Kilpatrick

More than half of Canadians do not support the federal government’s mandate to require all new cars sold in Canada to be electric by 2035, a recent Ipsos poll finds.

Canadians across the country are “a lot more hesitant to ban conventional cars than their elected representatives in Ottawa are,” said Krystle Wittevrongel, research director at the Montreal Economic Institute (MEI), in a news release on Oct. 3.

“They have legitimate concerns, most notably with the cost of those cars, and federal and provincial politicians should take note.”

The online poll, conducted by Ipsos on behalf of the MEI, surveyed 1,190 Canadians aged 18 and over between Sept. 18 and 22. Among the participants overall, 55 percent said they disagree with Ottawa’s decision to ban the sale of conventional vehicles by 2035 and mandate all new cars be electric or zero-emissions.

“In every region surveyed, a larger number of respondents were against the ban than in favour of it,” MEI said in the news release. According to the poll, the proportion of those against the ban was noticeably higher in Western Canada, at 63 percent, followed by the Atlantic provinces at 58 percent. In Ontario, 51 percent were against, and in Quebec, 48 percent were against.

In all, only 40 percent nationwide agreed with the federal mandate.

‘Lukewarm Attitude’

Just 1 in 10 Canadians own an electric vehicle (EV), the poll said. Among those who don’t, less than one-quarter (24 percent) said their next car would be electric.

Fewer Canadians Willing to Buy Electric Vehicles: Federal Research

ANALYSIS: ‘Bumpy Road’ Ahead as Canada Moves Toward 2035 EV Goals

A research report released by Natural Resources Canada (NRCan) in March this year suggests a trend similar to that of the Ipsos poll’s findings. The report indicated that only 36 percent of Canadians had considered buying an EV in 2024—down from 51 percent in 2022.

“Survey results reveal that Canadians hold mixed views on ZEVs [Zero-Emission Vehicles] and continue to have a general lack of knowledge about these vehicles,” said the report by EKOS Research Associates, which was commissioned by NRCan to conduct the online survey of 3,459 Canadians from Jan. 17 to Feb. 7.

The MEI cited a number of key reasons for “this lukewarm attitude” in adopting EVs, including high cost (70 percent), lack of charging infrastructure (66 percent), and reduced performance in Canada’s cold climate (64 percent).

Canada’s shift from gas-powered vehicles to EVs is guided by federal and provincial policies aimed at zero-emission transportation. The federal mandate requires all new light-duty vehicles, which include passenger cars, SUVs, and light trucks, sold by 2035 to be zero-emission—with interim targets of 20 percent by 2026 and 60 percent by 2030.

Some provincial policies, such as those in Quebec, are even stricter, including a planned ban on all gas-powered vehicles and used gas engines by 2035.

‘Unrealistic’

The MEI survey indicated that two-thirds of respondents (66 percent) said the mandate’s timeline is “unrealistic,” with only 26 percent saying Ottawa’s plan is realistic.

In addition, 76 percent of Canadians say the federal government’s environmental impact assessment process used for energy projects takes too long, with only 9 percent taking the opposite view, according to the survey.

A study by the Fraser Institute in March said that achieving Ottawa’s EV goal could increase Canada’s demand for electricity by 15.3 percent and require the equivalent of 10 new mega hydro dams or 13 large natural gas plants to be built within the next 11 years.

“For context, once Canada’s vehicle fleet is fully electric, it will require 10 new mega hydro dams (capable of producing 1,100 megawatts) nationwide, which is the size of British Columbia’s new Site C dam. It took approximately 10 years to plan and pass environmental regulations, and an additional decade to build. To date, Site C is expected to cost $16 billion,” said the think tank in a March 14 news release.

On April 25, Prime Minister Justin Trudeau announced that Canada since 2020 has attracted more than $46 billion cad in investments for projects to manufacture EVs and EV batteries and battery components. A Parliamentary Budget Officer report published July 18 said Ottawa and the provinces have jointly promised $52.5 billion cad in government support from Oct. 8, 2020, to April 25, 2024, which included tax credits, production subsidies, and capital investment for construction and other support.

On July 26, a company slated to build a major rechargeable battery manufacturing plant in Ontario announced that it would halt the project due to declining demand for EVs.

In a news release at the time, Umicore Rechargeable Battery Materials Canada Inc. said it was taking “immediate action” to address a “recent significant slowdown in short- and medium-term EV growth projections affecting its activities.”

For The Silo, Isaac Teo with contribution from the Canadian Press.

Isaac Teo

Porsche Rarities Coming To Auction

Broad Arrow Auctions has released the complete digital catalog for its upcoming inaugural Chattanooga Auction, set for 12 October 2024 at the Chattanooga Convention Center in Tennessee, and we have it here for you to drool over (see below).

Among the 90+ collector cars on offer at the single-day sale are no fewer than 15 variations of the 911 model, including such rarities as the 1984 Porsche 911 SC RS Gruppe B “Evolutionsserie”, the veritable “missing link” in any Carrera RS collection.

Friday, October 11: 9:00 am – 5:00 pm ET
Saturday, October 12: 9:00 am – 1:00 pm ET
Auction: Saturday, October 12, 1:00 pm ET

Drool Time

1984 Porsche 911 SC RS Gruppe B “Evolutionsserie” – Lot 180
Estimate: $2,600,000 – $3,500,000 USD / $3,528,000 – $4,750,000 CAD

Looking for something less German? View all lots- click here.

Featured image:

2019 Porsche 911 Speedster Heritage Design Package – Lot 140
Estimate: $375,000 – $425,000 USD / $509,000 – $577,000 CAD

USB Juice Jacking Is New Way Hackers Attack Travelers

How to avoid being hacked during this Fall’s travel season. 

According to a recent study by cybersecurity firm NordVPN, one in four travelers has been hacked when using public Wi-Fi while traveling abroad. However, unsecured Wi-Fi is not the only factor travelers should be worried about. 

Last year, the FBI published a tweet warning users against smartphone charging stations in public places (airports, hotels, and shopping malls). Hackers may have modified the charging cables with the aim of installing malware on phones to perform an attack called juice jacking.

“Digital information, although it exists virtually, can also be stolen using physical devices. So it is important to take a 360-degree approach and secure your device from both online and offline threats,” says Adrianus Warmenhoven, a cybersecurity advisor.

What is juice jacking?

Juice jacking is a cyberattack where a public USB charging port is used to steal data or install malware on a device. Juice jacking attacks allow hackers to steal users’ passwords, credit card information, addresses, names, and other data. Attackers can also install malware to track keystrokes, show ads, or add devices to a botnet.


Is juice jacking detectable?

Juice jacking attacks can be difficult to detect. If your device has already been compromised, you may notice some suspicious activity – but that won’t always be the case.

For example, you may notice something you don’t recognize on your phone — like purchases you didn’t make or calls that look suspicious.

Your phone may also start working unusually slowly or feel hotter than usual. Chances are you may have picked up malware. For a full list of signs to watch out for, read on and find out how to know if your phone is hacked.

How to protect yourself

Since no sign of juice jacking is 100% reliable, it is best to avoid falling victim to this attack by following the advice below:

  • Get a power bank. Power banks are a safe and convenient way to charge your device on the go. Getting a portable power bank means that you’ll never have to use public charging stations where juice jacking attacks occur. Always ensure your power bank is fully charged so you can use it on the go.
     
  • Use a USB data blocker. A USB data blocker is a device that protects your phone from juice jacking when you’re using a public charging station. It plugs into the charging port on your phone and acts as a shield between the public charging station’s cord and your device.
     
  • Use a power socket instead. Juice jacking attacks only happen when you’re connected to a USB charger. If you absolutely need to charge your phone in public, avoid the risk of infected cables and USB ports and use a power outlet. This is typically a safe way to charge your mobile device and other devices in public.

For the Silo, Darija Grobova.

8 Cars That Deserved Better Engines

What vehicle never got the engine it deserved? That’s the question posed to our friends at Hagerty Auto Insurance. Their love of cars goes back decades, or centuries, and they’ve all been wondering how much better certain cars would be if they had a different engine …

… Or a better engine, something that truly spoke to the rest of the car. Let’s see what alternate car realities they would have created.

A Standard V-8 for Every Cadillac

engine cadillac VVT
Lies! All lies! Cadillac

For me, it’s the fact that all Cadillac cars (cars—Escalade excluded) from the last 20 or so years lack a standard V-8 engine. GM has an excellent LS motor, and a baby Caddy with a modest 4.8-liter small-block would give buyers more reason to avoid a thirsty BMW for a slightly more thirsty Caddy.

As the Caddy becomes larger, the V-8 engine follows suit (5.3-liter CTS, 6.2-liter CT-6, etc.) with increased displacement, and forced induction for the V-series examples. The inherent torque and simplicity of a pushrod V-8 complement the minimalist architecture of GM’s new EV powertrains, and exclusively pairing those two in a luxury car brand would make Cadillac more appealing than any of its competition. — Sajeev Mehta

As under-the-radar-good (and as mod-friendly) as the ATS-V’s LF4 V-6 is, I agree. After having spent over ten thousand miles with the smaller of the Alpha-chassis Caddys, the ATS should have gotten the 455-horse LT1 from the Camaro, and the ATS-V should have gotten the LT4. — Eddy Eckart

V-8 Bronco Raptor/ Ford GT

2024 Ford Bronco Raptor climb front three quarter
Ford

Ford Bronco Raptor. Lack of a V-8 is … yeaaaaah. For the record, I am fully aware that you can’t easily fit that V-8 into Ford’s T-6 frame. Actually, here’s the same opinion again: This also applies to the most recent Ford GT. — Matt Tuccillo

For sure, the Ford GT shoulda had a V-8. — Larry Webster

I think I’ll also jump on the Ford GT bandwagon, as I don’t care for the reasoning of why it got the EcoBoost V-6. That car deserved a V-8 based on heritage alone. – Greg Ingold

That buttress really flies Sajeev Mehta

Yes, please! Kill the flying buttress, make room for a 900+ horsepower Coyote with a twin-screw supercharger. — Sajeev Mehta

V-8 Prowler

1997 Plymouth prowler rear three-quarter
FCA

The Plymouth Prowler comes to mind. Chrysler Corporation came up with a car that was a modern nod to the classic hot rod but forgot the one factor that people want from a hot rod: A V-8 engine. You have to actively try to miss that detail. I don’t think anyone would’ve minded seeing a 318 Magnum out of a Ram pickup in the Prowler, as long as it came with eight cylinders. — Greg Ingold

Honda Motors in a Modern Lotus

Lotus Evora GT40 front three quarter
Lotus

Any modern-day Lotus fits in this category. They make do with Toyota engines but the chassis deserves the character of a Honda motor. — Larry Webster

Having a Lotus with a K-Series would be excellent! Totally agree with that take. — Greg Ingold

A Straight-Six SLK

Mercedes-Benz

Let’s not overlook the original Mercedes SLK. This folding-roof roadster needed Mercedes’ juicy and punchy 2.8-liter straight six. That supercharged four-cylinder engine was disappointing, and the manual gearbox was even worse. — Larry Webster

SHO-inental, If Only

1989 continental signature series engine
Sajeev Mehta

I only thought of this car/engine combo since I yanked my 1989 Continental Signature Series out of storage. Turns out it needed new rubber, and tires from a 1989 Ford Taurus SHO are a smidge wider on the same-sized wheel. Getting a set of those and slapping a set of 1/4-inch spacers on the rear gave it a stance that I can’t stop looking at. And now, curiously, it’s getting a lot more compliments. Even the manager of a local burger joint stopped me from giving my order so he could compliment me on it.

He thought it was a Town Car, but that’s not the point. These moments get this Lincoln-restomodding fool thinking about one thing: Ford needed an automatic transmission ready for the Taurus SHO sooner, and should have slapped it all into the 1989 Continental. Such a tragedy! — Sajeev Mehta

Citroën DS

citroen ds engine
Le nuancier DS

The Citroën DS was so unconventional and interesting that it’s easy to forget there was only ever an old-fashioned, underwhelming OHV four under the hood. The later SM got a Maserati V-6, but the DS was never so lucky. — Andrew Newton

The Sky Shoulda Been the Limit

2007 Saturn Sky Red Line front three-quarter
GM

GM flogged its Ecotec four-banger, and I know they made crazy power for drag racing. But I thought the Pontiac Solstice and Saturn Sky deserved a more refined motor. — Larry Webster

They needed an LS, maybe just a small-displacement 4.8-liter, to keep Chevrolet appeased with their Corvette’s dominance. But I am sure that was discussed in some conference room at GM, and it was quickly shot down. — Sajeev Mehta

Featured image: Ford GT with EcoBoost 6-cylinder engine.

World Economic Forum EDISON Alliance Speeding Global Digital Inclusion

World Economic Forum’s EDISON Alliance Impacts Over 1 Billion Lives, Accelerating Global Digital Inclusion.

  • The EDISON Alliance has connected over 1 billion people globally to essential digital services like healthcare, education and finance through a network of 200+ partners in over 100 countries.
  • Investments in bridging the universal digital divide could bring $8.7 trillion usd/ $11.7 trillion cad in benefits to developing countries, home to more than 70% of the Alliance’s beneficiaries.
  • The Alliance’s 300+ partner initiatives, including digital dispensaries in India, economy digitalization programmes in Rwanda and blended learning in Bangladesh, continue to shape a digitally equitable society.
  • Follow the Sustainable Development Impact Meetings 2024 here and on social media using #SDIM24.

New York, USA, September 2024 – The EDISON Alliance, a World Economic Forum initiative, has successfully connected over 1 billion people globally – ahead of its initial 2025 target – to essential digital services in healthcare, education and finance in over 100 countries. Since its launch in 2021, the Alliance has united a diverse network of 200+ partners from the public and private sectors, academia and civil society to create innovative solutions for digital inclusion.


Despite living in a digitally connected world, 2.6 billion people are currently not connected to the internet.

This digital exclusion impacts access to healthcare, financial services and education, contributing to significant economic costs for both the individuals involved and their countries’ economies.

Klaus Schwab, German mechanical engineer, economist and founder of the World Economic Forum.


“Ensuring universal access to the digital world is not merely about connectivity, but a fundamental pillar of equality and opportunity,” said Klaus Schwab, Founder and Chairman of the World Economic Forum. “Let us reaffirm our commitment to ensuring that every individual, regardless of their geographic or socioeconomic status, has access to meaningful connectivity.”

The Alliance has made substantial progress in South Asia and Africa.

In Madhya Pradesh, India, the EDISON Alliance fostered the Digital Dispensaries initiative, a collaboration between the Apollo Hospitals Group and a US telecom infrastructure provider. This partnership has successfully delivered quality, affordable healthcare, improving patient engagement, addressing gender health disparities and optimizing patient convenience, making it a scalable model for delivering patient-centric healthcare through digital solutions. Other partner projects improved digital access through economy digitalization programmes in Rwanda, provided solutions for bridging the education gap in Bangladesh with blended learning techniques, and explored solutions to reduce financial exclusion in Pakistan.



“Everybody, no matter where they were born or where they live, should have access to the digital services that are essential for life in the 21st century,” said Hans Vestberg, Chair of the EDISON Alliance, Chairman and CEO of Verizon. “Making sure that everybody can get online is too big a challenge for any one company or government, so the EDISON Alliance brings people together to find practical, community-based solutions that can scale globally.”

By driving digital inclusion through its 300+ partner initiatives, the Alliance contributes to unlocking the immense potential of the digital economy. Achieving universal internet access by 2030 could require $446 billion usd/ $600 billion cad, but would yield $8.7 trillion usd/ $11.7 trillion cad in benefits for developing countries. This highlights the significant potential of digital inclusion to drive economic growth and improve lives. The EDISON Alliance has made substantial contributions to this goal, with over 70% of its impact concentrated in developing nations.

The milestone of connecting 1 billion lives was initially targeted for 2025.

Achieving this ahead of schedule demonstrates the effectiveness of its partners, through collaboration and targeted projects, in bridging the digital divide and providing access to critical services to underserved communities.

Beyond digital access, the rapidly evolving technological landscape – marked by such advancements as artificial intelligence, presents opportunities and challenges. The EDISON Alliance remains committed to ensuring that marginalized communities can fully benefit from these developments and avoid being left behind. As technology continues to advance, the Alliance will focus on expanding digital access, fostering innovation and addressing the digital gender gap to create a more inclusive digital future.

About the Sustainable Development Impact Meetings 2024


The Sustainable Development Impact Meetings 2024 are being held this week in New York. Over 1,000 global leaders from diverse sectors and geographies will come together to assess and renew global action around the United Nations Sustainable Development Goals (SDGs) through a series of impact-oriented multistakeholder dialogues. The meetings are an integral part of the Forum’s year-round work on sustainable development and its progress.

DyslexicU: World’s First ‘University Of Dyslexic Thinking’ & It’s Free

I’m so excited to join forces with charity Made By Dyslexia today to launch the free online University of Dyslexic Thinking, hosted by Open University and available to access from all around the world.

We decided to launch the university to teach the skills most relevant to today’s world – Dyslexic Thinking skills.

The courses are for anyone, at any stage of life; you might be a dyslexic looking to learn more about your Dyslexic Thinking skills and apply them to different industries, or someone who isn’t dyslexic but is curious to understand how this kind of thinking works in action, and why these skills are more valuable than ever before.

This morning, Made By Dyslexia revealed its new Intelligence 5.0 report, which includes research from Randstad Enterprise that shows the skills inherent to dyslexics are the most sought-after in every job, in every sector, globally.

The report clearly demonstrates that today’s AI-driven world needs a new kind of intelligence focused on human skills such as complex problem solving, adaptability, resilience, communication and creative thinking.

These are skills dyslexics naturally possess but aren’t measured by traditional education and workplace tests, which instead focus on dyslexic challenges. Based on this, it concludes the outdated systems that are designed to teach and measure intelligence need a rethink – it’s time for a new school of thought.

And this is where DyslexicU comes in! We’re shaking things up and teaching the skills the world needs. We need more innovators, problem-solvers, storytellers and unconventional thinking. The online course features many of the world’s greatest dyslexics talking about how Dyslexic Thinking skills like this have fuelled innovation and success, and the lessons we can gain from their experiences. They’re the kind of lessons I wish I was taught in the classroom.

I’m delighted to be joined by some of the incredible (dyslexic) course contributors today to launch DyslexicU at Virgin Hotels New York City, including HRH Princess Beatrice, Dame Maggie Aderin-Pocock, and Jean Oelwang.

HRH Princess Beatrice

Courses in ‘Entrepreneurs & Start-Up Mentality’ (made in partnership with Virgin StartUp) and another on ‘Changemakers & Activism’ (made in partnership with Virgin Unite) are available on DyslexicU, hosted on Open University today, with lots more to come later this year (or next term, should I say?!) They cover subjects such as storytelling, sport, fashion, culinary arts, and music.

While ‘U’ might technically stand for ‘University’, I quite like the irony that it resembles the ‘U’ that myself and many dyslexics sometimes see scribbled on our report cards, because traditional education systems are not made for minds like ours. If you’re a dyslexic, I know how disheartening that can be. I hope the launch of DyslexicU today can be a reminder to you that thinking in a different way to everyone else is indispensable in this new world of work. It’s your superpower.

Enroll today and join this new school of thought. Sir Richard Branson.

Self Driving Cars Now Reliable Via “Liquid AI”

Driving Change: Autonomous Vehicle Trust, Reliability Restored with Autobrains ‘Liquid AI’ Innovation

As the automotive industry evolves at a rapid-fire pace, trust in autonomous driving vehicles remains a critical challenge amid pervasive reliability concerns. Addressing this substantial industry pain point is automotive AI technology disruptor Autobrains Technologies. Its game-changing “Liquid AI” innovation—combining AI-assisted driving with its Autonomous Driving capabilities—directly addresses such marketplace reliability concerns, setting new standards for autonomous driving in the process.



“The safety debate surrounding AVs is more relevant than ever,” notes Autobrains Founder and CEO Igal Raichelgauz. “While AVs promise to reduce traffic fatalities by eliminating human error such as distracted driving, there are still significant reliability concerns for both manufacturers and drivers. The ongoing dialogue around AVs is critical, and we’re not only at the forefront of these discussions, but also advancing AI that prioritizes driverless car safety. We believe our Liquid AI technology offers a paradigm shift by mimicking human cognitive processes, thereby improving the system’s adaptability and decision-making in real-time. The automotive industry stands at a crossroad. We are proud to lead this charge, setting new standards for what AI in driving can achieve.”

Driving Change

Autobrains’ Liquid AI technology combines AI-assisted driving with its Autonomous Driving capabilities to enhance situational awareness and decision-making, providing a safer and more reliable driving experience. As AI continues to be integrated into vehicles, the question of generating trust becomes paramount, and these advancements are crucial in building trust and adoption among drivers and manufacturers alike.

“The reliability of Autonomous Driving has been a significant concern for both manufacturers and drivers,” said Raichelgauz. “We believe that our Liquid AI technology offers a paradigm shift by mimicking human cognitive processes, thereby improving the system’s adaptability and decision-making in real-time. Traditional AI, with its narrow focus, often falls short when faced with the unpredictable nature of real-world driving. Liquid AI, however, marks a significant departure from this approach. By incorporating principles of human cognition, it learns and adapts in real-time, ensuring that our driving systems are predictable and optimized for any real-world driving scenario.”

There are several key factors that differentiate Liquid AI from traditional AI systems. These include:

  • Robust Edge Case Handling: Effectively addresses the long tail of edge cases that traditional AI systems struggle with.
  • Human-Like Cognitive Processing: Mimics human decision-making, allowing for better handling of unpredictable real-world conditions.
  • Efficient Resource Utilization: Lower computational power requirements make it scalable across various vehicle models without compromising performance.
  • Real-Time Learning: Liquid AI adapts in real-time to new driving scenarios, ensuring higher accuracy and fewer false positives.


With a background in AI innovation spanning multiple disciplines, Raichelgauz is a distinguished technology executive who has co-founded several successful businesses, including Cortica—a company renowned for its self-learning technology in visual perception.  Under his leadership, the Autobrains Liquid AI technology is now driving consequential change in the automotive industry by resolving autonomous vehicle reliability.

“The automotive industry stands at a crossroad,” Raichelgauz continued. “As we continue to integrate AI into our vehicles, the question of generating trust becomes paramount. Traditional AI, with its narrow focus, often falls short when faced with the unpredictable nature of real-world driving. Liquid AI, however, marks a significant departure from this approach. By incorporating principles of human cognition, it learns and adapts in real-time, ensuring that our driving systems are predictable and optimized for any real-world driving scenario. At Autobrains, we are proud to lead this charge, setting new standards for what AI in driving can achieve.” For the Silo, Merilee Kern.

Rethinking Canada Tariffs On China EVs

Via friends at C.D. Howe Institute. A version of this memo first appeared in the Financial Post.

To: Canadian trade watchers 
From: Ari Van Assche 
Date:  August, 2024
Re: Canada’s Electric Vehicle De-Risking Trilemma 

With the recent wrap-up of Ottawa’s month-long public consultation on levying tariffs on electrical vehicles (EVs) made in China, let’s paraphrase a story Nobel Prize-winner Paul Krugman once used to explain the often under-appreciated benefits of free trade:

Consider a Canadian entrepreneur who starts a new business that uses secret technology to transform Canadian lumber and canola into affordable EVs. She is lauded as a champion of industry for her innovative spirit and commitment to Net Zero. But a suspicious reporter discovers that what she is really doing is exporting Canadian-made lumber and canola and using the proceeds to purchase Chinese-made EVs. Sentiment turns sharply against her. On social media, she is widely denounced as a fraud who is destroying Canadian jobs and threatening national security. Parliament passes a unanimous resolution condemning her.

Going the other direction: China is Canada’s third largest destination for agricultural products.

This story underscores a critical dilemma that should have been central in the public consultations.

Those opposing tariffs argue that trade is a potent yet undervalued tool in our fight against climate change: It provides Canada access to low-emissions technologies at increasingly affordable prices, which is essential for transitioning society away from carbon-intensive energy sources. In contrast, those in favour are concerned about supply security, fearing excessive reliance on our biggest geopolitical rival for low-emissions technologies. They warn against swapping the West’s age-old energy insecurity in oil for insecurity in the supply of critical minerals and EV batteries.

The $70,000 cad Polestar 2 EV produced by Volvo. In 2010, Geely Holding Group, a Chinese automotive group, bought Volvo.

Copilot AI

“As of now, the Chinese electric vehicle (EV) market is making strides globally, but in Canada, the landscape is still evolving: Tesla Model Y and Polestar 2: While not exclusively Chinese, the Tesla Model Y (which is produced in China) and the Polestar 2 (a subsidiary of Volvo, which has Chinese ownership) are currently the most prominent Chinese-made EVs available in Canada. These models have gained attention due to their performance, range, and brand reputation1.”

I examined some of the national security issues that have surfaced in the discussion surrounding supply chains for low-emissions energy technologies like EV batteries in my recent C.D. Howe Institute report.

After examining the various de-risking policies governments have implemented, including their downsides and unintended consequences, I conclude Ottawa probably should develop de-risking policies.

But it needs to apply them judiciously, prudently and rarely. And it needs to justify them with credible, detailed evidence regarding concerns about supply security and whether domestic industry really would be able to compete if market conditions were fairer. This will be important in upholding Canada’s reputation as a leading proponent of the rules-based multilateral system.

China’s role in the supply chains of low-emissions energy technologies does raise real security concerns. China has established near monopolies in several critical minerals and other components of EV batteries, solar panels and wind turbines. No ready alternatives are produced in other countries. For example, 79 percent of global production capacity of polysilicon, which is key for solar cell production, is in China. The next biggest producers, Germany and the United States, have difficulty competing with China’s high-quality, ultra-cheap polysilicon.

China’s monopolies create chokepoints that could enable its government to manipulate production to pursue its own geopolitical ambitions.

Precedents exist: China blocked rare-earth exports to Japan in 2010 and banned exports of rare-earth processing technology in 2023.

Several countries have started adopting de-risking policies to reduce their reliance on these Chinese chokepoints, usually either onshoring or friendshoring. Canada’s recent Critical Minerals Strategy is typical. It was designed in part to reduce this country’s dependence on foreign-mined and processed critical raw materials by, among other things, allocating $1.5 billion to support Canadian critical minerals projects related to advanced manufacturing, processing and recycling.

But these de-risking policies come at a cost.

Ottawa needs to carefully navigate a “policy trilemma” as it strives to formulate a policy agenda that simultaneously targets three goals: Advancing security, promoting low-emissions energy adoption, and capturing the benefits of trade for consumers and businesses.

Proposed steep tariffs on Chinese EV imports provide a good example of the trilemma.

They may well safeguard security by protecting a domestic production base. But they could discourage the uptake of EVs, which are already experiencing a slowdown in sales. Moreover, such unilateral action against China could escalate geopolitical tensions, thereby generating new risks, including Chinese retaliation. The path to effective de-risking is clearly fraught with trade-offs and requires careful navigation.

There is scant evidence that China is on its way to becoming a near-monopoly in global EV production itself, but it may seek to benefit from its near-monopoly in key inputs. The ultimate question that the government should answer is, therefore, whether the security concerns regarding these chokepoints, and more generally China’s willingness to compete fairly under these conditions, justify the costs and risks of higher tariffs. The burden on Ottawa is to provide concrete evidence to that effect before imposing an inherently costly tariff on Canadians.

Ari Van Assche is a professor of international business at HEC Montréal and Fellow-in-Residence at the C.D. Howe Institute.

Porsche Commit Long Term To Gasoline Engines

Change of Plans

There was a time, not terribly long ago, when it seemed like the automotive industry was on the fast track to total electrification.

Ahead of Their Time

Many of us think of hybrid or all-electric power as a relatively new technology. After all, Porsche just introduced its very first production EV, the Taycan. But in reality, electricity has been around in the automotive world for over a century. And Ferdinand Porsche was one of the very first pioneers to embrace this technology. When Porsche was a teenager back in 1893, he installed an electric lighting system in his parents’ house. Even the very first vehicles he designed had electric drives. After toying around with a few different ideas, Porsche designed the world’s first functional hybrid car, the Semper Vivus (Latin for “always alive”), in 1900. But due to its modest power output, heavy weight, and lack of infrastructure, the idea was relegated to the back burner for many years.

Amid concerns over global warming, governments around the globe began floating regulations that sought to ban ICE vehicles outright – but in recent months, with demand falling behind expected levels of growth, a lot has changed, and now, those same plans are being scaled back.

Up To and Beyond

While Porsche recently revealed that it continues to develop the all-electric version of its Cayenne crossover, it also plans to continue to offer hybrid and combustion engine-powered examples of that same model – “up to and beyond 2030,” in fact.

Keeping the V8

Interestingly, Porsche also noted that the current, third generation of the Cayenne will be upgraded and will continue to be offered alongside the fourth, all-electric generation model. Engineers will focus on the Cayenne’s ICE powertrains, however, including its twin-turbocharged V8, which they will need to tweak to ensure that it meets increasingly stringent emissions standards.

Still Focused

This is obviously great news for fans of ICE powertrains and the V8 in general, but also note that Porsche remains focused on an electrified future, regardless. “Our product strategy could enable us to deliver more than 80 percent of our new cars fully electrified in 2030 – depending on the demand of our customers and the development of electromobility in the regions of the world.” — Oliver Blume, CEO, Porsche AG.

As such, Porsche plans to continue making gas engines for some time, it seems. 

OPED: Made by Human: The Threat of Artificial Intelligence on Human Labor

This Article is 95.6% Made by Human / 4.4% by Artificial Intelligence

One of the most concerning uncertainties surrounding the emergence of artificial intelligence is the impact on human jobs.

100% Satisfaction Guarantee

Let us start with a specific example – the customer support specialist. This is a human-facing role. The primary objective of a Customer Support Specialist is to ensure customer satisfaction.

The Gradual Extinction of Customer Support Roles

Within the past decade or so, several milestone transformations have influenced the decline of customer support specialists. Automated responses for customer support telephone lines. Globalization. And chat-bots. 

Chat-bots evolved with the human input of information to service clients. SaaS-based products soon engineered fancy pop-ups for everyone. Just look at Uber if you want a solid case-study – getting through to a person is like trying to contact the King of Thailand. 

The introduction of new artificial intelligence for customer support solutions will make chat-bots look like an AM/FM frequency radio at the antique market. 

The Raging Battle: A Salute to Those on the Front Lines

There are a handful of professions waging a battle against the ominous presence of artificial intelligence. This is a new frontier – not only for technology, but for legal precedent and our appetite for consumption. 

OpenAI is serving our appetite in two fundamental ways: text-based content (i.e. ChatGPT) and visual-based content (i.e. DALL·E). How we consume this content boils down to our own taste-buds, perceptions and individual needs. It is all very human-driven, and it is our degrees of palpable fulfillment that will ultimately dictate how far this penetrates the fate of other professions. 

Sarah Silverman, writer, comedian and actress sued the ChatGPT developer OpenAI and Mark Zuckerberg’s Meta for copyright infringement. 

We need a way to leave a human mark. Literally, a Made by Human insignia that traces origins of our labor, like certifying products as “organic”.

If we’re building the weapon that threatens our very livelihood, we can engineer the solution that safeguards it. 

The Ouroboros Effect

If we seek retribution for labor and the preservation of human work, we need to remain ahead of innovation. There are several action-items that may safeguard human interests:

  • Consolidation of Interest. Concentration of efforts within formal structures or establish new ones tailored to this subject;
  • Litigation. Swift legal action based on existing laws to remedy breaches and establish legal precedents for future litigation;
  • Technological Innovation. Cutting-edge technology that: (a) engineers firewalls for preventing AI scraping technologies; (b) analyzes human work products; and (c) permits tracking of intellectual property.
  • Regulatory Oversight. Formation of a robust framework for monitoring, enforcing and balancing critical issues arising from artificial intelligence. United Nations, but without the thick, glacial layers of bureaucracy.  

These front-line professionals are just the first wave – yet if this front falls, it will be a fatal blow to intellectual property rights. We will have denied ourselves the ideological shields and weapons needed to preserve and protect the origins of human creativity.

At present, the influence of artificial intelligence on labor markets is in our own hands. If you think this is circular reasoning, like some ouroboros, you would be correct. The very nature of artificial intelligence relies on humans.

Ouroboros expresses the unity of all things, material and spiritual, which never disappear but perpetually change form in an eternal cycle of destruction and re-creation.

Equitable Remuneration 

Human productivity will continue to blend with artificial intelligence. We need to account for what is of human origin versus what has been interwoven with artificial intelligence. Like royalties for streaming music, with the notes of your original melody plucked-out. Even if it’s mashed-up, Mixed by Berry and sold overseas. 

These are complex quantum-powered algorithms. The technology exists. It is along the same lines of code that is empowering artificial intelligence. Consider a brief example: 

A 16-year-old boy named Olu decides to write a book about growing up in a war-torn nation.

 Congratulations on your work, Olu! 

47.893% Human /  52.107% Artificial

Meanwhile, back in London, a 57-year-old historian named Elizabeth receives an email:

 Congratulations Elizabeth, your work has been recycled! 

34.546% of your writing on the civil war torn nation has been used in an upcoming book publication. Click here to learn more.

We need a framework that preserves and protects sweat-of-the-brow labor. 
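As a toy illustration of what that kind of accounting could look like, here is a short sketch; the names, percentages and reporting function are hypothetical, borrowed from the examples above, and no existing system is implied:

```python
# Toy illustration of a "Made by Human" provenance ledger.
# All names and figures are hypothetical, taken from the examples above.

def attribution_report(work_title, human_share, sources):
    """Report the human/AI split of a work and notify reused authors."""
    ai_share = 100.0 - human_share
    print(f"{work_title}: {human_share:.3f}% Human / {ai_share:.3f}% Artificial")
    for author, reused_pct in sources.items():
        # In a full framework this could trigger an equitable royalty payment.
        print(f"  Notice to {author}: {reused_pct:.3f}% of your writing was reused.")

attribution_report(
    "Olu's memoir of a war-torn nation",
    human_share=47.893,
    sources={"Elizabeth (historian, London)": 34.546},
)
```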

As those on the front-line know: Progress begets progress while flying under the banner of innovation. If we’re going to spill blood to save our income streams – from content writers and hand models to lawyers and software engineers – the fruit of our labor cannot be genetically modified without equitable remuneration. 

The Best Theater Sound System In Finland

— Kino Piispanristi integrates Genelec’s famous “The Ones” loudspeaker range along with the latest Dolby Surround technology to deliver premium audio quality — the best to be found (and heard) in Finland.

NATICK, MA, August 2024 — Kino Piispanristi is a full-service 286-seat independent movie theater close to Turku, Finland. The venue is a long-time passion project of Henry Erkkilä, a movie lover who wanted to create a modern cinema that transcends tradition when it comes to audio-visual technology. The Kino Piispanristi cinema strives to continually deliver a superior experience, so its luxurious new premium screen features a Genelec sound system comprising the brand’s unmatched smart active studio loudspeakers and subwoofers.

Genelec “The Ones”

Erkkilä discovered his love for the film industry as a young boy. His father had a film projector that he travelled around Sweden with, bringing the latest screen favorites to audiences in his home country. Prior to the screening, Erkkilä would be tasked with dropping off advertisements in the local area, showcasing the movie on offer that evening and encouraging people to attend.

Inspired by his father, he set up his very own touring movie theater concept in 1998, but it wasn’t until 2017 that Erkkilä finally opened his first permanent space. Kino Piispanristi began with two theaters, but now the cinema boasts five screens, as well as additional venues in Turku, Salo and Laitila.

“We strive to offer all the perks of a modern cinema without being a faceless corporation,” begins Erkkilä.

A look at some of the Genelec’s installed in Kino Premium.

“We react to trends quickly and make moves boldly so that our customers can walk in and out feeling happy. Having the greatest theater sound system in Finland is an excellent way to help us light up people’s faces!”

Kino Piispanristi’s newest screen is a premium, more intimate space with exceptional picture quality and a 7.1 audio system based around Genelec’s “The Ones” family of coaxial three-way studio loudspeakers – which deliver extended frequency response, controlled directivity and fatigue-free listening. Three 8361s – the flagship of The Ones range – are deployed for LCR, with six of the more compact 8341s in the surround positions, complemented by two 7380 subwoofers for clean, controlled LF performance.

“For our premium space theater, sound is everything.”

“Theater technology, be it projectors, screens, audio or seats, is constantly evolving and unless you’re among the frontrunners, you might get left behind,” Erkkilä explains. “Genelec is widely known and admired as a wonderful example of Finnish engineering and design. As a local business, we try to emphasize the importance of using locally sourced products, and Genelec’s quality is unmatched. This was a pilot project for us and we’re looking into expanding our other spaces – since it’s been such a hit. We charge a few Euros extra for the premium screen, but the movie experience is so good that our customers still see it as excellent value.”

GLM Space calibration software at work.

Usually found powering the world’s most notable music, broadcast and film studios, Genelec’s studio loudspeakers are now being specified for an increasing number of high-end residential and boutique commercial cinemas around the world – thereby allowing customers to experience the same kind of sonic detail and clarity as the movie creators themselves.

The Ones models provide optimized performance by intelligently adapting to the acoustics of the room, achieved by a combination of GLM space calibration software and internal DSP within each loudspeaker and subwoofer. “GLM calibration allowed us to achieve a better balance with the lower and higher voices on screen,” explains Erkkilä. “Without it, it’s likely that the room would’ve changed the natural feel of the audio. It gave us full control over the system.”

PDF brochure on how Genelec used this cinema for a product case study.

GLM offers precise calibration of each loudspeaker’s in-room frequency response, playback level and distance delay, minimizing unwanted room influences and ensuring the best possible audio quality. In addition to the Genelec system, Kino Piispanristi uses Dolby Cinema processors which bring a natural feel to film soundscapes – immersing the audience in the true excitement of cinema.
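
As a rough illustration of what that calibration involves (not Genelec’s actual GLM algorithm), the sketch below computes the per-speaker delay and level trim needed to align arrival times and playback levels at the listening position; the distances and measured levels are invented for the example.

# Illustrative sketch of per-speaker alignment: delay each closer speaker so all
# arrivals coincide, and trim louder speakers down to match the quietest one.
SPEED_OF_SOUND_M_S = 343.0

speakers = {          # distance from listening position (m), measured level (dB SPL)
    "L":   (5.2, 85.0),
    "C":   (5.0, 86.5),
    "R":   (5.2, 85.2),
    "Sub": (4.1, 88.0),
}

farthest = max(distance for distance, _ in speakers.values())
quietest = min(level for _, level in speakers.values())

for name, (distance, level) in speakers.items():
    delay_ms = (farthest - distance) / SPEED_OF_SOUND_M_S * 1000.0  # align arrival times
    trim_db = quietest - level                                      # match playback levels
    print(f"{name:>3}: delay {delay_ms:5.2f} ms, trim {trim_db:+.1f} dB")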

“Our expertise in cinema and Genelec’s legacy in sound was the perfect match, and the collaboration was even more meaningful because of the local connection,” concludes Erkkilä. “The Ones loudspeaker series has completely transformed the cinema, and now we can offer audiences everything that the big players can – and more. The cinema is a result of a lot of hard work and dedication, and the Genelec system feels like the icing on the cake. It’s reinvented what we show on the screen.”

Supercars Can Be Financed

Take this 2005 Porsche Carrera GT for example:

Lot 214 | Monterey Jet Center 2024 | Thursday, 15 August 2024

2005 Porsche Carrera GT – Lot 214
Estimate: $1,100,000 – $1,300,000 USD / $1,509,000 – $1,704,000 CAD
Illustrative Hammer: $1,100,000 USD / $1,509,000 CAD
Illustrative Purchase Price*: $1,215,000 USD / $1,667,000 CAD
Down Payment: $500,000 USD / $686,100 CAD
Amount Financed: $715,000 USD / $981,000 CAD
Monthly Payments**: $7,299 USD / $10,015 CAD
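
For readers wondering how a monthly figure like the one above is arrived at, the sketch below applies the standard loan amortization formula to the amount financed. The interest rate and term are assumptions for illustration only; the quoted payment reflects the lender’s actual rate, term and fees.

# Standard amortization formula: M = P * r(1+r)^n / ((1+r)^n - 1)
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    r = annual_rate / 12.0                                  # periodic interest rate
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

if __name__ == "__main__":
    financed = 715_000.00                                   # USD amount financed, from the listing
    payment = monthly_payment(financed, 0.045, 120)         # assumed 4.5% over 10 years
    print(f"Estimated monthly payment: ${payment:,.2f} USD")  # lands in the same ballpark as the quote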

Broad Arrow Auctions | 2005 Porsche Carrera GT

Highlights of this supercar include:

  • A desirable single-owner example offered with less than 23,643 documented miles at the time of cataloging
  • One of just 477 produced for the U.S. market in 2005
  • Finished in the classic Communication Color of GT Silver over an Ascot Brown leather interior
  • Unmodified and offered with four pieces of its factory luggage set and other delivery accessories
  • Features servicing and maintenance by a single authorized Porsche dealer
  • One of the most collectible Porsche models ever produced

Chassis No. WP0CA29875L001120

Porsche seldom exits a motorsports arena without a taste of triumph. Yet, in 1991, an exception proved the rule as Porsche ventured into Formula One, supplying engines to the Footwork-Arrows team with their newly developed 3.5-liter naturally aspirated V12. This engine, essentially a combination of two TAG-Turbo V6s from Porsche’s McLaren days, proved cumbersome and prone to reliability issues. Midway through the season, Footwork-Arrows terminated their contract with Porsche due to these setbacks.

Undeterred, Porsche embarked on a solitary path of refinement over the subsequent three years, nurturing the engine’s potential through advancements in technology and engineering. Eventually, they succeeded in transforming it into a robust and potent V12 powerplant. This worthwhile endeavor of internal engineering spurred Porsche to further explore Formula One’s evolving regulations, resulting in the development of a 3.5-liter V10 engine—purely as an educational pursuit. Later iterations saw this V10 engine grow to 5.5-liters and find application in Porsche’s LMP2000 sports racing prototype, codenamed Typ 9R3 and conceived for the prestigious 24 Hours of Le Mans. Despite its initial promise, the LMP2000 project met an untimely demise, leaving the formidable V10 engine temporarily abandoned until a pivotal turn of events.

Porsche’s engineers were fervently engaged in another ambitious project—the Carrera GT prototype, internally referred to as SCM (Super Car Millennium).

Housed in Huntington Beach, California, a select team of designers undertook the task of bringing SCM to life. In a nod to its showpiece stature, the decision was made to equip this extraordinary prototype with the same 5.5-liter V10 engine originally developed for the 9R3 project. So fantastic was the reaction to the prototype driven along the Champs-Élysées to the 2000 Paris Motor Show that the approval of a production version was a foregone conclusion.

Commencing in 2003, the Carrera GT swiftly became the quintessential analog supercar of its era. Embracing a back-to-basics philosophy, in stark contrast to its technologically intricate predecessor, the 959, the Carrera GT boasted a raw engineering ethos. Its naturally aspirated 5.7-liter V10, renowned for its rapid revving capability, paired seamlessly with a six-speed manual transmission nestled within a carbon fiber monocoque chassis. Eschewing electronic driving aids, the Carrera GT epitomized a driver-centric experience, delivering unrivaled auditory and performance thrills akin to those found on the racetrack. Produced for a short two years, just 644 Carrera GTs were sold through U.S. Porsche dealerships.

This 2005 Carrera GT was constructed in the final year of production and was delivered new to Howard Cooper Porsche of Ann Arbor, Michigan with a purchase date noted in the service book as 22 December 2004 with 15 delivery miles/ 24 kms. Selected with XT Bucket Seats and finished in the Carrera GT’s official Communication Color of GT Silver Metallic over an Ascot Brown leather interior, this fantastic single-owner example features a clean CARFAX and, at time of cataloging, less than 24,000 miles/ 38,624 kms. GT Silver was a long-held bespoke color for the Carrera GT and certainly one of the most popular, echoing those giant-killing RS Spyders of the late 1950s and ’60s.

According to its CARFAX and ownership records, this Carrera GT features servicing while under single ownership by the consignor at Howard Cooper Porsche, later known as Germain Porsche and now Porsche Ann Arbor. One of the many benefits of a single-owner super sports car such as this is the familiarity between the official Porsche dealer and owner and the expected elevated level of trust between the two. Twenty visits to the selling dealer over 19 years have ensured that this Carrera GT has remained in familiar hands during those service visits, remaining at the ready for those special Michigan days that offer the most to both car and driver. Partial service records on file show a Major Maintenance in 2009 with a new windshield at 10,739 miles and two recorded maintenance visits in 2015 and 2017, the latter being a two-year service visit. Furthermore, it should be noted that all services have been conducted at the original selling dealer, Porsche Ann Arbor.

Offered with service records on file dating from 2007 to 2020, this single-owner Carrera GT is accompanied by an impressive number of delivery items including its original window sticker, owner’s manuals, hard top panel bags, centerlock socket, tools, and factory fitted indoor car cover. Furthermore, all Carrera GTs were delivered with a set of factory fitted luggage by Ruspa of Italy, color-coordinated to the selected interior color of the car. Over the years many of these sets have become disassociated with their cars, yet this Carrera GT retains a nearly complete set in Ascot Brown—an additional, and welcome benefit.

Created by specialist teams with a narrow focus and cloaked in secrecy, with little interference from the corner offices, the Porsche Carrera GT is an exquisite example of race-honed engineering brought to life on the road. Never before offered for sale, this single-owner Carrera GT, number 455, should make an enjoyable addition to those in search of the finest motorsport-derived super sports car of the 2000s. Just as Porsche intended. For the Silo, Jakob Greisen.

Internet bidding is not available for this lot. Please contact bid@broadarrowauctions.com for more information.

Feds’ False News Checker Tool To Use AI – At Risk Of Language & Political Bias

Ottawa-Funded Misinformation Detection Tool to Rely on Artificial Intelligence

Canadian Heritage Minister Pascale St-Onge speaks to reporters on Parliament Hill after Bell Media announces job cuts, in Ottawa on Feb. 8, 2024. (The Canadian Press/Patrick Doyle)

A new federally funded tool being developed with the aim of helping Canadians detect online misinformation will rely on artificial intelligence (AI), Ottawa has announced.

Heritage Minister Pascale St-Onge said on July 29 that Ottawa is providing almost $300,000 CAD to researchers at Université de Montréal (UdeM) to develop the tool.

“Polls confirm that most Canadians are very concerned about the rise of mis- and disinformation,” St-Onge wrote on social media. “We’re fighting for Canadians to get the facts” by supporting the university’s independent project, she added.

Canadian Heritage says the project will develop a website and web browser extension dedicated to detecting misinformation.

The department says the project will use large AI language models capable of detecting misinformation across different languages in various formats such as text or video, and contained within different sources of information.

“This technology will help implement effective behavioral nudges to mitigate the proliferation of ‘fake news’ stories in online communities,” says Canadian Heritage.

Related: OpenAI, Google DeepMind Employees Warn of ‘Serious Risks’ Posed by AI Technology

With the browser extension, users will be notified if they come across potential misinformation, which the department says will reduce the likelihood of the content being shared.

Project lead and UdeM professor Jean-François Godbout said in an email that the tool will rely mostly on AI-based systems such as OpenAI’s ChatGPT.

“The system uses mostly a large language model, such as ChatGPT, to verify the validity of a proposition or a statement by relying on its corpus (the data which served for its training),” Godbout wrote in French.

The political science professor added the system will also be able to consult “distinct and reliable external sources.” After considering all the information, the system will produce an evaluation to determine whether the content is true or false, he said, while qualifying its degree of certainty.

Godbout said the reasoning for the decision will be provided to the user, along with the references that were relied upon, and that in some cases the system could say there’s insufficient information to make a judgment.
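
As a rough illustration of the kind of pipeline Godbout describes (an LLM verdict, external sources, and a confidence-qualified result returned with reasoning and references), here is a minimal sketch. The helper functions are hypothetical placeholders, not the UdeM project’s code or any real API.

# Illustrative sketch of a misinformation-checking pipeline with hypothetical helpers.
from dataclasses import dataclass

@dataclass
class LLMAnswer:
    label: str          # "true" or "false"
    confidence: float   # 0.0 to 1.0
    reasoning: str

@dataclass
class Verdict:
    label: str          # "true", "false", or "insufficient information"
    confidence: float
    reasoning: str
    references: list

def fetch_sources(claim):
    # Placeholder: a real system would consult curated, reliable external sources.
    return ["https://example.org/reference-article"]

def ask_llm(prompt):
    # Placeholder: a real system would call a large language model here.
    return LLMAnswer(label="false", confidence=0.82,
                     reasoning="The claim contradicts the cited reference.")

def check_claim(claim):
    sources = fetch_sources(claim)
    answer = ask_llm(
        "Assess the following claim against the sources provided.\n"
        f"Claim: {claim}\nSources: {sources}"
    )
    # If the model cannot commit, report insufficient information rather than a verdict.
    if answer.confidence < 0.5:
        return Verdict("insufficient information", answer.confidence,
                       answer.reasoning, sources)
    return Verdict(answer.label, answer.confidence, answer.reasoning, sources)

print(check_claim("Example claim circulating on social media."))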

Asked about concerns that the detection model could be tainted by AI shortcomings such as bias, Godbout said his previous research has demonstrated his sources are “not significantly ideologically biased.”

“That said, our system should rely on a variety of sources, and we continue to explore working with diversified and balanced sources,” he said. “We realize that generative AI models have their limits, but we believe they can be used to help Canadians obtain better information.”

The professor said that the fundamental research behind the project was conducted before receiving the federal grant, which only supports the development of a web application.

Bias Concerns

The reliance on AI to determine what is true or false could have some pitfalls, with large language models being criticized for having political biases.

Such concerns about the neutrality of AI have been raised by billionaire Elon Musk, who owns X and its AI chatbot Grok.

British and Brazilian researchers from the University of East Anglia published a study in January that sought to measure ChatGPT’s political bias.

“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” they wrote. Researchers said there are real concerns that ChatGPT and other large language models in general can “extend or even amplify the existing challenges involving political processes posed by the Internet and social media.”

OpenAI says ChatGPT is “not free from biases and stereotypes, so users and educators should carefully review its content.”

Misinformation and Disinformation

The federal government’s initiatives to tackle misinformation and disinformation have been multifaceted.

The funds provided to the Université de Montréal are part of a larger program to shape online information, the Digital Citizen Initiative. The program supports researchers and civil society organizations that promote a “healthy information ecosystem,” according to Canadian Heritage.

The Liberal government has also passed major bills, such as C-11 and C-18, which impact the information environment.

Bill C-11 has revamped the Broadcasting Act, creating rules for the production and discoverability of Canadian content and giving increased regulatory powers to the CRTC over online content.

Bill C-18 created the obligation for large online platforms to share revenues with news organizations for the display of links. This legislation was promoted by then-Heritage Minister Pablo Rodriguez as a tool to strengthen news media in a “time of greater mistrust and disinformation.”

These two pieces of legislation were followed by Bill C-63 in February to enact the Online Harms Act. Along with seeking to better protect children online, it would create steep penalties for saying things deemed hateful on the web.

There is some confusion about what the latest initiative with UdeM specifically targets. Canadian Heritage says the project aims to counter misinformation, whereas the university says it’s aimed at disinformation. The two concepts are often used in the same sentence when officials signal an intent to crack down on content they deem inappropriate, but a key characteristic distinguishes the two.

The Canadian Centre for Cyber Security defines misinformation as “false information that is not intended to cause harm”—which means it could have been posted inadvertently.

Meanwhile, the Centre defines disinformation as being “intended to manipulate, cause damage and guide people, organizations and countries in the wrong direction.” It can be crafted by sophisticated foreign state actors seeking to gain politically.

Minister St-Onge’s office has not responded to a request for clarification as of this post’s publication.

In describing its project to counter disinformation, UdeM said events like the Jan. 6 Capitol breach, the Brexit referendum, and the COVID-19 pandemic have “demonstrated the limits of current methods to detect fake news which have trouble following the volume and rapid evolution of disinformation.” For the Silo, Noe Chartier/ The Epoch Times.

The Canadian Press contributed to this report.

Virgin Galactic Completes New Spaceship Manufacturing Facility

Orange County, Calif. – Virgin Galactic Holdings, Inc. (NYSE: SPCE) (“Virgin Galactic” or the “Company”) recently announced the completion of its new manufacturing facility in Mesa, Arizona (Greater Phoenix area), where final assembly of its next-generation Delta spaceships is scheduled to take place starting in Q1 2025.

An initial team of Virgin Galactic technical operations and manufacturing personnel has begun preparing the facility to receive and install tooling, expected to arrive in Q4 2024. The facility will then begin to receive major subassemblies, including the wing, the fuselage, and the feathering system next year, as the team scales to build the first two ships of the Delta fleet. Once ground testing is complete, Virgin Galactic’s mothership will ferry completed spaceships to Spaceport America, New Mexico for flight test ahead of commercial operations, which are expected to begin in 2026.

The multiuse facility includes two hangars equipped with multiple bays, designed for maximum flexibility in building and testing space vehicles.

Work at the facility will be supported by the Company’s digital twin technology, which enables seamless integration between Virgin Galactic and suppliers through real-time collaboration, promoting strong governance and increased efficiency and reliability.

In May 2024, Virgin Galactic opened a ground testing facility in Southern California for Delta subsystems, including avionics, feather actuation, pneumatics, and hydraulics, using an Iron Bird test rig.

Design concept: Virgin Galactic’s Mach 3 supersonic commercial passenger jet. A partnership with Rolls-Royce (which built Concorde’s engines) could mean this design stands a real chance of being produced in the future.

Virgin Galactic’s Delta spaceships will seat up to six private passengers, and each is expected to be capable of flying up to eight missions per month, dramatically increasing access to space.

“The completion of our new manufacturing facility is an important milestone in the development of our fleet of next-generation spaceships, the key to our scale and profitability. Tooling will begin arriving in a matter of months to support spaceship final assembly, which we expect to commence in Q1 2025.”

Michael Colglazier • CEO of Virgin Galactic

His Programming Shocked Kids During Legendary Sesame Street Synthesizer Appearance

Clive Smith – recording artist, composer, performer, sound designer

You might have heard him on the soundtrack for the ’80s cult film Liquid Sky. You might have come across his name on a whole lot of session work and collaborations. Clive Smith, often credited as the ’Fairlight Programmer’. And you can see him below in the legendary 1983 Sesame Street episode, in which Herbie Hancock demonstrates the Fairlight. But that’s just the tip of the iceberg…

Whether it is your typically structured mainstream music, or the more textured, experimental kind: Clive Smith morphs fluently between both realms. “I’ve always been interested in texture, as well as in structural music composition.” He started out as a trumpet player in high school, is a trained musician and has taught himself to play guitar, bass guitar and keyboards. “Those spoke to me a lot more. The trumpet never really sat with me as well as the ’rock instruments’. But occasionally, I pick up the trumpet to keep my lips in shape, or to play it on some album. When I went to university, I took a multimedia course, which was basically visual arts and sonic arts.

There was a VCS3, the ’Putney’, and I really fell in love with synths and the ability to create and craft your own sounds; to manipulate them. I was always interested in electronic sounds. Prior to the synths, I used tape. My father had an expensive tape recorder. I used to have lots of fun with it, recording all sorts of sounds and noises, trying to play them backwards.” He recalled how John Lennon discovered that by accident. “I was very interested in those kinds of things.”

Becoming the Fairlight expert

Clive came fresh out of college with a degree in musical composition. But… What to do next? ”One of my professors started this non-profit organization called Public Access Synthesizer Studios. I started out as the associate director and later, I became director. In 1980, there was an Audio Engineering Society convention in New York City. There was this Australian company called Fairlight, showing this instrument; it was probably one of the first times it was shown in the US. Back then, it was just called the Fairlight CMI, for there were no Series II, IIx, etcetera yet. I was amazed by it. It took a little bit of doing, but the following year, we had one at PASS, on loan to us.

I fully immersed myself in it, trying to learn as much about it as I possibly could. The great thing was, there were no specific rules on how to use the instrument. I took lots of time sampling and creating my own sounds and I thoroughly enjoyed the fact that there weren’t any boundaries someone else had already defined. That was extremely satisfying.”

“As soon as I learned everything about the Fairlight, this Russian director, Slava Tsukerman, came in at PASS, as he wanted to create the soundtrack of what later would become Liquid Sky. He realized he couldn’t operate this computer himself, so he initially hired Brenda Hutchinson, who started working on the project with him. She was called away for a job on the West Coast. So I took over and did the remaining two-thirds of the project. I think there were three or four different types of classical pieces that the director came to us with.” They programmed those into the Fairlight. You might call that a hell of a job.

Clive: “It was very much like old-fashioned computer programming, using a code for every note and every change, like velocity or note durations.” At that time, there was only the Series I. So, no easy-peasy sequencing using Page R, which was introduced with the Series II in 1982. “There were two types of recording. The first one was non-real time, using Musical Composition Language. The second method was called Page 9. It recorded in real time, but there were no visuals; you couldn’t really see what you were doing on the screen. When you made a mistake, you had to start all over again.”

Quick and dirty

“Slava isn’t a musician himself. But he did have musical ideas and a clear vision of what he had in mind for the soundtrack. He’d just tap a rhythm, or hum a melody, moving his arms in a particular way, saying: ‘I want this for that scene.’ And I would just play around with some ideas. When there was something he liked, he said: ‘OK, let’s record this.’ We’d immediately be recording the ideas, right when they were fresh. Often, I wanted to do it again, for I thought it was too sloppy. I wanted to go back, perfecting it. But then he’d say: ‘No, I like it. Let’s make it quick and dirty.’ He liked the sort of clunkiness; the marriage between the computer high tech and the punk approach.

“I had sampled lots of different percussive sounds. Some wooden, metal and glass wind chimes. I put them all together, and I think that’s what we ended up using for the creature sounds. They weren’t specifically made for the movie. I played it to Slava, and he liked it. They fit with his vision for the movie. So they ended up being part of the alien sounds.

“Brenda had seen some of the footage. When I started working on it, I never saw any footage, so it was only his verbal description of things. In a way, I wasn’t influenced at all by visuals. I was strictly translating what he was conveying to me. It wasn’t until the actual premiere that I saw how it all worked together. And it worked very well. Me and Brenda, we got the credits for composing, but it was his vision. He was coming across with moods. I only matched what he was doing. And if he had let me be alone with it, it would never have turned out that way.”

Rocking it on Sesame Street

Because of Liquid Sky, the US branch of the Fairlight company asked if he would work for them. He left PASS and became one of their consultants, from 1983 until about 1989. Clive: ”That was incredibly great. I had access to the equipment, I promoted their product doing demos, and I was doing session work on the side.” So, how did he end up with Herbie Hancock on Sesame Street? Clive: ”Alexander Williams, he did sort of what I was doing, on the West Coast. When Herbie Hancock purchased his Fairlight, he had Will training him on how to use the machine. When he wanted to capture his ideas in a session, on the fly, Will was able to help him out with the technical side.

When Herbie visited the East Coast, I kind of did the same thing for him. Will and I knew each other, and it turned out Herbie and I had some mutual friends. So, that worked out nicely. I think, if I remember correctly, Herbie didn’t travel with his Fairlight, so he used mine for the Sesame Street-session. The show went pretty much as shown; the children were very excited about this new technology. Just like the kids are today. We didn’t do anything different. That clip was pretty much the entire take.

People were always very curious about it. And it’s very inviting; something that looked like a ’60s TV screen, a bit of a retro sci-fi look, a huge white keyboard, playing melodies with barking dogs… It looked accessible, more ‘friendly’ and less intimidating than a modular synth with patch cords, knobs and sliders.”

An early 1979 model Fairlight CMI.

“I’m really glad I got the opportunity to work with Herbie Hancock. Up until then, I never realised what an amazing musician he is. It was great to see the ideas running in his brain, coming out. Always, his first ideas were immediately great. Watching him listening to a musical piece he’d never heard before and then, coming up with this great keyboard part. Very enlightening to see. And he’s a very nice, very friendly down to earth kinda person. 

You know, formally trained musicians often want to play tunes on a synth using their keyboard technique. Herbie, he was very open to coming up with interesting sounds, being interested in things that had some internal movement in the things he was playing. I think that’s what we have in common: having this split personality between being a trained musician, using structured forms, and being able to work with textures, creating sounds, the approach a non-musician might have. The more creative approach by just going in and thinking: ’What would I like to happen?’.

Big Bam Boom

“In 1984, I bought my own IIx; that’s when the session work really took off. In ’86, the Series III was released and I was able to purchase one fairly early in its existence. There are certain things that are unique to the IIx, but I could do so much more on the III. It expanded on the things I wanted to do. I started using them both.” And so, he was moving around New York, carrying around two fairly heavy machines. ”I was doing a lot of session work in New York, mostly in the avant-garde music scene. I was either playing in progressive rock bands, avant-garde rock bands or free jazz and noise bands. And all of a sudden, I was called in to do sessions with very mainstream artists. There was this buzz going ’round about the Fairlight. People were looking for that extra spice to add to their music. So, I was hired to make a few noises on the track.” Laughing: ”It felt a bit like being the odd one out.”

In 1984, he was asked to work on the Hall & Oates-record Big Bam Boom. “They were listening a lot to Sgt. Pepper’s. They wanted to take a different approach. They didn’t want to emulate The Beatles, but it was the whole idea that they needed to break out of their old approach and treat the studio in a different way, instead of archiving and capturing what they did playing live. That was essentially what it was. That’s why they brought in the Fairlight. They didn’t know exactly what it did, or what they thought it would do. But they might have thought it could be the ingredient pushing them into a new era. Of course, they did the pop music that they were known for, but in a slightly different way and I think it was successful. Their new approach did work out for them.”

“They didn’t have all the songs written yet; just some words, some of the choruses were done. A lot of things were formed in the studio. The way I worked with them was a little unorthodox. Next to the control room at Electric Lady Studios, there’s the vocal booth. They took the door off that separated the booth from the control room. And they’d have me set up with the Fairlight and some speakers, letting me hear the same play-back the producer, Bob Clearmountain, was hearing.

They had me playing along with the music and every now and then they’d listen to what was coming out on my channel; what I was coming up with. When they liked it, they decided to put that on the record. I’d put something in, or Robbie Kilgore, the other keyboard player they hired. That’s how we worked for about three months. It was done very professionally, almost like a regular nine-to-five job. At 10 am, we’d come in, we’d have a short lunch break, and around 6 pm we were usually out of there. No drinking or drugs were allowed in the studio; they were very disciplined, especially Daryl. It was a very instructive experience and it got me a lot of jobs at sessions. It was a great opportunity.“

Programmer? Keyboard player? 

On album credits, guys like Clive were often referred to as ’Fairlight programmer’. Which makes you think: didn’t he play some decent notes at all on all these records? Clive: “They didn’t know what to make of it, so they called it Programmer. Which was fine with me. The lesser known music I worked on was where I got to play more. On some of those, I even play the guitar. People brought me in as the Fairlight programmer, but then they learned I play keyboards and guitar as well. So often, they’d ask: ’Oh, you play guitar? Bring yours tomorrow!’ and they’d let me lay down a couple of tracks as well.

Right before the final days of the Series III arrived, before the original Fairlight company went down, I had a MIDI guitar. I used to bring it to sessions, so I could play the Fairlight from the MIDI guitar. That was great, because I was able to do things I couldn’t do on the regular keyboard. Especially when it comes to bending pitches in a particular way; that sort of thing. Each string could have a different sound to it. So with the MIDI guitar, in a way, you had six keyboards with different sounds attached to each string. Sometimes you ended up with some very wonderful things. I’ve used that on the more obscure records, because people were more open to trying different things than they were on mainstream recordings.”

Shaping and creating

Over the past few years, Clive has worked on many, many projects, providing music or musical textures for a dozen TV shows, doing session work, being a sound designer for Korg… And today, he’s working on a variety of interesting collaborations. ”There’s always something going on. At the moment, I’m doing sounds for PARMA. They approached me, because they liked a particular piece I made in the past. So they asked if I had any more material like that. I’m actually working on that at the moment. I’ve finished about 25 minutes of music for it. And it will probably be 50 minutes, so I’m halfway through.

It’s fairly textural material, but tonal at the same time. Recently, I’ve become very interested in the composer Arvo Pärt, who’s been around for a long time, but I became familiar with his work just recently. Some of the things I was touching on are similar to what he’s been doing for years. It inspired me to go further down this particular path. It almost felt as if we were aligned in some way. So, that’s the direction this particular suite of pieces is going in.”

“I’ve always been interested in texture. There is something about texture in the visual realm as well as in the sonic realm that I love. Getting inside of a sound, reconstructing it. With the textured soundscapes, I feel it’s communicating more directly with your subconscious. That’s the impact of art and music combining. It reaches you in ways that are difficult to articulate. It’s just… telling you a specific story.”

One of the other things he’s been working on is archiving some of his older works. “I recently uncovered some recordings from the early days of the Fairlight, and I also recovered some old tapes. I’m trying to transfer them into Pro Tools, before I lose access to them and am no longer able to play these things back. I discovered some unfinished things that I made. I might revisit them. I work on those things which strike me the most at the time, in between the session work I do.”

There’s no time like the present

Of course, there’s just one question left: does he still use his Fairlights? Clive: ”Unfortunately, my Series III is not completely operational right now. But I will probably be able to use it again very soon. My IIx on the other hand is completely functional. That is, everything except for the light pen. But, I don’t really need the light pen anyway.

Even today, when I purchase new hardware or software, it always comes down to: does it excite and inspire me in some way? Just because it’s new doesn’t mean it’s great. It has to surprise me. But, the other way around: I do have some vintage guitars and I like them. Not because they’re vintage, it’s because I have built a relationship with them over time.

I don’t want to be locked into a specific time-period. So, to me, it’s not about a specific age or about nostalgia. Having said that: the Fairlight grabbed me in a way that no other instrument did before that. And I do still love them.”

For the Silo, Mirjam van Kerkwijk. Read more from Mirjam about the revolutionary Fairlight here: http://www.fortheloveofthefairlight.com or here: https://www.thesilo.ca/tag/fairlight-cmi/.

Researchers Discover New Mechanism Linking Diet and Cancer Risk

MGO, a glucose metabolite, can temporarily destroy the BRCA2 protein, reducing its levels in cells and inhibiting its tumor-preventing ability.

Via friends at epochtimes. You may have heard that sugar feeds cancer cells, and evidence supports that. However, the missing link in this narrative has been a thorough understanding of just “how” sugar feeds cancer—until now. A recent study published in Cell in April uncovers a new mechanism linking uncontrolled blood sugar and poor diet with cancer risk.

The research, performed at the National University of Singapore’s Cancer Science Institute of Singapore and led by professor Ashok Venkitaraman and Li Ren Kong, a senior research fellow at the university, found that a chemical released when the body breaks down sugar also suppresses a gene that prevents the formation of tumors.

This discovery provides valuable insights into how one’s dietary habits can impact their risk of developing cancer and forges a clear path to understanding how to reverse that risk with food choices.

Methylglyoxal–A Temporary Off Switch

It was previously believed that cancer-preventing genes must be permanently deactivated before malignant tumors can form. However, this recent discovery suggests that a chemical, methylglyoxal (MGO), released whenever the body breaks down glucose, can temporarily switch off cancer-protecting mechanisms.

Mr. Kong, first author of the study, stated in a recent email: “It has been shown that diabetic and obese individuals have a higher risk of cancer, posing as a significant societal risk. Yet, the exact cause remains debatable.

“Our study now unearthed a clue that may explain the connection between cancer risk and diet, as well as common diseases like diabetes, which arise from poor diets.

“We found that an endogenously synthesized metabolite can cause faults in our DNA that are early warning signs of cancer development, by inhibiting a cancer-preventing gene (known as the BRCA2).”

BRCA2 is a gene that repairs DNA and helps make a protein that suppresses tumor growth and cancer cell proliferation. A BRCA2 gene mutation is associated primarily with a higher risk of developing breast and ovarian cancers, as well as other cancers. Those with a faulty copy of the BRCA2 gene are particularly susceptible to DNA damage from MGO.

However, the study showed that those without a predisposition to cancer also face an increased risk of developing the disease from elevated MGO levels. The study found that chronically elevated levels of blood sugar can result in a compounded increase in cancer risk.

“This study showcases the impact of methylglyoxal in inhibiting the function of tumour suppressor, such as BRCA2, suggesting that repeated episodes of poor diet or uncontrolled diabetes can ‘add up’ over time to increase cancer risk,” Mr. Kong wrote.

The Methylglyoxal and Cancer Relationship

MGO is a metabolite of glucose—a byproduct made when our cells break down sugar, mainly glucose and fructose, to create energy. MGO is capable of temporarily destroying the BRCA2 protein, leading to lower levels of the protein in the cells and thus inhibiting its ability to prevent tumor formation. The more sugar your body needs to break down, the higher the levels of this chemical, and the higher your risk of developing malignant tumors.

“Accumulation of methylglyoxal is found in cancer cells undergoing active metabolism,“ Mr. Kong said. ”People whose diet is poor may also experience higher than normal levels of methylglyoxal. The connection we unearthed may help to explain why diabetes, obesity, or poor diet can heighten cancer risk.”

MGO is challenging to measure on its own. Early detection of elevated levels is possible with a routine HbA1C blood test that measures your average blood sugar levels over the past two to three months and is typically used to diagnose diabetes. This new research may provide a mechanism for detecting early warning signs of developing cancer.

“In patients with prediabetes/diabetes, high methylglyoxal levels can usually be controlled with diet, exercise and/or medicines. We are aiming to propose the same for families with high risk of cancers, such as those with BRCA2 mutation,” Mr. Kong said.

More research is needed, but the study’s findings may open the door to new methods of mitigating cancer risk.

“It is important to take note that our work was carried out in cellular models, not in patients, so it would be premature to give specific advice to reduce risk on this basis. However, the new knowledge from our study could influence the directions of future research in this area, and eventually have implications for cancer prevention,” he said.

“For instance, poor diets rich in sugar or refined carbohydrates are known to cause blood glucose levels to spike. We are now looking at larger cancer cohorts to connect these dots.”

The Diet and Cancer Connection

Dr. Graham Simpson, medical director of Opt Health, stated in an email: “It’s genes loading the gun, but your lifestyle that pulls the trigger. Every bite of food you take is really information. It’s either going to turn on your longevity genes or it’s going to turn on your killer genes. So cancer is very much in large part self-induced by the individual diet.”

A 2018 study published by Cambridge University Press found an association between higher intakes of sugar-sweetened soft drinks and an increased risk of obesity-related cancers. Research published in the American Journal of Clinical Nutrition in 2020 concluded that sugars may be a risk factor for cancer, breast cancer in particular. Cancer cells are ravenous for sugar, consuming it at a rate 200 times that of normal cells.

Healthy Dietary Choices for Reducing Cancer Risk

A consensus on the best dietary approach for reducing cancer risk has yet to be determined, and further research is needed. However, the new findings of the Cell study on MGO support reducing sugar intake as a means to mitigate cancer risk. A study published in January in Diabetes & Metabolism shows that a Mediterranean diet style of eating may help reduce MGO levels.

In 2023, a study published in Cell determined that a ketogenic diet may be an effective nutritional intervention for cancer patients as it helped slow the growth of cancer cells in mice—while a review published in JAMA Oncology in 2022 found that the current evidence available supports a plant-enriched diet for reducing cancer risk.

Dr. Simpson stressed the importance of real food and healthy macronutrients with a low-carb intake for the health of our cells. “The mitochondria is the most important signaling molecule and energy-producing organelle that we have in our body. [Eat] lots of vegetables, healthy proteins, and healthy fats, fish, eggs, yogurt,” he said.

“Lots of green, above-ground vegetables, some fruits, everything that is naturally grown and is not processed.” For the Silo, Jennifer Sweenie.

Truths And Concerns- The Miracle Drug Ozempic

Ozempic: A Microcosm That Can Teach Us a Lot about Canadian Healthcare Markets

Ozempic (and other GLP-1 medications) have been having their moment.

Headlines hail a “miracle drug” for weight loss, others say that’s too good to be true, and there’s even a South Park episode titled “The End of Obesity.” It’s all new territory for medications for type 2 diabetes and weight loss treatment.


And all the media attention gives us a teaching moment to help illuminate the behind-the-scenes dynamics that affect international pharmaceutical markets, insurance companies, public healthcare systems and government finances.
This article summarizes the various issues that have been in the spotlight. Additional posts, linked in the supplemental section at the end of this article, go further behind the curtain, using Ozempic as an example, to explain the interconnected and complex economic factors and government machinery that play roles in determining the supply, demand and accessibility of pharmaceutical treatments and products, as well as broader economic responses.

First, some background.

GLP-1 receptor agonists (like Ozempic) have been used for more than 16 years to treat type 2 diabetes and for weight loss for the past nine years. Ozempic is Novo Nordisk’s brand name for a semaglutide marketed and sold for treating type 2 diabetes. Other medications in the same class include Trulicity (dulaglutide, GLP-1) and Mounjaro (tirzepatide, a dual GLP-1/GIP).

While Ozempic is heavily associated with weight loss in the media, it is NOT approved by the FDA or Health Canada as a weight-loss drug.

From the globex press release: “GlobexPharma® is thrilled to announce the launch of Ozempic Chewable Gummies for Kids®, a groundbreaking prescription treatment designed to combat obesity in children aged 1 to 5 years.”

Health Canada approved it in 2018 for adult patients with type 2 diabetes, noting that there was limited information on safety and efficacy for minors or people over age 75. The FDA has authorized it for similar purposes and also includes reducing the risk of heart attacks and strokes in type 2 diabetes patients with known heart disease.

Wegovy, a similar injectable medication containing higher amounts of semaglutide and made by the same company, is approved for weight loss in obese patients by the FDA and recently entered the Canadian market (it was approved in 2021, but only became available to consumers in May 2024). Saxenda (liraglutide, GLP-1) is approved for weight management in obese pediatric patients over 12 years of age in Canada.


While the class of medications is not new, their effectiveness for weight loss in non-obese patients, as well as their potential to improve fertility, reduce cardiac risks, and reduce the risk of kidney failure, has increased the attention on and discussion of this class of medications.

Their growing weight-loss popularity has disrupted the market, and provides an opportunity to investigate many interrelated market dynamics including:

  • The incentives and potential for pharmaceutical companies to expand markets for existing products by finding new applications for them.
  • Similarly, off-label prescribing by physicians can provide patients access to treatments, even if a full-scale clinical trial has not been conducted.
  • Market expansion through new indications and off-label prescribing can create surges in demand that increase financial risks for public and private drug insurance plans.
  • Similarly, rapidly increasing demand increases the risk of drug shortages, at least until manufacturing capacity can expand to meet the new market demand.
  • Both shortages and financial risk for insurance companies can lead to restricting coverage and rationing supplies to prioritize particular patient groups.

The healthcare market and broader economy respond to these dynamics in sometimes unexpected or potentially counterproductive ways: for example, counterfeit or black-market versions of the regulated medications, a proliferation of virtual services advertising directly to consumers that they can provide access, and patients failing to complete treatment due to costs or shortages.
There is evidence of wider economic responses as well.

For example, Nestlé is launching a new line of frozen pizzas and pastas enriched with protein, iron, and calcium designed for people taking appetite suppressing drugs.

That’s our landscape. For The Silo, Rosalie Wyonch.

Supplemental

Dig into the various strategies insurance providers and governments are using to manage financial risks and mitigate drug shortages.

Examine the counter-balancing industry and consumer responses that seek to maintain broad access or capitalize on the new and growing market.