
7000 Words About The Dubious Refragmentation Of The Economy

One advantage of being old is that you can see change happen in your lifetime.

A lot of the change I’ve seen is fragmentation. For example, US politics, and now Canadian politics, are much more polarized than they used to be. Culturally we have ever less common ground, and though inclusiveness is preached by the media and the Left, special-interest groups and policies have a polarizing effect. The creative class flocks to a handful of happy cities, abandoning the rest. And increasing economic inequality means the spread between rich and poor is growing too. I’d like to propose a hypothesis: that all these trends are instances of the same phenomenon. And moreover, that the cause is not some force that’s pulling us apart, but rather the erosion of forces that had been pushing us together.

Worse still, for those who worry about these trends, the forces that were pushing us together were an anomaly, a one-time combination of circumstances that’s unlikely to be repeated—and indeed, that we would not want to repeat.


The two forces were war (above all World War II), and the rise of large corporations.

The effects of World War II were both economic and social. Economically, it decreased variation in income. Like all modern armed forces, America’s were socialist economically. From each according to his ability, to each according to his need. More or less. Higher ranking members of the military got more (as higher ranking members of socialist societies always do), but what they got was fixed according to their rank. And the flattening effect wasn’t limited to those under arms, because the US economy was conscripted too. Between 1942 and 1945 all wages were set by the National War Labor Board. Like the military, they defaulted to flatness. And this national standardization of wages was so pervasive that its effects could still be seen years after the war ended. [1]

Business owners weren’t supposed to be making money either.

FDR said “not a single war millionaire” would be permitted. To ensure that, any increase in a company’s profits over prewar levels was taxed at 85%. And when what was left after corporate taxes reached individuals, it was taxed again at a marginal rate of 93%. [2]

Socially too the war tended to decrease variation. Over 16 million men and women from all sorts of different backgrounds were brought together in a way of life that was literally uniform. Service rates for men born in the early 1920s approached 80%. And working toward a common goal, often under stress, brought them still closer together.

Though strictly speaking World War II lasted less than 4 years for the US, its effects lasted much longer, and spread north into Canada as well.

Wars make central governments more powerful, and World War II was an extreme case of this. In the US, as in all the other Allied countries, the federal government was slow to give up the new powers it had acquired. Indeed, in some respects the war didn’t end in 1945; the enemy just switched to the Soviet Union. In tax rates, federal power, defense spending, conscription, and nationalism the decades after the war looked more like wartime than prewar peacetime. [3] And the social effects lasted too. The kid pulled into the army from behind a mule team in West Virginia didn’t simply go back to the farm afterward. Something else was waiting for him, something that looked a lot like the army.

If total war was the big political story of the 20th century, the big economic story was the rise of a new kind of company. And this too tended to produce both social and economic cohesion. [4]

The 20th century was the century of the big, national corporation. General Electric, General Foods, General Motors. Developments in finance, communications, transportation, and manufacturing enabled a new type of company whose goal was above all scale. Version 1 of this world was low-res: a Duplo world of a few giant companies dominating each big market. [5]

The late 19th and early 20th centuries had been a time of consolidation, led especially by J. P. Morgan. Thousands of companies run by their founders were merged into a couple hundred giant ones run by professional managers. Economies of scale ruled the day. It seemed to people at the time that this was the final state of things. John D. Rockefeller said in 1880


The day of combination is here to stay. Individualism has gone, never to return.

He turned out to be mistaken, but he seemed right for the next hundred years.

The consolidation that began in the late 19th century continued for most of the 20th. By the end of World War II, as Michael Lind writes, “the major sectors of the economy were either organized as government-backed cartels or dominated by a few oligopolistic corporations.”

For consumers this new world meant the same choices everywhere, but only a few of them. When I grew up there were only 2 or 3 of most things, and since they were all aiming at the middle of the market there wasn’t much to differentiate them.

One of the most important instances of this phenomenon was in TV.


Here there were 3 choices: NBC, CBS, and ABC. Plus public TV for eggheads and communists. The programs the 3 networks offered were indistinguishable. In fact, here there was a triple pressure toward the center. If one show did try something daring, local affiliates in conservative markets would make them stop. Plus since TVs were expensive, whole families watched the same shows together, so they had to be suitable for everyone.

And not only did everyone get the same thing, they got it at the same time. It’s difficult to imagine now, but every night tens of millions of families would sit down together in front of their TV set watching the same show, at the same time, as their next door neighbors. What happens now with the Super Bowl used to happen every night. We were literally in sync. [6]

In a way mid-century TV culture was good. The view it gave of the world was like you’d find in a children’s book, and it probably had something of the effect that (parents hope) children’s books have in making people behave better. But, like children’s books, TV was also misleading. Dangerously misleading, for adults. In his autobiography, Robert MacNeil talks of seeing gruesome images that had just come in from Vietnam and thinking, we can’t show these to families while they’re having dinner.

I know how pervasive the common culture was, because I tried to opt out of it, and it was practically impossible to find alternatives.

When I was 13 I realized, more from internal evidence than any outside source, that the ideas we were being fed on TV were crap, and I stopped watching it. [7] But it wasn’t just TV. It seemed like everything around me was crap. The politicians all saying the same things, the consumer brands making almost identical products with different labels stuck on to indicate how prestigious they were meant to be, the balloon-frame houses with fake “colonial” skins, the cars with several feet of gratuitous metal on each end that started to fall apart after a couple years, the “red delicious” apples that were red but only nominally apples. And in retrospect, it was crap. [8]

But when I went looking for alternatives to fill this void, I found practically nothing. There was no Internet then. The only place to look was in the chain bookstore in our local shopping mall. [9] There I found a copy of The Atlantic. I wish I could say it became a gateway into a wider world, but in fact I found it boring and incomprehensible. Like a kid tasting whisky for the first time and pretending to like it, I preserved that magazine as carefully as if it had been a book. I’m sure I still have it somewhere. But though it was evidence that there was, somewhere, a world that wasn’t red delicious, I didn’t find it till college.

It wasn’t just as consumers that the big companies made us similar. They did as employers too. Within companies there were powerful forces pushing people toward a single model of how to look and act. IBM was particularly notorious for this, but they were only a little more extreme than other big companies. And the models of how to look and act varied little between companies. Meaning everyone within this world was expected to seem more or less the same. And not just those in the corporate world, but also everyone who aspired to it—which in the middle of the 20th century meant most people who weren’t already in it. For most of the 20th century, working-class people tried hard to look middle class. You can see it in old photos. Few adults aspired to look dangerous in 1950.

But the rise of national corporations didn’t just compress us culturally. It compressed us economically too, and on both ends.

Along with giant national corporations, we got giant national labor unions. And in the mid 20th century the corporations cut deals with the unions under which they paid over market price for labor. Partly because the unions were monopolies. [10] Partly because, as components of oligopolies themselves, the corporations knew they could safely pass the cost on to their customers, because their competitors would have to as well. And partly because in mid-century most of the giant companies were still focused on finding new ways to milk economies of scale. Just as startups rightly pay AWS a premium over the cost of running their own servers so they can focus on growth, many of the big national corporations were willing to pay a premium for labor. [11]

As well as pushing incomes up from the bottom, by overpaying unions, the big companies of the 20th century also pushed incomes down at the top, by underpaying their top management. Economist J. K. Galbraith wrote in 1967 that “There are few corporations in which it would be suggested that executive salaries are at a maximum.” [12]


To some extent this was an illusion.

Much of the de facto pay of executives never showed up on their income tax returns, because it took the form of perks. The higher the rate of income tax, the more pressure there was to pay employees upstream of it. (In the UK, where taxes were even higher than in the US, companies would even pay their executives’ kids’ private school tuitions.) One of the most valuable things the big companies of the mid 20th century gave their employees was job security, and this too didn’t show up in tax returns or income statistics. So the nature of employment in these organizations tended to yield falsely low numbers about economic inequality. But even accounting for that, the big companies paid their best people less than market price. There was no market; the expectation was that you’d work for the same company for decades if not your whole career. [13]

Your work was so illiquid there was little chance of getting market price. But that same illiquidity also encouraged you not to seek it. If the company promised to employ you till you retired and give you a pension afterward, you didn’t want to extract as much from it this year as you could. You needed to take care of the company so it could take care of you. Especially when you’d been working with the same group of people for decades. If you tried to squeeze the company for more money, you were squeezing the organization that was going to take care of them. Plus if you didn’t put the company first you wouldn’t be promoted, and if you couldn’t switch ladders, promotion on this one was the only way up. [14]

To someone who’d spent several formative years in the armed forces, this situation didn’t seem as strange as it does to us now. From their point of view, as big company executives, they were high-ranking officers. They got paid a lot more than privates. They got to have expense account lunches at the best restaurants and fly around on the company’s Gulfstreams. It probably didn’t occur to most of them to ask if they were being paid market price.

The ultimate way to get market price is to work for yourself, by starting your own company. That seems obvious to any ambitious person now. But in the mid 20th century it was an alien concept. Not because starting one’s own company seemed too ambitious, but because it didn’t seem ambitious enough. Even as late as the 1970s, when I grew up, the ambitious plan was to get lots of education at prestigious institutions, and then join some other prestigious institution and work one’s way up the hierarchy. Your prestige was the prestige of the institution you belonged to. People did start their own businesses of course, but educated people rarely did, because in those days there was practically zero concept of starting what we now call a startup: a business that starts small and grows big. That was much harder to do in the mid 20th century. Starting one’s own business meant starting a business that would start small and stay small. Which in those days of big companies often meant scurrying around trying to avoid being trampled by elephants. It was more prestigious to be one of the executive class riding the elephant.

By the 1970s, no one stopped to wonder where the big prestigious companies had come from in the first place.


It seemed like they’d always been there, like the chemical elements. And indeed, there was a double wall between ambitious kids in the 20th century and the origins of the big companies. Many of the big companies were roll-ups that didn’t have clear founders. And when they did, the founders didn’t seem like us. Nearly all of them had been uneducated, in the sense of not having been to college. They were what Shakespeare called rude mechanicals. College trained one to be a member of the professional classes. Its graduates didn’t expect to do the sort of grubby menial work that Andrew Carnegie or Henry Ford started out doing. [15]

And in the 20th century there were more and more college graduates. They increased from about 2% of the population in 1900 to about 25% in 2000. In the middle of the century our two big forces intersected, in the form of the GI Bill, which sent 2.2 million World War II veterans to college. Few thought of it in these terms, but the result of making college the canonical path for the ambitious was a world in which it was socially acceptable to work for Henry Ford, but not to be Henry Ford. [16]

I remember this world well. I came of age just as it was starting to break up. In my childhood it was still dominant. Not quite so dominant as it had been. We could see from old TV shows and yearbooks and the way adults acted that people in the 1950s and 60s had been even more conformist than us. The mid-century model was already starting to get old. But that was not how we saw it at the time. We would at most have said that one could be a bit more daring in 1975 than 1965. And indeed, things hadn’t changed much yet.

But change was coming soon.

And when the Duplo economy started to disintegrate, it disintegrated in several different ways at once. Vertically integrated companies literally dis-integrated because it was more efficient to. Incumbents faced new competitors as (a) markets went global and (b) technical innovation started to trump economies of scale, turning size from an asset into a liability. Smaller companies were increasingly able to survive as formerly narrow channels to consumers broadened. Markets themselves started to change faster, as whole new categories of products appeared. And last but not least, the federal government, which had previously smiled upon J. P. Morgan’s world as the natural state of things, began to realize it wasn’t the last word after all.

What J. P. Morgan was to the horizontal axis, Henry Ford was to the vertical. He wanted to do everything himself. The giant plant he built at River Rouge between 1917 and 1928 literally took in iron ore at one end and sent cars out the other. 100,000 people worked there. At the time it seemed the future. But that is not how car companies operate today. Now much of the design and manufacturing happens in a long supply chain, whose products the car companies ultimately assemble and sell. The reason car companies operate this way is that it works better. Each company in the supply chain focuses on what they know best. And they each have to do it well or they can be swapped out for another supplier.

Why didn’t Henry Ford realize that networks of cooperating companies work better than a single big company?

One reason is that supplier networks take a while to evolve. In 1917, doing everything himself seemed to Ford the only way to get the scale he needed. And the second reason is that if you want to solve a problem using a network of cooperating companies, you have to be able to coordinate their efforts, and you can do that much better with computers. Computers reduce the transaction costs that Coase argued are the raison d’être of corporations. That is a fundamental change.

In the early 20th century, big companies were synonymous with efficiency. In the late 20th century they were synonymous with inefficiency. To some extent this was because the companies themselves had become sclerotic. But it was also because our standards were higher.

It wasn’t just within existing industries that change occurred. The industries themselves changed. It became possible to make lots of new things, and sometimes the existing companies weren’t the ones who did it best.

Microcomputers are a classic example.


The market was pioneered by upstarts like Apple, Radio Shack and Atari. When it got big enough, IBM decided it was worth paying attention to. At the time IBM completely dominated the computer industry. They assumed that all they had to do, now that this market was ripe, was to reach out and pick it. Most people at the time would have agreed with them. But what happened next illustrated how much more complicated the world had become. IBM did launch a microcomputer. Though quite successful, it did not crush Apple. But even more importantly, IBM itself ended up being supplanted by a supplier coming in from the side—from software, which didn’t even seem to be the same business. IBM’s big mistake was to accept a non-exclusive license for DOS. It must have seemed a safe move at the time. No other computer manufacturer had ever been able to outsell them. What difference did it make if other manufacturers could offer DOS too? The result of that miscalculation was an explosion of inexpensive PC clones. Microsoft now owned the PC standard, and the customer. And the microcomputer business ended up being Apple vs Microsoft.

Basically, Apple bumped IBM and then Microsoft stole its wallet. That sort of thing did not happen to big companies in mid-century. But it was going to happen increasingly often in the future.

Change happened mostly by itself in the computer business. In other industries, legal obstacles had to be removed first. Many of the mid-century oligopolies had been anointed by the federal government with policies (and in wartime, large orders) that kept out competitors. This didn’t seem as dubious to government officials at the time as it sounds to us. They felt a two-party system ensured sufficient competition in politics. It ought to work for business too.

Gradually the government realized that anti-competitive policies were doing more harm than good, and during the Carter administration started to remove them.

The word used for this process was misleadingly narrow: deregulation. What was really happening was de-oligopolization. It happened to one industry after another. Two of the most visible to consumers were air travel and long-distance phone service, which both became dramatically cheaper after deregulation.

Deregulation also contributed to the wave of hostile takeovers in the 1980s. In the old days the only limit on the inefficiency of companies, short of actual bankruptcy, was the inefficiency of their competitors. Now companies had to face absolute rather than relative standards. Any public company that didn’t generate sufficient returns on its assets risked having its management replaced with one that would. Often the new managers did this by breaking companies up into components that were more valuable separately. [17]

Version 1 of the national economy consisted of a few big blocks whose relationships were negotiated in back rooms by a handful of executives, politicians, regulators, and labor leaders. Version 2 was higher resolution: there were more companies, of more different sizes, making more different things, and their relationships changed faster. In this world there were still plenty of back room negotiations, but more was left to market forces. Which further accelerated the fragmentation.

It’s a little misleading to talk of versions when describing a gradual process, but not as misleading as it might seem. There was a lot of change in a few decades, and what we ended up with was qualitatively different. The companies in the S&P 500 in 1958 had been there an average of 61 years. By 2012 that number was 18 years. [18]

The breakup of the Duplo economy happened simultaneously with the spread of computing power. To what extent were computers a precondition? It would take a book to answer that. Obviously the spread of computing power was a precondition for the rise of startups. I suspect it was for most of what happened in finance too. But was it a precondition for globalization or the LBO wave? I don’t know, but I wouldn’t discount the possibility. It may be that the refragmentation was driven by computers in the way the industrial revolution was driven by steam engines. Whether or not computers were a precondition, they have certainly accelerated it.

The new fluidity of companies changed people’s relationships with their employers. Why climb a corporate ladder that might be yanked out from under you? Ambitious people started to think of a career less as climbing a single ladder than as a series of jobs that might be at different companies. More movement (or even potential movement) between companies introduced more competition in salaries. Plus as companies became smaller it became easier to estimate how much an employee contributed to the company’s revenue. Both changes drove salaries toward market price. And since people vary dramatically in productivity, paying market price meant salaries started to diverge.

By no coincidence it was in the early 1980s that the term “yuppie” was coined. That word is not much used now, because the phenomenon it describes is so taken for granted, but at the time it was a label for something novel. Yuppies were young professionals who made lots of money. To someone in their twenties today, this wouldn’t seem worth naming. Why wouldn’t young professionals make lots of money? But until the 1980s being underpaid early in your career was part of what it meant to be a professional. Young professionals were paying their dues, working their way up the ladder. The rewards would come later. What was novel about yuppies was that they wanted market price for the work they were doing now.

The first yuppies did not work for startups.


That was still in the future. Nor did they work for big companies. They were professionals working in fields like law, finance, and consulting. But their example rapidly inspired their peers. Once they saw that new BMW 325i, they wanted one too.

Underpaying people at the beginning of their career only works if everyone does it. Once some employer breaks ranks, everyone else has to, or they can’t get good people. And once started this process spreads through the whole economy, because at the beginnings of people’s careers they can easily switch not merely employers but industries.

But not all young professionals benefitted. You had to produce to get paid a lot. It was no coincidence that the first yuppies worked in fields where it was easy to measure that.

More generally, an idea was returning whose name sounds old-fashioned precisely because it was so rare for so long: that you could make your fortune. As in the past there were multiple ways to do it. Some made their fortunes by creating wealth, and others by playing zero-sum games. But once it became possible to make one’s fortune, the ambitious had to decide whether or not to. A physicist who chose physics over Wall Street in 1990 was making a sacrifice that a physicist in 1960 wasn’t.

The idea even flowed back into big companies. CEOs of big companies make more now than they used to, and I think much of the reason is prestige. In 1960, corporate CEOs had immense prestige. They were the winners of the only economic game in town. But if they made as little now as they did then, in real dollar terms, they’d seem like small fry compared to professional athletes and whiz kids making millions from startups and hedge funds. They don’t like that idea, so now they try to get as much as they can, which is more than they had been getting. [19]

Meanwhile a similar fragmentation was happening at the other end of the economic scale. As big companies’ oligopolies became less secure, they were less able to pass costs on to customers and thus less willing to overpay for labor. And as the Duplo world of a few big blocks fragmented into many companies of different sizes—some of them overseas—it became harder for unions to enforce their monopolies. As a result workers’ wages also tended toward market price. Which (inevitably, if unions had been doing their job) tended to be lower. Perhaps dramatically so, if automation had decreased the need for some kind of work.

And just as the mid-century model induced social as well as economic cohesion, its breakup brought social as well as economic fragmentation. People started to dress and act differently. Those who would later be called the “creative class” became more mobile. People who didn’t care much for religion felt less pressure to go to church for appearances’ sake, while those who liked it a lot opted for increasingly colorful forms. Some switched from meat loaf to tofu, and others to Hot Pockets. Some switched from driving Ford sedans to driving small imported cars, and others to driving SUVs. Kids who went to private schools or wished they did started to dress “preppy,” and kids who wanted to seem rebellious made a conscious effort to look disreputable. In a hundred ways people spread apart. [20]

Almost four decades later, fragmentation is still increasing.

Has it been net good or bad? I don’t know; the question may be unanswerable. Not entirely bad though. We take for granted the forms of fragmentation we like, and worry only about the ones we don’t. But as someone who caught the tail end of mid-century conformism, I can tell you it was no utopia. [21]

My goal here is not to say whether fragmentation has been good or bad, just to explain why it’s happening. With the centripetal forces of total war and 20th century oligopoly mostly gone, what will happen next? And more specifically, is it possible to reverse some of the fragmentation we’ve seen?

If it is, it will have to happen piecemeal. You can’t reproduce mid-century cohesion the way it was originally produced. It would be insane to go to war just to induce more national unity. And once you understand the degree to which the economic history of the 20th century was a low-res version 1, it’s clear you can’t reproduce that either.

20th century cohesion was something that happened at least in a sense naturally. The war was due mostly to external forces, and the Duplo economy was an evolutionary phase. If you want cohesion now, you’d have to induce it deliberately. And it’s not obvious how. I suspect the best we’ll be able to do is address the symptoms of fragmentation. But that may be enough.

The form of fragmentation people worry most about lately is economic inequality, and if you want to eliminate that you’re up against a truly formidable headwind—one that has been in operation since the stone age: technology. Technology is a lever. It magnifies work. And the lever not only grows increasingly long, but the rate at which it grows is itself increasing.

Which in turn means the variation in the amount of wealth people can create has not only been increasing, but accelerating.

The unusual conditions that prevailed in the mid 20th century masked this underlying trend. The ambitious had little choice but to join large organizations that made them march in step with lots of other people—literally in the case of the armed forces, figuratively in the case of big corporations. Even if the big corporations had wanted to pay people proportionate to their value, they couldn’t have figured out how. But that constraint has gone now. Ever since it started to erode in the 1970s, we’ve seen the underlying forces at work again. [22]

Not everyone who gets rich now does it by creating wealth, certainly. But a significant number do, and the Baumol Effect means all their peers get dragged along too. [23] And as long as it’s possible to get rich by creating wealth, the default tendency will be for economic inequality to increase. Even if you eliminate all the other ways to get rich. You can mitigate this with subsidies at the bottom and taxes at the top, but unless taxes are high enough to discourage people from creating wealth, you’re always going to be fighting a losing battle against increasing variation in productivity. [24]

That form of fragmentation, like the others, is here to stay. Or rather, back to stay. Nothing is forever, but the tendency toward fragmentation should be more forever than most things, precisely because it’s not due to any particular cause. It’s simply a reversion to the mean. When Rockefeller said individualism was gone, he was right for a hundred years. It’s back now, and that’s likely to be true for longer.

I worry that if we don’t acknowledge this, we’re headed for trouble.

If we think 20th century cohesion disappeared because of a few policy tweaks, we’ll be deluded into thinking we can get it back (minus the bad parts, somehow) with a few countertweaks. And then we’ll waste our time trying to eliminate fragmentation, when we’d be better off thinking about how to mitigate its consequences.

Notes

[1] Lester Thurow, writing in 1975, said the wage differentials prevailing at the end of World War II had become so embedded that they “were regarded as ‘just’ even after the egalitarian pressures of World War II had disappeared. Basically, the same differentials exist to this day, thirty years later.” But Goldin and Margo think market forces in the postwar period also helped preserve the wartime compression of wages—specifically increased demand for unskilled workers, and oversupply of educated ones.

(Oddly enough, the American custom of having employers pay for health insurance derives from efforts by businesses to circumvent NWLB wage controls in order to attract workers.)

[2] As always, tax rates don’t tell the whole story. There were lots of exemptions, especially for individuals. And in World War II the tax codes were so new that the government had little acquired immunity to tax avoidance. If the rich paid high taxes during the war it was more because they wanted to than because they had to.

After the war, federal tax receipts as a percentage of GDP were about the same as they are now.

In fact, for the entire period since the war, tax receipts have stayed close to 18% of GDP, despite dramatic changes in tax rates. The lowest point occurred when marginal income tax rates were highest: 14.1% in 1950. Looking at the data, it’s hard to avoid the conclusion that tax rates have had little effect on what people actually paid.

[3] Though in fact the decade preceding the war had been a time of unprecedented federal power, in response to the Depression. Which is not entirely a coincidence, because the Depression was one of the causes of the war. In many ways the New Deal was a sort of dress rehearsal for the measures the federal government took during wartime. The wartime versions were much more drastic and more pervasive though. As Anthony Badger wrote, “for many Americans the decisive change in their experiences came not with the New Deal but with World War II.”

[4] I don’t know enough about the origins of the world wars to say, but it’s not inconceivable they were connected to the rise of big corporations. If that were the case, 20th century cohesion would have a single cause.

[5] More precisely, there was a bimodal economy consisting, in Galbraith’s words, of “the world of the technically dynamic, massively capitalized and highly organized corporations on the one hand and the hundreds of thousands of small and traditional proprietors on the other.” Money, prestige, and power were concentrated in the former, and there was near zero crossover.

[6] I wonder how much of the decline in families eating together was due to the decline in families watching TV together afterward.

[7] I know when this happened because it was the season Dallas premiered. Everyone else was talking about what was happening on Dallas, and I had no idea what they meant.

[8] I didn’t realize it till I started doing research for this essay, but the meretriciousness of the products I grew up with is a well-known byproduct of oligopoly. When companies can’t compete on price, they compete on tailfins.

[9] Monroeville Mall was at the time of its completion in 1969 the largest in the country. In the late 1970s the movie Dawn of the Dead was shot there. Apparently the mall was not just the location of the movie, but its inspiration; the crowds of shoppers drifting through this huge mall reminded George Romero of zombies. My first job was scooping ice cream in the Baskin-Robbins.

[10] Labor unions were exempted from antitrust laws by the Clayton Antitrust Act in 1914 on the grounds that a person’s work is not “a commodity or article of commerce.” I wonder if that means service companies are also exempt.

[11] The relationships between unions and unionized companies can even be symbiotic, because unions will exert political pressure to protect their hosts. According to Michael Lind, when politicians tried to attack the A&P supermarket chain because it was putting local grocery stores out of business, “A&P successfully defended itself by allowing the unionization of its workforce in 1938, thereby gaining organized labor as a constituency.” I’ve seen this phenomenon myself: hotel unions are responsible for more of the political pressure against Airbnb than hotel companies.

[12] Galbraith was clearly puzzled that corporate executives would work so hard to make money for other people (the shareholders) instead of themselves. He devoted much of The New Industrial State to trying to figure this out.

His theory was that professionalism had replaced money as a motive, and that modern corporate executives were, like (good) scientists, motivated less by financial rewards than by the desire to do good work and thereby earn the respect of their peers. There is something in this, though I think lack of movement between companies combined with self-interest explains much of observed behavior.

[13] Galbraith (p. 94) says a 1952 study of the 800 highest paid executives at 300 big corporations found that three quarters of them had been with their company for more than 20 years.

[14] It seems likely that in the first third of the 20th century executive salaries were low partly because companies then were more dependent on banks, who would have disapproved if executives got too much. This was certainly true in the beginning. The first big company CEOs were J. P. Morgan’s hired hands.

Companies didn’t start to finance themselves with retained earnings till the 1920s. Till then they had to pay out their earnings in dividends, and so depended on banks for capital for expansion. Bankers continued to sit on corporate boards till the Glass-Steagall act in 1933.

By mid-century big companies funded 3/4 of their growth from earnings. But the early years of bank dependence, reinforced by the financial controls of World War II, must have had a big effect on social conventions about executive salaries. So it may be that the lack of movement between companies was as much the effect of low salaries as the cause.

Incidentally, the switch in the 1920s to financing growth with retained earnings was one cause of the 1929 crash. The banks now had to find someone else to lend to, so they made more margin loans.

[15] Even now it’s hard to get them to. One of the things I find hardest to get into the heads of would-be startup founders is how important it is to do certain kinds of menial work early in the life of a company. Doing things that don’t scale is to how Henry Ford got started as a high-fiber diet is to the traditional peasant’s diet: they had no choice but to do the right thing, while we have to make a conscious effort.

[16] Founders weren’t celebrated in the press when I was a kid. “Our founder” meant a photograph of a severe-looking man with a walrus mustache and a wing collar who had died decades ago. The thing to be when I was a kid was an executive. If you weren’t around then it’s hard to grasp the cachet that term had. The fancy version of everything was called the “executive” model.

[17] The wave of hostile takeovers in the 1980s was enabled by a combination of circumstances: court decisions striking down state anti-takeover laws, starting with the Supreme Court’s 1982 decision in Edgar v. MITE Corp.; the Reagan administration’s comparatively sympathetic attitude toward takeovers; the Depository Institutions Act of 1982, which allowed banks and savings and loans to buy corporate bonds; a new SEC rule issued in 1982 (rule 415) that made it possible to bring corporate bonds to market faster; the creation of the junk bond business by Michael Milken; a vogue for conglomerates in the preceding period that caused many companies to be combined that never should have been; a decade of inflation that left many public companies trading below the value of their assets; and not least, the increasing complacency of managements.

[18] Foster, Richard. “Creative Destruction Whips through Corporate America.” Innosight, February 2012.

[19] CEOs of big companies may be overpaid. I don’t know enough about big companies to say. But it is certainly not impossible for a CEO to make 200x as much difference to a company’s revenues as the average employee. Look at what Steve Jobs did for Apple when he came back as CEO. It would have been a good deal for the board to give him 95% of the company. Apple’s market cap the day Steve came back in July 1997 was 1.73 billion. 5% of Apple now (January 2016) would be worth about 30 billion. And it would not be if Steve hadn’t come back; Apple probably wouldn’t even exist anymore.

Merely including Steve in the sample might be enough to answer the question of whether public company CEOs in the aggregate are overpaid. And that is not as facile a trick as it might seem, because the broader your holdings, the more the aggregate is what you care about.
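The stake arithmetic in note [19] is easy to check. A minimal sketch, assuming the roughly $600B January 2016 market cap implied by the text's "5% ... worth about 30 billion" figure (the 2016 cap is an inference, not a number quoted in the note):

```python
# Check of the stake arithmetic in note [19].
# market_cap_2016 is an assumption implied by "5% ... about 30 billion".
market_cap_1997 = 1.73e9   # Apple's market cap the day Jobs returned, per the text
market_cap_2016 = 600e9    # assumed from the text's implied figure

stake_value = 0.05 * market_cap_2016
print(f"5% stake: ${stake_value / 1e9:.0f}B")                        # 5% stake: $30B
print(f"growth multiple: {market_cap_2016 / market_cap_1997:.0f}x")  # growth multiple: 347x
```

The growth multiple is the point: almost any stake granted in 1997 would have been cheap relative to the value created.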

[20] The late 1960s were famous for social upheaval. But that was more rebellion (which can happen in any era if people are provoked sufficiently) than fragmentation. You’re not seeing fragmentation unless you see people breaking off to both left and right.

[21] Globally the trend has been in the other direction. While the US is becoming more fragmented, the world as a whole is becoming less fragmented, and mostly in good ways.

[22] There were a handful of ways to make a fortune in the mid 20th century. The main one was drilling for oil, which was open to newcomers because it was not something big companies could dominate through economies of scale. How did individuals accumulate large fortunes in an era of such high taxes? Giant tax loopholes defended by two of the most powerful men in Congress, Sam Rayburn and Lyndon Johnson.

But becoming a Texas oilman was not in 1950 something one could aspire to the way starting a startup or going to work on Wall Street were in 2000, because (a) there was a strong local component and (b) success depended so much on luck.

[23] The Baumol Effect induced by startups is very visible in Silicon Valley. Google will pay people millions of dollars a year to keep them from leaving to start or join startups.

[24] I’m not claiming variation in productivity is the only cause of economic inequality in the US. But it’s a significant cause, and it will become as big a cause as it needs to, in the sense that if you ban other ways to get rich, people who want to get rich will use this route instead.

Thanks to Sam Altman, Trevor Blackwell, Paul Buchheit, Patrick Collison, Ron Conway, Chris Dixon, Benedict Evans, Richard Florida, Ben Horowitz, Jessica Livingston, Robert Morris, Tim O’Reilly, Geoff Ralston, Max Roser, Alexia Tsotsis, and Qasar Younis for reading drafts of this. Max also told me about several valuable sources. Essay from http://paulgraham.com/re.html

Bibliography

Allen, Frederick Lewis. The Big Change. Harper, 1952.

Averitt, Robert. The Dual Economy. Norton, 1968.

Badger, Anthony. The New Deal. Hill and Wang, 1989.

Bainbridge, John. The Super-Americans. Doubleday, 1961.

Beatty, Jack. Colossus. Broadway, 2001.

Brinkley, Douglas. Wheels for the World. Viking, 2003.

Brownlee, W. Elliot. Federal Taxation in America. Cambridge, 1996.

Chandler, Alfred. The Visible Hand. Harvard, 1977.

Chernow, Ron. The House of Morgan. Simon & Schuster, 1990.

Chernow, Ron. Titan: The Life of John D. Rockefeller. Random House, 1998.

Galbraith, John. The New Industrial State. Houghton Mifflin, 1967.

Goldin, Claudia and Robert A. Margo. “The Great Compression: The Wage Structure in the United States at Mid-Century.” NBER Working Paper 3817, 1991.

Gordon, John. An Empire of Wealth. HarperCollins, 2004.

Klein, Maury. The Genesis of Industrial America, 1870-1920. Cambridge, 2007.

Lind, Michael. Land of Promise. HarperCollins, 2012.

Micklethwait, John, and Adrian Wooldridge. The Company. Modern Library, 2003.

Nasaw, David. Andrew Carnegie. Penguin, 2006.

Sobel, Robert. The Age of Giant Corporations. Praeger, 1993.

Thurow, Lester. Generating Inequality: Mechanisms of Distribution. Basic Books, 1975.

Witte, John. The Politics and Development of the Federal Income Tax. Wisconsin, 1985.

 

Former Russian President Medvedev Said Off Air That Aliens Visit Our Planet

A few years ago, Dmitry Anatolyevich Medvedev was the tenth Prime Minister of Russia, a post he had held since 2012.

Before that, he served as the third President of Russia, from 2008 to 2012. The Russian Prime Minister holds the second-highest position in the government, after President Vladimir Putin. In off-air comments, Medvedev appeared to confirm that aliens are here.

After the recording of a 2015 television program, in off-air comments, he was less guarded.

He did not realize his microphone was still open when he said, “I believe in Father Frost. But not too deeply,” a jovial reply to a question about Russia’s equivalent of Santa Claus.

Prime Minister Medvedev Aliens Interview

A journalist asked whether the president is handed secret files on aliens when receiving the briefcase needed to activate Russia’s nuclear arsenal.

“Along with the briefcase with nuclear codes, the president of the country is given a special ‘top secret’ folder. This folder in its entirety contains information about aliens who visited our planet,” the former Russian President answered playfully.

“Along with this, you are given a report of the absolutely secret special service that exercises control over aliens on the territory of our country … More detailed information on this topic you can get from a well-known movie called ‘Men In Black’ … I will not tell you how many of them are among us because it may cause panic,” he said.

https://youtu.be/dYXlXCq9QTk

None of the television stations that interviewed Mr. Medvedev broadcast the off-air comments, but they were delivered to Reuters as a pool signal and some were shown on YouTube.

Mr. Medvedev worked closely with Vladimir Putin in St Petersburg in the 1990s and it was Mr. Putin who ushered Mr. Medvedev, now 47, into power in 2008. In case of President Putin’s death, resignation or impeachment, the Prime Minister becomes the temporary president until new presidential elections are held.  http://beforeitsnews.com/beyond-science/2012/12/breaking-russian-prime-minister-confirms-aliens-are-here-2440158.html  For the Silo, via Major (ret.) George A. Filer -Filer’s Files 49-2015

People In Japan, Spain, France Are Unhappy With Government COVID Response

Paris, France May, 2020 — Sentiment over whether governments are doing a good job of containing the coronavirus pandemic has swung in a number of countries over a month, according to the latest Ipsos poll.

A majority of people in nine out of 13 countries feel their government is doing a good job of containing the spread of COVID-19, according to a survey of nearly 26,000 respondents conducted April 23 to 26. People in India (87%), Australia (84%), Canada (81%), Germany (75%) and Italy (61%) are most likely to say this, while those in Japan (62%), the hard-hit European countries of Spain (60%) and France (51%), and Russia (47%) are most likely to say their government is doing a poor job.


Compared with polling results from more than a month earlier – March 19 to 21 – sentiment has fallen most in some key countries at the center of the outbreaks. Japan tops the list, with approval of the government response down 18 percentage points, followed by France (-13), Brazil and Russia (-9), Italy (-5), and the United States (-4).

On the other end, a positive view of the government’s work has increased in Germany (+26 points), Australia (+24 points), Mexico (+13), and Canada (+10) since March. Sentiment in the United Kingdom has remained unchanged with three in five people (59%) happy with the government response.


In terms of how people view the work of an international agency – the World Health Organization – a majority of people in 11 out of the 13 countries surveyed think the WHO is doing a good job in containing the spread of COVID-19, but this sentiment has fallen since March in nine out of 12 countries where Ipsos has tracked opinions.

People in the emerging markets of Mexico (78%), India and Brazil (75%) are most likely to be satisfied with the WHO’s response, followed by the developed nations of Canada (69%) and the U.K. (64%). The only two countries where more people are dissatisfied than satisfied with the WHO’s performance are Japan (63%) and Russia (42%).

Other Ipsos research shows that when respondents are asked about the performance of institutions in preventing the spread of the virus, other aspects of the issue, including the economic effects of shutdowns, also influence how they evaluate institutional performance in containing COVID-19.


The countries seeing the biggest decline in regard for the WHO’s response are France (-14), India and Russia (-12) and Australia (-9). Germany (+13) and Japan (+2) are the only countries where satisfaction with the WHO’s role has increased.


https://www.ipsos.com/en/view-governments-job-contain-covid-19-falls-hard-hit-countries-rises-others

About the Study

These are the results of an Ipsos survey conducted April 23rd to 26th, 2020 on the Global Advisor online platform among 25,992 adults aged 18-74 in Canada and the United States and 16-74 in Australia, Brazil, France, Germany, Italy, India, Japan, Mexico, Russia, Spain and the United Kingdom. Where available, tracking results from previous studies, conducted in March and April, and selected results from February, are referenced by date.

The sample for April 23rd to 26th consists of approximately 2,000 individuals in each country. The samples in Australia, Canada, France, Germany, Italy, Japan, Spain, the U.K. and the U.S. can be taken as representative of these countries’ general adult population over age 16 or 18 (as above) and under the age of 75. The sample in Brazil, India, Mexico, and Russia is more urban, more educated and/or more affluent than the general population and should be viewed as reflecting the views of the more “connected” segment of the population. The data is weighted so that each market’s sample composition best reflects the demographic profile of the adult population according to the most recent census data.

Where results do not sum to 100, or the ‘difference’ appears to be +/-1 more/less than the actual, this may be due to rounding, multiple responses, or the exclusion of don’t-know or not-stated responses. The precision of Ipsos online polls is calculated using a credibility interval, with a poll of 1,000 accurate to +/- 3.5 percentage points. For more information on the Ipsos use of credibility intervals, please visit the Ipsos website (www.Ipsos.com).
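For intuition, the ±3.5-point figure quoted above is close to what the classical frequentist margin of error gives for a sample of 1,000; Ipsos’s Bayesian credibility interval is computed differently (see their site) and comes out slightly wider. A rough sketch of the classical calculation:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Classical 95% margin of error for a sample proportion,
    in percentage points. Worst case is at p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Worst-case margin for a sample of 1,000 respondents:
print(round(margin_of_error(0.5, 1000), 1))  # 3.1
```

The published ±3.5 is wider than this ±3.1 because Ipsos’s credibility-interval methodology adjusts for the non-probability nature of online panels.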

For the Silo, Darrell Bricker, PhD CEO, Ipsos Global Public Affairs

About Ipsos

Ipsos is the world’s third largest market research company, present in 90 markets and employing more than 18,000 people.

Our passionately curious research professionals, analysts and scientists have built unique multi-specialist capabilities that provide true understanding and powerful insights into the actions, opinions and motivations of citizens, consumers, patients, customers or employees. We serve more than 5000 clients across the world with 75 business solutions.

Founded in France in 1975, Ipsos has been listed on Euronext Paris since July 1, 1999. The company is part of the SBF 120 and the Mid-60 index and is eligible for the Deferred Settlement Service (SRD).

ISIN code FR0000073298, Reuters ISOS.PA, Bloomberg IPS:FP www.ipsos.com
Featured image: Philippe Lopez/AFP/Getty Images/CNN

Ministers Call On More Countries To Put Forward Stepped-Up Climate Crisis Education Plan

Madrid, December 2019–Two pioneering countries this week committed to stepped-up climate and environmental education in order to equip a new generation with the knowledge, awareness and skills needed to navigate the emerging challenges of the 21st century.

Italy and Mexico, speaking at a press conference at COP25 in Madrid, urged other countries to follow suit in order to make strong environmental education a worldwide phenomenon.

They proposed Earth Day in April next year as one of a series of milestones in 2020 where like-minded nations could announce higher ambition on climate and environmental education.

The plan is to have a critical mass of countries committed to the environmental and climate education agenda by the time of the UN climate conference (COP26) taking place in Glasgow, UK in November.

Lorenzo Fioramonti, Italy’s Minister of Education, Innovation and Research, said: “Young people are demanding that governments take climate change far more seriously. There are many areas of society where we must act, and act with increased ambition: compulsory education on these topics needs to be a key part of this national and international response to the big issues of our time”.

The Italian Education Minister said that Earth Day 2020 represents one of the key moments in this important year to recognize the centrality of climate and environmental education including through  ‘Teach-Ins’ as one way of raising awareness among the young.

He said he hoped other countries would take the same opportunity to mark Earth Day’s 50th anniversary and the birth of the environmental movement.

Teach-ins, in which students organize debates and propose solutions to environmental challenges, were a key feature of the first Earth Day in 1970, when over 20 million young people and citizens protested, triggering in the process new laws and action by the then US administration.

Martha Delgado, Vice Minister of Global Affairs in Mexico’s Ministry of Foreign Affairs, said her country had now incorporated mandatory environmental education into Mexico’s constitution as the first step in a new comprehensive plan.

“Mexico is now committed to mandatory environmental education at home, but we are also committed to promoting environmental education internationally. The challenges we are facing are national but also global. Young people everywhere need the knowledge to fully respond to what is unfolding in our world,” she said.

“Great transformations can only be achieved through knowledge, awareness and the sense of collaboration. We are convinced that environmental education is the route to meeting Sustainable Development Goals, an essential tool to fight the climate crisis and can prompt a profound cultural change to contribute to our planet’s sustainability,” said Vice-Minister Delgado.

Kathleen Rogers, President of the Earth Day Network, said environmental literacy had been at the core of Earth Day since its inception in 1970 but that governments had not gone far enough.

“Young people, through movements such as Fridays for Future, have been asking governments to tell the truth about the climate and environmental emergencies that we are now facing—‘telling the truth’ needs to happen in the schools and universities, and needs to happen now,” she said.

Rogers said that while globally climate and environmental education exist across the spectrum, from decades of formal implementation to continued exclusion of the topic as a whole, it is time to make these critical subjects compulsory and to link them to civic education so that students will develop both the knowledge and the civic skills they need to fully engage in the solutions to climate change.

 “These are the core elements of transforming our societies so that a new, far better informed and active generation can emerge to ensure governments truly respond to the challenges of our time,” she said.

Patricia Espinosa, Executive Secretary of the UN Framework Convention on Climate Change (UNFCCC), welcomed the announcements of Italy and Mexico as a key contribution to realizing the aims of the Paris Climate Change Agreement.

Under the Agreement, governments are looking to enhance their Nationally Determined Contributions (NDCs) in 2020 as a contribution to climate action and also to the Sustainable Development Goals.

From Madrid and COP25, focus will also be on the Congregation for Catholic Institutes of the Holy See which has chosen Earth Day 2020 as a preparatory meeting for the Global Compact on Education that Pope Francis will launch on May 14th.  It aims to promote a global commitment that also teaches new generations respect for humankind and nature.

Earth Day Italia is working together with the Italian Ministry for the Environment and the Festival for Sustainable Education to support this initiative. For the Silo, Denice Zeck.

Current State of Cybersecurity Doesn’t Work

“You pay your money, as the saying goes, and you take your chances,” says Falkowitz, CEO of Area 1 Security. “More and more these days, it seems like this ‘policy’ is the rule rather than the exception, in everything from health care insurance to the commuter parking lot. Even though you’ve paid for the product or service, no one’s really responsible for some reason when you suffer damages while consuming whatever it is you bought. Or worse yet, you somehow find that whatever you bought doesn’t really do what you bought it for. And there’s an asterisk somewhere in the fine print to explain why. Unfortunately, nowhere is this more prevalent than in today’s cybersecurity industry.”

Despite the billions spent on cybersecurity we continue to suffer the most debilitating and expensive breaches imaginable, and some that cannot be imagined under any circumstances. Yet experts predict the worst is still to come. Cybercrime has moved from data theft and website defacement to a trajectory that includes data manipulation, data loss and eventually, if something is not done to change the economics of being a bad guy on the internet, threats to the stability of society itself.

“Cybersecurity companies seem to be content to collect their millions with the caveat that they can’t really offer protection in exchange,” continues Falkowitz. “Their customers likewise collect mountains of data on their customers and are appropriately contrite when that data is stolen or misused but the apology is not accompanied by compensation. Even the government can’t protect itself, or its citizens even if they’re attacked by another nation-state.”

Phishing

The excuses and the explanations are familiar: Cybersecurity is too complicated. Hackers are too clever. Attacks are unprecedented.

“Nonsense. Every bit of it,” says Falkowitz. “Cybersecurity is no more complicated than hundreds of other things we do routinely, from sending astronauts into space to performing open-heart surgery. Hackers are human, just more persistent about how to fool the rest of us. And attacks are based on the same tried and true methods—phishing—they’ve been using for decades.”

Oren J. Falkowitz is the co-founder and CEO of Silicon Valley’s Area 1 Security. Oren held senior positions at the NSA and United States Cyber Command (USCYBERCOM), where he focused on Computer Network Operations & Big Data, and is a prominent cybersecurity industry thought leader committed to keeping high-level national security conversations relevant.  For the Silo, by Jennifer Vickery.

Supplemental- What exactly is phishing?

Silo Reader Says All Products And Services Should Compete In A “Free Market”

Letters to the Silo

The [ image that is shown with Toby Barrett’s recent letter ] is certainly worth a thousand words. Government is consuming too much.

But why do we continue to feed government? Why would we participate in any political or economic system that is not serving our best interests?

Government is basically unproductive and can only give what it takes. Bigger government takes more and gives less.

The fruits of our labour are controlled, confiscated, and redistributed through taxation, inflation, interest, and government spending. There is also a systemic shortage of official currency, which leads to a shortage of paid employment, and the jobs that are available might be completely unproductive. We are forced to compete for currency that is systemically scarce, even though there are plenty of worthwhile activities that can be done and there are plenty of people who are willing and able to do productive work.

There seems to be an increasing level of dissatisfaction with government and the political process, but there certainly isn’t a consensus in defining the problem or offering a solution that will sufficiently address all of our concerns or satisfy everyone. This poses a challenge, but it also presents us with an opportunity to carefully examine the form and function of government, and explore a full range of possible alternatives.

If we have freedom of choice and a free market then we should be able to individually select the goods and services that we wish to purchase from a variety of producers and providers, who should be able to compete for customers based on the quality and price of their products and services. All products and services, including government programs and services, should be able to compete in a free market.

Trade and exchange should be voluntary and mutually beneficial. We should not be forced to pay for anything that we don’t want or don’t use, and we should not have to do business with anyone who consistently offers poor quality goods and services or who does not pay their legitimate debts.

If we have economic freedom then we should be able to negotiate agreeable prices, accept or refuse any form of payment, control the allocation of our credit, and use any method or medium of exchange. We should not be compelled to use a systemically scarce currency that is created as interest-bearing debt.

If the purpose of an economic system is to facilitate the production and exchange of goods and services then it should be possible to create numerous ways to serve this purpose, with various concurrent systems operating in any location. This would give us more control over our time, labour, skills, and resources.

If government is a provider of services then it should compete for customers based on the quality and price of any services that it is actually willing and able to provide, including education, health care, and defence. If government services were the best ones available then we would presumably choose to use them. Our wealth should not be confiscated and redistributed to pay for anything that we don’t want or don’t use.

We can already seek membership in various communities, organizations or other groups, based on our own political, religious, social, recreational, or business interests. If we have freedom of association and political freedom then we should even be able to choose a political system and type of government without having to move to a different place, and without imposing our choice on anyone else. This would give us the option to hire people to manage our affairs and make decisions on our behalf, but we would not be represented or led without our consent.


Any imposed political system or government is a method of control. Political freedom does not exist if an individual is forced to accept the decisions of any other individual or group, even if it calls itself a majority.

Imposed political systems and territorial governments with their restrictive geopolitical boundaries can be replaced with a variety of voluntary communities, mutual benefit associations, and autonomous protective groups, with overlapping membership in any location. Multiple communities can exist in any geographic region, without any imposed territorial monopolies for the provision of services.

Individual participation in any economic or political system should be entirely voluntary, based on choice and consent, rather than coercion and compulsion. No person is an island, but everyone should essentially be able to individually decide how he or she would like to organize and manage his or her economic and political activities.

Government is a human invention that has changed over time and will continue to change, but the direction of this change will be determined by the way we think and the choices we make.

Diverse methods and arrangements can co-exist simultaneously in any location to facilitate the production, provision, distribution, and exchange of goods and services, for the mutual benefit of all voluntary participants, at their own risk and expense. James Clayton

Note- boldfacing was not indicated in the original submitted letter to the Silo.
