
7000 Words About The Dubious Refragmentation Of The Economy

One advantage of being old is that you can see change happen in your lifetime.

A lot of the change I’ve seen is fragmentation. For example, US politics, and now Canadian politics, are much more polarized than they used to be. Culturally we have ever less common ground, and though inclusiveness is preached by the media and the Left, special interest groups and policies have a polarizing effect. The creative class flocks to a handful of happy cities, abandoning the rest. And increasing economic inequality means the spread between rich and poor is growing too. I’d like to propose a hypothesis: that all these trends are instances of the same phenomenon. And moreover, that the cause is not some force that’s pulling us apart, but rather the erosion of forces that had been pushing us together.

Worse still, for those who worry about these trends, the forces that were pushing us together were an anomaly, a one-time combination of circumstances that’s unlikely to be repeated—and indeed, that we would not want to repeat.


The two forces were war (above all World War II), and the rise of large corporations.

The effects of World War II were both economic and social. Economically, it decreased variation in income. Like all modern armed forces, America’s were socialist economically. From each according to his ability, to each according to his need. More or less. Higher ranking members of the military got more (as higher ranking members of socialist societies always do), but what they got was fixed according to their rank. And the flattening effect wasn’t limited to those under arms, because the US economy was conscripted too. Between 1942 and 1945 all wages were set by the National War Labor Board. Like the military, they defaulted to flatness. And this national standardization of wages was so pervasive that its effects could still be seen years after the war ended. [1]

Business owners weren’t supposed to be making money either.

FDR said “not a single war millionaire” would be permitted. To ensure that, any increase in a company’s profits over prewar levels was taxed at 85%. And when what was left after corporate taxes reached individuals, it was taxed again at a marginal rate of 93%. [2]

Socially too the war tended to decrease variation. Over 16 million men and women from all sorts of different backgrounds were brought together in a way of life that was literally uniform. Service rates for men born in the early 1920s approached 80%. And working toward a common goal, often under stress, brought them still closer together.

Though strictly speaking World War II lasted less than 4 years for the USA, its effects lasted much longer.

Wars make central governments more powerful, and World War II was an extreme case of this. In the US, as in all the other Allied countries, the federal government was slow to give up the new powers it had acquired. Indeed, in some respects the war didn’t end in 1945; the enemy just switched to the Soviet Union. In tax rates, federal power, defense spending, conscription, and nationalism the decades after the war looked more like wartime than prewar peacetime. [3] And the social effects lasted too. The kid pulled into the army from behind a mule team in West Virginia didn’t simply go back to the farm afterward. Something else was waiting for him, something that looked a lot like the army.

If total war was the big political story of the 20th century, the big economic story was the rise of a new kind of company. And this too tended to produce both social and economic cohesion. [4]

The 20th century was the century of the big, national corporation. General Electric, General Foods, General Motors. Developments in finance, communications, transportation, and manufacturing enabled a new type of company whose goal was above all scale. Version 1 of this world was low-res: a Duplo world of a few giant companies dominating each big market. [5]

The late 19th and early 20th centuries had been a time of consolidation, led especially by J. P. Morgan. Thousands of companies run by their founders were merged into a couple hundred giant ones run by professional managers. Economies of scale ruled the day. It seemed to people at the time that this was the final state of things. John D. Rockefeller said in 1880:


The day of combination is here to stay. Individualism has gone, never to return.

He turned out to be mistaken, but he seemed right for the next hundred years.

The consolidation that began in the late 19th century continued for most of the 20th. By the end of World War II, as Michael Lind writes, “the major sectors of the economy were either organized as government-backed cartels or dominated by a few oligopolistic corporations.”

For consumers this new world meant the same choices everywhere, but only a few of them. When I grew up there were only 2 or 3 of most things, and since they were all aiming at the middle of the market there wasn’t much to differentiate them.

One of the most important instances of this phenomenon was in TV.


Here there were 3 choices: NBC, CBS, and ABC. Plus public TV for eggheads and communists. The programs the 3 networks offered were indistinguishable. In fact, here there was a triple pressure toward the center. If one show did try something daring, local affiliates in conservative markets would make them stop. Plus since TVs were expensive, whole families watched the same shows together, so they had to be suitable for everyone.

And not only did everyone get the same thing, they got it at the same time. It’s difficult to imagine now, but every night tens of millions of families would sit down together in front of their TV set watching the same show, at the same time, as their next door neighbors. What happens now with the Super Bowl used to happen every night. We were literally in sync. [6]

In a way mid-century TV culture was good. The view it gave of the world was like you’d find in a children’s book, and it probably had something of the effect that (parents hope) children’s books have in making people behave better. But, like children’s books, TV was also misleading. Dangerously misleading, for adults. In his autobiography, Robert MacNeil talks of seeing gruesome images that had just come in from Vietnam and thinking, we can’t show these to families while they’re having dinner.

I know how pervasive the common culture was, because I tried to opt out of it, and it was practically impossible to find alternatives.

When I was 13 I realized, more from internal evidence than any outside source, that the ideas we were being fed on TV were crap, and I stopped watching it. [7] But it wasn’t just TV. It seemed like everything around me was crap. The politicians all saying the same things, the consumer brands making almost identical products with different labels stuck on to indicate how prestigious they were meant to be, the balloon-frame houses with fake “colonial” skins, the cars with several feet of gratuitous metal on each end that started to fall apart after a couple years, the “red delicious” apples that were red but only nominally apples. And in retrospect, it was crap. [8]

But when I went looking for alternatives to fill this void, I found practically nothing. There was no Internet then. The only place to look was in the chain bookstore in our local shopping mall. [9] There I found a copy of The Atlantic. I wish I could say it became a gateway into a wider world, but in fact I found it boring and incomprehensible. Like a kid tasting whisky for the first time and pretending to like it, I preserved that magazine as carefully as if it had been a book. I’m sure I still have it somewhere. But though it was evidence that there was, somewhere, a world that wasn’t red delicious, I didn’t find it till college.

It wasn’t just as consumers that the big companies made us similar. They did as employers too. Within companies there were powerful forces pushing people toward a single model of how to look and act. IBM was particularly notorious for this, but they were only a little more extreme than other big companies. And the models of how to look and act varied little between companies. Meaning everyone within this world was expected to seem more or less the same. And not just those in the corporate world, but also everyone who aspired to it—which in the middle of the 20th century meant most people who weren’t already in it. For most of the 20th century, working-class people tried hard to look middle class. You can see it in old photos. Few adults aspired to look dangerous in 1950.

But the rise of national corporations didn’t just compress us culturally. It compressed us economically too, and on both ends.

Along with giant national corporations, we got giant national labor unions. And in the mid 20th century the corporations cut deals with the unions where they paid over market price for labor. Partly because the unions were monopolies. [10] Partly because, as components of oligopolies themselves, the corporations knew they could safely pass the cost on to their customers, because their competitors would have to as well. And partly because in mid-century most of the giant companies were still focused on finding new ways to milk economies of scale. Just as startups rightly pay AWS a premium over the cost of running their own servers so they can focus on growth, many of the big national corporations were willing to pay a premium for labor. [11]

As well as pushing incomes up from the bottom, by overpaying unions, the big companies of the 20th century also pushed incomes down at the top, by underpaying their top management. Economist J. K. Galbraith wrote in 1967 that “There are few corporations in which it would be suggested that executive salaries are at a maximum.” [12]


To some extent this was an illusion.

Much of the de facto pay of executives never showed up on their income tax returns, because it took the form of perks. The higher the rate of income tax, the more pressure there was to pay employees upstream of it. (In the UK, where taxes were even higher than in the US, companies would even pay their kids’ private school tuitions.) One of the most valuable things the big companies of the mid 20th century gave their employees was job security, and this too didn’t show up in tax returns or income statistics. So the nature of employment in these organizations tended to yield falsely low numbers about economic inequality. But even accounting for that, the big companies paid their best people less than market price. There was no market; the expectation was that you’d work for the same company for decades if not your whole career. [13]

Your work was so illiquid there was little chance of getting market price. But that same illiquidity also encouraged you not to seek it. If the company promised to employ you till you retired and give you a pension afterward, you didn’t want to extract as much from it this year as you could. You needed to take care of the company so it could take care of you. Especially when you’d been working with the same group of people for decades. If you tried to squeeze the company for more money, you were squeezing the organization that was going to take care of them. Plus if you didn’t put the company first you wouldn’t be promoted, and if you couldn’t switch ladders, promotion on this one was the only way up. [14]

To someone who’d spent several formative years in the armed forces, this situation didn’t seem as strange as it does to us now. From their point of view, as big company executives, they were high-ranking officers. They got paid a lot more than privates. They got to have expense account lunches at the best restaurants and fly around on the company’s Gulfstreams. It probably didn’t occur to most of them to ask if they were being paid market price.

The ultimate way to get market price is to work for yourself, by starting your own company. That seems obvious to any ambitious person now. But in the mid 20th century it was an alien concept. Not because starting one’s own company seemed too ambitious, but because it didn’t seem ambitious enough. Even as late as the 1970s, when I grew up, the ambitious plan was to get lots of education at prestigious institutions, and then join some other prestigious institution and work one’s way up the hierarchy. Your prestige was the prestige of the institution you belonged to. People did start their own businesses of course, but educated people rarely did, because in those days there was practically zero concept of starting what we now call a startup: a business that starts small and grows big. That was much harder to do in the mid 20th century. Starting one’s own business meant starting a business that would start small and stay small. Which in those days of big companies often meant scurrying around trying to avoid being trampled by elephants. It was more prestigious to be one of the executive class riding the elephant.

By the 1970s, no one stopped to wonder where the big prestigious companies had come from in the first place.


It seemed like they’d always been there, like the chemical elements. And indeed, there was a double wall between ambitious kids in the 20th century and the origins of the big companies. Many of the big companies were roll-ups that didn’t have clear founders. And when they did, the founders didn’t seem like us. Nearly all of them had been uneducated, in the sense of not having been to college. They were what Shakespeare called rude mechanicals. College trained one to be a member of the professional classes. Its graduates didn’t expect to do the sort of grubby menial work that Andrew Carnegie or Henry Ford started out doing. [15]

And in the 20th century there were more and more college graduates. They increased from about 2% of the population in 1900 to about 25% in 2000. In the middle of the century our two big forces intersected, in the form of the GI Bill, which sent 2.2 million World War II veterans to college. Few thought of it in these terms, but the result of making college the canonical path for the ambitious was a world in which it was socially acceptable to work for Henry Ford, but not to be Henry Ford. [16]

I remember this world well. I came of age just as it was starting to break up. In my childhood it was still dominant. Not quite so dominant as it had been. We could see from old TV shows and yearbooks and the way adults acted that people in the 1950s and 60s had been even more conformist than us. The mid-century model was already starting to get old. But that was not how we saw it at the time. We would at most have said that one could be a bit more daring in 1975 than 1965. And indeed, things hadn’t changed much yet.

But change was coming soon.

And when the Duplo economy started to disintegrate, it disintegrated in several different ways at once. Vertically integrated companies literally dis-integrated because it was more efficient to. Incumbents faced new competitors as (a) markets went global and (b) technical innovation started to trump economies of scale, turning size from an asset into a liability. Smaller companies were increasingly able to survive as formerly narrow channels to consumers broadened. Markets themselves started to change faster, as whole new categories of products appeared. And last but not least, the federal government, which had previously smiled upon J. P. Morgan’s world as the natural state of things, began to realize it wasn’t the last word after all.

What J. P. Morgan was to the horizontal axis, Henry Ford was to the vertical. He wanted to do everything himself. The giant plant he built at River Rouge between 1917 and 1928 literally took in iron ore at one end and sent cars out the other. 100,000 people worked there. At the time it seemed the future. But that is not how car companies operate today. Now much of the design and manufacturing happens in a long supply chain, whose products the car companies ultimately assemble and sell. The reason car companies operate this way is that it works better. Each company in the supply chain focuses on what they know best. And they each have to do it well or they can be swapped out for another supplier.

Why didn’t Henry Ford realize that networks of cooperating companies work better than a single big company?

One reason is that supplier networks take a while to evolve. In 1917, doing everything himself seemed to Ford the only way to get the scale he needed. And the second reason is that if you want to solve a problem using a network of cooperating companies, you have to be able to coordinate their efforts, and you can do that much better with computers. Computers reduce the transaction costs that Coase argued are the raison d’être of corporations. That is a fundamental change.

In the early 20th century, big companies were synonymous with efficiency. In the late 20th century they were synonymous with inefficiency. To some extent this was because the companies themselves had become sclerotic. But it was also because our standards were higher.

It wasn’t just within existing industries that change occurred. The industries themselves changed. It became possible to make lots of new things, and sometimes the existing companies weren’t the ones who did it best.

Microcomputers are a classic example.


The market was pioneered by upstarts like Apple, Radio Shack and Atari. When it got big enough, IBM decided it was worth paying attention to. At the time IBM completely dominated the computer industry. They assumed that all they had to do, now that this market was ripe, was to reach out and pick it. Most people at the time would have agreed with them. But what happened next illustrated how much more complicated the world had become. IBM did launch a microcomputer. Though quite successful, it did not crush Apple. But even more importantly, IBM itself ended up being supplanted by a supplier coming in from the side—from software, which didn’t even seem to be the same business. IBM’s big mistake was to accept a non-exclusive license for DOS. It must have seemed a safe move at the time. No other computer manufacturer had ever been able to outsell them. What difference did it make if other manufacturers could offer DOS too? The result of that miscalculation was an explosion of inexpensive PC clones. Microsoft now owned the PC standard, and the customer. And the microcomputer business ended up being Apple vs Microsoft.

Basically, Apple bumped IBM and then Microsoft stole its wallet. That sort of thing did not happen to big companies in mid-century. But it was going to happen increasingly often in the future.

Change happened mostly by itself in the computer business. In other industries, legal obstacles had to be removed first. Many of the mid-century oligopolies had been anointed by the federal government with policies (and in wartime, large orders) that kept out competitors. This didn’t seem as dubious to government officials at the time as it sounds to us. They felt a two-party system ensured sufficient competition in politics. It ought to work for business too.

Gradually the government realized that anti-competitive policies were doing more harm than good, and during the Carter administration started to remove them.

The word used for this process was misleadingly narrow: deregulation. What was really happening was de-oligopolization. It happened to one industry after another. Two of the most visible to consumers were air travel and long-distance phone service, which both became dramatically cheaper after deregulation.

Deregulation also contributed to the wave of hostile takeovers in the 1980s. In the old days the only limit on the inefficiency of companies, short of actual bankruptcy, was the inefficiency of their competitors. Now companies had to face absolute rather than relative standards. Any public company that didn’t generate sufficient returns on its assets risked having its management replaced with one that would. Often the new managers did this by breaking companies up into components that were more valuable separately. [17]

Version 1 of the national economy consisted of a few big blocks whose relationships were negotiated in back rooms by a handful of executives, politicians, regulators, and labor leaders. Version 2 was higher resolution: there were more companies, of more different sizes, making more different things, and their relationships changed faster. In this world there were still plenty of back room negotiations, but more was left to market forces. Which further accelerated the fragmentation.

It’s a little misleading to talk of versions when describing a gradual process, but not as misleading as it might seem. There was a lot of change in a few decades, and what we ended up with was qualitatively different. The companies in the S&P 500 in 1958 had been there an average of 61 years. By 2012 that number was 18 years. [18]

The breakup of the Duplo economy happened simultaneously with the spread of computing power. To what extent were computers a precondition? It would take a book to answer that. Obviously the spread of computing power was a precondition for the rise of startups. I suspect it was for most of what happened in finance too. But was it a precondition for globalization or the LBO wave? I don’t know, but I wouldn’t discount the possibility. It may be that the refragmentation was driven by computers in the way the industrial revolution was driven by steam engines. Whether or not computers were a precondition, they have certainly accelerated it.

The new fluidity of companies changed people’s relationships with their employers. Why climb a corporate ladder that might be yanked out from under you? Ambitious people started to think of a career less as climbing a single ladder than as a series of jobs that might be at different companies. More movement (or even potential movement) between companies introduced more competition in salaries. Plus as companies became smaller it became easier to estimate how much an employee contributed to the company’s revenue. Both changes drove salaries toward market price. And since people vary dramatically in productivity, paying market price meant salaries started to diverge.

By no coincidence it was in the early 1980s that the term “yuppie” was coined. That word is not much used now, because the phenomenon it describes is so taken for granted, but at the time it was a label for something novel. Yuppies were young professionals who made lots of money. To someone in their twenties today, this wouldn’t seem worth naming. Why wouldn’t young professionals make lots of money? But until the 1980s being underpaid early in your career was part of what it meant to be a professional. Young professionals were paying their dues, working their way up the ladder. The rewards would come later. What was novel about yuppies was that they wanted market price for the work they were doing now.

The first yuppies did not work for startups.


That was still in the future. Nor did they work for big companies. They were professionals working in fields like law, finance, and consulting. But their example rapidly inspired their peers. Once they saw that new BMW 325i, they wanted one too.

Underpaying people at the beginning of their career only works if everyone does it. Once some employer breaks ranks, everyone else has to, or they can’t get good people. And once started this process spreads through the whole economy, because at the beginnings of people’s careers they can easily switch not merely employers but industries.

But not all young professionals benefitted. You had to produce to get paid a lot. It was no coincidence that the first yuppies worked in fields where it was easy to measure that.

More generally, an idea was returning whose name sounds old-fashioned precisely because it was so rare for so long: that you could make your fortune. As in the past there were multiple ways to do it. Some made their fortunes by creating wealth, and others by playing zero-sum games. But once it became possible to make one’s fortune, the ambitious had to decide whether or not to. A physicist who chose physics over Wall Street in 1990 was making a sacrifice that a physicist in 1960 wasn’t.

The idea even flowed back into big companies. CEOs of big companies make more now than they used to, and I think much of the reason is prestige. In 1960, corporate CEOs had immense prestige. They were the winners of the only economic game in town. But if they made as little now as they did then, in real dollar terms, they’d seem like small fry compared to professional athletes and whiz kids making millions from startups and hedge funds. They don’t like that idea, so now they try to get as much as they can, which is more than they had been getting. [19]

Meanwhile a similar fragmentation was happening at the other end of the economic scale. As big companies’ oligopolies became less secure, they were less able to pass costs on to customers and thus less willing to overpay for labor. And as the Duplo world of a few big blocks fragmented into many companies of different sizes—some of them overseas—it became harder for unions to enforce their monopolies. As a result workers’ wages also tended toward market price. Which (inevitably, if unions had been doing their job) tended to be lower. Perhaps dramatically so, if automation had decreased the need for some kind of work.

And just as the mid-century model induced social as well as economic cohesion, its breakup brought social as well as economic fragmentation. People started to dress and act differently. Those who would later be called the “creative class” became more mobile. People who didn’t care much for religion felt less pressure to go to church for appearances’ sake, while those who liked it a lot opted for increasingly colorful forms. Some switched from meat loaf to tofu, and others to Hot Pockets. Some switched from driving Ford sedans to driving small imported cars, and others to driving SUVs. Kids who went to private schools or wished they did started to dress “preppy,” and kids who wanted to seem rebellious made a conscious effort to look disreputable. In a hundred ways people spread apart. [20]

Almost four decades later, fragmentation is still increasing.

Has it been net good or bad? I don’t know; the question may be unanswerable. Not entirely bad though. We take for granted the forms of fragmentation we like, and worry only about the ones we don’t. But as someone who caught the tail end of mid-century conformism, I can tell you it was no utopia. [21]

My goal here is not to say whether fragmentation has been good or bad, just to explain why it’s happening. With the centripetal forces of total war and 20th century oligopoly mostly gone, what will happen next? And more specifically, is it possible to reverse some of the fragmentation we’ve seen?

If it is, it will have to happen piecemeal. You can’t reproduce mid-century cohesion the way it was originally produced. It would be insane to go to war just to induce more national unity. And once you understand the degree to which the economic history of the 20th century was a low-res version 1, it’s clear you can’t reproduce that either.

20th century cohesion was something that happened at least in a sense naturally. The war was due mostly to external forces, and the Duplo economy was an evolutionary phase. If you want cohesion now, you’d have to induce it deliberately. And it’s not obvious how. I suspect the best we’ll be able to do is address the symptoms of fragmentation. But that may be enough.

The form of fragmentation people worry most about lately is economic inequality, and if you want to eliminate that you’re up against a truly formidable headwind—one that has been in operation since the stone age: technology. Technology is a lever. It magnifies work. And the lever not only grows increasingly long, but the rate at which it grows is itself increasing.

Which in turn means the variation in the amount of wealth people can create has not only been increasing, but accelerating.
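The lever metaphor can be made concrete with a toy model. All the numbers below are illustrative assumptions, not data from the essay: each worker's output is skill times technological leverage, and the leverage's growth rate itself increases a little every year. Even under these made-up parameters, the gap between a high- and low-skill worker doesn't just widen, it widens faster and faster.

```python
# Toy model (illustrative parameters only): output = skill x leverage,
# where leverage compounds at a growth rate that itself rises over time.

def leverage(year, base=1.0, rate=0.03, accel=0.0005):
    """Technological leverage since 1950, with an accelerating growth rate."""
    growth = 1.0
    for i in range(year - 1950):
        growth *= 1 + rate + accel * i  # growth rate creeps up each year
    return base * growth

def output_spread(year, low_skill=0.5, high_skill=2.0):
    """Gap in wealth created per year between a high- and low-skill worker."""
    return (high_skill - low_skill) * leverage(year)

for year in (1950, 1980, 2010):
    print(year, round(output_spread(year), 2))
```

The point of the sketch is only that a multiplicative lever turns a fixed spread in skill into an ever-growing spread in output: the 1980-to-2010 jump in the gap is several times larger than the 1950-to-1980 jump.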

The unusual conditions that prevailed in the mid 20th century masked this underlying trend. The ambitious had little choice but to join large organizations that made them march in step with lots of other people—literally in the case of the armed forces, figuratively in the case of big corporations. Even if the big corporations had wanted to pay people proportionate to their value, they couldn’t have figured out how. But that constraint has gone now. Ever since it started to erode in the 1970s, we’ve seen the underlying forces at work again. [22]

Not everyone who gets rich now does it by creating wealth, certainly. But a significant number do, and the Baumol Effect means all their peers get dragged along too. [23] And as long as it’s possible to get rich by creating wealth, the default tendency will be for economic inequality to increase. Even if you eliminate all the other ways to get rich. You can mitigate this with subsidies at the bottom and taxes at the top, but unless taxes are high enough to discourage people from creating wealth, you’re always going to be fighting a losing battle against increasing variation in productivity. [24]

That form of fragmentation, like the others, is here to stay. Or rather, back to stay. Nothing is forever, but the tendency toward fragmentation should be more forever than most things, precisely because it’s not due to any particular cause. It’s simply a reversion to the mean. When Rockefeller said individualism was gone, he was right for a hundred years. It’s back now, and that’s likely to be true for longer.

I worry that if we don’t acknowledge this, we’re headed for trouble.

If we think 20th century cohesion disappeared because of a few policy tweaks, we’ll be deluded into thinking we can get it back (minus the bad parts, somehow) with a few countertweaks. And then we’ll waste our time trying to eliminate fragmentation, when we’d be better off thinking about how to mitigate its consequences.

Notes

[1] Lester Thurow, writing in 1975, said the wage differentials prevailing at the end of World War II had become so embedded that they “were regarded as ‘just’ even after the egalitarian pressures of World War II had disappeared. Basically, the same differentials exist to this day, thirty years later.” But Goldin and Margo think market forces in the postwar period also helped preserve the wartime compression of wages—specifically increased demand for unskilled workers, and oversupply of educated ones.

(Oddly enough, the American custom of having employers pay for health insurance derives from efforts by businesses to circumvent NWLB wage controls in order to attract workers.)

[2] As always, tax rates don’t tell the whole story. There were lots of exemptions, especially for individuals. And in World War II the tax codes were so new that the government had little acquired immunity to tax avoidance. If the rich paid high taxes during the war it was more because they wanted to than because they had to.

After the war, federal tax receipts as a percentage of GDP were about the same as they are now.

In fact, for the entire period since the war, tax receipts have stayed close to 18% of GDP, despite dramatic changes in tax rates. The lowest point occurred when marginal income tax rates were highest: 14.1% in 1950. Looking at the data, it’s hard to avoid the conclusion that tax rates have had little effect on what people actually paid.

[3] Though in fact the decade preceding the war had been a time of unprecedented federal power, in response to the Depression. Which is not entirely a coincidence, because the Depression was one of the causes of the war. In many ways the New Deal was a sort of dress rehearsal for the measures the federal government took during wartime. The wartime versions were much more drastic and more pervasive though. As Anthony Badger wrote, “for many Americans the decisive change in their experiences came not with the New Deal but with World War II.”

[4] I don’t know enough about the origins of the world wars to say, but it’s not inconceivable they were connected to the rise of big corporations. If that were the case, 20th century cohesion would have a single cause.

[5] More precisely, there was a bimodal economy consisting, in Galbraith’s words, of “the world of the technically dynamic, massively capitalized and highly organized corporations on the one hand and the hundreds of thousands of small and traditional proprietors on the other.” Money, prestige, and power were concentrated in the former, and there was near zero crossover.

[6] I wonder how much of the decline in families eating together was due to the decline in families watching TV together afterward.

[7] I know when this happened because it was the season Dallas premiered. Everyone else was talking about what was happening on Dallas, and I had no idea what they meant.

[8] I didn’t realize it till I started doing research for this essay, but the meretriciousness of the products I grew up with is a well-known byproduct of oligopoly. When companies can’t compete on price, they compete on tailfins.

[9] Monroeville Mall was at the time of its completion in 1969 the largest in the country. In the late 1970s the movie Dawn of the Dead was shot there. Apparently the mall was not just the location of the movie, but its inspiration; the crowds of shoppers drifting through this huge mall reminded George Romero of zombies. My first job was scooping ice cream in the Baskin-Robbins.

[10] Labor unions were exempted from antitrust laws by the Clayton Antitrust Act in 1914 on the grounds that a person’s work is not “a commodity or article of commerce.” I wonder if that means service companies are also exempt.

[11] The relationships between unions and unionized companies can even be symbiotic, because unions will exert political pressure to protect their hosts. According to Michael Lind, when politicians tried to attack the A&P supermarket chain because it was putting local grocery stores out of business, “A&P successfully defended itself by allowing the unionization of its workforce in 1938, thereby gaining organized labor as a constituency.” I’ve seen this phenomenon myself: hotel unions are responsible for more of the political pressure against Airbnb than hotel companies.

[12] Galbraith was clearly puzzled that corporate executives would work so hard to make money for other people (the shareholders) instead of themselves. He devoted much of The New Industrial State to trying to figure this out.

His theory was that professionalism had replaced money as a motive, and that modern corporate executives were, like (good) scientists, motivated less by financial rewards than by the desire to do good work and thereby earn the respect of their peers. There is something in this, though I think lack of movement between companies combined with self-interest explains much of observed behavior.

[13] Galbraith (p. 94) says a 1952 study of the 800 highest paid executives at 300 big corporations found that three quarters of them had been with their company for more than 20 years.

[14] It seems likely that in the first third of the 20th century executive salaries were low partly because companies then were more dependent on banks, who would have disapproved if executives got too much. This was certainly true in the beginning. The first big company CEOs were J. P. Morgan’s hired hands.

Companies didn’t start to finance themselves with retained earnings till the 1920s. Till then they had to pay out their earnings in dividends, and so depended on banks for capital for expansion. Bankers continued to sit on corporate boards till the Glass-Steagall act in 1933.

By mid-century big companies funded 3/4 of their growth from earnings. But the early years of bank dependence, reinforced by the financial controls of World War II, must have had a big effect on social conventions about executive salaries. So it may be that the lack of movement between companies was as much the effect of low salaries as the cause.

Incidentally, the switch in the 1920s to financing growth with retained earnings was one cause of the 1929 crash. The banks now had to find someone else to lend to, so they made more margin loans.

[15] Even now it’s hard to get them to. One of the things I find hardest to get into the heads of would-be startup founders is how important it is to do certain kinds of menial work early in the life of a company. Doing things that don’t scale is to how Henry Ford got started as a high-fiber diet is to the traditional peasant’s diet: they had no choice but to do the right thing, while we have to make a conscious effort.

[16] Founders weren’t celebrated in the press when I was a kid. “Our founder” meant a photograph of a severe-looking man with a walrus mustache and a wing collar who had died decades ago. The thing to be when I was a kid was an executive. If you weren’t around then it’s hard to grasp the cachet that term had. The fancy version of everything was called the “executive” model.

[17] The wave of hostile takeovers in the 1980s was enabled by a combination of circumstances: court decisions striking down state anti-takeover laws, starting with the Supreme Court’s 1982 decision in Edgar v. MITE Corp.; the Reagan administration’s comparatively sympathetic attitude toward takeovers; the Depository Institutions Act of 1982, which allowed banks and savings and loans to buy corporate bonds; a new SEC rule issued in 1982 (rule 415) that made it possible to bring corporate bonds to market faster; the creation of the junk bond business by Michael Milken; a vogue for conglomerates in the preceding period that caused many companies to be combined that never should have been; a decade of inflation that left many public companies trading below the value of their assets; and not least, the increasing complacency of managements.

[18] Foster, Richard. “Creative Destruction Whips through Corporate America.” Innosight, February 2012.

[19] CEOs of big companies may be overpaid. I don’t know enough about big companies to say. But it is certainly not impossible for a CEO to make 200x as much difference to a company’s revenues as the average employee. Look at what Steve Jobs did for Apple when he came back as CEO. It would have been a good deal for the board to give him 95% of the company. Apple’s market cap the day Steve came back in July 1997 was 1.73 billion. 5% of Apple now (January 2016) would be worth about 30 billion. And it would not be if Steve hadn’t come back; Apple probably wouldn’t even exist anymore.

Merely including Steve in the sample might be enough to answer the question of whether public company CEOs in the aggregate are overpaid. And that is not as facile a trick as it might seem, because the broader your holdings, the more the aggregate is what you care about.
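The arithmetic in this note can be checked directly. A minimal sketch, using only the essay’s own figures (the 1997 market cap and the $30 billion value of a 5% stake; the 2016 market cap is merely implied by them):

```python
# Figures from the essay: Apple's market cap in July 1997, and the claim
# that a 5% stake would have been worth about $30 billion in January 2016.
cap_1997 = 1.73e9            # USD, July 1997
stake_value_2016 = 30e9      # USD, value of a 5% stake in January 2016

implied_cap_2016 = stake_value_2016 / 0.05   # roughly $600 billion
growth_multiple = implied_cap_2016 / cap_1997

# Even granting Jobs 95% of the 1997 company would have been cheap
# relative to the value his return created.
cost_of_95_pct = 0.95 * cap_1997             # roughly $1.64 billion

print(f"Implied Jan 2016 market cap: ${implied_cap_2016 / 1e9:.0f}B")
print(f"Growth multiple since 1997:  {growth_multiple:.0f}x")
print(f"95% of Apple in July 1997:   ${cost_of_95_pct / 1e9:.2f}B")
```

On these numbers, the company grew by a factor of several hundred, which is why giving away even 95% of the 1997 company would have been a good trade for the board.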

[20] The late 1960s were famous for social upheaval. But that was more rebellion (which can happen in any era if people are provoked sufficiently) than fragmentation. You’re not seeing fragmentation unless you see people breaking off to both left and right.

[21] Globally the trend has been in the other direction. While the US is becoming more fragmented, the world as a whole is becoming less fragmented, and mostly in good ways.

[22] There were a handful of ways to make a fortune in the mid 20th century. The main one was drilling for oil, which was open to newcomers because it was not something big companies could dominate through economies of scale. How did individuals accumulate large fortunes in an era of such high taxes? Giant tax loopholes defended by two of the most powerful men in Congress, Sam Rayburn and Lyndon Johnson.

But becoming a Texas oilman was not in 1950 something one could aspire to the way starting a startup or going to work on Wall Street were in 2000, because (a) there was a strong local component and (b) success depended so much on luck.

[23] The Baumol Effect induced by startups is very visible in Silicon Valley. Google will pay people millions of dollars a year to keep them from leaving to start or join startups.

[24] I’m not claiming variation in productivity is the only cause of economic inequality in the US. But it’s a significant cause, and it will become as big a cause as it needs to, in the sense that if you ban other ways to get rich, people who want to get rich will use this route instead.

Thanks to Sam Altman, Trevor Blackwell, Paul Buchheit, Patrick Collison, Ron Conway, Chris Dixon, Benedict Evans, Richard Florida, Ben Horowitz, Jessica Livingston, Robert Morris, Tim O’Reilly, Geoff Ralston, Max Roser, Alexia Tsotsis, and Qasar Younis for reading drafts of this. Max also told me about several valuable sources. Essay from http://paulgraham.com/re.html

Bibliography

Allen, Frederick Lewis. The Big Change. Harper, 1952.

Averitt, Robert. The Dual Economy. Norton, 1968.

Badger, Anthony. The New Deal. Hill and Wang, 1989.

Bainbridge, John. The Super-Americans. Doubleday, 1961.

Beatty, Jack. Colossus. Broadway, 2001.

Brinkley, Douglas. Wheels for the World. Viking, 2003.

Brownlee, W. Elliot. Federal Taxation in America. Cambridge, 1996.

Chandler, Alfred. The Visible Hand. Harvard, 1977.

Chernow, Ron. The House of Morgan. Simon & Schuster, 1990.

Chernow, Ron. Titan: The Life of John D. Rockefeller. Random House, 1998.

Galbraith, John. The New Industrial State. Houghton Mifflin, 1967.

Goldin, Claudia and Robert A. Margo. “The Great Compression: The Wage Structure in the United States at Mid-Century.” NBER Working Paper 3817, 1991.

Gordon, John. An Empire of Wealth. HarperCollins, 2004.

Klein, Maury. The Genesis of Industrial America, 1870-1920. Cambridge, 2007.

Lind, Michael. Land of Promise. HarperCollins, 2012.

Micklethwait, John, and Adrian Wooldridge. The Company. Modern Library, 2003.

Nasaw, David. Andrew Carnegie. Penguin, 2006.

Sobel, Robert. The Age of Giant Corporations. Praeger, 1993.

Thurow, Lester. Generating Inequality: Mechanisms of Distribution. Basic Books, 1975.

Witte, John. The Politics and Development of the Federal Income Tax. Wisconsin, 1985.

 

The Strength Of The Past And Its Great Might

Within the last generation, archaeology has undergone a major transformation, developing from an independent small-scale activity, based upon museums and a few university departments, into a large-scale state organization based upon national legislation.

Dreamer by Thomas Dodd Photography

This has entailed an increase in resources on an unprecedented scale, and has drastically changed the profile of archaeology, which is now firmly fixed within the political and national domains. Moreover, decision making within the discipline has shifted from museums and university departments towards various new national agencies for the conservation and protection of the cultural heritage.

The consequences of this development for the discipline as a whole have remained largely unnoticed. Click here to read the complete electronic essay by Kristian Kristiansen, University of Gothenburg.

Also available via our friends at academia.edu

The Royal Ontario Museum Publishes Cloth That Changed The World: The Art And Fashion Of Indian Chintz

New book explores the story of India’s richly coloured textiles ahead of ROM original exhibition

Photography by Tina Weltz

TORONTO — The Royal Ontario Museum (ROM) is pleased to announce the publication of Cloth that Changed the World: The Art and Fashion of Indian Chintz on December 2, 2019. The collection of essays explores the far-reaching influence this vividly printed and painted cotton cloth has had on the world, from its origins 5,000 years ago to its place in fashion and home décor today. The volume is the official companion to the ROM-original exhibition The Cloth that Changed the World: India’s Painted and Printed Cottons, which runs from April 4 to September 27, 2020 in Toronto.

The scholarly and beautifully illustrated publication draws from the Royal Ontario Museum’s own Indian chintz collection, which ranks as one of the best in the world. Featuring extensive new research, this multidisciplinary book traces the story of chintz and the indelible footprint it has left on the world. The publication combines vivid field photography of artisans at work with striking images from the ROM’s world-class collection, as well as images from India’s fashion runways and the work of top designers embracing this heritage textile today.

“The world would be a drab place without India,” says Sarah Fee, editor, Cloth that Changed the World and ROM Senior Curator of Eastern Hemisphere Fashion and Textiles. “Our blue jeans and printed T-shirts trace much of their lineage back to the ingenuity of India’s cotton printers and dyers. This exhibition and companion book celebrate how India ‘clothed the world’ in exuberantly coloured cottons for thousands of years. It explores the art’s resiliency in the face of modern industrial imitation and shares the exciting stories of reviving natural dyes and hand skills in India today.”

Contributing writers include leading experts Ruth Barnes, Rosemary Crill, Steven Cohen, Deepali Dewan, Max Dionisio, Eiluned Edwards, Sarah Fee, Maria João Ferreira, Sylvia Houghteling, Peter Lee, Hanna Martinsen, Deborah A. Metsger, Alexandra Palmer, Divia Patel, Giorgio Riello, Rajarshi Sengupta, Philip Sykas, and João Teles e Cunha, and a preface by Sven Beckert, Harvard University’s Laird Bell Professor of History.

Next spring, the book comes to life in the ROM’s Patricia Harris Gallery of Textiles and Costumes, where ROM-original exhibition The Cloth that Changed the World: India’s Painted and Printed Cottons will take visitors on a journey through the ROM’s world-renowned collection of chintz, on public display for the first time in over 50 years.

The striking exhibition will explore thought-provoking themes, including the ingenuity, skill and technique of Indian artisans; the adaptation of chintz for international markets; and the environmental impact of the global textile industry over time. With a focus on attire and home furnishings, the exhibition features 80 objects spanning 10 centuries and four continents. Religious and court banners for India, monumental gilded wall hangings for elite homes in Europe and Thailand, and luxury women’s dress for England showcase the versatility and far-reaching desire for Indian Chintz.

About Sarah Fee (Curator and Editor)

Dr. Sarah Fee is Senior Curator of Eastern Hemisphere fashion and textiles at the Royal Ontario Museum. She has degrees in Anthropology and African studies from Oxford University and the School of Oriental Studies, Paris, and in 2002, guest-curated an exhibition on Madagascar for the Smithsonian’s National Museum of African Art. Today, she continues to focus on Malagasy historic textiles and fashions, in addition to those of Zanzibar and Western India. A research associate at the Musée du Quai Branly, Paris, and the Indian Ocean World Centre at McGill University, Fee also teaches at the University of Toronto’s Department of Art. Fee is a past Board Member of the Textile Society of America, and currently sits on the editorial board of the Textile Museum Journal (TMJ).

About the Publication

Cloth that Changed the World: The Art and Fashion of Indian Chintz
Editor: Sarah Fee
Available at the ROM store starting December 2, 2019.
9 x 12, 272 pages, 300 colour illustrations.
$50.00.
Royal Ontario Museum and Yale University Press.

ROM SOCIAL MEDIA 

Join the Conversation: @ROMtoronto
Like: ROM Facebook
Watch: ROM YouTube

Sarah Fee
@aTextilesCurator

ABOUT THE ROM

Founded in 1914, the Royal Ontario Museum showcases art, culture and nature from around the world and across the ages. Among the top 10 cultural institutions in North America, Canada’s largest and most comprehensive museum is home to a world-class collection of 13 million art objects and natural history specimens, featured in 40 gallery and exhibition spaces. As the country’s preeminent field research institute and an international leader in new and original findings, the ROM plays a vital role in advancing our understanding of the artistic, cultural and natural world. Combining its original heritage architecture with the contemporary Daniel Libeskind-designed Michael Lee-Chin Crystal, the ROM serves as a national landmark, and a dynamic cultural destination in the heart of Toronto for all to enjoy.

ON NOW


It’s Alive! Classic Horror and Sci-Fi Art from the Kirk Hammett Collection
Gods in My Home: Chinese New Year with Ancestor Portraits and Deity Prints

COMING SOON


November 16, 2019 | Bloodsuckers: Legends to Leeches
March 7, 2020 | Egyptian Mummies: Exploring Ancient Lives



How Societies Become Consumer Cultures Through Housing

Alfred Marshall’s view of housing (Principles of Economics, 1891) still goes right to the heart of what makes housing and the built environment an important anthropological topic. No artifact is so clearly multi-functional: simultaneously a utilitarian object of absolute necessity, and an item of symbolic material culture, a text of almost unending complexity.

In every house the economic, social and symbolic dimensions of behavior come together. This may be why the analysis of housing has had such a wide appeal in disciplines as diverse as social psychology, folklore, economics and engineering. Anthropologists themselves have shown a new willingness to consider the house as a key artifact in understanding the articulation of economic and social change during economic development.

An ethnocentric home.

From the perspective of our own contemporary society, surrounded by houses of all shapes and sizes, where wealth and luxury are synonymous with housing, this seems obvious and commonplace. The 1980s television show “Lifestyles of the Rich and Famous” and journals like “Architectural Review” are odes to the home as a shrine and symbol of wealth. But just as clearly, there are societies where all the houses look alike, even though all the people are not alike. Perhaps, then, the assumption that there is something natural and obvious about spending on the house and home market as a marker of prestige is ethnocentric. Why the house instead of something else?

A number of anthropological approaches attempt to place the house in a theoretical context that answers this question by relating housing to social, economic, and psychological variation and change. For example, a utilitarian approach that views the house partially as a workspace links changes in the elaboration of houses to changes in the kinds of work done in the household (Braudel 1973:201). Or if the house is seen as a reflection of how all household activities are organized and divided, then the shape of the house will change as activities are modified, differentiated, or recombined (Kent 1983, 1984).

Utilitarian houses.

An even more utilitarian perspective relates the form of the house to climate, technology, and the kinds of building materials that are available (Duly 1979). For the Silo, Richard R. Wilk.

Read on: click here to read the full PDF document on your device.

Supplemental: Complete text of Principles of Economics (London: Macmillan and Co., 8th ed., 1920).
Author: Alfred Marshall
About This Title: This is the 8th edition of what is regarded as the first “modern” economics textbook, which ran through successive editions from the 19th century into the 20th. The final 8th edition was Marshall’s most used and most cited.

Streaming Companies Like Spotify And Labels Like Sony Are Making The Money, Not The Artists

Potter Box

Definition:

Is it ethical for media streaming companies, such as Spotify, to take advantage of IP loopholes, which are known to negatively impact artist revenues?

Values:

>Balance & Fairness

>Legal Values

Loyalties:

a. Duty to service

b. Duty to subscribers

c. Duty to shareholders

d. Duty to Intellectual Property

e. Duty to Art & Commerce

Principles:

Aristotle’s Mean: “Moral virtue is a middle state determined by practical wisdom.” Virtuous people will arrive at a fair and reasonable agreement for the legitimate claims of both sides somewhere in the middle of two extreme claims. The two sides must negotiate a compromise in good faith. “Generally speaking, in extremely complicated situations with layers of ambiguity and uncertainty, Aristotle’s principle has the most intellectual appeal.”

BASIC CONCEPT:

Negotiated compromise.

>>Streaming Media Company

For the purpose of analyzing an isolated streaming media company, Spotify will be examined through the lens of the Potter Box. Spotify is a streaming service with cross-platform availability that specializes in music, and it generates income from its 20 million premium and 75 million free users. Spotify boasts an extensive catalogue, available for free or for a nominal fee, made possible by established agreements between various record labels and media companies. These agreements, which are known to negatively affect artists while benefiting both Spotify and the record labels, plague the music industry. Payout deals between Spotify and record companies range from royalty payouts to equity deals.

Spotify does not want to adjust the model of its free service, because if users cannot find the music they want on Spotify, they will turn to other streaming services such as YouTube, which is likely to have the content. Spotify has identified this free offering as the driving force for attracting new subscribers, and new subscribers turn into increased revenue for record labels: 70% of the revenue from $10-per-month subscriptions and from advertisements is paid to record labels, artists, and song publishers.

>>Artists—Influence: Art/Media Creators

The artists on Spotify collectively account for over 30 million songs streamed across 58 different markets. Despite being the heart from which Spotify thrives, artists receive 6.8% of streaming revenue, the smallest share of the pie.

Artists receive 10.9% of the post-tax payout split between artists, labels, and songwriters/publishers. Many artists, including Adele, Taylor Swift, the Beatles, and Coldplay, have opted to keep their music off Spotify.

[Chart 1: Spotify revenue breakdown]

Spotify does not have direct agreements with most artists. The streaming company has agreements with labels, which are responsible not only for securing licenses to music but also for payouts to artists. Essentially, Spotify pays the labels, and each label is responsible for paying its artists. The problem is not that Spotify refuses to pay fair royalties; it is the trickling down of payment from labels to the respective artists. Spotify has wholesale access to record labels’ music catalogues, which makes it hard to split royalty payments fairly among the artists under contract with each label.
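The pass-through structure described above can be sketched numerically. The split percentages are the essay’s figures; the revenue amount is purely hypothetical:

```python
# Hypothetical monthly revenue; the split percentages are from the essay:
# Spotify pays roughly 70% of revenue out to rights holders, while only
# about 6.8% of total streaming revenue ultimately reaches artists.
revenue = 1_000_000.00                 # hypothetical USD of streaming revenue

paid_to_rights_holders = revenue * 0.70    # what Spotify pays out
reaches_artists = revenue * 0.068          # what artists end up with
retained_by_intermediaries = paid_to_rights_holders - reaches_artists

print(f"Paid out by Spotify:         ${paid_to_rights_holders:,.0f}")
print(f"Ultimately reaching artists: ${reaches_artists:,.0f}")
print(f"Retained by labels et al.:   ${retained_by_intermediaries:,.0f}")
```

On these figures, for every dollar Spotify pays out, artists see roughly ten cents, which is the gap the rest of this analysis is about.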

Even with the leaked contract between Spotify and Sony Music available, it is still unclear how much of the payout to record labels actually reaches the artists. It is clear that Sony Music is getting a hefty payout annually, but the question remains whether those hefty payouts are passed on to the artists.

>>Major Record Companies

The music catalogue on Spotify is mostly populated by content from major record labels, including Sony Music, Universal Music Group, EMI, Warner Music, Merlin, and The Orchard. Self-published artists as well as artists from independent labels also help make up Spotify’s catalogue.

Record companies have begun to further question Spotify’s free model since Taylor Swift and other artists proactively opposed Spotify’s extensive free offerings. Streaming consumption of music increased by 54% between 2013 and 2014, according to Nielsen SoundScan. Major record companies are often offered better deals, which disproportionately disadvantages independent artists and labels.

Executives at major record labels such as Universal Music and Warner Music have stated that the extensive free offering of their licensed music is not sustainable long-term, and have suggested that there needs to be a clearer differentiation between the content available to free and premium users. Björk has suggested that Spotify should not make certain content available the moment it is released, but should instead let content go through rounds of monetization before arriving on Spotify, similar to Netflix’s rollout method for its content. Major record labels are currently renegotiating their agreements, and are mostly pushing for adjustments to the free service.

Their goal is for the “freemium” model to disappear over time.

What is the current policy?

A legal agreement between Sony, the second largest record company in the world, and Spotify recently leaked, further intensifying questions about fair payouts for artists. The contract confirms that major record companies benefit from the success of the streaming service. It details advance payments of over $40 million, plus a $9 million advertising credit. Sony has declined to comment on the leaked contract, as it was illegally obtained. According to an industry insider, labels routinely keep advances for themselves.

The leaked contract detailed agreements between Sony Music and Spotify, but not between Sony Music and its artists. The fruits of such private agreements don’t necessarily trickle down to the artists, because in most cases the artists are not even aware a deal exists unless a leak occurs. Undisclosed deals of this kind are not ethical, because artists do not benefit from funds received on account of their intellectual property. The International Music Managers Forum urges European and American authorities alike to use the Sony leak as an example of why more transparency is necessary.

Artists are not being fairly compensated for use of their intellectual property.

Streaming companies have established revenue arrangements with the major record companies that often do not favor artists. The obvious shortfall of existing policy is the lack of transparency in agreements between record labels and Spotify. There is no system of checks and balances to ensure that labels adequately and fairly share Spotify revenue with artists. A streamlined system that puts everything on the table in clear view is needed before fair agreements among artists, labels, and streaming companies can be reached. Current policy also allows Spotify to take up to 15% off the top of revenue generated from ad sales.

What needs to be changed?

Spotify seems to pay royalties fairly, but the flow of cash does not always reach the artists. Substitute apps try to compete with Spotify by challenging the freemium model. Other apps, such as Tidal, aim to provide audiences with exclusive content they won’t find anywhere else. The problem is that apps such as Tidal market themselves as music-streaming apps by the artists, for the artists; nowhere in that equation is the interest of the average potential consumer considered. Artists may receive more money per stream, but the service is double the price of Spotify. Record companies and artists alike are moving away from Spotify’s freemium model. The digitization of music is not the problem, as most artists and labels generally trust certain digital services such as iTunes, because they translate into revenue for artists with no veil or strings attached. Extensive free offerings seem to be the major issue the involved parties have with Spotify, but according to Spotify they are the only thing that drives traffic. The freemium offerings need to be changed in some way, but in a way that is non-disruptive to Spotify’s commerce. Since Spotify pays its fair share of royalties, a more streamlined and transparent agreement between record labels and artists should be established, as should the deals made between Spotify and the record labels.

Major record labels need to stop double dipping. Not only do they receive cash advances and royalties, but they also benefit from Spotify’s overall revenue stream, as they hold up to 18% equity in Spotify. Billboard magazine interviewed two dozen record executives, who agreed they were confused about what Spotify was replacing: sales or piracy. Attempts to limit the freemium service have shown signs of slowing subscription growth. Spotify has stated that if artists are not fairly compensated from streaming revenue, it is the result of recording contracts and/or label accounting practices. Some major record labels are fine with Spotify using their music to build its business because of their equity; they are looking ahead to profit from a future IPO. The artists would not benefit in the same manner, despite their content being the driving force behind the app in the first place.


Scenarios

In a time of changing platforms and distribution methods, consumer trends have undoubtedly been in transition. Radio still accounts for an estimated 35% of music consumption, followed by CDs at 20%, free streaming at 19%, and paid streaming at 1%. Multichannel consumers, mostly millennials, account for 66% of music consumers. A multichannel consumer may pay for a streaming subscription and also make physical and/or digital music purchases. The most common multichannel combination is free streaming coupled with CD listening, which accounts for 49% of multichannel listeners, followed by free streaming coupled with music downloads at 44%. Millennials are also known to engage in both free streaming and downloading.

Evidence

During the first quarter of 2014, Pharrell Williams garnered 43 million Pandora streams, which paid him only $2,700 as a songwriter. A statement from Pandora indicates that all rights holders were paid upwards of $150,000 within those first 3 months, and that the real issue is the financial dispute between labels and publishers. Pandora also indicated that labels are free to split royalties between themselves and artists however they see fit. Clearly there needs to be more transparency in the cash flow between streaming company, label, and artist.
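The implied per-stream songwriter rate follows directly from those two figures (both are the essay’s numbers; the derived rate is just their ratio):

```python
# Figures from the essay: 43 million Pandora streams in Q1 2014 yielded
# $2,700 in songwriter royalties for Pharrell Williams.
streams = 43_000_000
songwriter_payout = 2_700.00   # USD

rate_per_stream = songwriter_payout / streams     # well under a hundredth of a cent
streams_per_dollar = streams / songwriter_payout  # streams needed to earn $1

print(f"Songwriter rate per stream: ${rate_per_stream:.7f}")
print(f"Streams needed per dollar:  {streams_per_dollar:,.0f}")
```

At that rate a songwriter needs on the order of sixteen thousand streams to earn a single dollar, which is what makes the dispute over how labels and publishers split the larger payout so consequential.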

Spotify returns 70% of its revenue to rights holders, along with information about each artist to aid in the royalty-split process. Streaming companies are engaged in fair due diligence where payment of royalties is concerned. The evidence is as follows:

[Chart 2: Spotify revenue breakdown]

Actionable Policy

The music industry needs a streamlined agreement between streaming companies, record labels, publishers, and artists. It is imperative that there be increased transparency, especially where cash flow is concerned. Artists should be able to see all cash and data exchanged between streaming company and label. Rights holders need to split funds with artists openly. Record labels need to be accountable to both their artists and the streaming company, because an artist who feels swindled can create bad blood between the artist and the label and/or the streaming company.

Recommendation

Actionable:

1. Artists are cut into equity deals based on their quarterly audience pull on the streaming service. To support this, streaming services should provide analytics with specific data to aid observation of audience pull for given artists.

2. Major labels are transparent about the cash flow of compensation from streaming companies.

Is it ethical for media-streaming companies, such as Spotify, to take advantage of IP loopholes that are known to negatively impact artist revenues?

Judgment:

It is ethical for streaming services to take advantage of IP loopholes, even those known to negatively impact artist revenues. Music platforms are changing, and as such, better agreements need to be drafted to complement this change. Streaming companies have shown the numbers, and they are paying royalties, which is essentially paying for the use of the music in their catalogues. Music streaming is an emerging market in which record companies themselves are invested. The common mode of music monetization is moving away from CD sales, and that is undeniable. Music downloads take up a lot of storage, so streaming is the most practical way for consumers to enjoy their music.

The freemium model of Spotify should not be eliminated, but it should certainly be reconsidered, or at least limited in music access. Premium, new, and sought-after music should not be as accessible as music that has already exited the promotion stage. There needs to be some sort of compromise between record labels and Spotify to better differentiate premium content from freemium content. Spotify does not want to compromise the availability of its music on either tier, and labels reserve the right to pull any of their artists from Spotify as they wish. Spotify should do a better job of differentiating free content from premium content; it's only fair. Spotify should not compromise to the point that it becomes impractical, but should compromise in a way that is cost-effective for all parties. If this were attained, streaming companies, record labels, and artists would all be happy, avoiding a social dilemma. Jordan Muthra, The New School University, M.A. Media Studies, Graduate Student

Click here to read PDF version: The_Ethics_of_Streaming_Music

 

Elementary Forces By Timothy deVries

Apocalypse, Timothy deVries (2015) Acrylic on Panel, 30 x 30 inches

Click to buy

What is a corner? The corner represents a symbolic value. Children are told to stand in the corner when they are disobedient. The corner is a place where one meditates on one’s shortcomings. One can be ‘backed into a corner’ and left with few options or one can retreat into a corner for safety. Animals corner their prey. Corners are places where things get lost and are found. Corners are neglected and swept in the spring. Unfortunate artists can paint themselves into a corner if they are not aware of the space around them and the area beneath their feet. Corners are forgotten with the bustle of activity in the centre of the room.

Gilles Deleuze’s book on Francis Bacon contains a short chapter in which he describes some of the possible reasons for why Bacon consistently displayed his figures against a “round area or ring.”1 Deleuze asserts that the main reason for utilizing this “simple technique” is to create a “place” and to isolate the Figure.2 There is a progression and fatefulness in assigning the Figure to this place. Deleuze claims that the round area or ring relates the Figure to the setting and, in so doing, posits the Figure or painting as a kind of fact or isolated reality.3

Bird on a Wire, Timothy deVries (2015) Acrylic on Panel, 18 x 18 inches

The horizon, ring, corner or wall is a painterly convention frequently revisited by contemporary artists. Although many painters have excluded these settings in favour of fields (e.g. a field of pure colour, or a field of refuse), such settings are useful constructs for displaying objects of value or inducing value within objects. Fields are distinct from settings in that they form a systematic or total (rather than operative or local) context for objects. Conversely, settings function by separating the object from its context so that the viewer can have an unmediated experience of that object. The setting recedes ‘into the background’ as a decorative relief or incidental support.

Corners are specific settings that feature the intersection of three planes (i.e. two walls and a floor). The intersection forms a point. The corner can function simply as the intersection of three planes or as a construct that creates depth and dimensionality. This bivalent nature hints at its duplicity as a setting. It creates a false depth. In this respect, the play of surfaces conspires to become a point of convergence or vanishing point. As a convergence of three surfaces it is a point of ‘agreement’, or perhaps a type of foreclosure; three colours and three lines converge to form a dimensional whole. The duplicity of the corner consists of its character as both a play of surfaces and as a convergence of three lines. The duplicity consists in the fact that the corner realizes both the idea of form and the Form itself through both a convergence (of surface and line) and a construction (of dimensionality).

Two of the most significant questions a painter may ask are, “What must I paint?” and “What is the painting about?” The idea of form contained in a painting is inevitably ‘about’ a sensation or perception. The painter’s nervous system is trained not only to recognize particular sensations and perceptions but to actualize them in the materiality of paint.4 Painters practice their art as a way of learning to live with a given set of perceptions and sensations. The act of representation in painting is therefore secondary to the sensations and perceptions which inaugurate it.

Black Cat and the Jawbone of an Ass, Timothy deVries (2007) Acrylic on Canvas, 46 x 59 inches

The critic’s judgment (i.e. the critique) is the genesis of painterly sensations and perceptions. Critique is the limit of art and limits art to what it alone can do. It functions as a form of violence that is inflicted, observed or endured and occurs when one form overcomes another or when a form is ‘deformed’ by a superior consciousness. The deformation heralds a new and hitherto unappreciated beauty. It is the beauty of a projection or displacement of the painter’s subjective point of view into the materiality of paint. The transformation of sensations and perceptions through the pure and practical reason of the painter reflects the painter’s critique of power. What power? The power of judgment. The critique is therefore absorbed into the very colour of the picture.

Ludwig Wittgenstein’s picture theory holds great explanatory appeal in these cases because it contains propositions regarding the logic and structure of a picture. The painter labels the painting with a title because it represents a state of affairs. There is a close correspondence between the fact represented by the title and the pictorial content of the painting. It is in this sense that the picture functions as a relation between the physical or material world and the thoughts of the painter. Within this context, pictures are criticological constructs. Their titles are statements or propositions that are endowed with sense. As a function of these statements the painting’s pictorial components correspond almost identically with a set of defined elementary forces.

Corners can therefore play several conscious roles within a painting. They are the place of an encounter between a convergence and a defined space. These corners embody a perception. Moreover, corners function as a limit. As the limit of pictorial space they set up a picture plane that functions as a limit to logical thought. By using corners in this way, painters can represent unusual objects with a degree of normative ‘factuality’ – even if they are only representations. Finally, corners function as a place or setting. These corners are settings in which something can take place as well as a destination for various ideas. They instantiate and materialize Form in unanticipated ways. For the Silo, Timothy deVries http://www.timothydevries.ca/

 

  1. Gilles Deleuze, Francis Bacon: The Logic of Sensation (New York: Continuum, 2003), 1
  2. Deleuze, Francis Bacon, 2
  3. Deleuze, Francis Bacon, 2
  4. Deleuze, Francis Bacon, 52

 

Click to view on iTunes

Window Fishing Or The Night We Caught Beatlemania

Window Fishing

A Silo Canuck Book Review

I’ve never particularly been a Beatles fan. I like some of their songs. I like a number of them very much, but if I were asked the now-proverbial question, “The Beatles or The Rolling Stones?” I would probably say, oh, I don’t know, maybe The Who? The body of work of Mark Knopfler. Massive Attack were massive for me.

But I was not a child of the sixties, “an age of assassins,” John B. Lee writes in his poignant and powerfully executed preface, when “[o]ur childhood martyred almost all the heroes that we’d had.” John F. Kennedy. Robert F. Kennedy. Martin Luther King (Malcolm X, not mentioned but later, yes). “The list is overlong,” Lee says. “It will not end.” I understand more fully than ever these life-shattering moments, for Americans and Canadians alike; for so many, Across the Universe. Into this near-death of hope came The Beatles. The Beatles came to America, came on a Sunday night in February 1964 to The Ed Sullivan Show, and, as Lee exclaims with no exclamation mark, “sang my life awake.”

It’s not a perfect-looking book. Yet as I read, the grainy cover photo (by an unknown photographer) of four dapper mop-tops fishing out the window of their Seattle hotel—they literally weren’t allowed to leave—starts to resonate. Its imperfection could be viewed as integral, evoking a time in music when moments of “perfect imperfection,” as Michael Shatte calls them in his essay, were more common in pop; “happy accidents” which would not be tolerated in this era of hyper-produced top-forty songs, when singers’ voices are routinely, digitally “auto-tuned” in the studio, and we get used to being disappointed when we hear them live. Then there’s lip-synching. I don’t need to go on. There is great music being made by great musicians right now. But that’s not what we’re here to talk about. This is about a particular moment in pop-music history, in cultural history, and many of the moments that followed.

PaulMcCartneyBlur

The book is selected and edited by John B. Lee, a Canadian poet and writer who has published more than fifty books and received over 70 prestigious awards for his work. If you haven’t heard of him don’t feel too bad. He tells me openly there is little money in poetry, reminding me it’s not about that anyway. If it was it probably wouldn’t be poetry.

If you haven’t read him it might be time to start: his verse and prose catch the beauty of rural life, farm life, family life, hockey, human sexuality—life. Just Google him. He’s from home, you know. Right around here, right around me, the Poet Laureate of Brantford, Ontario and Norfolk County, home as well to Alexander Graham Bell and Wayne Gretzky, a poet of sport. Like McEnroe was one of the poets of my youth, making tennis beautiful, thrilling, creative; revolutionary. How I tried to emulate him…

Window Fishing Cover

Window Fishing is about a time of Revolution, evolutions in culture, and about growing up in the thick of it all. I wasn’t here yet, but as I read this book I learn. It is a literary volume. The cover photo and torn ticket stub on the back page are its only images. Or are they? Because black words on white paper are also images. And the book’s words, artistically rendered, conjure images as well as ideas. It is poetry, and prose poetry, and personal essays; fine writing by a collection of fine writers.

I learn that for most of the men, who were boys then, pubescent, the Beatles were all about music: musical discovery, even ecstasy. And style too. There was style.

For the women who write about the phenomenon of Beatlemania, there was music too. Absolutely. But there was something else. Something profound: the awakening of sexuality. Even a kind of love. Suddenly I understand all the screaming and crying, the fainting. For emerging, young (straight) women, the Beatles were more than musical. They were also beautiful. Sexy. As Susan Whelehan puts it in her essay: “John. He was mine and I was his…I was going to be his FOREVER. And I am.”

While many parents of the day may have dismissed The Fab Four at first as a silly “boy band,” as we might say now, shaking their longish (for the time) round haircuts and singing “Ooooo!” and “Yeah Yeah Yeah!”, the fact is that from the beginning The Beatles were always, at the very least, competent and obviously compelling musicians. Writes Honey Novick in her probing, poetic essay: “You could actually dance to their music.” And we know they became more and more sophisticated as their careers progressed, eventually making challenging, often satisfying real art-music, the way Radiohead did for me in my 20s.

All this beautiful literature about The Beatles and the 1960s has inspired me to listen, finally, seriously, to the music. Even if you thought, at the time, that “Yeah Yeah Yeah” was just bubblegum for kids, consider the lyrics. One friend to another: “You think you lost your love? Well, I saw her yesterday. It’s you she’s thinkin’ of, and she told me what to say: she says she loves you.” She loves you, man. Yeah! (Yeah! Yeah!). What more is there to celebrate? Ecstatically.

If you were there, or if you want to learn, or if you care about music or culture or the 1960’s or just literature, embrace the “perfect imperfection” of this unique and potent book. Some of the poems made me close my eyes and shut the pages. To savour, digest. Bruce Meyer made me cry. I was 8 years old when Lennon was shot. Assassinated. It made no impact on me then. I wasn’t really there yet. The book put me there, as close as I can ever come.  For the Silo, Alan Gibson.