Tag Archives: Apple

Apple Launches Ambitious iOS 18

Apple has officially launched iOS 18, a major update set to redefine the iPhone experience with enhanced personalization, new capabilities, and seamless integration for gaming and entertainment. Available for download starting today, iOS 18 promises a new level of interaction and performance for iPhone users, supporting devices from the older-generation iPhone XS up to the latest iPhone 16 Pro.

Key Features of iOS 18

– Revamped User Interface (UI): iOS 18 introduces a more customizable interface, with dynamic widgets, a redesigned lock screen, and adaptive app layouts, making the experience more personal than ever.

– AI-Powered Personalization: Leveraging advanced AI, the system learns from user preferences, adjusting app recommendations, notifications, and even battery optimization to match individual habits. This feature promises a smarter, more intuitive user experience.

– Interactive Widgets: For the first time, iOS users can engage with widgets directly from the home screen. Whether it’s checking game stats or adjusting settings, this allows for quicker access to essential functions.

– Performance Boost: Apple has integrated the A18 chip into the latest iPhones, resulting in faster processing speeds, optimized graphics, and better multitasking capabilities. This is especially beneficial for gaming, where users will notice smoother graphics and higher frame rates.

– Expanded Gaming Capabilities: iOS 18 introduces Game Mode, a dedicated feature that optimizes performance while playing online games by reducing background processes and enhancing network stability. Users will experience faster response times and less lag, which is critical for real-time multiplayer games. This is coupled with Enhanced AR support, bringing online games to life in ways previously unseen.

– Cross-Device Connectivity: With iOS 18, gaming moves seamlessly across Apple devices. Game progress can now be synced across iPhone, iPad, and Mac, allowing users to pick up where they left off, regardless of the device they’re using.

Comparison with Previous iOS Versions

– Performance Gains: Compared to iOS 17, iOS 18 offers up to 30% faster app launches and a 20% increase in battery life for gamers. This is a result of the new battery optimization algorithms and improved processor efficiency.

– Gaming Improvements: The introduction of Game Mode and AR-enhanced experiences is a leap from previous iterations, positioning iOS 18 as the ideal platform for mobile gaming enthusiasts. Apple’s Metal 3 framework, with MetalFX upscaling (first shown in an impressive 2022 demo of Resident Evil Village running as a Mac port), also enhances the gaming experience with more detailed textures and faster loading times.

Impact on Online Gaming

iOS 18’s Game Mode and improved connectivity features ensure that mobile gamers can enjoy uninterrupted, high-performance gameplay on online platforms. This feature helps tackle a longstanding challenge of latency and performance dips during competitive play.

“iOS 18 transforms the online gaming landscape by making iPhone not just a communication device, but a full-fledged gaming console in the palm of your hand. With its advanced features, smoother gameplay, and seamless cross-device experience, iOS 18 is a game-changer,” said Vadim Khrulev, Founder and CEO at thesolitaire.com.

With iOS 18, Apple is once again pushing the boundaries of what is possible on a mobile device. For gamers, content creators, and everyday users, the new OS offers unmatched personalization, performance, and cross-platform experiences. As the gaming industry continues to expand, iOS 18 ensures that iPhone remains at the forefront of the digital entertainment world.

7000 Words About The Dubious Refragmentation Of The Economy

One advantage of being old is that you can see change happen in your lifetime.

A lot of the change I’ve seen is fragmentation. For example, US politics, and now Canadian politics, is much more polarized than it used to be. Culturally we have ever less common ground, and though inclusiveness is preached by the media and the Left, special interest groups and policies have a polarizing effect. The creative class flocks to a handful of happy cities, abandoning the rest. And increasing economic inequality means the spread between rich and poor is growing too. I’d like to propose a hypothesis: that all these trends are instances of the same phenomenon. And moreover, that the cause is not some force that’s pulling us apart, but rather the erosion of forces that had been pushing us together.

Worse still, for those who worry about these trends, the forces that were pushing us together were an anomaly, a one-time combination of circumstances that’s unlikely to be repeated—and indeed, that we would not want to repeat.


The two forces were war (above all World War II), and the rise of large corporations.

The effects of World War II were both economic and social. Economically, it decreased variation in income. Like all modern armed forces, America’s were socialist economically. From each according to his ability, to each according to his need. More or less. Higher ranking members of the military got more (as higher ranking members of socialist societies always do), but what they got was fixed according to their rank. And the flattening effect wasn’t limited to those under arms, because the US economy was conscripted too. Between 1942 and 1945 all wages were set by the National War Labor Board. Like the military, they defaulted to flatness. And this national standardization of wages was so pervasive that its effects could still be seen years after the war ended. [1]

Business owners weren’t supposed to be making money either.

FDR said “not a single war millionaire” would be permitted. To ensure that, any increase in a company’s profits over prewar levels was taxed at 85%. And when what was left after corporate taxes reached individuals, it was taxed again at a marginal rate of 93%. [2]

Socially too the war tended to decrease variation. Over 16 million men and women from all sorts of different backgrounds were brought together in a way of life that was literally uniform. Service rates for men born in the early 1920s approached 80%. And working toward a common goal, often under stress, brought them still closer together.

Though strictly speaking World War II lasted less than four years for the US, its effects lasted longer.

Wars make central governments more powerful, and World War II was an extreme case of this. In the US, as in all the other Allied countries, the federal government was slow to give up the new powers it had acquired. Indeed, in some respects the war didn’t end in 1945; the enemy just switched to the Soviet Union. In tax rates, federal power, defense spending, conscription, and nationalism the decades after the war looked more like wartime than prewar peacetime. [3] And the social effects lasted too. The kid pulled into the army from behind a mule team in West Virginia didn’t simply go back to the farm afterward. Something else was waiting for him, something that looked a lot like the army.

If total war was the big political story of the 20th century, the big economic story was the rise of a new kind of company. And this too tended to produce both social and economic cohesion. [4]

The 20th century was the century of the big, national corporation. General Electric, General Foods, General Motors. Developments in finance, communications, transportation, and manufacturing enabled a new type of company whose goal was above all scale. Version 1 of this world was low-res: a Duplo world of a few giant companies dominating each big market. [5]

The late 19th and early 20th centuries had been a time of consolidation, led especially by J. P. Morgan. Thousands of companies run by their founders were merged into a couple hundred giant ones run by professional managers. Economies of scale ruled the day. It seemed to people at the time that this was the final state of things. John D. Rockefeller said in 1880:


The day of combination is here to stay. Individualism has gone, never to return.

He turned out to be mistaken, but he seemed right for the next hundred years.

The consolidation that began in the late 19th century continued for most of the 20th. By the end of World War II, as Michael Lind writes, “the major sectors of the economy were either organized as government-backed cartels or dominated by a few oligopolistic corporations.”

For consumers this new world meant the same choices everywhere, but only a few of them. When I grew up there were only 2 or 3 of most things, and since they were all aiming at the middle of the market there wasn’t much to differentiate them.

One of the most important instances of this phenomenon was in TV.


Here there were 3 choices: NBC, CBS, and ABC. Plus public TV for eggheads and communists. The programs the 3 networks offered were indistinguishable. In fact, here there was a triple pressure toward the center. If one show did try something daring, local affiliates in conservative markets would make them stop. Plus since TVs were expensive, whole families watched the same shows together, so they had to be suitable for everyone.

And not only did everyone get the same thing, they got it at the same time. It’s difficult to imagine now, but every night tens of millions of families would sit down together in front of their TV set watching the same show, at the same time, as their next door neighbors. What happens now with the Super Bowl used to happen every night. We were literally in sync. [6]

In a way mid-century TV culture was good. The view it gave of the world was like you’d find in a children’s book, and it probably had something of the effect that (parents hope) children’s books have in making people behave better. But, like children’s books, TV was also misleading. Dangerously misleading, for adults. In his autobiography, Robert MacNeil talks of seeing gruesome images that had just come in from Vietnam and thinking, we can’t show these to families while they’re having dinner.

I know how pervasive the common culture was, because I tried to opt out of it, and it was practically impossible to find alternatives.

When I was 13 I realized, more from internal evidence than any outside source, that the ideas we were being fed on TV were crap, and I stopped watching it. [7] But it wasn’t just TV. It seemed like everything around me was crap. The politicians all saying the same things, the consumer brands making almost identical products with different labels stuck on to indicate how prestigious they were meant to be, the balloon-frame houses with fake “colonial” skins, the cars with several feet of gratuitous metal on each end that started to fall apart after a couple years, the “red delicious” apples that were red but only nominally apples. And in retrospect, it was crap. [8]

But when I went looking for alternatives to fill this void, I found practically nothing. There was no Internet then. The only place to look was in the chain bookstore in our local shopping mall. [9] There I found a copy of The Atlantic. I wish I could say it became a gateway into a wider world, but in fact I found it boring and incomprehensible. Like a kid tasting whisky for the first time and pretending to like it, I preserved that magazine as carefully as if it had been a book. I’m sure I still have it somewhere. But though it was evidence that there was, somewhere, a world that wasn’t red delicious, I didn’t find it till college.

It wasn’t just as consumers that the big companies made us similar. They did as employers too. Within companies there were powerful forces pushing people toward a single model of how to look and act. IBM was particularly notorious for this, but they were only a little more extreme than other big companies. And the models of how to look and act varied little between companies. Meaning everyone within this world was expected to seem more or less the same. And not just those in the corporate world, but also everyone who aspired to it—which in the middle of the 20th century meant most people who weren’t already in it. For most of the 20th century, working-class people tried hard to look middle class. You can see it in old photos. Few adults aspired to look dangerous in 1950.

But the rise of national corporations didn’t just compress us culturally. It compressed us economically too, and on both ends.

Along with giant national corporations, we got giant national labor unions. And in the mid 20th century the corporations cut deals with the unions where they paid over market price for labor. Partly because the unions were monopolies. [10] Partly because, as components of oligopolies themselves, the corporations knew they could safely pass the cost on to their customers, because their competitors would have to as well. And partly because in mid-century most of the giant companies were still focused on finding new ways to milk economies of scale. Just as startups rightly pay AWS a premium over the cost of running their own servers so they can focus on growth, many of the big national corporations were willing to pay a premium for labor. [11]

As well as pushing incomes up from the bottom, by overpaying unions, the big companies of the 20th century also pushed incomes down at the top, by underpaying their top management. Economist J. K. Galbraith wrote in 1967 that “There are few corporations in which it would be suggested that executive salaries are at a maximum.” [12]


To some extent this was an illusion.

Much of the de facto pay of executives never showed up on their income tax returns, because it took the form of perks. The higher the rate of income tax, the more pressure there was to pay employees upstream of it. (In the UK, where taxes were even higher than in the US, companies would even pay their kids’ private school tuitions.) One of the most valuable things the big companies of the mid 20th century gave their employees was job security, and this too didn’t show up in tax returns or income statistics. So the nature of employment in these organizations tended to yield falsely low numbers about economic inequality. But even accounting for that, the big companies paid their best people less than market price. There was no market; the expectation was that you’d work for the same company for decades if not your whole career. [13]

Your work was so illiquid there was little chance of getting market price. But that same illiquidity also encouraged you not to seek it. If the company promised to employ you till you retired and give you a pension afterward, you didn’t want to extract as much from it this year as you could. You needed to take care of the company so it could take care of you. Especially when you’d been working with the same group of people for decades. If you tried to squeeze the company for more money, you were squeezing the organization that was going to take care of them. Plus if you didn’t put the company first you wouldn’t be promoted, and if you couldn’t switch ladders, promotion on this one was the only way up. [14]

To someone who’d spent several formative years in the armed forces, this situation didn’t seem as strange as it does to us now. From their point of view, as big company executives, they were high-ranking officers. They got paid a lot more than privates. They got to have expense account lunches at the best restaurants and fly around on the company’s Gulfstreams. It probably didn’t occur to most of them to ask if they were being paid market price.

The ultimate way to get market price is to work for yourself, by starting your own company. That seems obvious to any ambitious person now. But in the mid 20th century it was an alien concept. Not because starting one’s own company seemed too ambitious, but because it didn’t seem ambitious enough. Even as late as the 1970s, when I grew up, the ambitious plan was to get lots of education at prestigious institutions, and then join some other prestigious institution and work one’s way up the hierarchy. Your prestige was the prestige of the institution you belonged to. People did start their own businesses of course, but educated people rarely did, because in those days there was practically zero concept of starting what we now call a startup: a business that starts small and grows big. That was much harder to do in the mid 20th century. Starting one’s own business meant starting a business that would start small and stay small. Which in those days of big companies often meant scurrying around trying to avoid being trampled by elephants. It was more prestigious to be one of the executive class riding the elephant.

By the 1970s, no one stopped to wonder where the big prestigious companies had come from in the first place.


It seemed like they’d always been there, like the chemical elements. And indeed, there was a double wall between ambitious kids in the 20th century and the origins of the big companies. Many of the big companies were roll-ups that didn’t have clear founders. And when they did, the founders didn’t seem like us. Nearly all of them had been uneducated, in the sense of not having been to college. They were what Shakespeare called rude mechanicals. College trained one to be a member of the professional classes. Its graduates didn’t expect to do the sort of grubby menial work that Andrew Carnegie or Henry Ford started out doing. [15]

And in the 20th century there were more and more college graduates. They increased from about 2% of the population in 1900 to about 25% in 2000. In the middle of the century our two big forces intersect, in the form of the GI Bill, which sent 2.2 million World War II veterans to college. Few thought of it in these terms, but the result of making college the canonical path for the ambitious was a world in which it was socially acceptable to work for Henry Ford, but not to be Henry Ford. [16]

I remember this world well. I came of age just as it was starting to break up. In my childhood it was still dominant. Not quite so dominant as it had been. We could see from old TV shows and yearbooks and the way adults acted that people in the 1950s and 60s had been even more conformist than us. The mid-century model was already starting to get old. But that was not how we saw it at the time. We would at most have said that one could be a bit more daring in 1975 than 1965. And indeed, things hadn’t changed much yet.

But change was coming soon.

And when the Duplo economy started to disintegrate, it disintegrated in several different ways at once. Vertically integrated companies literally dis-integrated because it was more efficient to. Incumbents faced new competitors as (a) markets went global and (b) technical innovation started to trump economies of scale, turning size from an asset into a liability. Smaller companies were increasingly able to survive as formerly narrow channels to consumers broadened. Markets themselves started to change faster, as whole new categories of products appeared. And last but not least, the federal government, which had previously smiled upon J. P. Morgan’s world as the natural state of things, began to realize it wasn’t the last word after all.

What J. P. Morgan was to the horizontal axis, Henry Ford was to the vertical. He wanted to do everything himself. The giant plant he built at River Rouge between 1917 and 1928 literally took in iron ore at one end and sent cars out the other. 100,000 people worked there. At the time it seemed the future. But that is not how car companies operate today. Now much of the design and manufacturing happens in a long supply chain, whose products the car companies ultimately assemble and sell. The reason car companies operate this way is that it works better. Each company in the supply chain focuses on what they know best. And they each have to do it well or they can be swapped out for another supplier.

Why didn’t Henry Ford realize that networks of cooperating companies work better than a single big company?

One reason is that supplier networks take a while to evolve. In 1917, doing everything himself seemed to Ford the only way to get the scale he needed. And the second reason is that if you want to solve a problem using a network of cooperating companies, you have to be able to coordinate their efforts, and you can do that much better with computers. Computers reduce the transaction costs that Coase argued are the raison d’être of corporations. That is a fundamental change.

In the early 20th century, big companies were synonymous with efficiency. In the late 20th century they were synonymous with inefficiency. To some extent this was because the companies themselves had become sclerotic. But it was also because our standards were higher.

It wasn’t just within existing industries that change occurred. The industries themselves changed. It became possible to make lots of new things, and sometimes the existing companies weren’t the ones who did it best.

Microcomputers are a classic example.


The market was pioneered by upstarts like Apple, Radio Shack and Atari. When it got big enough, IBM decided it was worth paying attention to. At the time IBM completely dominated the computer industry. They assumed that all they had to do, now that this market was ripe, was to reach out and pick it. Most people at the time would have agreed with them. But what happened next illustrated how much more complicated the world had become. IBM did launch a microcomputer. Though quite successful, it did not crush Apple. But even more importantly, IBM itself ended up being supplanted by a supplier coming in from the side—from software, which didn’t even seem to be the same business. IBM’s big mistake was to accept a non-exclusive license for DOS. It must have seemed a safe move at the time. No other computer manufacturer had ever been able to outsell them. What difference did it make if other manufacturers could offer DOS too? The result of that miscalculation was an explosion of inexpensive PC clones. Microsoft now owned the PC standard, and the customer. And the microcomputer business ended up being Apple vs Microsoft.

Basically, Apple bumped IBM and then Microsoft stole its wallet. That sort of thing did not happen to big companies in mid-century. But it was going to happen increasingly often in the future.

Change happened mostly by itself in the computer business. In other industries, legal obstacles had to be removed first. Many of the mid-century oligopolies had been anointed by the federal government with policies (and in wartime, large orders) that kept out competitors. This didn’t seem as dubious to government officials at the time as it sounds to us. They felt a two-party system ensured sufficient competition in politics. It ought to work for business too.

Gradually the government realized that anti-competitive policies were doing more harm than good, and during the Carter administration started to remove them.

The word used for this process was misleadingly narrow: deregulation. What was really happening was de-oligopolization. It happened to one industry after another. Two of the most visible to consumers were air travel and long-distance phone service, which both became dramatically cheaper after deregulation.

Deregulation also contributed to the wave of hostile takeovers in the 1980s. In the old days the only limit on the inefficiency of companies, short of actual bankruptcy, was the inefficiency of their competitors. Now companies had to face absolute rather than relative standards. Any public company that didn’t generate sufficient returns on its assets risked having its management replaced with one that would. Often the new managers did this by breaking companies up into components that were more valuable separately. [17]

Version 1 of the national economy consisted of a few big blocks whose relationships were negotiated in back rooms by a handful of executives, politicians, regulators, and labor leaders. Version 2 was higher resolution: there were more companies, of more different sizes, making more different things, and their relationships changed faster. In this world there were still plenty of back room negotiations, but more was left to market forces. Which further accelerated the fragmentation.

It’s a little misleading to talk of versions when describing a gradual process, but not as misleading as it might seem. There was a lot of change in a few decades, and what we ended up with was qualitatively different. The companies in the S&P 500 in 1958 had been there an average of 61 years. By 2012 that number was 18 years. [18]

The breakup of the Duplo economy happened simultaneously with the spread of computing power. To what extent were computers a precondition? It would take a book to answer that. Obviously the spread of computing power was a precondition for the rise of startups. I suspect it was for most of what happened in finance too. But was it a precondition for globalization or the LBO wave? I don’t know, but I wouldn’t discount the possibility. It may be that the refragmentation was driven by computers in the way the industrial revolution was driven by steam engines. Whether or not computers were a precondition, they have certainly accelerated it.

The new fluidity of companies changed people’s relationships with their employers. Why climb a corporate ladder that might be yanked out from under you? Ambitious people started to think of a career less as climbing a single ladder than as a series of jobs that might be at different companies. More movement (or even potential movement) between companies introduced more competition in salaries. Plus as companies became smaller it became easier to estimate how much an employee contributed to the company’s revenue. Both changes drove salaries toward market price. And since people vary dramatically in productivity, paying market price meant salaries started to diverge.

By no coincidence it was in the early 1980s that the term “yuppie” was coined. That word is not much used now, because the phenomenon it describes is so taken for granted, but at the time it was a label for something novel. Yuppies were young professionals who made lots of money. To someone in their twenties today, this wouldn’t seem worth naming. Why wouldn’t young professionals make lots of money? But until the 1980s being underpaid early in your career was part of what it meant to be a professional. Young professionals were paying their dues, working their way up the ladder. The rewards would come later. What was novel about yuppies was that they wanted market price for the work they were doing now.

The first yuppies did not work for startups.


That was still in the future. Nor did they work for big companies. They were professionals working in fields like law, finance, and consulting. But their example rapidly inspired their peers. Once they saw that new BMW 325i, they wanted one too.

Underpaying people at the beginning of their career only works if everyone does it. Once some employer breaks ranks, everyone else has to, or they can’t get good people. And once started this process spreads through the whole economy, because at the beginnings of people’s careers they can easily switch not merely employers but industries.

But not all young professionals benefitted. You had to produce to get paid a lot. It was no coincidence that the first yuppies worked in fields where it was easy to measure that.

More generally, an idea was returning whose name sounds old-fashioned precisely because it was so rare for so long: that you could make your fortune. As in the past there were multiple ways to do it. Some made their fortunes by creating wealth, and others by playing zero-sum games. But once it became possible to make one’s fortune, the ambitious had to decide whether or not to. A physicist who chose physics over Wall Street in 1990 was making a sacrifice that a physicist in 1960 wasn’t.

The idea even flowed back into big companies. CEOs of big companies make more now than they used to, and I think much of the reason is prestige. In 1960, corporate CEOs had immense prestige. They were the winners of the only economic game in town. But if they made as little now as they did then, in real dollar terms, they’d seem like small fry compared to professional athletes and whiz kids making millions from startups and hedge funds. They don’t like that idea, so now they try to get as much as they can, which is more than they had been getting. [19]

Meanwhile a similar fragmentation was happening at the other end of the economic scale. As big companies’ oligopolies became less secure, they were less able to pass costs on to customers and thus less willing to overpay for labor. And as the Duplo world of a few big blocks fragmented into many companies of different sizes—some of them overseas—it became harder for unions to enforce their monopolies. As a result workers’ wages also tended toward market price. Which (inevitably, if unions had been doing their job) tended to be lower. Perhaps dramatically so, if automation had decreased the need for some kind of work.

And just as the mid-century model induced social as well as economic cohesion, its breakup brought social as well as economic fragmentation. People started to dress and act differently. Those who would later be called the “creative class” became more mobile. People who didn’t care much for religion felt less pressure to go to church for appearances’ sake, while those who liked it a lot opted for increasingly colorful forms. Some switched from meat loaf to tofu, and others to Hot Pockets. Some switched from driving Ford sedans to driving small imported cars, and others to driving SUVs. Kids who went to private schools or wished they did started to dress “preppy,” and kids who wanted to seem rebellious made a conscious effort to look disreputable. In a hundred ways people spread apart. [20]

Almost four decades later, fragmentation is still increasing.

Has it been net good or bad? I don’t know; the question may be unanswerable. Not entirely bad though. We take for granted the forms of fragmentation we like, and worry only about the ones we don’t. But as someone who caught the tail end of mid-century conformism, I can tell you it was no utopia. [21]

My goal here is not to say whether fragmentation has been good or bad, just to explain why it’s happening. With the centripetal forces of total war and 20th century oligopoly mostly gone, what will happen next? And more specifically, is it possible to reverse some of the fragmentation we’ve seen?

If it is, it will have to happen piecemeal. You can’t reproduce mid-century cohesion the way it was originally produced. It would be insane to go to war just to induce more national unity. And once you understand the degree to which the economic history of the 20th century was a low-res version 1, it’s clear you can’t reproduce that either.

20th century cohesion was something that happened at least in a sense naturally. The war was due mostly to external forces, and the Duplo economy was an evolutionary phase. If you want cohesion now, you’d have to induce it deliberately. And it’s not obvious how. I suspect the best we’ll be able to do is address the symptoms of fragmentation. But that may be enough.

The form of fragmentation people worry most about lately is economic inequality, and if you want to eliminate that you’re up against a truly formidable headwind—one that has been in operation since the stone age: technology. Technology is a lever. It magnifies work. And the lever not only grows increasingly long, but the rate at which it grows is itself increasing.

Which in turn means the variation in the amount of wealth people can create has not only been increasing, but accelerating.

The unusual conditions that prevailed in the mid 20th century masked this underlying trend. The ambitious had little choice but to join large organizations that made them march in step with lots of other people—literally in the case of the armed forces, figuratively in the case of big corporations. Even if the big corporations had wanted to pay people proportionate to their value, they couldn’t have figured out how. But that constraint has gone now. Ever since it started to erode in the 1970s, we’ve seen the underlying forces at work again. [22]

Not everyone who gets rich now does it by creating wealth, certainly. But a significant number do, and the Baumol Effect means all their peers get dragged along too. [23] And as long as it’s possible to get rich by creating wealth, the default tendency will be for economic inequality to increase. Even if you eliminate all the other ways to get rich. You can mitigate this with subsidies at the bottom and taxes at the top, but unless taxes are high enough to discourage people from creating wealth, you’re always going to be fighting a losing battle against increasing variation in productivity. [24]

That form of fragmentation, like the others, is here to stay. Or rather, back to stay. Nothing is forever, but the tendency toward fragmentation should be more forever than most things, precisely because it’s not due to any particular cause. It’s simply a reversion to the mean. When Rockefeller said individualism was gone, he was right for a hundred years. It’s back now, and that’s likely to be true for longer.

I worry that if we don’t acknowledge this, we’re headed for trouble.

If we think 20th century cohesion disappeared because of a few policy tweaks, we’ll be deluded into thinking we can get it back (minus the bad parts, somehow) with a few countertweaks. And then we’ll waste our time trying to eliminate fragmentation, when we’d be better off thinking about how to mitigate its consequences.

Notes

[1] Lester Thurow, writing in 1975, said the wage differentials prevailing at the end of World War II had become so embedded that they “were regarded as ‘just’ even after the egalitarian pressures of World War II had disappeared. Basically, the same differentials exist to this day, thirty years later.” But Goldin and Margo think market forces in the postwar period also helped preserve the wartime compression of wages—specifically increased demand for unskilled workers, and oversupply of educated ones.

(Oddly enough, the American custom of having employers pay for health insurance derives from efforts by businesses to circumvent NWLB wage controls in order to attract workers.)

[2] As always, tax rates don’t tell the whole story. There were lots of exemptions, especially for individuals. And in World War II the tax codes were so new that the government had little acquired immunity to tax avoidance. If the rich paid high taxes during the war it was more because they wanted to than because they had to.

After the war, federal tax receipts as a percentage of GDP were about the same as they are now.

In fact, for the entire period since the war, tax receipts have stayed close to 18% of GDP, despite dramatic changes in tax rates. The lowest point occurred when marginal income tax rates were highest: 14.1% in 1950. Looking at the data, it’s hard to avoid the conclusion that tax rates have had little effect on what people actually paid.

[3] Though in fact the decade preceding the war had been a time of unprecedented federal power, in response to the Depression. Which is not entirely a coincidence, because the Depression was one of the causes of the war. In many ways the New Deal was a sort of dress rehearsal for the measures the federal government took during wartime. The wartime versions were much more drastic and more pervasive though. As Anthony Badger wrote, “for many Americans the decisive change in their experiences came not with the New Deal but with World War II.”

[4] I don’t know enough about the origins of the world wars to say, but it’s not inconceivable they were connected to the rise of big corporations. If that were the case, 20th century cohesion would have a single cause.

[5] More precisely, there was a bimodal economy consisting, in Galbraith’s words, of “the world of the technically dynamic, massively capitalized and highly organized corporations on the one hand and the hundreds of thousands of small and traditional proprietors on the other.” Money, prestige, and power were concentrated in the former, and there was near zero crossover.

[6] I wonder how much of the decline in families eating together was due to the decline in families watching TV together afterward.

[7] I know when this happened because it was the season Dallas premiered. Everyone else was talking about what was happening on Dallas, and I had no idea what they meant.

[8] I didn’t realize it till I started doing research for this essay, but the meretriciousness of the products I grew up with is a well-known byproduct of oligopoly. When companies can’t compete on price, they compete on tailfins.

[9] Monroeville Mall was at the time of its completion in 1969 the largest in the country. In the late 1970s the movie Dawn of the Dead was shot there. Apparently the mall was not just the location of the movie, but its inspiration; the crowds of shoppers drifting through this huge mall reminded George Romero of zombies. My first job was scooping ice cream in the Baskin-Robbins.

[10] Labor unions were exempted from antitrust laws by the Clayton Antitrust Act in 1914 on the grounds that a person’s work is not “a commodity or article of commerce.” I wonder if that means service companies are also exempt.

[11] The relationships between unions and unionized companies can even be symbiotic, because unions will exert political pressure to protect their hosts. According to Michael Lind, when politicians tried to attack the A&P supermarket chain because it was putting local grocery stores out of business, “A&P successfully defended itself by allowing the unionization of its workforce in 1938, thereby gaining organized labor as a constituency.” I’ve seen this phenomenon myself: hotel unions are responsible for more of the political pressure against Airbnb than hotel companies.

[12] Galbraith was clearly puzzled that corporate executives would work so hard to make money for other people (the shareholders) instead of themselves. He devoted much of The New Industrial State to trying to figure this out.

His theory was that professionalism had replaced money as a motive, and that modern corporate executives were, like (good) scientists, motivated less by financial rewards than by the desire to do good work and thereby earn the respect of their peers. There is something in this, though I think lack of movement between companies combined with self-interest explains much of observed behavior.

[13] Galbraith (p. 94) says a 1952 study of the 800 highest paid executives at 300 big corporations found that three quarters of them had been with their company for more than 20 years.

[14] It seems likely that in the first third of the 20th century executive salaries were low partly because companies then were more dependent on banks, who would have disapproved if executives got too much. This was certainly true in the beginning. The first big company CEOs were J. P. Morgan’s hired hands.

Companies didn’t start to finance themselves with retained earnings till the 1920s. Till then they had to pay out their earnings in dividends, and so depended on banks for capital for expansion. Bankers continued to sit on corporate boards till the Glass-Steagall act in 1933.

By mid-century big companies funded 3/4 of their growth from earnings. But the early years of bank dependence, reinforced by the financial controls of World War II, must have had a big effect on social conventions about executive salaries. So it may be that the lack of movement between companies was as much the effect of low salaries as the cause.

Incidentally, the switch in the 1920s to financing growth with retained earnings was one cause of the 1929 crash. The banks now had to find someone else to lend to, so they made more margin loans.

[15] Even now it’s hard to get them to. One of the things I find hardest to get into the heads of would-be startup founders is how important it is to do certain kinds of menial work early in the life of a company. Doing things that don’t scale is to how Henry Ford got started as a high-fiber diet is to the traditional peasant’s diet: they had no choice but to do the right thing, while we have to make a conscious effort.

[16] Founders weren’t celebrated in the press when I was a kid. “Our founder” meant a photograph of a severe-looking man with a walrus mustache and a wing collar who had died decades ago. The thing to be when I was a kid was an executive. If you weren’t around then it’s hard to grasp the cachet that term had. The fancy version of everything was called the “executive” model.

[17] The wave of hostile takeovers in the 1980s was enabled by a combination of circumstances: court decisions striking down state anti-takeover laws, starting with the Supreme Court’s 1982 decision in Edgar v. MITE Corp.; the Reagan administration’s comparatively sympathetic attitude toward takeovers; the Depository Institutions Act of 1982, which allowed banks and savings and loans to buy corporate bonds; a new SEC rule issued in 1982 (rule 415) that made it possible to bring corporate bonds to market faster; the creation of the junk bond business by Michael Milken; a vogue for conglomerates in the preceding period that caused many companies to be combined that never should have been; a decade of inflation that left many public companies trading below the value of their assets; and not least, the increasing complacency of managements.

[18] Foster, Richard. “Creative Destruction Whips through Corporate America.” Innosight, February 2012.

[19] CEOs of big companies may be overpaid. I don’t know enough about big companies to say. But it is certainly not impossible for a CEO to make 200x as much difference to a company’s revenues as the average employee. Look at what Steve Jobs did for Apple when he came back as CEO. It would have been a good deal for the board to give him 95% of the company. Apple’s market cap the day Steve came back in July 1997 was 1.73 billion. 5% of Apple now (January 2016) would be worth about 30 billion. And it would not be if Steve hadn’t come back; Apple probably wouldn’t even exist anymore.
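
The valuation arithmetic in this note can be sanity-checked with a quick sketch. The January 2016 market cap figure below (roughly $600 billion) is an assumption back-derived from the note's own numbers, not something the essay states:

```python
# Sanity check of the arithmetic in note [19].
# Assumption (not stated in the essay): Apple's market cap in January 2016
# was roughly $600 billion; the note says only that 5% would be worth ~$30B.
market_cap_1997 = 1.73e9    # the day Steve Jobs returned, July 1997
market_cap_2016 = 600e9     # assumed round figure, see comment above
stake = 0.05                # the hypothetical 5% grant

stake_value_2016 = stake * market_cap_2016
growth = market_cap_2016 / market_cap_1997

print(f"5% stake in January 2016: ${stake_value_2016 / 1e9:.0f}B")
print(f"Market cap grew roughly {growth:.0f}x between 1997 and 2016")
```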

Merely including Steve in the sample might be enough to answer the question of whether public company CEOs in the aggregate are overpaid. And that is not as facile a trick as it might seem, because the broader your holdings, the more the aggregate is what you care about.

[20] The late 1960s were famous for social upheaval. But that was more rebellion (which can happen in any era if people are provoked sufficiently) than fragmentation. You’re not seeing fragmentation unless you see people breaking off to both left and right.

[21] Globally the trend has been in the other direction. While the US is becoming more fragmented, the world as a whole is becoming less fragmented, and mostly in good ways.

[22] There were a handful of ways to make a fortune in the mid 20th century. The main one was drilling for oil, which was open to newcomers because it was not something big companies could dominate through economies of scale. How did individuals accumulate large fortunes in an era of such high taxes? Giant tax loopholes defended by two of the most powerful men in Congress, Sam Rayburn and Lyndon Johnson.

But becoming a Texas oilman was not in 1950 something one could aspire to the way starting a startup or going to work on Wall Street were in 2000, because (a) there was a strong local component and (b) success depended so much on luck.

[23] The Baumol Effect induced by startups is very visible in Silicon Valley. Google will pay people millions of dollars a year to keep them from leaving to start or join startups.

[24] I’m not claiming variation in productivity is the only cause of economic inequality in the US. But it’s a significant cause, and it will become as big a cause as it needs to, in the sense that if you ban other ways to get rich, people who want to get rich will use this route instead.

Thanks to Sam Altman, Trevor Blackwell, Paul Buchheit, Patrick Collison, Ron Conway, Chris Dixon, Benedict Evans, Richard Florida, Ben Horowitz, Jessica Livingston, Robert Morris, Tim O’Reilly, Geoff Ralston, Max Roser, Alexia Tsotsis, and Qasar Younis for reading drafts of this. Max also told me about several valuable sources. Essay from http://paulgraham.com/re.html

Bibliography

Allen, Frederick Lewis. The Big Change. Harper, 1952.

Averitt, Robert. The Dual Economy. Norton, 1968.

Badger, Anthony. The New Deal. Hill and Wang, 1989.

Bainbridge, John. The Super-Americans. Doubleday, 1961.

Beatty, Jack. Colossus. Broadway, 2001.

Brinkley, Douglas. Wheels for the World. Viking, 2003.

Brownlee, W. Elliot. Federal Taxation in America. Cambridge, 1996.

Chandler, Alfred. The Visible Hand. Harvard, 1977.

Chernow, Ron. The House of Morgan. Simon & Schuster, 1990.

Chernow, Ron. Titan: The Life of John D. Rockefeller. Random House, 1998.

Galbraith, John. The New Industrial State. Houghton Mifflin, 1967.

Goldin, Claudia and Robert A. Margo. “The Great Compression: The Wage Structure in the United States at Mid-Century.” NBER Working Paper 3817, 1991.

Gordon, John. An Empire of Wealth. HarperCollins, 2004.

Klein, Maury. The Genesis of Industrial America, 1870-1920. Cambridge, 2007.

Lind, Michael. Land of Promise. HarperCollins, 2012.

Micklethwait, John, and Adrian Wooldridge. The Company. Modern Library, 2003.

Nasaw, David. Andrew Carnegie. Penguin, 2006.

Sobel, Robert. The Age of Giant Corporations. Praeger, 1993.

Thurow, Lester. Generating Inequality: Mechanisms of Distribution. Basic Books, 1975.

Witte, John. The Politics and Development of the Federal Income Tax. Wisconsin, 1985.

 

New Doc About A Pioneer Of Modern Design

Architect Eliot Noyes was one of the leading pioneers of modern design during the mid-century, post-war boom in the United States. Educated by Walter Gropius at Harvard, Noyes did more than anyone to align the Modernist design ethos to the needs of ascendant corporate America. His impact on companies like IBM and Mobil Oil paved the way for Apple and many of the other design-conscious brands we know today.

Modernism, Inc. follows Noyes’ career up to a time when disruptive designers of the early ’70s began pushing back against the Modernist view of progress. As he did in Eames: The Architect and the Painter (co-directed with Bill Jersey), filmmaker Jason Cohn uses the story of a mid-century icon to raise contemporary questions about the role of a designer in today’s world.
Jason Cohn will appear in person at both the New York and Los Angeles openings.
“Well-researched, well-crafted and fascinating. Noyes created and forged some of the most respected, meaningful, and influential industrial design programs in the United States. The rise and attraction of modernism on the heels of the postwar economic boom birthed contemporary design and philosophy that continue to exist and flourish today, influencing mega businesses, including Apple. The idea of beauty with utility, which was generated from the Bauhaus art movement, continues to remain vital today, and for this, we thank Noyes.”
-Sabina Dana Plassa, Film Threat

“Noyes held the belief that design was not an afterthought, but rather something that needed to be infused into every aspect of a company’s thought process. The result helped IBM become the technical design juggernaut of the early personal computing era—and paved the way for companies like Apple to design products for the 21st century. Interviews with those who worked with him, as well as those in his family who knew him intimately, paint the picture of a man who was truly ahead of his time.”
-Sara Harowitz, The Georgia Straight
MODERNISM, INC. Directed by Jason Cohn | 79 minutes | color | In English | 2023
CREDITS

Jason Andrew Cohn  Writer and Director
Camille Servan-Schreiber  Producer
Kevin Jones and Jason Cohn  Editors
Steven Emerson  Music Composer
Sebastian Roché  Narrator
Talia Mindich  Associate Producer
Talia Mindich, Andrei Valladolid, Nina Goodby  Assistant Editors

© 2023 Bread & Butter Films
For the Silo, Kelly Hargraves.

How The Old iPhone5 Was Made & Why It Still Matters

Courtesy of Silo tech writer Alex Hillsberg

The Silo's new Sci-tech writer Mr. Alex Hillsberg

Must Have Chargers For iPhone X

With Streetinsider.com recently reporting that Apple sold an estimated 6 million iPhone X units this past Black Friday weekend alone, as calculated by an analyst with Rosenblatt Securities, and with untold millions more sure to sell throughout the holiday season, it’s clear the marketplace is embracing this new iPhone iteration. In fact, the analyst firm is so optimistic that it increased December quarter iPhone X shipment estimates from 27-28 million to 30 million units.

Given that each new iPhone model launch ushers in an abundance of equipment and accessories hell-bent on helping users make the most of their prized smartphone, I began wondering what extras are on tap for the eagerly anticipated iPhone X. In the event that your own curiosity is piqued just the same, or if you’re one of the multi-millions of customers for whom iPhone X marks the spot in your heart (and wallet), here’s an array of accouterments that can help ensure smartphone satisfaction for any X-phile. All prices are in US dollars and are MSRP.

ALLDOCK USB Charging Station (www.Alldock.com) – Starting at $129.00

The ALLDOCK is a multi-port USB charging station that looks like anything but, allowing consumers to embrace their electronics and avoid the cable chaos. This interior design-friendly gadget can charge all of your devices, including your iPhone X as well as other smartphones and tablets, all in one place. Accessory add-ons include docking mounts for the Apple Watch, Fitbit wearables, Apple Pencil and Apple AirPods (more charging add-ons are to be released, the company says). It offers all-around compatibility for both wireless and quick-charge phones, and delivers a 40% quicker charge compared to a standard charger. ALLDOCK is available in two sizes and is offered in various premium materials such as bamboo, walnut and aluminum.

ALLDOCK iphone charger
photo- ALLDOCK

Skyroam Solis (www.SkyRoam.com) – $149.99

Skyroam Solis is a dual 4G LTE global WiFi hotspot and power bank. It satisfies the most significant needs of travelers with super-fast LTE speeds, long-lasting battery and mobile charging that will keep users powered-up all day long. Skyroam service seamlessly connects users with mobile WiFi on-the-go through its proprietary global WiFi network that’s powered by patented virtual SIM (vSIM) technology. With instant access to secure, unlimited data in over 100 countries, Skyroam Solis offers the convenience of going from one country to another without configuring local SIMs, updating plans, or incurring roaming charges or overage fees. Plus, it’s shareable on up to five devices simultaneously.

Skyroam Solis
photo- Skyroam

Mophie Wireless Charging Base (www.Mophie.com) – $59.95

Mophie makes charging your iPhone easier than ever. This wireless charging base uses the latest Qi technology and delivers fast charging speeds of up to 7.5W to your phone on contact. In fact, it delivers safe charging speeds 50% faster than standard wireless chargers. The non-slip finish ensures perfect placement, and the included wall adapter means you get everything you need right in the box to start charging right away. The wireless capability also keeps users clutter-free and lets them take the base with them while they’re out and about, making it a portable option for those perpetually on-the-go.

photo- Mophie

Tech Armor ActivePower External Battery Dual USB Power Bank (www.TechArmor.com) – $39.99

Compact and lightweight, the Tech Armor 6000mAh Power Bank will boost your smartphone’s battery life, ensuring you’re charged up for your entire day. You’ll get not just one, but two full charges for your smartphone. And, it boasts a built-in flashlight. The Power Bank also has dual USB ports to charge two devices at once, such as a phone and a tablet. Tech Armor’s high-performance ActivePower external battery packs use only the highest quality, Grade A Korean battery cells available on the market. With four models to meet anyone’s power needs, the Tech Armor ActivePower series is sure to have something for you. This 6000 model is covered by a hassle-free two-year warranty. The company even says it is committed to your satisfaction and will send free replacements for defective or unsatisfactory products.

photo- Tech Armor

Ventev Mobile Powercell Chargers (MobileAccessories.Ventev.com) – $54.99 – $74.99

Ventev Mobile’s Powercell 6000+ is a useful device for frequent travelers who hate carrying multiple chargers to keep all of their gadgets and gear juiced up. The Powercell 6000+ is a 6000mAh, dual output portable backup battery that provides a full 25 hours of talk, 18 hours of Internet, 20 hours of video or 100 hours of audio playback power. It features 1A and 2.1A ports that allow for optimal and simultaneous charging of phones and tablets, and built-in folding AC prongs that make recharging the battery extremely easy. Also from Ventev Mobile is the Powercell 3015c, a 3000mAh portable battery with a built-in Apple Lightning cable. The slim, wallet-sized device provides 12 additional hours of talk time and features an on/off button with LED battery indicator light. The attached cable eliminates the need to carry around pesky removable cords to charge your devices while on-the-go.

Ventev Powercell 6000
photo- murrayontravel.com


For the Silo, Merilee Kern. 
As the Executive Editor and Producer of “The Luxe List,” Merilee Kern is an internationally-regarded consumer product trends expert and hospitality industry voice of authority—a travel, dining, leisure and lifestyle expert who spotlights noteworthy marketplace innovations, change makers, movers and shakers. She reports on exemplary travel destinations and experiences, extraordinary events and newsworthy products and services across all categories. Reach her online at www.TheLuxeList.com, on Twitter at www.Twitter.com/LuxeListEditor, on Facebook at www.Facebook.com/TheLuxeList and on Instagram at www.Instagram.com/LuxeListReviews.

Here’s the fineprint- ********Some or all of the accommodations(s), experience(s), item(s) and/or service(s) detailed above may have been provided at no cost and/or sponsored to accommodate this review, but all opinions expressed are entirely those of Merilee Kern and have not been influenced in any way.********

Featured image- ALLDOCK

Sources:

https://www.streetinsider.com/Analyst+Comments/Apple+%28AAPL%29%3A+iPhone+X+Production+Ahead+of+Schedule%2C+6+Mil+Est+Sold+Through+Black+Friday+Weekend+-+Rosenblatt/13544265.html

http://rblt.com/

Theft Of Artist Ideas May Not Be Theft At All

Recently, one of my readers wrote that “there is another kind of generosity that comes much harder to me. I know I shouldn’t be stingy in this way, but I find myself stubbornly so. It’s the generosity of sharing my ideas, my connections, or giving a leg up to those who could benefit sometimes from my knowledge – whether that’s contacts, networks, tips, or the meat of my ideas themselves.”

This concern, of course, is not unique; it strikes at the heart of something that everyone in a creative profession fears and must face. The ownership of ideas is difficult to prove. If you tell someone your plan in confidence and they, in turn, use it for their own purposes, there is very little you can do to show that you are the originator, and spreading accusations after the fact is likely to make you look like the bad guy. It’s no wonder that this sort of generosity is cause for concern.

Arguably, no one would really offer up their original ideas before they have been fleshed out, and no one would expect this from another artist. Talking about work in progress in general terms is one thing, but detailing the entire plan is another altogether. There is nothing wrong with being a little protective of your creative capital; it is the lifeblood of what you do.

Steve Jobs and Apple on stealing ideas

But what about sharing your networks or some trade secrets that helped you get to where you are today? While you may have worked tooth and nail for everything you’ve gained, there were surely people along the way who said yes at the right moment and assisted your progress. No one can ask more than this, and as an artist of a certain standing, there is nothing wrong with offering this sort of help.

It’s important to ask yourself what you may gain or lose by offering your assistance in any way. While this may not sound like a very altruistic way of thinking, remember that you are indeed running a business and there is nothing wrong with a bit of shrewd thinking. Further, though, when you stop and think about the outcome of sharing your network, it is unlikely that helping an emerging artist by introducing people who might be able to help will in any way affect your position as a more established artist.

No one exists in a vacuum. Even you, who may have scraped and fought your way to where you are today, benefited from the acceptance and help of others. Sure, you may have pounded the pavement endlessly in order to secure your position but that is no reason not to pay forward the success you have achieved. It is too easy to forget, once you have achieved a certain status, the myriad small moments that led you there. While it may seem as though hardly anyone was out to help you in the early days, surely there were some, otherwise you could not be where you are today. Even if it was just a few gallerists who were finally willing to take a chance, there are always rungs of assistance in the ladder to every success, no matter how small.

We now live in a world where community is very much at our fingertips. The rules of social engagement have definitely changed. This is both a benefit and a burden. While the new landscape of online social engagement can absolutely open up opportunities that didn’t exist before this revolution in social connection, the online community also presents difficulties of its own. It is impossible to know who you are actually dealing with, and with virtually the entire art world present online, it can easily overwhelm a newcomer to the scene.

For these reasons, there is a lot to be said for good old-fashioned face-to-face interaction. Being the sort of artist who is willing to mentor in the real world sets you apart. Establishing this sort of reputation, for being the one who will gladly share the bounty you have created, seldom reverses one’s own success and frequently opens new doors you may never have considered.

Getting back to the idea of sharing artistic ideas and concepts, this is a bit trickier. As I said before, it may be unwise to give away your nascent, unfruited plans to just anyone. On the other hand, allowing others to view works in progress isn’t likely to cause too much harm.

Arguably, there is no such thing as original art. Even some of the most contemporary artists’ work is derivative of past creations. Marina Abramovic, in her unique style, has absolutely drawn from (and occasionally been accused of copying) works by other artists. Pablo Picasso (and perhaps more famously, Steve Jobs, who quoted him) said, “good artists copy, great artists steal.” This doesn’t mean that you should open yourself up to idea theft, but it does mean that perhaps being stingy with your concepts, your network, your position as an established artist, doesn’t count for as much security as you might think. Be smart about things, but in general, it is always a good idea to reach down the ladder and help those coming up behind you find the next rung.
For the Silo, Brainard Carey.

Brainard is currently giving free webinars on how to write a better artist bio and statement and how to get a show in a gallery; you can register for that live webinar and ask questions live by clicking here.

Featured image: Jarrod Barker.