Life, Inc. Chapter Seven: From Ecology To Economy

Big Business and the Disconnect from Value

The Market Makers

We base our very survival on our ability to use and accumulate money. So its rules and characteristics can’t help but seep into our thinking and behaviors as individuals, as businesses, and as a society.

The more we accept its use, the more we think of our centrally issued money as a natural player in the economy rather than a particular tool with particular biases. But over the centuries of its use, the influence of our money over our interactions has been demonstrated time and time again: a scarce currency designed in favor of competitive corporate behavior will promote such behavior in those who use it. This is not magic; the money is not possessed. It’s just biased toward the interests of those in a position to make money by storing it rather than spending it. It’s money for capitalists. And they had better use it as it was designed or they’ll end up on the wrong side of the currency equation themselves.

Our prevailing ignorance about the bias of the money we use undermines our best efforts at making the economy work better for the many or even the few. Businesses believe they are required to grow, and pick from among their inappropriate acquisition targets as if choosing between the lesser of two evils. Eighty percent of these acquisitions drain value and equity from both merging companies. Unions accept the false premise that the new competitive economy demands that they consent to lower wages; they fail to recognize that their wages are making up a progressively smaller portion of corporate profits, or that money paid to them circulates through the real economy, while the money doled out to CEOs and shareholders tends to stay put. As a result, with each cut to union wages, education and health care suffer, and the overall competitiveness of the workforce declines. Businesses, meanwhile, make decisions catering to the agenda of the investing shareholders who seek to extract short-term gains at the expense of a company’s long-term stability, research and development, or even basic competence. They outsource core processes, lose access to innovation, and depend on branding to make up the difference. Revenues suffer and growth slows, but there’s debt to be paid, so more acquisitions are made and the workforce is slashed or outsourced. All the while, central banks attempt to walk the fine line between stimulating growth through lower interest rates and maintaining the value of their monopoly currencies.

The Wall Street Journal, Fox News, and The Economist compete against the BBC, The Guardian, and PBS to explain the conflicting interests of workers, investors, and corporations in the new global economy. But no matter whose case they make, they all fail to consider whether the money everyone is fighting over has rules of its own that make such conflicts inevitable. They argue over the placement of the chess pieces without pausing to consider that the board beneath them has been quite arbitrarily arranged to favor players who no longer exist. Neither does the Left-Right divide ever adequately address the income disparity between people caught on opposite ends of the currency spectrum. We can argue labor’s cause all we want, add regulations to the system designed to minimize worker exploitation, maximize their participation in corporate profits, or increase the minimum wage. All along the way, management will be dragged along, kicking and screaming, while bankers and investors — the ultimate arbiters of credit — grow more reluctant to fund such handicapped enterprises. It’s a lose-lose scenario because it works against the either-or bias of a scarce central currency to promote central authority at the expense of the periphery where value is actually created. Yet this is precisely what the currency we use was designed to do from its inception.

Renaissance monarchs didn’t invent central currency, or even the first currency monopoly. Both ancient Egypt and the Holy Roman Empire issued central currencies. In Egypt’s case, as best we know from historical accounts, the empire had overextended itself, conquering and controlling territory eastward to Canaan and beyond. In an effort to fund the defense of its borders and the control of its population, successive pharaohs initiated increasingly restrictive and centralized monetary policies — along with centralized religion and culture. Pharaohs outlawed local currencies and forced people to bring grain long distances to royal grain stores in exchange for the central currency. The famous famine depicted in the Bible may have been the result of natural causes or, like the one accompanying the establishment of scarce currency in the Middle Ages, of economic origins. The Holy Roman Empire issued its own currency to every region it conquered, both to extract value from its new territories and to assert the authority of the emperor. In both historical cases, central currency was a means of control, taxation, and centralization of authority during expansive dictatorships. And, in both cases, after a few hundred years, the continual debasement of currency led to the fall of the empire.

We’re fast approaching the limits of our own currency system, instated to benefit corporate interests and adjusted over time to do it ever more efficiently and automatically. If double-entry bookkeeping can be thought of as the spreadsheet software on which businesses learned to reconcile their debits and credits, central currency was the operating system on which this accounting took place. Like any operating system, it has faded into the background now that a program is running, and it is seemingly uninvolved in whatever is taking place. In reality, it defines what can happen and what can’t. And if its central premise is contradicted too obviously by world events, it just crashes, taking those without sufficient reserves or alternative assets down with it.

We need to be able to save money for the future, but we also need to be able to spend and circulate it. The money we use is great for the former, but just awful for the latter. Because of the way it is lent into existence, centralized currency draws wealth away from where people are actually creating value, and toward the center, where the bank or lender gets it back with interest. This makes it impossible for those on the periphery to accumulate the wealth created by their labors, or to circulate it through other sectors of their business communities. Instead, the money is used for more speculation. The price of assets increases and inflation looms — but without the wage increases officially blamed for inflation in what is promoted as a “supply-and-demand” economy.
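
The dynamic described above can be made concrete with a little arithmetic. What follows is a minimal sketch, not from the text and using purely hypothetical figures: when money is lent into existence, only the principal enters circulation, yet principal plus interest must be repaid, so borrowers as a group come up short unless new loans are issued or someone defaults.

```python
# A minimal, hypothetical model of interest-bearing, centrally issued currency.
# All figures are made up for illustration; this is a sketch of the argument,
# not a claim about any particular banking system.

def money_in_circulation(borrowers: int, principal: float) -> float:
    """Loans create only the principal -- that is all the money that exists."""
    return borrowers * principal

def money_owed_back(borrowers: int, principal: float, rate: float) -> float:
    """Every borrower nonetheless owes principal plus interest to the lender."""
    return borrowers * principal * (1 + rate)

if __name__ == "__main__":
    N, P, r = 100, 1_000.0, 0.05            # 100 borrowers, $1,000 each, 5% interest
    created = money_in_circulation(N, P)    # $100,000 exists in circulation
    owed = money_owed_back(N, P, r)         # $105,000 is owed to the lender
    print(f"created ${created:,.0f}, owed ${owed:,.0f}, shortfall ${owed - created:,.0f}")
    # The gap can be closed only by new borrowing (growth) or by defaults --
    # either way, wealth flows back toward the lender at the center.
```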

There are two economies — the real economy of groceries, day care, and paychecks, and the speculative economy of assets, commodities, and derivatives. What forecasters refer to as “the economy” today isn’t the real one; it’s almost entirely virtual. It’s a speculative marketplace that has very little to do with getting real things to the people who need them, and much more to do with providing ways for passive investors to increase their capital. This economy of markets — first created to give the rising merchant class in the late Middle Ages a way to invest their winnings — is not based on work or even the injection of capital into new enterprises. It’s based instead on “making markets” in things that are scarce — or, more accurately, things that can be made scarce, like land, food, coal, oil, and even money itself.

Because there’s so much excess capital to invest, speculators make markets in pretty much anything that real people actually use, or can be made to use through lobbying and advertising. The problem is that when coal or corn isn’t just fuel or food but also an asset class, the laws of supply and demand cease to be the principal forces determining their price. When there’s a lot of money and few places to invest it, anything considered a speculative asset becomes overpriced. And then real people can’t afford the stuff they need. The oil spike of 2008, which contributed to the fall of ill-prepared American car companies, has ultimately been attributed not to the laws of supply and demand, but to the price manipulations of hedge-fund speculators. Real jobs were lost to movements in a purely speculative marketplace.

This is the reality of speculation in an economy defined by scarcity. Pollution is good, not bad, because it turns water from a plentiful resource into a scarce asset class. When sixty-eight million acres of corporate-leased U.S. oil fields are left untapped and filled tankers are parked offshore, energy futures stay high. Airlines that bet correctly on these oil futures stay in business; those that focus on service or safety, instead, end up acquisition targets at best — and pension calamities at worst. Such is the logic of the speculative economy.

As more assets fall under the control of the futures markets, speculators gain more influence over both government policy and public opinion. Between 2000 and 2007, trading in commodities markets in the United States more than sextupled. During that same period, the staff of the Commodity Futures Trading Commission overseeing those trades was cut more than 20 percent, with no corresponding increase in technological efficiency. Meanwhile, speculators have only gotten better at exploiting structural loopholes to engage in commodities trades beyond the sight of the few remaining regulators. Over-the-counter trading on the International Commodities Exchange in London is virtually untraceable, while massive and highly leveraged trades from one hedge fund to another are impossible to track until one or the other goes belly-up — and pleads to be bailed out with some form of taxpayer dollars. Government is essentially powerless to identify those who are manipulating commodities futures at consumers’ expense, and even more powerless to prosecute them under current law even if they could. People, meanwhile, come to believe that oil or corn is more scarce than it is (or needs to be), and that they’re in competition with the Chinese or the neighbors for what’s left.

The speculative economy is related to the real economy, but more as a parasite than as a positive force. It is detached from the real needs of people, and even detached from the real commerce that goes on between humans. It is a form of meta-commerce, like a Las Vegas casino betting on the outcome of a political election. Only in this case, the bets change the costs of the real things people depend on.

As wealth is sucked out of real economies and shifted into the speculative economy, people’s behavior and activities can’t help but become more market-based and less social. We begin to act more like John Nash’s selfish and calculating competitors, confirming and reinforcing our dog-eat-dog behaviors. The problem is, because it’s actually against our nature to behave this way, we’re not too good at it. We end up struggling against one another while getting fleeced by more skilled and structurally favored competition from distant and abstracted banks and corporations. Worse, we begin to feel as though any activity not in some way tied to the corporate sphere is not really happening.

Wal-Mart’s success in extracting value from local communities, for example, is tied directly to the company’s participation in speculative markets beyond the reach of small business, and its tremendous ability to centralize capital. We buy from Wal-Mart because Wal-Mart sells imported and long-distance products cheaper than the local tailor or pharmacist can. Not only does the company get better wholesale prices; its centrality and size let it get its money cheaper. Meanwhile, because we are forced to use centralized currency instead of a more local means of exchange or barter (and we’ll look at these possibilities later), each of our transactions with local merchants is passed through a multiplicity of distant banks and lenders. All of the advantages and efficiencies of local commerce are neutralized when we are required to use long-distance, antitransactional currency for local exchange between people. We must earn the currency from one corporation that has borrowed from the central bank in order to pay another corporation for a product it has purchased from yet another. We don’t have an easy way to get the very same product from the guy down the street who knows how to make it better and can ultimately get it to us more efficiently than the factory in Asia.

But the notion of purchasing things with some kind of local currency system, bartering for them with members of our local community, or — worst of all — accepting favors in exchange for other ones feels messy and confusing to us. Besides, Wal-Mart is a big company with lots of insurance and presumably some deep pockets we could sue if something goes wrong. When favors replace dollars, who is responsible for what? Too many of us would rather hire a professional rug cleaner, nanny, or taxi than borrow a steamer from a neighbor (what if we break it?), do a babysitting exchange (do we really like them?), or join a carpool (every day? Ugh). Social obligations are less defined than financial ones.

Our successive disconnects from place and people are what make this final disconnection from value so complete and difficult to combat. We have lost our identification with place as anything but a measure of assets, and we have surrendered our identification with others to an obsession with ourselves against everybody else. Without access or even inclination to social or civic alternatives, we turn to the speculative market to fulfill needs that could have been satisfied in other ways.

Even those of us capable of resisting the market for most of our lives can’t help but cave in once we attempt to raise families of our own. I was actually looking forward to parenthood as an opportunity to disconnect my family from the consumerist pathology of the market and engage in one of the more natural processes still available to us. I was still under the false impression that there was something going on between mother, father, and baby in which no expert, marketer, or website could interfere.

Of course, it turns out that parenthood means enduring a full frontal assault of marketers trying to make a buck off our guilt, fear, and ignorance. While genetic counselors offer prenuptial evaluations of chromosomal compatibility, an industry of fertility experts offers in-vitro alternatives to worried thirty-somethings after they’ve been working unsuccessfully to get pregnant for two whole months. Once the baby is born, an army of professional consultants is available to do what parents and their community have done for millennia. Lactation consultants teach new mothers how to breast-feed, while sleep specialists develop the ideal napping schedules. Parents who think they’re resisting this trend hire the New Age equivalent of the same professionals — “doulas,” midwives, and herbalists perform the same tasks for equivalent fees. For those who don’t know which consultant to hire, there are agencies of meta-consultants to help, for an additional fee.

Parenthood — like so much of basic human activity — has been systematically robbed of its naturally occurring support mechanisms by a landscape tilted toward the market’s priorities. Towns used to have blocks in neighborhoods with old ladies and large families and neighbors who could watch the baby for an hour while you went out and got some groceries. Now, instead of repairing the neighborhood sidewalks, we purchase Bugaboos — the eight-hundred-dollar stroller equivalent of an SUV, complete with shock absorbers, to traverse the potholes. For every thread of the social fabric worn bare by the friction of modern alienation, the market has risen with a synthetic strand of its own. Refusal to participate means risking that your child will be ill equipped — you guessed it — to compete for the increasingly scarce spots at private prep schools and colleges.

As someone who subsidized his early writing career by preparing high school students for their college entrance exams, I can attest to the competitive angst suffered by their parents — as well as the lengths to which many of them are willing to go to guarantee their children’s success. There was a moment in many of my engagements — one that any overpriced SAT tutor will well recognize — when Junior’s worried, wealthy parents would sit me down in their Beverly Hills living room, beyond their son’s hearing range, to ask me the difficult question: hypothetically, what would it take to get me to sit for the exam in place of their son? While I never agreed to accept the cars or cash offered to me, I still wonder how many steps removed my tutoring services were from any other artificial means through which a generation sought to guarantee their children’s place on the speculative side of the economy.

Were these the concerns of Depression-era parents who had experienced lives hindered by the lack of cash, or first-generation immigrants who had escaped from abject poverty, it might be easier to comprehend or excuse their willingness to teach their children to cheat. Their perception of the risks would be understandably magnified. No, the parents in question were hiring tutors to write papers and take tests for children who were already attending thirty-thousand-dollar-a-year private schools, and already in line to inherit multimillion-dollar trust funds. Teachers who challenge the cheating students on their plagiarized work soon learn that parent donors and trustees wield more power than department heads.

The wealthy discount such concerns as needlessly abstract. What does it matter, they ask, when the “real world” is similarly based on cheating and loopholes? (Just to be clear: what’s wrong is that kids end up in classes, schools, and jobs for which they are not prepared. Whenever they have trouble on an assignment or a test in the future — even if they are smart enough to complete it — they believe that they are being challenged beyond their ability. Worst of all, on an emotional level, they conclude that their parents don’t love them just as they are.) Those who don’t count themselves among the privileged classes see it as confirmation of the unfairness of the system, and ample justification for them to do whatever it takes to climb up the ladder themselves. As the arrest of a tiny minority of otherwise identical billionaire stockbrokers, CEOs, and hedge-fund managers teaches us, cheating is wrong only if you get caught.

Kids growing up in such a world inherit these values as their own. This is why American children surveyed will overwhelmingly say they want to grow up to be Bill Gates — the world’s richest man — even though almost none of them want to become software engineers. It is why kids can aspire to win American Idol without ever caring about singing or even music. The money and recognition they envision for themselves is utterly disconnected from any real task or creation of value. After all, the people who actually create value are at the bottom of the pyramid of wealth.

Sadly, that’s not just a perception. The bias of centralized currency to redistribute wealth from the poor to the rich gets only more extreme as the beneficiaries gain more influence over fiscal policy. Alan Greenspan, a disciple of Ayn Rand, repeatedly deregulated markets, leading to the average CEO’s salary rising to 179 times the average worker’s pay in 2005, up from a multiple of 90 in 1994. Adjusted for inflation, the average worker’s pay rose by a total of only 8 percent from 1995 to 2005; median pay for chief executives at the three hundred fifty largest companies rose 150 percent.

The top tenth of 1 percent of earners in America today make about four times what they did in 1980. In contrast, the median wage in America (adjusted for inflation) was lower in 2008 than it was in 1980. The number of “severely poor Americans” — defined as a family of four earning less than $9,903 per year — grew 26 percent between 2000 and 2005. It is the fastest-growing group in the American economy. On a global level, by 1992 the richest fifth of the world was receiving 82.7 percent of total world income, while the bottom fifth received just 1.4 percent. In 1894, John D. Rockefeller, the richest man in Gilded Age America, earned $1.25 million — about seven thousand times the average income in the United States. In 2006, James Simons, a typical hedge-fund manager, “earned” $1.7 billion, or more than thirty-eight thousand times the average income. On top of this, hedge-fund managers’ salaries are taxed at “capital-gains” rates much lower than the rate that average workers pay — about 35 percent of everyone else’s earnings goes to pay taxes of one form or another, and most of that money goes to pay interest on loans taken out by the government from the Federal Reserve Bank, at interest rates set by the bank.

Unlike money paid to workers, the sums siphoned off by the wealthiest brackets are not used to buy things. These funds do not return to the real economy; they are invested wherever return is the highest. The money itself becomes a commodity to be hoarded, saved, and grown. For most investors, this means either placing it overseas, or in the derivatives and futures that make corn, oil, and money more expensive for everyone.

Since the beginning of currency and trading deregulation in the 1970s (when Nixon took the dollar off even a nominal gold standard), money has left the economy for pure speculation at ever faster rates. Over the same years, with less money in the system, the poor began sending mothers of young children to work — at rates that have doubled since then. Meanwhile, for the very first time, America experienced an overall growth of 16 percent in GDP between 2001 and 2007, while the median wage remained unchanged. The share of total income going to the richest 1 percent of Americans, on the other hand, rose to a postwar record of 17.4 percent. Americans work an average of three hundred fifty more hours per year than the average European, and at least one hundred fifty hours more than the average Japanese.

This means less time for Little League, barbecues, and the Parent-Teacher Association, and more indebtedness to credit-card companies and banks to make ends meet. Twelve percent of Americans borrowed money to pay their winter heating bills; 9 percent of them did so with credit cards. The neighborhood, such as it is, becomes a place people struggle to stay in or even to get out of — not a place in which people contribute their time and energy. It’s every family for itself.

This selfishness and individualism reinforces and is in turn reinforced by the avarice that has replaced social relationships in local communities. Adam Smith’s theories of the market were predicated on the regulating pressures of neighbors and social values. The neuroscientist Peter Whybrow has observed that as people meet fewer real neighbors in the course of a day, these checks and balances disappear: “Operating in a world of instant communication with minimal social tethers,” he explains, “America’s engines of commerce and desire become turbocharged.” As Whybrow sees it, once an economy grows beyond a certain point, “the behavioral contingencies essential to promoting social stability in a market-regulated society — close personal relationships, tightly knit communities, local capital investment, and so on — are quickly eroded.”

Instead of working with one another to create value for our communities, we work against one another to help corporations extract money from our communities. When the city of Buffalo, New York, lost dozens of factories to outsourcing along with their manufacturing jobs, it became a national leader in bankruptcies, foreclosures, and urban decay. Over 108 collection agencies have opened to address the problem in Erie and Niagara Counties, hiring over 5,200 phone operators to track down and persuade debtors like themselves to fix their credit. As interest rates on these debts rise to upwards of 40 percent, more wealth is transferred from the poor to the rich, from the real economy to the speculative economy, and out of circulation into the banking ether.

It’s a system in which there is simply not enough real cash left in circulation for people to pay for their needs. No matter what’s happening to the overall economy, the amount of money that consumers owe goes up every year. Again, this is not incidental, but structural, as the total increases every time money is loaned into and then extracted from circulation. According to the Federal Reserve’s figures, consumer credit — mainly credit-card debt — went up from $193 billion in 1973 to $445 billion in 1983 to $886 billion in 1993, and $2,557 billion in 2007. As of June 2008, American households have approximately $13.84 trillion in total debt obligations. This is roughly equivalent to the United States GDP for the entirety of 2007.

In order to get people who have lost their access to cash to spend instead through debt, credit-card companies market credit as a lifestyle of choice. MasterCard’s “priceless” campaign pretends to appeal to those of us who understand that life’s pleasures have become too commodified. It chronicles a day at a baseball game or a beach vacation, and the price of each purchase, presumably achievable only by going into just a bit of debt. Of course, the real joy of the day — the love of a child, or a kiss from a wife — is “priceless,” but it comes only after all those purchases.

The Visa card has replaced money in the European version of Monopoly, and the American version of the Game of Life. Having saturated the college and teen markets with promotions and advertisements — often with kickbacks to schools for the privilege of pitching to a captive audience — the credit-card company is now targeting the “tween” market with pretend debt. According to Hasbro (which acquired the game’s creator, Milton Bradley), it’s meant as an educational experience: “Visa is an opportunity for Mom or Dad to talk to kids about managing money, and that debt isn’t a positive thing.”

Adults who refuse to use plastic instead of paper are scorned in commercials designed to take any remaining stigma away from debtors by placing it on cash payers instead. In one commercial, a long line of masculine debit-card users waiting to buy refreshments at a sports game groan as a smaller, nerdy man uses time-consuming cash. Actually paying for something with money is depicted as an emasculating fumble for change.

We like to blame corporations for this mess, but many of them are in almost exactly the same predicament. Most of the Fortune 500 companies are just the names on mountains of debt. The total value of any company’s shares — market capitalization — can be twenty, fifty, or several hundred times its actual annual earnings. Some multibillion-dollar companies don’t actually earn any money per share at all. But because corporations borrow their money from the same institutions the rest of us do, they are subject to the very same rules.

Like towns drained of their ability to create value through local business, many companies find themselves robbed of their most basic competencies by macroeconomic forces that push them toward spreadsheet management and reckless cost cutting. Thanks to their debt structure, corporations are required to grow. This means opening more stores, getting more business, and selling more products (increasing the “top line”), or cutting jobs, lowering salaries, and finding efficiencies (decreasing expenditures). Maintaining a great, sustainable business is not an option — not when shareholders and other passive institutional investors are looking for return on the money they have themselves borrowed and now loaned to the corporation. Stocks don’t go up when corporations make a steady income; they grow when companies grow, or at least appear to.

When Howard Schultz, the founder of Starbucks, returned to the helm of his company in 2007, he found his iconic coffee brand watered down by excessive expansion. Opening a Starbucks on every city block had sounded good to investors who hoped they had gotten in on the next McDonald’s, but the strategy had damaged the quality and experience consumers sought from a deluxe coffee bar in the first place. As Schultz explained in a candid memo posted on the Internet without his knowledge, by introducing “flavor locked packaging” and automatic espresso machines, “we solved a major problem in terms of speed of service and efficiency, but overlooked the fact that we would remove much of the romance and theatre that was in play with the use of the [La Marzocco] machines.” The mandate to open an outlet each day resulted in “stores that no longer have the soul of the past and reflect a chain.”

Last year, the president of Ethiopia flew to Starbucks’ corporate headquarters in Seattle, hat in hand, to ask the company to credit his country for the export of the beans used in some of its standard coffee flavors. But Starbucks, understanding that helping Ethiopia brand its beans would hurt its own bargaining leverage, refused. From Starbucks’ perspective, coffee is a commodity to be sourced from the lowest bidder; once beans have local identity and can be asked for by name, the locality producing them has pricing power. Only when threatened by the possibility of a public-relations disaster did the company relent.

I’m regularly called in by companies looking to improve what they call their “stories” — the way consumers and shareholders alike perceive them. But when I interrogate them about what their companies actually do, many are befuddled. One CEO called me from what he said was a major American television manufacturer. I happened to know at the time that there were no televisions manufactured in the United States. But I went along with the charade.

“So you make television sets?” I asked. “Where?”

“Well, we outsource the actual manufacturing to Korea.”

“Okay, then, you design the televisions?”

“Well, we outsource that to an industrial design firm in San Francisco.”

“So you market the televisions?”

“Yes,” he answered proudly. “Well,” he qualified, “we work with an agency in New York. But I am directly involved in the final decisions.”

Fulfillment and delivery were handled by a major shipping company, and accounting was done “out of house,” as well.

Just what story was I supposed to tell? The company no longer did anything at all, except serve as a placeholder for processes that happened in other places.

The problem with all of this outsourcing isn’t simply the loss of domestic manufacturing jobs, however painful that might be to those in the factory towns decimated by the movement of these jobs overseas. The bigger problem is that the outsourcing companies lose whatever competitive advantage they may have had in terms of personnel, innovation, or basic competency. During the famous dog-food-poisoning crisis of 2007, worried consumers called their dog-food companies for information. Were the brands getting their chow from the plant in China responsible for the tainted food? Many of the companies couldn’t answer the question. They had outsourced their outsourcing to another company in China that hadn’t yet determined who had gotten which food. The American companies didn’t even do their own outsourcing.

As a substitute for maintaining any semblance of competence, companies resort to branding and marketing. When Paul Pressler — a former Disney executive — accepted his post as CEO of the Gap, he explained at his inaugural press conference that he had never worked a day in the garment industry. He didn’t express this as a deficit, but as a strength. Instead, he would bring his knowledge of marketing and consumer psychology to the forefront — as well as his relationships with cultural icons like Sarah Jessica Parker. While Parker made some great TV commercials, they weren’t enough to put better clothes on the racks, and under pressure, Pressler resigned in 2007. The company is now struggling to stay alive.

Other companies seek to remain competitive by dismantling the private sector’s social safety net — pensions, benefits, and the steady salary increases won by long-time employees. In 2007, Circuit City came under pressure from big-box stores such as Wal-Mart and Best Buy, whose young employees earned less than its own. The company decided to dismiss 3,400 people, about 8 percent of its workforce. They weren’t doing a bad job, nor were the positions being eliminated entirely. It’s just that the workers had been employed for too long and as a result were being paid too much — between ten and twenty dollars per hour, or right around the median wage for American workers. By definition, to stay competitive, Circuit City would have to maintain a workforce making less than that average. The company blamed the move on price cuts for flat-panel television sets made in Asia and Mexico, which had squeezed its margins.

The corporatist justification for the layoffs, courtesy of McKinsey & Company, was that “the cost of an associate with 7 years of tenure is almost 55 percent more than the cost of an associate with 1 year of tenure, yet there is no difference in his or her productivity.” The assertion that a company cannot leverage greater productivity from a more experienced employee is at best questionable and at worst a sign of its own structural inability to properly utilize human competence. That it sees no path to letting employees participate in the value they have created over time for the company is pure corporatism.

Not that this value is even recognized by the spreadsheet. Many assets — like customer and employee satisfaction, innovation, customer loyalty during a crisis, numbers of unsolicited applications for jobs, or contribution to the state of the industry — remain unrecorded in the Excel file, off the quarterly report, and are misjudged as tangential, “feel-good” bonuses, akin to winning the intra-office softball game. Of course, these are some of the most important measures of a company’s success both as a business and as a human enterprise.

A few radical business theorists — like Harvard’s Bob Kaplan — have promoted the use of “scorecards” designed to measure and include some of these unconventional metrics in the overall appraisal of a company’s worth. The traditional spreadsheet, Kaplan believes, is like a supply curve from Microeconomics 101: “It tells you what things cost but not what they’re worth. The Balanced Scorecard is like a multidimensional demand curve. It tells you what’s creating value.” Still, the scorecard itself boils down to more numbers.

Kaplan’s former partner, Portland State University’s H. Thomas Johnson, thinks the Balanced Scorecard is little better than any other. In his opinion, the reduction of every value to a piece of quantitative information is itself the crime. According to Johnson, human activity, commerce, and creativity are closer to real life than they are to math, and the focus on metrics “contributed to the modern obsession in business with ‘looking good’ by the numbers no matter what damage it does to the underlying system of relationships that sustain any human organization.” Hitting quarterly targets earns CEOs their options bonuses. The “underlying system of relationships” only matters to people who have the luxury of working in a business that isn’t stuck on the compounding-interest-payments treadmill.

When it is even considered, creating real, sustained value for customers, employees, partners, or, worst of all, competitors, is less often seen as a plus than a problem. In the zero-sum logic of corporatist economics, creating value for anyone other than the shareholders means taking value away from the shareholders. If employees are retiring with money to spare, it means they are being paid too much. If customers get more use out of a car or computer than promised, it means too much was spent on the materials. If a product actually works, then it might solve the problem that a corporation was depending on for its income.

For example, most of us grew up in the era of synthetic insecticides and “crack and crevice” aerosol roach killers. Spraying the kitchen meant poisoning the pets, and the chemicals ended up polluting yards and groundwater. Besides, the formulations worked for only so long before the roaches would become immune, and new, more powerful sprays would have to be deployed. Then, in 1979, some researchers at American Cyanamid in New Jersey came up with a new odorless, tasteless insecticide that was much less toxic to humans and pets, and broke down into harmless ingredients before it could get to any groundwater. The only catch was that roaches needed to eat the ingredient. So the clever scientists dipped communion wafers in the insecticide and waited for roaches to voluntarily eat them. This worked so well that they toyed with the idea of selling the wafers under the name “Last Supper.”

Combat, the name they settled on, was so successful at killing roaches that by the end of 2000 Pest Control magazine ran the headline “Are Cockroach Baits Simply Too Effective?” After peaking at $80 million in 1995, the market for consumer-grade roach products had begun to shrink. It has gone down by 3 to 5 percent every year since then. Combat has killed its market along with all those roaches. Derek Gordon, a marketing VP at Combat’s parent, Clorox, put on a happy face, saying, “If we actually manage to drive ourselves out of business completely, frankly we’d feel like we did the world a service.” Clorox execs seemed less impressed by Combat’s service record, and sold off the brand to Henkel, of Germany, as part of a larger deal. Even though they had come up with one of the most truly successful industrial-chemical products in modern history, the scientists at Combat were now part of a declining box in the balance sheet and had to be discarded. Their value as innovators — or the value they had created for so many urban dwellers — was not part of the equation.

The less people spend on killing roaches, the worse it is for the economy by corporate and government measures. The universal metric of our economy’s health is the GDP, a tool devised by the National Bureau of Economic Research to help the Hoover administration navigate out of the Great Depression. Even the economist charged with developing the metric, Simon Kuznets, saw the limitations of the policy tool he had created, and spoke to Congress quite candidly of the many dimensions of the economy left out of his crude measure. Burning less gas, eating at home, enjoying neighbors, playing cards, and walking to work all subtract from the GDP, at least in the short term. Cancer, divorce, attention-deficit/hyperactivity disorder diagnoses, and obesity all contribute to GDP. The market works against human interest because of the way it measures success. And its measurement scheme has been based on tracking a currency whose bias toward scarcity and hoarding isn’t even generally recognized.

Nor do the aggregate GDP figures measure how many people are involved in all the spending. As Jonathan Rowe, director of West Marin Commons, testified at a Senate hearing, “they do not distinguish between a $500 dinner in Manhattan and the hundreds of more humble meals that could be provided for the same amount. A socialite who buys a pair of $800 pumps at Manolo Blahnik appears to contribute forty times more to the national well-being than does the mother who buys a pair of $20 sneakers for her son at Payless.” Centralized currency’s bias toward the accumulation of wealth by the already wealthy is camouflaged by the measures we use to gauge economic health.

Those who should be our best economics journalists and public educators seem oblivious of the way our business practices and fiscal policies drain value from our society in the name of false metrics. Although free-market advocates such as The Wall Street Journal, The Economist, and Financial Times are written with the interests of the businessman and investor in mind, their editorial staffs are educated and experienced enough to contend with these very basic contradictions. Instead, they continue to depict the economy as a natural ecology, whose occasionally brutal results are no worse than those of cruel nature herself. That which doesn’t kill us makes us stronger, anyway.

Journalists write this way only until the supposedly free and unfettered market comes after the periodical they happen to work for. When AOL bought Time Warner along with its portfolio of magazines, including Time, People, and Sports Illustrated, writers and editors at those publications complained that their periodicals were being turned into assets. Editorial budgets went down, writers were instructed to become more advertiser-friendly, and the integrity earned by the magazines through years of hard work was being spent all at once on lowbrow television specials and cross-promotions.

The Wall Street Journal didn’t shed a tear over such developments. As an independent publication run by the Bancroft family’s Dow Jones company since 1902, the Journal ran articles describing only the process through which Time Warner’s “brands” would be updated, its divisions made more efficient, and its overpaid staff trimmed down. Within a few years, however, it was The Wall Street Journal fending off an unsolicited $5 billion offer from Rupert Murdoch. A pervasive feeling among investors that print publications were imperiled by the Internet had led to a decline in all newspaper stock prices — making the Journal an easy target, even though its website was one of the few profitable online newspaper ventures, earning far more than its print edition.

All of a sudden the tables were turned. Editors who had long argued for free-market principles now saw the benefit in keeping a company small and independent — especially after it had gained over a hundred years of reputation and competency in its field. One of the owners wrote an editorial arguing that “a deal with Rupert Murdoch would not be a deal between partners with shared values. One of Murdoch’s stated goals of the purchase is to use The Wall Street Journal to shore up his new business cable channel. By Murdoch’s own admission, this so-called business-friendly television channel would shy away from reporting scandals, and concentrate on the more positive business news.” In a “wag-the-dog” scenario even more preposterous than the one imagined by Hollywood comedy writers, a corporation buys a business news brand as a public-relations hedge on its investments.

What these editors now understood was that by becoming a part of News Corporation, the Wall Street Journal staff would lose its ability to create value through its newsgathering and analysis. News Corp. was buying the newspaper for the value it could extract from the venerated media property. The Wall Street Journal’s name and, for a time, its editors and writers could lend support to Murdoch’s effort to build a TV business-news brand. The Economist depicted the acquisition of Dow Jones as gaining “the media equivalent of a trophy wife.”

Even allies of corporatist culture cannot be allowed to thrive on the periphery. Because it was itself a publicly held company, Dow Jones had nowhere to hide in the open market it had defended for so long. Now the editors — off the record more than on (at least until they were fired or resigned) — were railing against the concentration of global media ownership, warning about the political influence of foreigners on American business, and touting the necessity for family-owned newspapers to maintain their impact and high standards by remaining independent of centralized business interests.

But to do that, they would have had to find a way to remain independent of all centralized media, including the biased medium we call centralized currency.

Let Them Eat Blog

Ironically, it was the threat of competition from a decentralized medium — the Internet — that rendered the Journal temporarily weak enough to be conquered by a centralized medium it had unwittingly supported for a century: money.

In one sense, the Internet breaks all the rules imposed by centralized currency and the speculative economy. Value can be created, seemingly, from anywhere. An independent clothing designer can use consumer-grade equipment to shoot pictures of her clothes, post them online, take orders, and even print the postage. No need to pitch the department stores for space on their precious sales floors, to approach major clothing lines for an anonymous position under one of their labels, or to utilize any of the corporate middlemen traditionally extracting value from both the designer and her customers.

Craftsmen from remote regions use communal websites to export products that previously couldn’t make it beyond the confines of their own villages. Film students post their low-budget videos on YouTube and earn mainstream attention along with big Hollywood contracts. Politicians use their websites to raise funds from individuals, and amass more capital through many small donations than they would have through a few big ones. A few hundred thousand hobbyists can collaborate on a free online resource, Wikipedia, that beats Britannica in breadth, usage rates, and often accuracy. Another group develops and maintains a web browser, Firefox, that works better and more safely than Microsoft’s.

Corporate charters allowed wealthy élites to monopolize industries; the Internet allows competition to spawn anywhere. Only the best-capitalized companies could finance the construction and maintenance of Industrial Age factories; an Internet business can be run and scaled from a single laptop. Monopoly currencies and a few centuries of legislation worked to centralize value creation; the Internet works toward decentralizing value creation and promoting the work of those on the periphery and direct transactions between them.

At the dawn of the Internet era, Marxists, feminist intellectuals, and postmodernists celebrated the decentralization they believed would soon occur on every level of society. They saw in new media the emergence of a truly social and organic human collective. Instead of being controlled and artificially divided by ideologies, class boundaries, nations, or even gender, humans would now become part of what the Frenchmen Gilles Deleuze and Félix Guattari called a rhizome. The rhizome is a peculiarly horizontal and nonhierarchical plant structure — like water lilies or a ginger root — that is capable of producing both the shoot and the root systems of a new plant. New growth and value can come from anywhere. Likewise, a rhizomatic culture would be constantly negotiating meaning and value wherever meaning and value needed to be determined — instead of through some arbitrary central authority.

The Internet and its many hyperlink structures were understood as the true harbingers of this rhizomatic culture. Other metaphors abounded: “cyberia,” “fractals,” “hyperspace,” “dynamical systems,” “rain forests” — all described the same shift toward a more organic and bottom-up, outside-in cultural dynamic. Ideas and value could emerge from anyone, anywhere, and at any time.

At least to a point.

While digital technologies may foster the creation and duplication of nearly every kind of value out there, all this value is still measured by most people in terms of dollars and cents. Napster was a sensation because it saved consumers money (while costing some corporations their revenue). Hackers made fewer headlines for coding brilliantly than for selling out and getting listed on NASDAQ. Participation itself is made possible by purchasing equipment and bandwidth from one of a dwindling number of conglomerating megacorporations.

For those with time and money to spend, there’s certainly a whole lot of terrific activity occurring online that flies in the face of contemporary corporatist culture. People with rare diseases can find support groups, pregnant women can get free advice, creative types can collaborate on new kinds of projects, amateur drag racers can trade car parts, rock bands can find audiences, nerds can find friends, activists can organize rallies, bloggers can expose political corruption, and people can share their hopes and dreams with one another in forums safer than those available to them in real life.

Still, apart from a few notable and, sadly, declining exceptions to the rule, the energy fueling most Internet activity is not chi (life energy) but cash — or at least chi supported by cash. However horizontal its structure, the Internet rhizome is activated by money: old-fashioned, top-down, centralized currency. As a result, what occurs online is biased toward the very authorities that the Internet’s underlying network structure might seem predisposed to defy. Things can feel — or be made to feel — novel and revolutionary, even though they still constitute business as usual.

Last year, a student of mine — a clever woman with a good sense of media — sent me a link to a website that had confused her. I clicked on the URL and a video played images from a Matrix-like postapocalyptic future. A narrator spoke: “There are those who still remember how it all began. How light and reason had retreated. How greed gave way to great power. How power gave way to great fear. The Great War swept across all the lands, neighbor against neighbor, city against city, nation against nation. The Corporate Lords claimed the world. Creativity and self-expression were outlawed. The Great Darkness had begun. But they speak in low whispers of the legend that One will come. A gifted child. Legend speaks of him finding the Magic Gourd that he will fill with an elixir to restore the soul of mankind.”

That elixir, it turns out, is Mountain Dew. The film, directed by Forest Whitaker, is for a web promotion called DEWmocracy. Harnessing the tremendous democratizing force of the Internet, Mountain Dew let its online community select the color, flavor, label, and name of its next sub-brand of soda — from a group of four preselected possibilities. Arriving just in time for the presidential election of 2008, the promotion pretended to encourage democratic thinking and activism on the part of Mountain Dew’s young consumers — when it was really just distracting them from democratic participation by getting them to engage, instead, in the faux populist development of a beverage brand.

Maybe the surest sign that the threat of the rhizome has been all but neutralized is corporate America’s willingness, finally, to celebrate the Internet’s revolutionary potential. Nowhere was Internet culture lauded more patronizingly than by Time magazine’s 2006 “Person of the Year” issue. We can only imagine the editors’ satisfaction at turning the blogosphere on its head: if those pesky bloggers are going to give us hell no matter whom we choose, why not just choose them? Let’s show the great, unwashed masses of YouTubers that we’re on their side for a change. A little silver mirror on the cover reflected back to each reader the winner of the award: you.

“Welcome to your world,” the article greeted us. Welcome to what? Weren’t we online already? “For seizing the reins of the global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game, Time’s Person of the Year for 2006 is you.” There was something pandering about all this false modesty. It only betrayed how seriously the editors still took their role as opinion-makers: Our liberation from top-down media isn’t real until the top-down media pronounces it so.

And where is the evidence that we’re actually liberated? Sure, YouTube, Facebook, and Wikipedia constitute a fundamental change in the way content is produced. But for the corporations profiting off all this activity, it’s simply a shift in the way entertainment hours are billed to consumers. Instead of our paying to watch a movie in the theater, we pay to make and upload our own movies online. Instead of paying a record company to listen to their artists’ music on a CD player, we pay a computer company for the hardware, an Internet-access company for the bandwidth, and a software company for the media player to do all this. And that’s when we’re doing it illegally, instead of just paying ninety-nine cents to Apple’s iTunes.

“Silicon Valley consultants call it Web 2.0, as if it were a new version of some old software. But it’s really a revolution,” Time enthused. Peer-to-peer networking is terrific, indeed, but revolutions are those moments in history when a mob storms the palace and cuts off the heads of the people who have been exploiting them. This is something else.

Yes, bloggers and YouTubers have had many successes, particularly against government. They have brought down a Republican senator, an attorney general, and even made headway against the repressive net censorship of the Chinese. YouTube not only embarrassed Barack Obama about his preacher; it also exposed political repression in Myanmar and FEMA incompetence in New Orleans. But this activity is occurring on a platform that has almost nothing to do with the commons. The Internet may have been first developed with public dollars, but it is now a private utility. We create content using expensive consumer technologies and then upload it to corporate-owned servers using corporate-owned conduits, for a fee. More significantly, we are doing all this with software made by corporations whose interests are embedded in its very code.

User agreements on most video sites require us to surrender some or all of the rights to our own creations. iTunes monitors our use of music and video files as keenly as marketers monitor the goings-on at MySpace and Second Life for insights into consumer behavior. Gmail’s computers read our email conversations in order to decide which ads to insert into them. Each and every keystroke becomes part of our consumer profile; every attempt at self-expression is reduced to a brand preference.

Microsoft’s operating system interrupts users attempting to play DVDs that the system suspects may not really belong to them by asking whether or not rights to the movie have been purchased and warning of the consequences of owning illegal files. The iPhone is locked to prevent anyone from using a carrier other than AT&T, the majority of our software is closed to user improvement, and most websites accept little input other than our shopping choices and credit-card numbers.

Had Time pronounced us Person of the Year back in 1995 — before the shareware-driven Internet had been reduced to an electronic strip mall and market survey — it might have been daring, or even self-fulfilling. Back then, however, the magazine was busy deriding the Internet with sensationalist and inaccurate stories of online child porn. In the intervening years, Walt Disney and its fellow media conglomerates may have cleaned up Times Square, but on MySpace, owned by News Corp., teens were already stripping for attention and for gifts off their “wish lists.” Time’s hollow celebration meant that corporate America was secure enough in the totality of its victory that it could now sell this revolution back to us as a supposed shift in media power. Let them eat blog.

Yes, we are using media differently, sitting in chairs and uploading stuff instead of sitting on couches and downloading stuff. And there are new opportunities for finding allies and organizing with them. But in the end we’re still glued to a tube, watching mostly television shows, arguing too often like angry idiots, surrendering the last remains of our privacy, and paying a whole lot more to mostly the same large corporations for the privilege. Time’s choice for Person of the Year was announced on Time Warner–owned CNN, in a special program that may as well have been an infomercial for the user empowerment offered by Time Warner–owned broadband services AOL and Road Runner. One way or another, each of us anointed Persons of the Year was still just a customer.

Our acceptance of this role along with its constraints is largely voluntary. We would rather be consumers of unalterable technologies that dictate the parameters of our behaviors than the users of tools with less familiar limits. Technologies resistant to our modification tend to be easier and more dependable for users, but also safer for corporations. Who cares if we can’t upload our own software into an iPhone as long as the software Steve Jobs has written for us already works well? Likewise, the early Macintosh computer worked so much more dependably than Windows for novice users precisely because Apple, unlike Microsoft, required its users to use only Apple peripherals. Windows tried to accommodate everyone, so incompatibilities and code conflicts inevitably arose. By attempting to accommodate all comers, Microsoft (the company we like to call monopolist) was actually permitting value creation from the periphery instead of monopolizing it all in the name of hardware compatibility.

Besides, on an Internet where an errant click might introduce a virus to your hard drive or send your banking passwords to a criminal in Russia, the stakes are high. Many of us would gladly surrender the ability to modify our computers or even share music files for the seeming security of a closed and unalterable piece of technology. In exchange for such safety and dependability, we can’t use these closed devices in ways their manufacturers don’t specifically want us to. We can’t change the technologies we purchase to get value out of them that wasn’t intended to be there. We can’t provide applications for one another’s Verizon or Apple cell phone without going to the phone operator’s centralized online store and giving it a cut of the money. We cannot create value for ourselves or for one another from the outside in.

But value can still be extracted from the inside out. Technology providers maintain the ongoing ability to change the things they’ve sold us. Software can be upgraded remotely with or without users’ consent; cable television boxes can have their functionality altered or reduced from the home office; the OnStar call-for-help systems in GM cars can be switched on by the central receiving station to monitor drivers’ movements and conversations; cell phones can be locked or “bricked” by a telecom operator from afar. In the long run, we surrender the ability to create new value with interactive technologies for the guarantee of getting at least most of what we want out of them as consumers. These sterile technologies generate less new growth, promote a less active role for users, and prevent anyone but the company who made them from creating additional value.

The more we are asked to adapt to the biases of our machinery, the less human we become. In spite of its chaotic and organic propensities, the Internet isn’t reversing the Industrial Age role exchange between people and corporations. The Internet provides human beings with an even more virtual, controlled, and preconfigured landscape on which to work and live, while providing corporations with the equivalent of corporeal existence for the very first time. Out on the Web, people have no more substance or stature than any virtual entity — and in most cases, less. We become more virtual while our corporate entities become more real.

The people may as well be the machines. Once high-tech security-minded employers in California and Cincinnati get their way in the courts, they’ll be materializing this vision by implanting workers with the same kinds of RFID tags Wal-Mart puts in its products. A central-office computer monitors exactly who is where and when, opening doors for those who have clearance. While implantation isn’t yet mandatory for existing laborers, the additional and convenient access to sensitive materials it affords makes voluntary implantation a plus for worker recognition and advancement.

Increasingly, we find ourselves working on behalf of our computers rather than the other way around. Amazon’s Mechanical Turk program gives people the opportunity to work as assistants to computers. Earning pennies per task, users perform hundreds or thousands of routine operations for corporate computers that don’t want to waste their cycles. There are credits available for everything from finding the address numbers in photos of houses (three cents a pop) to matching Web-page URLs with the product that is supposed to appear on them (a whopping nickel each).
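Those rates are worth pausing on. A quick back-of-the-envelope calculation, sketched here in Python, shows what “pennies per task” means in practice; the three-cent rate comes from the paragraph above, while the $7.25-an-hour figure is an assumed benchmark for comparison, not a number from the text:

    # Back-of-the-envelope only: three cents per task is the rate cited above;
    # the $7.25/hour wage is an assumed benchmark for comparison.
    cents_per_task = 3
    benchmark_wage_per_hour = 7.25

    tasks_per_hour = benchmark_wage_per_hour / (cents_per_task / 100)
    seconds_per_task = 3600 / tasks_per_hour

    print(f"{tasks_per_hour:.0f} tasks per hour")            # roughly 242 tasks
    print(f"one task every {seconds_per_task:.0f} seconds")  # roughly one every 15 seconds

In other words, a worker would have to complete a task roughly every fifteen seconds, without pause, just to approach an ordinary hourly wage.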

In the constant battle against automated spam, websites require users to prove they are human. In order to register for a site or make a comment, the user must complete a “challenge,” such as identifying the distorted letters in an image and typing them in. One particularly dastardly spammer has gotten around this by employing people to do what his computers can’t: he has created a game that offers pornography to users who complete his computer’s challenges. The program copies the picture it can’t decode from one location on the Internet and displays it for the porn-seeking human. The human completes the task and is rewarded with a titillating image — the same way a lab rat earns a piece of cheese for ringing a bell.
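The mechanics of the trick are simple enough to sketch. What follows is a minimal, purely illustrative outline in Python of how such a relay could work in principle; every URL, endpoint, and function name here is hypothetical, and the human-facing step is simulated at the command line rather than on an actual bait website:

    # Hypothetical sketch of the CAPTCHA relay described above.
    # All URLs are invented (.invalid) and the "bait site" is simulated
    # at the command line; this outlines the flow, not any real service's API.
    import requests

    TARGET_SIGNUP = "https://target-site.invalid/signup"       # site the spammer wants to register on
    CAPTCHA_IMAGE = "https://target-site.invalid/captcha.png"   # challenge image that site serves

    def fetch_captcha(session: requests.Session) -> bytes:
        """Pull the challenge image the target site serves with its signup form."""
        return session.get(CAPTCHA_IMAGE).content

    def relay_to_human(captcha_png: bytes) -> str:
        """Show the undecodable image to the porn-seeking visitor and return what they type.
        Simulated here by saving the image and reading an answer from stdin."""
        with open("relayed_captcha.png", "wb") as f:
            f.write(captcha_png)
        return input("Type the letters shown in relayed_captcha.png: ")

    def submit_signup(session: requests.Session, solved_text: str) -> bool:
        """Feed the human's answer back into the target site's registration form."""
        response = session.post(TARGET_SIGNUP, data={"captcha": solved_text})
        return response.ok

    def relay_once() -> bool:
        with requests.Session() as session:
            image = fetch_captcha(session)         # the picture the computer can't decode
            answer = relay_to_human(image)         # the lab rat rings the bell
            return submit_signup(session, answer)  # the spammer collects the cheese

The human never sees the site being attacked; from their side, solving the puzzle is simply the price of the reward.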

While this artificial artificial intelligence may nudge computers beyond their current limitations, it does so by assigning mechanical tasks to living people in the same way a microchip farms out cycles to its coprocessors.

In the 1950s, visionaries imagined that technology would create a society in which work would be limited to the few tasks we didn’t want our machines doing for us. The vast majority of our time was to be spent at leisure — not in boredom, racking up three-cent credits on our laptops, or performing rote operations on behalf of microprocessors in return for some pixels representing a breast.

But these operations are big business — big enough to be outsourced. Workers in massive Chinese digital sweatshops perform the computer tasks that those of us in wealthier nations don’t have time to do. A single factory may hold several hundred workers who sit behind terminals in round-the-clock shifts. Amazingly, this work is not limited to data entry, spam evasion, or crunching numbers. In one of the more bizarre human-machine relationships to emerge on the Internet, Chinese day laborers play the boring parts of online games that Westerners don’t want to bother with — all the tiny tasks that a player’s fictional character must perform in order to earn pretend cash within a game world. People who want to participate in online game worlds without actually playing the games will buy game money on eBay from these digital sweatshops instead of earning it. With millions of people now participating in games like Second Life and World of Warcraft, the practice has become commonplace. Current estimates number the Chinese labor force dedicated to winning game points on behalf of Westerners at over one hundred thousand. There are even published exchange rates between game money and U.S. dollars.

This, in turn, has motivated the players within many games to start pretend businesses through which real cash might be earned. The biggest business in the online game Second Life is prostitution. Pretty female avatars engage in sex animations with other players in return for in-game money, which they exchange for real currency on eBay. When players get good or famous enough at this, they move up a level in the business hierarchy, construct bordellos or sex clubs, and then hire other players to do the actual online coupling. Finally, Linden Lab, the corporation that owns Second Life, charges the bordello proprietors for the space and cycles they use on the web server.

The point is not that virtual prostitution is immoral or even unseemly; it’s that when we have the opportunity to develop a “second life” — a fantasy realm unencumbered by the same scarcity and tiered system of labor we endure in the real world — we end up creating a market infused with the same corporatist ground rules. If people pretended to be prostitutes in an online fantasy realm instead of providing the Internet equivalent of phone sex for money, it might at least indicate a willingness to use an entertainment medium — a play space — for play.

And Second Life is a mere microcosm of the Internet at large. The “open-source” ethos encouraging people to work on software projects for free has been reinterpreted through the lens of corporatism as “crowd sourcing” — meaning just another way to get people to do work for no compensation. And even “file-sharing” has been reduced to a frenzy of acquisition that has less to do with sharing music than it does with filling the ever-expanding hard drives of successive iPods. Cyberspace has become just another place to consume and do business. The question is no longer how browsing the Internet changes the way we look at the world; it’s which browser we’ll be using to buy and sell stuff in the same old world.

Those of us who don’t understand the capabilities of computers are much more likely to accept the limits with which they are packaged. Instead of getting machines to do what we might want them to do, the machines and their programmers are getting us to do what they want us to do. Writing text instead of just reading text is certainly a leap forward — but when web publishing is circumscribed by software and interfaces from Amazon and Google, we must at least understand how these companies are limiting our creations and the value we hope to derive from them.

Where does the number of new job applicants or level of worker satisfaction fit in an Excel spreadsheet’s bottom line? Does Blogger.com give a person the ability to post something every day, or does the bias of its daily journal format compel a person to write shorter, more frequent and superficial posts at the expense of longer, more considered pieces? Do electronic trading sites encourage certain kinds of trades, at certain frequencies? What does it mean that a person’s name and picture in Facebook are posted next to how many “friends” he has accumulated? Why would Facebook choose to highlight this particular number? What are the defaults, what can be customized easily, and what can’t? The more automatically we accept the metrics and standards assumed by a program, the more tied we are to its embedded values. If we don’t really understand how arbitrarily computer programs have been designed, we will be more likely to look at software as something unchangeable — as the way things are done rather than just one way to do things.

This is not how the Internet’s early pioneers and developers saw their network being used. They envisioned the interactive revolution as the opportunity to rewrite the very rules by which society was organized — to reprogram the underlying codes by which we live. Contrary to popular mythology about them, these researchers had less allegiance to the Defense Advanced Research Projects Agency (DARPA) and the U.S. military than they did to the pure pursuit of knowledge and the expansion of human capabilities. Although their budgets may have come partly from the Pentagon, their aims were decidedly nonmilitary. As seminal essays by World War II technologists Vannevar Bush, Norbert Wiener, and J.C.R. Licklider made clear, the job before them was to convert a wartime technology industry into a peacetime leap forward for humanity. Bush, FDR’s former war advisor, wrote of a hypothetical computer or “Memex” machine he intended as an extension of human memory. Wiener, the founder of “cybernetics,” believed that lessons in feedback learned by the Air Force during the war could be applied to a vast range of technologies, giving machines the ability to extend the senses and abilities of real people. Licklider’s work for DARPA (then called ARPA) was based entirely on making machines more compatible with human beings.

The Internet itself developed around new models of resource sharing. This is why the code was written allowing computers to “talk” to one another in the first place: so that many different researchers could utilize the precious operational cycles of the few early computers in existence at that time. This ethos then extended naturally to the content those researchers were creating. The more access people had to the ideas, facts, and discoveries of others, the better for everyone.

It was a pervasive societal norm, yet one so contrary to the dictates of corporatism that AT&T actually turned down the opportunity to take over the early Internet in 1972. A medium based on sharing access and information was anathema to an economy based on central authority, hoarding, and scarcity. AT&T saw “no use” for it.

Of course, thanks to public money, university interest, and tremendous social desire, the Internet came into being. The software created to run it was developed almost entirely by nonprofit institutions and hobbyists. The urge to gain and share the ability to communicate directly with others was so great that students and researchers wrote software for free. Pretty much everything we use today — from email and web browsers to chat and streaming video — came out of the computer labs of places like the University of Chicago, Cornell, and MIT.

Meanwhile, the emergence of interactive technology was beginning to change the way everyone experienced broadcasting and other top-down media. A device as simple as the remote control gave television viewers the ability to deconstruct the media they were watching in real time. Instead of sitting through coercive commercials, they began to click away or even fast-forward through them. Camcorders and VCRs gave people the ability to make their own media, and demystified the process through which the media they watched was created. Stars lost some of their allure, commercials lost their impact, and newscasters lost their authority.

As computer technology eventually trickled down to the public via consumer-grade PCs, people found themselves much more engaged by one another than with the commercial media being pumped into their homes via cable. The Interactive Age was born. People shared everything they knew with one another. And since computers at that time were still as easy to program as they were difficult to use, people also shared the software they were writing to accelerate all this sharing. Programmers weren’t in it for the money, but for the value they were able to create for their fellow netizens, and perhaps the associated elevation in their reputation and social standing.

An early study showed that the Internet-connected home watched an average of nine hours less commercial television per week than its nonconnected counterparts. A people that had been alienated from one another through television and marketing were now reconnecting online in a totally new kind of cultural experience. Something had to be done, and it was.

Mainstream media corporations began by attempting to absorb the assault and assimilate the new content. It wasn’t easy. When bloggers like Matt Drudge released salacious news about the president, traditional news gatekeepers had to keep up or lose their exclusive authority over the coverage of current events. Major media circled the wagons and became hyper-centralized, debt-laden bureaucracies. The more media empires merged and conglomerated, the less control they seemed to have over the independently created media that trickled up through their channels.

Crudely drawn animations like The Simpsons, Beavis & Butt-Head, and South Park began as interstitial material or even independent media, but became so popular that they demanded prime-time slots on networks and cable. Although the profits still went to the top, the content flowing through the mainstream mediaspace seemed to be beyond the control of its corporate keepers. Gary Panter, an artist and animator for Pee-wee’s Playhouse, wrote a manifesto arguing that the counterculture was over; artists should simply make use of the market — turn the beast against itself by giving it entertainingly radical content that it couldn’t help but broadcast. His friend Matt Groening followed the advice and sold The Simpsons to Fox, making the brand-new, otherwise money-losing TV network a tremendous success. The fact that this may have single-handedly funded Fox News notwithstanding, it appeared that a marriage between radical content and the mainstream media infrastructure might be in the making.

As much as it frightened movie studios and protective parents, a radical content revolution didn’t really threaten media conglomerates, as long as they owned the conduit on which all the content was broadcast. Beavis & Butt-Head’s wry commentary may have undermined MTV’s music-video business, but there was always more content for the network to put in its place. Perhaps the Internet could become an adjunct to the media market rather than its competitor.

To make markets, however, speculators had always sought to exploit or create scarcity. Nothing seemed to be scarce about the Internet. It was an endless and ever-growing sea of data, which everybody shared with everybody else, for free. Abundance wasn’t just a byproduct; it was the net’s core ethos. But that early study showing how Internet households watched less TV revealed a scarcity that corporate media hadn’t considered before: the limits of human attention. Sure, the Internet might be infinite — but human beings only had so many “eyeball hours” to spend on one medium or another. The precious real estate in the Internet era was not server capacity or broadcasting bandwidth, but human time.

At last, there was a metric compatible with the scarcity bias of corporatism. Wired magazine, which had already established itself as the voice of business online, announced that we were now living in an “attention economy” in which success would be measured by the ability to garner users’ eyeball hours with “sticky” content. The trick would be to develop websites that people found hard to navigate away from — virtual cul-de-sacs in which users would get stuck. A web marketing company called Real Media took out full-page ads in net business magazines such as Fast Company showing Internet users hanging helplessly from a giant strip of flypaper. The caption read: “Nothing Attracts Like Real Media.” Corporations would mine the attention real estate of users the same way they mined the colonies for gold centuries earlier. So much for empowering users. We are the flies.

The new mantra of the connected age became “content is king.” The self-evident reality that the Internet was about connecting people was conveniently replaced with a corporatist fantasy that it was about engaging those people with bits of copyrighted data. Users weren’t interested in speaking to one another, the logic went; they were interested in downloading text, pictures, and movies made by professionals. At least this was something media companies could understand, and they rushed to go online with their books, magazines, and other content.

What they hadn’t taken into account was that people had gotten used to enjoying content online for free. By rushing to digitize and sell their properties, corporations ended up turning them from scarce resources into infinitely reproducible ones. Along came Napster, BitTorrent, and other technologies that gave former consumers the ability to “rip” movies off DVDs, music off CDs, and TV off their TiVo and then share it anonymously over the Internet. Anything digital, no matter how seemingly well protected or encrypted, was capable of being copied and shared. The bias of the Internet for abundance over scarcity appeared to be taking its toll.

Hollywood studios and record companies began lobbying Congress for laws and enforcement to prevent their entire libraries from becoming worthless. Comcast, a cable company that offers broadband Internet service, began blocking traffic from peer-to-peer networks in an effort to prevent losses to its corporate brethren and subsidiaries. Other corporations lobbied for changes to the way Internet access is billed, making it easier for large companies to support fast content distribution, but much harder for smaller groups or individuals to share their data.

The content wars redrew the battle lines of the net revolution. It became a struggle between consumers and producers: customers fighting to get the products they wanted for free, and doing it by investing in a host of other products that, all told, probably cost them more money anyway. What does it matter if one’s iPod contains eighty thousand hours of music? This recontextualization of the net revolution reduced the very definition of winning. The Internet era became about what we could get as consumers rather than what we could create as people. The notion of creating value from the periphery was surrendered to the more immediate gratification of getting free products to the periphery.

While corporations could no longer make the same kind of money off their digital content, the massive flow of entertainment and files from person to person was a lot easier to exploit than genuine conversation between people. Every website, every file, every email, every web search, was an opportunity for promotion of a brand.

Genuinely social spaces, from Friendster to Facebook, looked for ways to cash in on all this activity, too. Millions of people were already posting details about themselves, linking up with others, and forming affinity groups. Although corporations couldn’t make too much money giving away web space to people, they could try to dictate the metrics people used to describe themselves — and then sell all this information to market researchers.

On social-networking sites — where real hugs can never happen — people compete instead for love in the form of numbers: how many “friends” do you have? The way to get friends, other than inviting people, is primarily to list one’s favorite books, movies, bands, and products. This results in a corporate-friendly identity defined more by what one consumes than what one does. Moreover, in cyberspace brands could create pages as human as any person’s. And just like people inhabiting these social spaces, they compete for the highest numbers of friends. Do you want to be the “friend” of Chase bank? What if it enters you into a sweepstakes where you might make some money?

Nonprofit groups and social activists got into the act as well, sending out invitations pressuring “friends” to support one another’s favorite issues. These issues, in turn, become part of the users’ profiles, better defining them for everyone. The ability to attract a hundred thousand fans online goes a long way toward convincing record labels that an independent band may have what it takes to earn a “real” contract. People, companies, brands, and rock groups are all “friends” on the network, even though most of them aren’t people at all.

Of course, each of the social networks where all this activity occurs is itself ultimately for sale. MySpace was sold to Murdoch for $580 million. YouTube went to Google for $1.65 billion in stock. As of this writing, Facebook had turned down a billion-dollar offer from Yahoo. These numbers do more than make the head spin; they confirm the mythology underlying the frenzy of Internet investment and activity by corporations: that interactive media technology is the surest path to growth in an era when most everything else is pretty well used up.

While some terrific and socially positive activity is occurring on these sites, they are founded on pure financial speculation, and faith in the same universally “open markets” corporations have been advocating through the World Bank and the IMF. As the Global Business Network cofounder Peter Schwartz argued in his pre-dot-com-crash book, The Long Boom, “Open markets good. Closed markets bad. Tattoo it on your forehead.” The infinite growth and expansion required by credit-fueled corporate capitalism found a new frontier in the theoretically endless realm of cyberspace.

The myth was enough to fuel the speculative dot-com bubble, which Alan Greenspan belatedly called “irrational exuberance,” but which went on long enough to convince investors to lift high-tech issues on the NASDAQ stock exchange beyond even the most optimistically speculative valuations. This was a “new economy,” according to Wired’s editor Kevin Kelly. A “tsunami,” echoed its publisher, Louis Rossetto — one that would rage over culture and markets like a tidal wave. More than simply costing millions of investors their savings, the movement of the Internet from the newspaper’s technology section to the business pages changed the way the public perceived what had once been a public space — a commons. The truly unlimited potential for the creation of value outside the centralized realm of Wall Street had been all but forgotten. In its place was an untrue perception that the way to get rich online was to invest in a stock exchange, come up with a compelling business plan, or sell a half-baked enterprise to a venture-capital firm and wait for the IPO.

Places, people, and value again become property, individuals, and money. The evolution of the Internet recapitulates the process through which corporatism took hold of our society. Eyeball hours served as the natural resource that became a “property” to be hoarded. People and groups became “individuals,” all with their own web pages and MySpace profiles to be sold to market researchers. Computers — once tools for sharing technological resources — mutated into handheld iPods and iPhones that reduced formerly shared public spaces to separate bubbles of private conversation and entertainment. And value creation — which in cyberspace could have potentially come from anywhere and be measured in units other than dollars — became subject to the rules of the same centralized marketplace that favors existing sectors of wealth. Yes, some people became millionaires or even billionaires, but they did so by entering the game of central capital, not by creating an alternative.

A few did try. PayPal may have come the closest. Online users of sites like eBay needed an easy way to pay one another. They weren’t real businesses, and weren’t set up to accept credit cards. Besides, some of the exchanges were for such small amounts that a credit-card company’s minimum service fees could as much as double the total. PayPal’s original plan was to offer its alternative payment service for free. The company would charge nothing, but make money on the “float” by holding on to the cash for three days and keeping the interest earned. This made sense for most users anyway, since PayPal could then even serve as an escrow account — releasing the money only after the product was shipped or received. But the right to make money from money was reserved, by corporate charter, to the banking industry. Its representatives demanded that regulators step in and define PayPal’s activity as a violation of banking law, at which point PayPal was forced to charge its users a traditional service fee instead. Their original vision dashed, PayPal’s owners nonetheless made their millions by selling the whole operation to eBay for $1.5 billion.
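The float model rewards scale, and a rough calculation shows why. The sketch below is illustrative only: the three-day hold comes from the description above, while the daily volume and the interest rate are invented assumptions, not PayPal’s actual figures:

    # Illustrative arithmetic only: the three-day hold is described above;
    # the daily volume and 5% annual rate are assumptions, not PayPal's figures.
    def float_income(daily_volume_usd: float, hold_days: int = 3, annual_rate: float = 0.05) -> float:
        """Interest earned per year on payments held in transit before release.

        At any moment the service is sitting on roughly hold_days worth of
        payment volume; that standing pool is what earns the interest.
        """
        pool = daily_volume_usd * hold_days   # cash in transit at any given time
        return pool * annual_rate             # yearly interest on that pool

    # Example: $10 million a day, held three days, at 5% a year
    print(f"${float_income(10_000_000):,.0f} per year")   # $1,500,000

On those made-up numbers, the pool in transit is $30 million at any given time, throwing off about $1.5 million a year in interest without users ever being charged a fee.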

In another effort — this one to transcend the polarizing battle over digital content — the legal scholar Lawrence Lessig’s Creative Commons project helps content creators redefine how they want their works to be shared; most authors, so far, seem to value getting credit for their contributions over getting paid for them. Traditional publishers still kick and scream, however, forcing some authors and musicians to choose between making money and making an impact — earning central currency or creating social currency. While authors and rock groups who have already succeeded in the corporate system — such as Radiohead or Stephen King — can give away their content for free online and still sell plenty of physical copies, up-and-comers have much less luck.

The battle to unleash the potential of the Internet to promote true decentralization of value may not be over yet. Even on a playing field increasingly tilted toward centralized and moneymaking interests, there are people dedicated to using it for constructive, creative, and common causes. For every bordello in Second Life there is also a public library; for every paid strip show there is a free lecture. Branded “islands” are proving to be a waste of advertising money, while university-sponsored spaces now offer seminars to nonpaying students from around the world. For every company developing a digital-rights-management strategy to prevent its content from being copied, there is a researcher posting his latest findings on Wikipedia for all to learn from and even edit or contest, for free.

On the other hand, for every community of parents, Christians, or environmentalists looking to engage with others about their hopes, doubts, and concerns, there is a media company attempting to brand the phenomenon and deliver these select eyeballs to the ads of their sponsors. For every disparate community attempting to “find the others” on the Internet, there is a social-networking site attempting to sell this activity in the form of a database to market researchers. For every explosion of young people flocking to a new and exciting computer game or virtual world, there’s a viral marketer or advertiser attempting to turn their creativity into product placements and their interactions into word-of-mouth promotions. Even a technology that seemed destined to reconnect people to one another instead ends up disconnecting them in new ways, all under the pretense of increasingly granular affinity.

Most of the people at companies exploiting these opportunities believe they are ultimately promoting, not exploiting, social activity. They may even be dedicated to the constituencies they’re serving, and simply looking to subsidize their communities’ activities with a business plan. But well-meaning or not, these companies are themselves bounded by a corporatist landscape that works against their own best sentiments. The platforms they create are built on borrowed money, and conform to the logic of centralized value creation. Sooner or later, value must be taken from the periphery and brought back to the center.