It’s Time To Regulate Big Tech, Says VC Who Helped Create It

In his new book Unscaled, Hemant Taneja says the era of regulation-free growth is coming to an end, and entrepreneurs need to adapt.

Hemant Taneja [Photo: Steve Jennings/Getty Images for TechCrunch]

There was a time when tech companies could grow to be unbelievably massive while facing little interference from the hand of regulation, but venture capitalist Hemant Taneja thinks that era is ending. In his new book Unscaled: How AI and a New Generation of Upstarts Are Creating the Economy of the Future (PublicAffairs), he offers a guidebook of sorts for entrepreneurs in a tougher regulatory environment.

Taneja, a managing director at General Catalyst, looks at how technology has changed the economy, how regulators must respond, and what responsibilities entrepreneurs must shoulder if they want to chart a better path forward.

Fast Company spoke with Taneja about why he thinks business needs to scale down rather than up, and how the definition of what constitutes a monopoly is changing.

Fast Company: What is different about [Big Tech] monopolies and monopolies of the past?

Hemant Taneja: When you used to break up companies in the Sherman Antitrust era, these were physically sprawling companies, and you broke them up geographically and regulated their markets. But you can’t break up network-effect businesses geographically. So you have to look at it through the motif of data, analytics, and eyeballs. With these modern platforms, it’s not about geographic terms.

When [Amazon] keeps buying … They keep picking up these applications that are on top of their platform AWS. At some point, they understand which ones are wanted by consumers, and they’re using that data to just keep consolidating. That’s what we have to prevent.

If [Amazon] can understand there’s a huge market for home security stuff, and [they] can buy a Ring, and have a subscription around that, and dump it into Prime, it will be big. And now all of a sudden they’ve taken another big chunk of the market and aggregated it into this Prime subscription, right? Well, at some point, all of our commerce is getting done through this company.

The problem with these monopolies is that most of what they give to consumers is free. To me, antitrust needs to get pointed toward developer ecosystems and innovation. If you think about a company like Amazon, they have the eyeballs, they have the algorithms, they have the data.

Where do the monopolistic aspirations of these platforms get regulated, or where do they stop? That has to get answered. And it has to, because theoretically you’re only going to have one big company per industry that’s going to do everything for you. You don’t have a free market.

FC: Tell me about scale and your argument against big business.

HT: I think there’s a deep-seated conviction that scale has run its course. We scaled the healthcare system, scaled banks, scaled schools. Healthcare is failing, banks have failed, and education is not preparing us for the 21st century. So while it’s done a lot of good, scale has run its course.

I have a deep belief we’re in this 30-year secular shift, where the companies started today, if done responsibly with data and AI, are the ones that are going to solve the problems we’re having with scale in healthcare and education and public safety and finance. We have to make sure we’re working with the tech sector. We’ve had a tech sector that’s been brash and not responsible.

The idea is to tease these issues out and make sure there is a bit of a blueprint for how you do this the right way. How do you create this rewrite over the next couple of decades so that business growth is aligned with what’s good for society, and build frameworks for developing AI responsibly?

FC: How should we be thinking about these companies in order to regulate them?

HT: I think these global data companies are the highways of the past. These are utilities, in my opinion. Google is a utility. If you think about the fact that we live our lives—our content, community, commerce is all online—well, none of that is possible without these utilities. That’s the same as having power in an electrified world. So the question is, if these are utilities, how do we proliferate an ecosystem of companies that provide these next-generation services that we care about, that scale and are not breaking?

[Facebook’s] problem is that [it] built this on a free IP stack and there needs to be monetization here for this to exist. It’s not okay to just offer a free stack and do things like advertising to monetize. That’s a misalignment.

FC: Do you think regulation is coming?

HT: If you think about the last four or five years, [Facebook] opened up the platform, and the data is being used by all these companies, and there’s no trail of who’s been doing what with it. That all needs to be done in a responsible way. The forward plumbing for what these guys have to do has to be lots of rules and regulations of how and what others can use it for … making it available, making it measurable. Because the governance also has to be software defined.

You can’t have your pen-and-paper-based government employees come and figure out what Facebook’s doing—it’s all in software. You need to have software watchdogs in government that are monitoring this. You need to have algorithmic canaries in your product so you can actually see where bias and discrimination might be creeping in.

FC: Is anyone doing that?

HT: This is what I’m talking about! You need to have this notion of algorithmic accountability and enable that. For startups building today, you want to make sure these products are measuring where the company’s growth may be diverging from what is good for society.

FC: No one is looking at the software. No one’s verifying it.

HT: I think it’s problematic. You know why? I asked [former U.S. Secretary of Commerce] Penny Pritzker, where is your AI department? The reason there is no oversight is that you have humans on the other side who can’t look at these black boxes. You can’t go into these markets that are fundamentally at the intersection of policy and technology, these regulated markets, and not have governance be software enabled for products that are entirely AI powered. It just doesn’t work. To me it’s about the government: Where’s your AI department? They don’t have it. I think it’s ridiculous.

We’ve thought about starting companies to do this, by the way. Well, if they’re not going to do it, we should just enable something that is open source and third party, [and] this is kind of their job.

FC: Do you think that AI can be a force for good?

HT: As long as we shape how we use AI, I don’t see the harm. It’s a neutral technology, just like nuclear was, and just like the internal combustion engine was. If we had had a way to measure early on what the combustion engine was doing while we put it all over the world, in power plants and everything else, maybe we could have caught climate change early. With AI you can do that, because you get feedback every moment, so take advantage of that. We’re just trying to raise awareness around that so it becomes a part of our MVP.

About the author

Ruth Reader is a writer for Fast Company. She covers the intersection of health and technology.