
China’s new proposed law could strangle the development of AI

China’s government is cracking down on what technology companies can develop, an approach that will halt innovation.


By Arijit Sengupta

China’s internet watchdog, the Cyberspace Administration of China (CAC), recently issued draft regulations governing how technology companies use algorithms when providing services to consumers.

The proposed law mandates that companies must use algorithms to “actively spread positive energy.” Under the proposal, companies must submit their algorithms to the government for approval or risk being fined and having their service terminated.

This is an incredibly bad and even dangerous idea. It’s what happens when people who don’t understand AI try to regulate AI. Instead of fostering innovation, governments are looking at AI through their own lenses of fear and trying to reduce the harm they worry about most. Thus, Western regulators focus on fears such as violations of privacy, while Chinese regulators are perfectly okay with collecting private data on their citizens but are concerned about AI’s ability to influence people in ways the government deems undesirable.

If the Chinese law is adopted, it will create a lengthy bureaucratic process that will likely ensure no small company or startup survives, or even enters the market. The moment you allow government regulators to be the final arbiters of what emerging technologies can and cannot do, you’ve strangled innovation. The only players who will profit under such a law are large companies, which have the cash reserves to absorb unproductive bureaucratic activities, and bad actors, who will simply ignore regulators and do whatever they want. Cash-starved startups that wish to follow the law will be the most disadvantaged by this approach.

China is not alone in taking bureaucratic approaches to AI. In April, the European Union released a draft Artificial Intelligence Act that would ban certain AI practices outright and mandate that AI applications deemed “high risk” meet strict data governance and risk management requirements. This includes requirements on testing, training, and validating algorithms, ensuring human oversight, and meeting standards of accuracy, robustness, and cybersecurity. Businesses would need to prove that their AI systems conform with these requirements before placing them on the European market.

Imposing algorithm requirements or requiring companies to justify their approaches can sound less onerous than banning technologies outright. In reality, in either case startups do not have the resources to participate in such slow, bureaucratic processes. Smaller companies will be forced out of the arena even though they are the most likely to create true innovations in this space.

Imagine a world where startups had to get patents on their technology before building their software. Only about half of U.S. patent applications are approved, which is not terrible, but it takes about two years for approval to come through. Algorithms are more difficult to examine than patent applications, especially deep learning algorithms, which very few experts understand. Based on the lengthy timelines in the patent office, we can surmise that an algorithm approval process would likely take longer than two years. That is simply not fast enough: technology in a rapidly evolving space like AI would already be outdated by the time it was approved. Any approach that requires regulators to preapprove algorithms would strangle innovation in this space.

There’s another reason such legislation would be more onerous for small companies. For startups reliant on venture capital, typical funding cycles run about 18 months, which means investors expect to see tangible results from their investment in less than 18 months. Current investment approaches simply would not support waiting years for algorithms to be approved before launching a product. While some VCs may adopt a different investment model, similar to the one used for medical investments, many entrepreneurs would simply turn away from AI and pursue other opportunities.

The U.S. is in a unique position to get AI guidelines right. While China and the European Union outline ever-stricter guidelines banning certain types of AI, the U.S. has an opportunity to establish ethical guidelines without inhibiting innovation. The only appropriate approach to regulating AI is one where we make our societal goals clear from the start and hold companies liable if they violate those goals. For example, we don’t force every company to undergo Occupational Safety and Health Administration (OSHA) inspections before being able to operate. However, labor safety expectations are enshrined in law and violators are prosecuted. If companies find alternative approaches to keeping their employees safe, they are not penalized as long as the societal goals are achieved.

Giving government regulators the power to limit broad technology categories is not the approach that built the internet or the smartphone. That’s why the U.S. should tackle AI regulation by making our societal goals clear and giving organizations flexibility in achieving such goals.


Arijit Sengupta is the founder and CEO of Aible.


