Software As Politics: How The Power Of Tech Can Be Held Accountable

2016 was the year it became impossible to ignore the power software exerts on society. Today, in 2018, we can start to identify the companies and organizations that are putting power back in the hands of consumers.

Software is politics. I wrote that back in 2016, arguing that the digital services we all rely on should not just be designed for ease of use–they also need to be understandable, accountable, and trusted.

Viewing software as politics is about more than tech, and it’s about more than ethics. It’s about the idea that, if politics is about the distribution of power in society, then software is inherently political. How that power is managed and the choices about who it is put to work for are the political questions of our age.

If 2016 was the year it became impossible to ignore the power software exerts on society, then today, in 2018, we can start to identify some signals about what the levers of control might be. Are there reasons to be optimistic? Which companies are using trust as a competitive advantage? What organizations are showing how the power of tech can be held to account? Here are six themes that are emerging:

Trust, competitive advantage, and the power of markets

Research by Kelly Martin, Abhishek Borah, and Robert Palmatier published in Harvard Business Review has found that data breaches have a ripple effect–if one company in an industry suffers a data breach, then others in that industry will also feel its effects on their finances. The researchers also found that companies can mitigate that risk when they are transparent about how they use data and give users control of their data.

This prompts an important question: How would investors–those who hold the ultimate power over which businesses rise and which ones fall–understand if a company is a risk or not?

There may be some parallels here to the Carbon Disclosure Project (CDP). CDP collects and standardizes data about the environmental impact of companies. Investors use that data to make ethical investment decisions or manage risk; regulators use it to make better laws. Maybe investors will start to evaluate risk by consulting transparency initiatives like Ranking Digital Rights and Terms of Service Didn’t Read. Taken to its logical end, only transparent companies would receive funding, and opaque companies would falter, elevating the services available to consumers overall.

Auditing and transparency

Inspired by ProPublica’s investigations into biased algorithms, New York’s city government passed an algorithmic accountability bill into law and established a task force to bring transparency and accountability to automated decision-making by the city’s agencies.

What’s encouraging about this is that the initiative came not from a campaign group but from a serving politician: James Vacca, chair of the city’s Committee on Technology. Transparency is now a matter of mainstream importance.

Transparency is not just being adopted by the public sector, though, as Canadian VPN provider TunnelBear showed when it published the results of an independent security audit.

The idea behind TunnelBear’s audit was to show users that the company could be trusted over its competitors, in a sector with significant trust issues.

There are some intriguing technical approaches to transparent design, too. To pick just two:

Code Keeper is a new service for creating escrow agreements for code, specifying the legal circumstances under which source code can be accessed. The main focus of the project is to allow access to source code when a company goes bust, but I wonder if it could also be used to enable access to source code for audits.

Google is working on a General Transparency database called Trillian. Based on the same ideas as Certificate Transparency, the publicly auditable log system for SSL certificates, the aim is to make it easy to publish a dataset, say the list of changes to a company’s terms of service, whose integrity can then be independently verified.
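
To make the idea concrete, here is a minimal sketch, in Python, of the verifiable-log approach behind Certificate Transparency and Trillian: entries are hashed into a Merkle tree, and publishing the root hash lets anyone who holds a copy of the entries check that nothing has been silently altered. The log entries below are made up, and this is an illustration of the concept, not Trillian’s API.

```python
import hashlib

def sha(data):
    # SHA-256 over raw bytes
    return hashlib.sha256(data).digest()

def merkle_root(entries):
    # Hash each entry, then repeatedly pair-and-hash until one root remains.
    level = [sha(e) for e in entries] or [sha(b"")]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A company appends each change to its terms of service and publishes the root hash.
terms_log = [
    b"2017-06-01 terms of service v1",
    b"2018-01-15 terms of service v2: data-sharing clause added",
]
published_root = merkle_root(terms_log)

# An auditor with an independent copy of the entries recomputes the root;
# any tampering with past entries produces a different hash.
assert merkle_root(terms_log) == published_root
print(published_root.hex())
```

The interesting part is not the hashing itself but where the power sits: once the root hash is public, a company can no longer quietly rewrite its own history.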

Certification and standards

The Internet of Things has been under scrutiny recently, as botnets, data breaches, and poor safety make headlines. But two things came out of the Mozilla Foundation at the end of 2017 that show how the connected device market could shift to prioritize consumers’ safety.

The first was a privacy buying guide, a pre-Christmas review of the most popular connected devices that compared the relative safety and protections of each platform. Hopefully more mainstream consumer review sites like Wirecutter and marketplaces like Amazon will take the idea and run with it.

The second was a report Mozilla commissioned from Thingscon, exploring options for a trustmark for IoT. The report recommended building on the work the U.K.-based #iotmark project has done to develop an open certification mark for a minimum set of principles that connected devices should meet.

At the same time, Doteveryone, which campaigns for a fairer internet, has been looking at the concept of a trust mark for digital services.

Separately, we’ve seen other standards-based initiatives begin to emerge around digital rights. Consumer Reports published the Digital Standard in 2017, signaling a new era for testing and advocacy organizations. Part testing-framework, part certification scheme, it’s a great resource for anyone developing digital products to ask: “Is what I’m doing right?”

Time will tell if kitemarks and certification are an effective way of ensuring the safety of connected devices, but it’s an encouraging development.

Decentralizing machine learning

Machine learning algorithms, rather than being explicitly programmed, are trained using data. Crudely speaking, the more data they have, the smarter they get. When it comes to data about people, this poses an obvious privacy challenge: the trade-off for better software is more sensitive, centralized datasets.

Google’s Clips camera is an always-on wireless camera that uses machine learning to decide what to take pictures of. Rather than uploading photos to a central server for classification, the camera does all of that processing locally. The hypothesis, presumably, is that people are more likely to trust an always-on camera if it keeps what it sees to itself.

Both Google and Apple have recently introduced products that make use of “differential privacy,” a technique that allows services to learn from the behavior of groups of users without revealing anything about any individuals.
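
One way to get a feel for how that can work is randomized response, one of the simplest differentially private techniques. The sketch below is a toy Python illustration, not Apple’s or Google’s actual implementation, and the question and numbers are made up: each user’s answer is deliberately noised, so no individual report can be taken at face value, yet the aggregate rate can still be estimated.

```python
import random

def randomized_response(true_answer):
    # Flip a coin: half the time answer truthfully, half the time answer at random.
    # Any single "yes" or "no" is therefore plausibly deniable.
    if random.random() < 0.5:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(reports):
    # P(reported yes) = 0.5 * true_rate + 0.25, so invert that to recover the rate.
    reported_yes = sum(reports) / len(reports)
    return 2 * (reported_yes - 0.25)

# Simulate 100,000 users, 30% of whom genuinely typed the new word.
users = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(u) for u in users]
print(round(estimate_true_rate(reports), 3))  # prints a value close to 0.3
```

Production systems use more sophisticated mechanisms, but the trade-off is the same: individual reports become less informative in exchange for a privacy guarantee, while population-level patterns remain learnable.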

Apple has been using the technique to add new words to iOS’s keyboard autocorrect function. Google has been using it, in combination with federated machine learning, to improve the suggestions made by its Gboard keyboard on Android. None of this represents a magic bullet, and there are questions about the exact implementations of differential privacy. There is also a risk that, although the learning is decentralized, the control and the learnings remain centralized: only Google and Apple can run experiments like this. Further, there is the question of who will verify the promises these companies are making.

These concerns aside, decentralized data and localized learning represent a very clear change in approach from the cloud services of today. It will be exciting to see what happens next.

Changing how products get made

How software gets made has an impact on what software gets made.

Recently, we’ve seen various initiatives that aim to make it easier for developers and designers to do the right thing when it comes to making products that respect users’ rights and safety.

GitHub has announced that it will start telling developers when their projects have insecure dependencies.

The Simply Secure project has been providing professional education for user experience designers, researchers, and developers working on privacy, security, transparency, and ethics.

At IF, where I’m chief operating officer, we’ve updated our open-source Data Permissions Catalogue, which we hope makes it a more useful resource for designers building services that need permission from users to access data.

There are also increasing calls for ethics modules to be added to computer science degrees, and Harvard and MIT have started offering a course to their students specifically on the ethics and regulation of artificial intelligence.

New data regulation

Those of you living outside Europe might not have heard of GDPR. GDPR stands for “General Data Protection Regulation,” and it is the EU’s new set of rights and regulations for how personal data gets handled. It enshrines a slew of digital rights and levels huge financial threats against companies that don’t comply. These new rights should make it easier for people to understand and control how data about them is used, see who’s using it, and do something if they’re not happy with what’s going on.

Companies, wherever they are based, face the choice of meeting the regulations or risk being locked out of the European market. As such, GDPR could become a de facto global standard for data protection.

Rather than a regulatory burden, this is a huge opportunity for companies to show how they can be trusted with users’ data. (For example, the Open Data Institute has written about what that might look like for the retail sector.)

In addition to GDPR, in January, after several years of lobbying and activism, the Open Banking Standard was introduced in the U.K. It is designed to facilitate a new range of banking services and applications. There are some potential risks, but with good design, it has the potential to empower customers by allowing them to reuse the data held by their banks for other purposes, such as sharing data with an accountant or proving income.

Beyond the opportunity to transform markets, GDPR and initiatives like the Open Banking Standard represent an opportunity to educate people about data–to provide totally new accountability and transparency mechanisms–and produce a healthier public debate about what data should never be collected in the first place.

What do we call this thing?

I’m optimistic about where we are heading. Companies are developing reputations–good and bad–for how they handle data. Regulators are starting to hold people to account for decisions that affect people’s lives. New technologies and new sources of open data are going to make it easier for companies to be transparent and accountable. There’s a growing interest from people in the tech sector about ethics and responsibility.

And once people get used to having new digital rights, they’re going to expect more. This is a huge opportunity for organizations whose digital strategies and policies empower users.

One thing I’m left wondering, though, is this: The examples I’ve listed here include new regulations, technologies, design patterns, professional development, tools, ethical frameworks, standards, and market realities. The thing that ties them together is that they can all play a part in ensuring that more of the products and services we rely on respect more of the rights we have.

This prompts the question: What is the name of this emerging field of software politics? It feels like it should have one. Names are useful.

While it includes some elements of security, it definitely feels like a different field. “Responsible Tech” or “Digital Ethics” state the intent, but don’t really leave room for the business reality of “trust as a competitive advantage.”

“Decentralized” is fast becoming devalued as it is used unquestioningly in association with technologies like blockchain. Answers on a postcard. Or maybe it doesn’t need a new name. Maybe it’s just politics.

Richard Pope is COO of IF. IF works with organizations shaping the digital world to show how they can empower people, be trusted with data, and be effectively regulated. Disclaimer: IF has partnered with the ODI on design patterns research, done consultancy work for Google, and made submissions to the Consumer Reports Digital Standard. He is also a DigitalHKS fellow at the Harvard Kennedy School.

Want to comment on this article? Send ideas for what to call this thing? Write to us: CoDTips@fastcompany.com. 
