Ray Ozzie: Perspective

Ray Ozzie founded Groove Networks in October 1997. Previously, Ozzie was a founder and president of Iris Associates, where he created and led the development of Lotus Notes. Prior to Iris, he was instrumental in the development of Lotus Symphony and Software Arts' TK!Solver and VisiCalc. What follows is a partial transcript of his talk at Supernova:

For the last few decades, I've been trying to help improve work with communication tools. There are many stories of new technology being deployed and failing. This is not an area where you can just put technology and tools in place and have them accomplish what you have in mind, because it involves people and organizations. When you instill new processes and practices where people are involved and egos are at stake, you need a feedback loop, and you need to change the tool based on what is working. In such a chaotic environment, it's best to learn from your successes rather than looking at what doesn't work.

This is not about Groove, but the case studies are Groove-centric. Groove is about smart clients and dumb networks. It's software that runs on PCs, securely and instantly. The first case study starts with Eric Rasmussen, a naval physician who was part of the humanitarian operations center in Iraq. Such centers go in behind the people who break and blow things up, assess the humanitarian situation, and deliver aid not post-conflict but in the middle of the conflict. He called me and said he had a real problem. Working with the Red Cross and the UN, in three months all they'd been able to accomplish was a schema, a way to communicate. He wanted to move everything into Groove. And things came together quite quickly.

As they moved into the reconstruction phase, other people decided they wanted to use it. The first challenge was organizational complexity. The second challenge was travel complexity; they could not travel around to meet with people. And the third challenge was infrastructural complexity. The default is offline. Almost everyone could get on the Net, but there was no IT support. There were IT groups, but they weren't necessarily where you were. The Joint Security Working Group is a classic example. They use shared spaces to exchange threat updates and post reports. The Ministry of Communications uses mapping tools to locate postal facilities. And the Reconstruction Strategy Planning Group needed to involve ministers in communicating with people in regional offices. All of this happened at the edge.
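[Editor's note: Ozzie doesn't describe Groove's actual protocol here. As a purely illustrative sketch of the "smart client, dumb network, default is offline" idea, the model below lets each peer apply edits locally first and exchange queued deltas with other peers only when a connection happens. All names (`Peer`, `sync`, the last-writer-wins rule) are hypothetical, not Groove's design.]

```python
from dataclasses import dataclass, field
from itertools import count
from typing import Dict, List, Optional, Set, Tuple

# Global sequence numbers stand in for a real versioning scheme (hypothetical).
_seq = count(1)

Delta = Tuple[int, str, str]  # (sequence, key, value)

@dataclass
class Peer:
    """A 'smart client': all state and logic live on the edge device."""
    name: str
    state: Dict[str, Tuple[int, str]] = field(default_factory=dict)  # key -> (seq, value)
    log: List[Delta] = field(default_factory=list)                   # every delta seen
    seen: Set[int] = field(default_factory=set)

    def edit(self, key: str, value: str) -> None:
        # Edits always succeed locally -- the default is offline.
        self._apply((next(_seq), key, value))

    def _apply(self, delta: Delta) -> None:
        seq, key, value = delta
        if seq in self.seen:
            return                       # already have this delta
        self.seen.add(seq)
        self.log.append(delta)           # keep it to gossip onward later
        current = self.state.get(key)
        if current is None or seq > current[0]:
            self.state[key] = (seq, value)   # last-writer-wins per key

    def get(self, key: str) -> Optional[str]:
        item = self.state.get(key)
        return item[1] if item else None

def sync(a: Peer, b: Peer) -> None:
    """A 'dumb network' moment: connected peers just swap queued deltas."""
    a_log, b_log = list(a.log), list(b.log)  # snapshot both sides first
    for delta in b_log:
        a._apply(delta)
    for delta in a_log:
        b._apply(delta)
```

Two peers can edit the same shared key while disconnected; after `sync`, both converge on the most recent write, and edits to other keys propagate intact. A real system would need conflict handling richer than last-writer-wins, plus security and identity; none of that is modeled here.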

Back home, a few agencies needed to figure out how to do threat assessments among the New York City Police Department, Defense Intelligence Agency Joint Intelligence Task Force Combating Terrorism, and others. The Joint Regional Information Exchange System has already helped break a case that I can't speak about. The word got out, and now we have the Homeland Security Information Network.

This is real. This is not theoretical. Work is changing now. So what have we learned at the edge? One, design is the key to achieving value. A tool's value rises dramatically with its fitness for purpose. Awareness-based swarming and ad hoc groups are real and valuable. Hybrid architectures are key in organizational contexts. Two, successful joint work feels simple and local. A real and compelling local need to work together is required. Individuals participate largely for selfish reasons. Trust, accountability, and privacy are required for participation.

And three, active resistance is a fact of life. Deal with it. Even though we're working at the edge, servers are centers of territorial power. Regulatory and compliance issues are real, but they are also used as weapons. Embrace the regulatory stuff and have a plan to work through it. And increased transparency and accountability can be threatening to people who have built their careers on brokering information and keeping people from talking to each other.

The way people work is rapidly changing and is driven by the capabilities of new tools, but effective tools have to be designed to match the way people work.

1 Comment

  • Mentor Cana

    You mention the necessity of the feedback loop. It is almost obvious that the feedback loop should start much earlier in the process, during system development, in order to iteratively check whether the system looks like it will deliver according to the specifications. Sometimes even the specifications might need to be revised due to unforeseen issues.

    Well, despite this obviousness, I too have seen many information systems FAIL because the feedback loop was never established, sometimes because the 'bosses' decide it is not needed, and sometimes simply because of ignorance.