If you’ve just finally wrapped your head around what the tech world means by “the cloud” (no one is quite sure), we may have some bad news: People are now talking about something called “the fog.”
The good news is this metaphor is not quite as hazy as it could be.
Sometimes, important things happen locally on your devices: You edit a Word document on your laptop, or crop a photo on your phone, or your printer pops up an angry error message because it’s out of paper.
Other times, important things happen in the cloud: You collaborate in Google Docs, or share a photo on Facebook, or upload your company’s annual report to have it printed and shipped halfway around the world.
Working locally has its benefits: fast response times, software that works even if your internet connection is spotty, and local control over how data is used and secured. The cloud has its benefits, too, such as redundant and reliable central servers managing your data, economies of scale that make it possible to process and store huge volumes of information, and easy collaboration between people and devices.
The fog, a term that’s been popularized lately by companies like Microsoft and Cisco, the latter of which claims to have coined it, essentially refers to a compromise between the two approaches: Data can be processed on servers on a network relatively close to the devices that are collecting it, whether that’s industrial sensors or laptops and smartphones, uploaded to more distant servers in the cloud, or both, depending on what’s needed. “The fog” became more or less official in March, when the National Institute of Standards and Technology issued a publication defining fog computing and explaining its benefits.
Generally, fog computing is a more flexible architecture that, rather than shifting everything all the way to a remote data center or computing everything extremely locally, can shift data to where it’s best suited based on time, space and processing power constraints. “Because fog nodes are often co-located with the smart end-devices, analysis and response to data generated by these devices is much quicker than from a centralized cloud service or data center,” according to NIST.
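That “shift data to where it’s best suited” idea boils down to a routing decision. Here is a minimal sketch of such a rule in Python; the latency figures, function name, and tiers are illustrative assumptions, not NIST’s model or any vendor’s API:

```python
# A toy sketch of fog computing's routing decision: handle data near
# the device when the response deadline is tight, and send it to the
# cloud when scale matters more than speed. All numbers are assumed.

FOG_LATENCY_MS = 10     # assumed round trip to a nearby fog node
CLOUD_LATENCY_MS = 120  # assumed round trip to a distant data center

def route(deadline_ms, needs_heavy_compute):
    """Pick a processing tier for one piece of sensor data."""
    if deadline_ms < CLOUD_LATENCY_MS and not needs_heavy_compute:
        return "fog"    # a nearby node can answer within the deadline
    if deadline_ms < CLOUD_LATENCY_MS and needs_heavy_compute:
        return "both"   # respond locally, then upload for deep analysis
    return "cloud"      # no rush: use the data center's scale

# A braking sensor needs an answer in 50 ms; a monthly report does not.
print(route(50, needs_heavy_compute=False))    # fog
print(route(50, needs_heavy_compute=True))     # both
print(route(5000, needs_heavy_compute=True))   # cloud
```

The point of the sketch is only that the decision is made per task, using the time, space, and processing-power constraints the paragraph above describes, rather than sending everything to one place.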
Proponents, like the tech industry’s OpenFog Consortium, say it’s particularly promising when it comes to the ever-impending internet of things, where devices big and small gather and generate data that needs to be processed more quickly than cloud bandwidth and latency constraints allow. (Autonomous cars, for instance, may drive through the fog: These vehicles often depend upon distant servers, but must also be able to make split-second decisions on their own, even where there’s no internet connection.) A similar concept, “edge computing,” also moves data processing from central servers to the outer points, or edge, of a network.
And if the fog and cloud metaphors aren’t enough, some networks have even added an additional layer, called “the mist.” That typically includes lower-powered computers that sit even closer than the fog to sensors or other devices at the network’s edge, for even lower latency.
Ultimately, though, the fog could also be read as another buzz term in Silicon Valley’s unending parade of technical verbiage aimed at helping sell more services. As atmospheric as it all sounds, like the misty weather of the Bay Area, make no mistake about it: The cloud, the mist, the fog, and all their various permutations ultimately mean lots and lots of warm machines, and lots and lots of cold hard cash.