
Living on the Edge…

A few days ago, TechCrunch published an interesting article about a theory of edge versus cloud computing developed by Peter Levine, a general partner at venture capital firm Andreessen Horowitz. The one-sentence summary of his theory is that the recent trend toward cloud computing for mobile and Internet of Things (IoT) devices will soon swing back toward edge computing, for several reasons.

The first of these is bandwidth. He gives the example of self-driving cars, which he calls “data centers on wheels”; they can easily generate 10 GB of data for every mile driven. The case of a Lytro (professional virtual reality) camera is even more extreme: it can generate 300 GB of data every second. Multiply the amount of data coming from any one device by the millions or billions of devices generating data today (with more being added all the time), and it quickly becomes clear that simply moving that much data back and forth between the edge and the cloud is a big challenge. Edge computing helps solve the bandwidth problem by processing data before it needs to be transmitted, dramatically reducing transmission bandwidth requirements.
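As a rough illustration of that bandwidth point, the sketch below models a hypothetical sensor node that summarizes a minute of raw samples locally and uploads only a small event record instead of streaming everything. The sample rate, bytes per sample, and motion threshold are made-up values chosen purely for the example.

```python
# Hypothetical illustration: an edge node summarizes raw three-axis samples
# locally and uploads only a tiny summary record, instead of streaming every
# raw sample to the cloud. All constants below are assumptions for the example.
import json

SAMPLE_RATE_HZ = 1000        # raw samples per second (assumed)
BYTES_PER_SAMPLE = 6         # e.g., three 16-bit axes (assumed)
WINDOW_SECONDS = 60          # summarize one minute at a time

def summarize_window(samples):
    """Reduce a window of raw (x, y, z) samples to a small summary record."""
    magnitudes = [abs(x) + abs(y) + abs(z) for (x, y, z) in samples]
    return {
        "max": max(magnitudes),
        "mean": sum(magnitudes) / len(magnitudes),
        "motion_event": max(magnitudes) > 3000,   # arbitrary threshold
    }

# Fake one minute of raw data just to compare upload sizes.
samples = [(100, -50, 980)] * (SAMPLE_RATE_HZ * WINDOW_SECONDS)

raw_bytes = len(samples) * BYTES_PER_SAMPLE
summary_bytes = len(json.dumps(summarize_window(samples)).encode())

print(f"raw upload per minute:     {raw_bytes:,} bytes")
print(f"summary upload per minute: {summary_bytes:,} bytes")
print(f"reduction factor:          ~{raw_bytes // summary_bytes:,}x")
```

Even with these toy numbers, the node sends a few thousand times less data than it would by streaming raw samples, which is the essence of the bandwidth argument.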

The second reason is latency. Any time data is transmitted to a remote location, processed, and a result returned, there will be latency. In some cases, the latency won’t matter. For example, if I ask Google Home for a current weather forecast, I don’t much notice that it takes the system some fraction of a second to respond. However, as Peter notes, if a self-driving car needs the cloud to process an image of a human crossing the street in front of the car while the car is moving at high speed, latency becomes a big issue. At QuickLogic, we recently saw a real-life example: a customer building senior home-monitoring systems didn’t want to use cloud-based voice processing because the extra latency could be critical in an emergency.
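To make the latency argument concrete, here is a simple back-of-envelope comparison of a cloud round trip versus on-device processing. Every figure below is an illustrative assumption, not a measurement of Google Home, the customer’s system, or any real network.

```python
# Back-of-envelope latency comparison between cloud and edge processing.
# All figures are illustrative assumptions, not measurements of any real system.

CLOUD_PATH_MS = {
    "radio/uplink": 40,        # send audio or image toward the network (assumed)
    "backhaul + queueing": 60,
    "cloud inference": 30,
    "response downlink": 40,
}

EDGE_PATH_MS = {
    "local inference": 50,     # on-device processing (assumed)
}

cloud_total = sum(CLOUD_PATH_MS.values())
edge_total = sum(EDGE_PATH_MS.values())

print(f"cloud round trip: {cloud_total} ms")
print(f"edge, on-device:  {edge_total} ms")

# At roughly highway speed (~30 m/s), extra latency translates directly into
# distance traveled before the system can react -- the self-driving example.
speed_m_per_s = 30
extra_distance_m = speed_m_per_s * (cloud_total - edge_total) / 1000
print(f"extra distance traveled waiting on the cloud: {extra_distance_m:.1f} m")
```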

The third factor he gave was privacy. Much of the data in the real world includes sensitive, personal information. When it needs to move across system boundaries, there is always the risk that it will be intercepted and exploited for nefarious purposes. Processing data at the edge keeps it more secure.

We would add a fourth reason that seemed to be left off Peter’s list, and that is power. Processing data locally (especially when done efficiently) almost always consumes less power than transmitting it to some distant location. For battery-powered or battery-backed devices, this can become the single biggest reason to avoid or minimize the use of cloud computing resources. The same customer from our latency example above also needed to save power to extend battery life, and we’ve seen many, many more power-reduction examples just within our own customer base.
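A quick sketch shows why the radio is usually the power problem. The per-byte transmit energy and local-processing power below are stand-in assumptions for illustration only, but the general shape is the point: radio energy scales with the bytes sent, so sending less saves more.

```python
# Rough energy comparison for a battery-powered node: transmit raw data vs.
# process it locally and transmit only a small result. The per-byte and
# per-second energy figures are stand-in assumptions for illustration only.

RADIO_UJ_PER_BYTE = 2.0        # energy to transmit one byte (assumed, microjoules)
LOCAL_PROCESSING_MW = 5.0      # average power of local processing (assumed, mW)

raw_bytes_per_minute = 360_000       # e.g., one minute of raw sensor data
result_bytes_per_minute = 100        # small summary after local processing
processing_seconds_per_minute = 60   # local processing runs the whole minute

transmit_raw_uj = raw_bytes_per_minute * RADIO_UJ_PER_BYTE
edge_uj = (LOCAL_PROCESSING_MW * 1000 * processing_seconds_per_minute
           + result_bytes_per_minute * RADIO_UJ_PER_BYTE)

print(f"stream raw data:        {transmit_raw_uj / 1e6:.2f} J per minute")
print(f"process locally + send: {edge_uj / 1e6:.2f} J per minute")
```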

At QuickLogic, we firmly believe that there will be a tremendous amount of edge-based computing for the same reasons that Peter elucidates, plus our additional reason of reduced power consumption. In fact, we developed devices such as the EOS™ S3 sensor processing platform specifically to meet the exponentially growing need for edge processing. This device and its associated ecosystem of tools and IP can reduce bandwidth requirements, decrease latency, improve data privacy, and reduce battery consumption for a wide range of end-user (aka “edge”) applications.

Now you can see why we like to “live on the edge” and how this trend towards edge computing will reshape the industry. 

You can read the full TechCrunch article, along with a video of a recent presentation in which Peter explains his ideas in more detail, by clicking the link below.

https://techcrunch.com/2017/08/03/edge-computing-could-push-the-cloud-to-the-fringe/?ncid=mobilenavtrend


Posted in IoT

One thought on “Living on the Edge…”

  1. Hey Brian,

    All great points. One thing that the article might not hit on hard enough is the idea of the ‘mission-critical’ nature of the node, and how that should affect the desire for edge computing. For example, smart sensors are being deployed into high-rise towers for occupancy sensing. Per LEED requirements, these sensors are used to detect occupancy and turn the lights on/off accordingly. Regardless of whether the building’s internet is functioning, the lights need to come on when a person enters the room. Edge processing always enables that; cloud computing doesn’t. Other functions of these sensors (e.g. building and conference room utilization over time) aren’t mission-critical, and can be passed to the cloud without concern. In short: cloud doesn’t work for everything, as the article you referenced clearly states.

    Cheers!

    Paul
