Explaining Edge Computing


Welcome to another video from ExplainingComputers.com. This time I’m going to talk about edge computing. This places networked computing resources as close as possible to where data is created. As we’ll see, edge computing is associated with the Internet of Things, with mesh networks, and with the application of small computing devices like these. So, let’s go and delve more deeply into computing on the network edge.
To understand edge computing, we need to reflect on the rise of the cloud. In recent years, cloud computing has been one of the biggest digital trends, and involves the delivery of computing resources over the Internet. In the early days, most of the devices that accessed cloud services were PCs and other end-user hardware. But increasingly, the devices accessing cloud services are also Internet of Things, or IoT, appliances that transmit data for analysis online. Connecting cameras and other sensors to the Internet facilitates the creation of smart factories and smart homes. However, transmitting an increasing volume of data for remote, centralized processing is becoming problematic. Not least, transmitting video from online cameras to cloud-based vision recognition services can overload available network capacity and result in a slow speed of response.

And this is the reason for the rise of edge computing. Edge computing allows devices that would have relied on the cloud to process some of their own data. So, for example, a networked camera may perform local vision recognition. This can improve latency — or the time taken to generate a response from a data input — as well as reducing the cost of, and the requirement for, mass data transmission.
Staying with our previous example, let’s consider more deeply the application of artificial neural networks for vision recognition. Today, Amazon, Google, IBM and Microsoft all offer cloud vision recognition services that can receive a still image or video feed and return a cognitive response. These cloud AI services rely on neural networks that have been pre-trained on data center servers. When an input is received, they then perform inference — again on a cloud data center server — to determine what the camera is looking at.

Alternatively, in an edge computing scenario, a neural network is usually still trained on a data center server, as training requires a lot of computational power. So, for example, a neural network for use in a factory may be shown images of correctly produced and then defective products so that it can learn to distinguish between the two. But once training is complete, a copy of the neural network is deployed to a networked camera connected to edge computing hardware. This allows it to identify defective products without transmitting any video over the network. Latency is therefore improved and the demands on the network are decreased, as data only has to be reported back when defective products are identified.

This scenario of training a neural network centrally and deploying copies for execution at the edge has amazing potential. Here I’ve indicated how it could be used in vision recognition. But the same concept is equally applicable to the edge processing of audio and sensor data, and to the local control of robots or other cyber-physical systems. In fact, edge hardware can be useful in any scenario where the roll-out of local computing power at the extremities of a network can reduce reliance on the cloud.
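To make the train-centrally, run-at-the-edge workflow above a little more concrete, here is a minimal sketch in Python. It assumes TensorFlow/Keras is available on the training server and a TensorFlow Lite runtime on the edge device; the random toy data, layer sizes and the file name defect_classifier.tflite are illustrative placeholders rather than anything from the video.

```python
# A minimal sketch of the "train centrally, infer at the edge" workflow described
# above, assuming TensorFlow/Keras on the training server and a TensorFlow Lite
# runtime on the edge device. The random data, layer sizes and the file name
# "defect_classifier.tflite" are illustrative placeholders.

import numpy as np
import tensorflow as tf

# --- On the data center server: train a small image classifier ---------------
# Placeholder data: 64x64 RGB images labelled 0 = good product, 1 = defective.
images = np.random.rand(1000, 64, 64, 3).astype("float32")
labels = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(images, labels, epochs=3, batch_size=32)

# Convert the trained network into a compact TensorFlow Lite model for deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("defect_classifier.tflite", "wb") as f:
    f.write(converter.convert())

# --- On the edge device (e.g. an SBC attached to the camera): local inference --
interpreter = tf.lite.Interpreter(model_path="defect_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.random.rand(1, 64, 64, 3).astype("float32")  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
probabilities = interpreter.get_tensor(out["index"])[0]
if probabilities[1] > 0.5:
    print("Defective product detected; report back over the network")
```

In a setup along these lines, only the final “defective” verdicts would need to travel over the network; the video frames themselves never leave the edge device.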
One of the challenges of both the Internet of Things and of edge computing is providing an adequate network connection to a vast number of cameras, sensors and other devices. Today, the majority of devices connected wirelessly to a local network communicate directly with a WiFi router. However, an alternative model is to create a mesh network, in which individual nodes dynamically interconnect on an ad-hoc basis to facilitate data exchange. Consider, for example, the placement of moisture and temperature sensors in a large industrial greenhouse. If all of these devices had to have a direct wired or wireless connection, then a lot of infrastructure would need to be put in place. But if the sensors can be connected to edge computing devices that can establish a mesh network, then only one wired or wireless connection to the local network may be required.
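As a rough illustration of the greenhouse scenario, the sketch below shows the kind of filtering an edge gateway might do: polling local sensor nodes and passing a reading upstream only when it has changed significantly. The read_sensor and forward_upstream functions and the 0.5-degree threshold are hypothetical stand-ins, not part of any particular product.

```python
# A hypothetical sketch of an edge gateway for the greenhouse example: it polls
# local temperature nodes and forwards a reading upstream only when the value
# has changed significantly, instead of streaming every sample to the cloud.
# read_sensor(), forward_upstream() and the threshold are illustrative stand-ins.

import random
import time

def read_sensor(node_id: int) -> float:
    """Stand-in for querying a temperature node over the local mesh."""
    return 21.0 + random.uniform(-1.0, 1.0)

def forward_upstream(node_id: int, value: float) -> None:
    """Stand-in for the gateway's single uplink to the wider network."""
    print(f"node {node_id}: reporting {value:.2f} °C upstream")

THRESHOLD = 0.5                  # only report changes larger than this
last_reported: dict[int, float] = {}

for _ in range(10):              # a few polling rounds, for illustration
    for node_id in range(1, 6):  # five sensor nodes reachable via the mesh
        value = read_sensor(node_id)
        if abs(value - last_reported.get(node_id, float("inf"))) > THRESHOLD:
            forward_upstream(node_id, value)
            last_reported[node_id] = value
    time.sleep(0.1)
```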
Edge computing hardware is defined by its location, not its size, and so some edge devices may be very powerful local servers. But this said, a lot of edge computing is destined to take place on small devices, such as single board computers. Here, for example, we have a LattePanda Alpha and a UDOO BOLT, both of which could be deployed to process data at the edge. Other potential edge devices include the Edge-V from Khadas, as we can see here — this has even got “edge” in its name — and it’s got multiple camera connectors, which is very useful for edge applications. And then over here we have a Jetson Nano SoM, or system-on-a-module. This is a particularly interesting single board computer because it’s got a 128 CUDA core GPU, so it’s very good for vision recognition processing at the edge.

Another slightly different and very interesting device is this, the Intel Neural Compute Stick 2, or NCS2. This features a Movidius Myriad X vision processing unit, or VPU, and it’s a development kit for prototyping AI edge applications. If I take off the end here, you’ll see that this is a cap and that this is actually a USB device. The idea is that you can plug it into a single board computer, such as a Raspberry Pi, in order to significantly increase that board’s capability to run edge applications like vision recognition.
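For the NCS2 specifically, inference is typically driven through Intel’s OpenVINO toolkit. The sketch below is a hedged example, assuming an OpenVINO release that still includes the MYRIAD plugin for the Myriad X VPU and a model already converted to OpenVINO’s IR format; the file name classifier.xml and the 224x224 input shape are illustrative.

```python
# A hedged sketch of running vision inference on an NCS2 plugged into a
# Raspberry Pi, assuming an OpenVINO version that ships the MYRIAD plugin and
# a model already converted offline to OpenVINO IR ("classifier.xml"/".bin"
# are illustrative names, as is the 224x224 input shape).

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("classifier.xml")                    # IR produced on a bigger machine
compiled = core.compile_model(model, device_name="MYRIAD")   # target the NCS2's VPU

frame = np.random.rand(1, 3, 224, 224).astype("float32")     # stand-in for a camera frame
results = compiled([frame])[compiled.output(0)]              # inference runs on the stick
print("Predicted class:", int(np.argmax(results)))
```

On a Jetson Nano, the equivalent role would typically be played by its CUDA cores, via frameworks such as TensorRT, rather than a USB-attached VPU.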
The exact definition of edge computing remains a little blurry. This said, all major players agree that it places networked computing resources as close as possible to where data is created. To provide you with some more extensive definitions, IBM notes that “Edge computing is an important emerging paradigm that can expand your operating model by virtualizing your cloud beyond a data center or cloud computing center. Edge computing moves application workloads from a centralized location to remote locations, such as factory floors, warehouses, distribution centers, retail stores, transportation centers, and more”. Similarly, the Open Glossary of Edge Computing from the Linux Foundation defines edge computing as “The delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost and reliability of applications and services. By shortening the distances between devices and the cloud resources that serve them, and also reducing network hops, edge computing mitigates the latency and bandwidth constraints of today’s Internet, ushering in new classes of applications”.

Cisco have also introduced the term “fog computing”, which they describe as “. . . a standard that defines how edge computing should work, and [which] facilitates the operation of compute, storage and networking services between end devices and cloud computing data centers”. What this means is that fog computing refers to resources that lie close to the metaphorical ground, or between the edges of a network and the remote cloud. It may be, for example, that in a factory some edge sensors communicate with local fog resources, which in turn communicate as necessary with a cloud data center. It should be noted that the term “fog computing” is mainly used by Cisco, and is viewed by some as a marketing term rather than a label for an entirely distinct paradigm from edge computing.

Edge computing is emerging for two reasons. The first is the rising pressure on network capacity. The second is our growing demand to obtain a faster and faster response from AI and related applications. As a result, while for a decade we’ve been pushing computing power out to the cloud, increasingly we’re also pushing it in the opposite direction, to the local extremities of our networks.

More information on a wide range of computing developments — including AI, blockchain and quantum computing — can be found here on the ExplainingComputers YouTube channel. But now that’s it for another video. If you’ve enjoyed what you’ve seen here, please press that like button. If you haven’t subscribed, please subscribe. And I hope to talk to you again very soon.

100 thoughts on “Explaining Edge Computing”

  1. The concept of the cloud was driven more by the corporate marketing departments than by their R&D labs. The cloud fits the service provider model that links the customer's wallet to the corporation's bank account via a subscription agreement.
    The fog solution recognises the unacceptable load that universal adoption of directly connected IoT will place on the internet, and appears to be an attempt to salvage some of the cash-generating power of subscription services from the obvious practical problems of sending all that low-level data to remote servers.
    Most engineers who have thought about the problem have concluded that distributed computing is the answer, with only necessary data being moved to the increasingly remote parts of the network. As has been stated in this video, this solves many of the latency problems and limits the data load placed on the internet. Not mentioned here is that, if the standards are well designed, it will increase the resilience of the system by reducing the dependence of local nodes on each other, but also on access to the remote servers. The test is what happens if the internet connection is broken: does the local network continue to operate, and at what level of functionality?
    In defence of the technical advantages of cloud computing, it must be said that the cloud has provided users with access to significant processing power and allowed the development of AI applications that would otherwise be impossible to afford. But the data load produced by mass adoption, along with advances in affordable computer power, will return the location of the processing engines to their rightful place: as local and close to the action as possible.

  2. Fascinating. I foresee a day when thousands or millions of small neural networks send their analyses to other, larger neural networks closer to the cloud, for meta-analysis, which in turn send their data up the chain, in a hierarchy of neural networks, allowing heretofore undreamed of levels of abstraction in the interpretation of data. Which path should we take to the Singularity? All of them!

  3. The only problem I see this solving is autonomous interdependent vehicles (AIVs), which cognitively interoperate to share observations and plans that avoid accidents and speed up throughput.

  4. This technology has been a great equaliser in bringing the quality of Australian takeaway up to Yank and Pom standards. Missed going abroad and having my takeaway be right 90% of the time [as opposed to 65% at best here in straya]. Machine learning + cameras working their magic at Domino's Pizza ( ͡° ͜ʖ ͡°)

  5. Watching computers evolve the way organic biology did is amazing. This is just like a video I watched about "robot skin": there are too many points of data to track, even for a human mind, so a lot of the sensing data is computed by the nerves before a signal is sent, and basically just significant changes are sent to your brain. This is the exact same thing for silicon-based intelligence. Watching the entire planet gain sentience via computing is wild; it's a crazy time to be alive to watch this all happening so fast right in front of us.

    It took people billions of years to reach this point, and computers will surpass it in a hundred. What a massive and exponentially more potent use of matter; it's almost insane to think about.

    This, of course, completely leaves out the ramifications, the philosophy behind this. It's just a fact: computers are becoming more organic, but with the ability to evolve in seconds, not millennia. It's organic life, but with the "edit" mode unlocked.

  6. The back and forth between dumb terminals and clever clients will not stop… it's price vs performance. I don't like dumb terminals and fully centralised systems… anywhere.

  7. This will be perfect for China to keep track of its citizens for its social credit system. I'm sure it will be used in the UK soon, as well as North America eventually!

  8. I'd love a video on what home users can do with a device like the NCS2. For instance, if we run something like openHab, could we utilize video processing or facial recognition at a reasonable cost to improve the quality of home video monitoring?

    Another great video, btw!

  9. The intel NCS and Google Coral allow for training something like a visual recognition system on a raspberry pi. They’re not needed to run a recognition system. The same advantage the Jetson Nano has with its Cuda cores. They’re used for TRAINING, not implementation.

  10. In the 1970s my Community College had "dumb" mechanical teletype terminals connected to another college to run their software programming projects. We didn't call that a "cloud connection" at that time. I favor local control and processing of inputs and programs where it makes sense. Thanks for another great video.

  11. Oh Lord, this is the weirdest video on this channel, as it is completely and utterly about nothing ) it is as abstract as the human mind can go: look, for decades we were trying to be as 'remote' as possible, with remote being both a marketing term and a scientific declaration, and all of a sudden it's 'let's store data locally'? Who's benefitting?)

  12. So, it's back to the local PC/Server with Internet model, got it. Oh except it's called "Edge Computing" now, that's so progressive.
    One note, I'm glad you don't use "leverage and leveraged" among all the necessary buzz words.

  13. I would totally use a Digital Assistant that sounded like Chris.

    It could even be called Christopher.

    Or just an EC how to video on making Google Assistant sound like him.

  14. Hi, I have a request: can you make a video on Deepin OS, please? The link below shows that it is better, but I want your expertise and advice on this.
    https://www.forbes.com/sites/jasonevangelho/2018/12/10/meet-the-linux-desktop-that-blows-away-windows-10-and-macos/amp/

  15. Good idea..have all your data stored on somebody else's hardware then have all your appliances hooked into the interweb so {{{they}}} can spy on you

  16. Given the increasing processing power in phone and tablet systems-on-chips, most notably the newest iPhones and iPad Pro, would you expect to see edge-like features move over to these devices as well? This would allow for lower latency, and I could see a new system where there is localized learning.
    And do you think that neural networks or chips for these will be built into Intel and AMD chipsets in the near future? It seems to me that the PC is being left behind.

    PS. There are already implementations for Active Directory in Windows Server 2016 and 2019 which create an edge-like paradigm. The Active Directory for managing all of the computers, devices, and accounts is centralized in a cloud-based server, with the local servers being copies. In some cases these local servers can accept and update data, but in others they are simple copies. Either way, there's regular synchronisation at set times.

  17. 9:00
    Is "cloud" not still essentially a marketing term?
    I never stopped seeing it that way. Just meaning "web service," be it storage, processing, hosting, software, etc.
    It's humorous thinking companies decided to make the "thing" in their IoT less useless to save on bandwidth costs.
    Edge computing as a concept is somewhat understandable, but to the end-user it may as well be a device that half-works on its own but is still dependent on the net and phoning home, at least with the AI example given.

  18. Another great video of just the right length. No waffle, no banter, just the information you need to understand the concept.

    It makes me wonder, is this process cyclic? 50 years ago compute power was expensive, so processing was centralised with mainframes and dumb terminals. Then PCs changed the landscape with cheap processing and we de-centralised. Then we upped the amount of processing we needed to handle the vast amounts of data we are now collecting, so the Cloud was born and we centralised again. Now we've got dedicated devices like VPUs and pre-trained neural nets that can offload the processing, and a limited resource of bandwidth, so we're decentralising again. I wonder, if we "fix" the bandwidth problem, will we centralise again …😀

  19. way to make the video 10 minutes bro. this 10 minute thing is getting ridiculous and makes me not want to watch YouTube anymore.

  20. It's obvious that computers can be related to each other and can take notice of what happens in the environment. Thanks for showing, and kind regards.

  21. Thanks for another excellent tutorial. I could never work out how training your Raspberry Pi to tell the difference between an orange and a banana was going to change the world.

  22. Video example – but one reason for storing CCTV data live in the cloud is also so robbers cannot alter the data. That's why people use YouTube Live for protection.

  23. In the future, you'll no longer have a name. You'll just be a numbered data node and valued only for how many unique data sets you can provide.

  24. So edge computing is them realizing, again, that local compute is better than remote server compute. It basically is a cycle, and we are on the side of the cycle that is moving back to local compute from remote compute.
    I agree edge computing is important and interesting, in that the data sets needed for some of this are too large to sit on the local device. Having to go to the cloud many times is a bad solution, so I am happy a solution has been found for that. The data set has simply grown too big for several applications of use, mostly recognition software that has to ID someone with a name instead of giving a true or false answer.
    Factory products can be programmed into a central server and then compared against however many times needed for factory QA, but facial recognition is a no-go for purely local processing.
    Temperatures can be handled almost entirely locally, with one server connecting to up to 254 nodes … or more, if you want to use multiple subnet address spaces and the software allows for that.

  25. You have to laugh at the definitions of Edge Computing given by IBM and the Linux Foundation from 7:23 onwards. Basically, what they're saying is: 'we sold you the idea of Cloud Computing, and like a fool you bought it. But you've maxed out your bandwidth doing it – and we've reached the limit of what we can take, store and process in the Cloud (i.e. our Datacentres). So now we're pushing the storage and processing workload back to you (you know, like you did in the first place) and we're going to charge you for the privilege by selling you a load of new devices to do it with'. And – yet again – we'll fall for it.

  26. I believe we will soon have truly distributed computing in the sense that all devices will be mesh networked and process shared workloads. Small jobs may only use one system or a small group of systems, while larger jobs will spread out further, like SETI. The big cloud companies will have to make some changes in their business model to continue being profitable.

  27. Never understood this categorization. In the very beginning, there were computers. Then someone thought about using one computer from different places simultaneously, and made terminals. Then computers became cheaper, so everyone got their own capable computers. Then stuff happened and we needed giant server clusters (at the end of the day, the "cloud" is just that, servers) for computing, rather than the flimsy hardware in smartphones and IoT devices. Now some asshat coins the term "edge computing", because we are not using someone else's computing power from afar, instead we use powerful hardware onsite for this. See the pattern?

  28. Seems a very nebulous and foggy buzzword which amounts to computing on one's own network, not someone else's cloud, with AI, perhaps. Nothing to see here, move along.

  29. So, let me get this straight.
    We used to NOT have a cloud and do everything locally, then they sold us the cloud as the best thing ever and now they want us to move again to the old ways with a fancy new name and a price markup, I assume.

  30. Many have commented on the "Circle of Computing" that seems to happen, where we centralize and then decentralize the stuff doing the number crunching. While there are marketing forces trying to capitalize on the trend, I think it's the result of a natural feedback loop.

    We have a range of possible solutions.

    At one extreme all computing and data is done/kept locally. At the other extreme it is processed and stored centrally. There are various technological and economic constraints.

    At one point the only computers were big and expensive. This led to a centralized solution. Along came relatively cheap minicomputers, local to a single building or floor, which helped to mitigate the expensive telecommunications costs and the delays caused by centralized development teams.

    Then came inexpensive PCs and LANs. Computing/data was pushed out onto individual desktops. Time rolls on and, lo and behold, it turns out that all those highly specialized and trained people who had been talking about keeping backups, properly describing your requirements and other "we don't need to do that but actually do" stuff were actually right.

    The costs of telecommunications went down, and there is a problem with local design teams only thinking of their local problems, leading to individual silos creating multiple incompatible solutions for the same problem, none of which easily talk to each other nor give the higher levels of the organization what they need. The pendulum swings and we are back to a more centralized solution with the newly named cloud, which looks an awful lot like the networked mainframe computing that has been chugging along with big iron for several score years.

    We are now starting to see that solution giving way, since the hammer (central computing) requires a larger and larger number of nails (communications speed and volume).

    Basically, either solution has pluses and minuses, a yin and yang if you will. If one is the "solution du jour" it gets overused and the other starts to look better, hence the feedback loop comment.

  31. Edge computing is a logical step.
    Simplest way to unburden the internet.
    NN, ML and huge amounts of data…, but also powerful SBCs, faster machines and greater storage, make edge computing an expected step.
    But more importantly, there are many more possibilities to experiment with, at an individual level, affordable technology with real power.
    Citius, altius, fortius.

    Kind of 'reversing' to before the cloud computing era. 🤔

    Excellent video! 👌🏻✨
    Cheerio Chris! 👋🏻

  32. so it's moving cloud computing away from the cloud…like we used to do it before the cloud. So, like we knew the whole time, the cloud is useless. Thanks.

  33. I think edge computing could be very helpful for individuals, and not just for enterprise users. I believe it was in the comments of one of your videos – or perhaps the video itself – that someone talked about the notion of a databox that would use a pretrained neural network to take in all the data from devices in your home and either make inferences by itself or convert that data to something more acceptable to send to a larger datacentre. The reasoning behind it is that the company that makes your smart teapot doesn't need to know the Unix timestamp of when you put the kettle on if it only wants to know how many cups of tea you have in a day, so the databox would do all the logging and send a revised average of cups of tea per day to the datacentre at the end of each week.

    The cool thing about it is that your entire house could automate a lot of things while being offline; e.g. your smart shower would tell the databox that it's just been turned on, and the databox would infer that you don't want to step out of the warm shower into a cold room, so it turns the heating up a little. The databox also uses information from your smart security system to know what your usual route is after you've taken a shower, and draws your smart blinds so people can't see you walking to the bedroom, and all of that without ever touching a computer that's not in your house, or sending data to a device that doesn't need to know.

  34. What came to my mind was my highschool network server. It connected all the computers together so you can log onto your account on any computer in the school.

  35. Your explanation of cloud and edge computing reminds me of the old client server model that evolved into cooperative computing where the client PC would process the data before transmitting to the server. Lots of new terminology.

  36. so, like a saturation of devices that are in between each other in the connections…? sort of more computers to fill in the gaps in the world where there aren't any computers or networks, creates a stronger network? it makes sense, like the spider spinning more layers into the web 🙂

  37. I believe edge computing is just a reference to explicit data graph execution (EDGE). I'm not sure if the edge computing brand actually implements processors that are built with the EDGE instruction set and a corresponding data-flow processor, but I think it's along that vein of thinking. Maybe it's simulated/virtualized? Anyway, that's my take after having researched the TRIPS architecture and the data-flow model.

  38. What is old is new again. The '50s, '60s, and '70s were a time of nothing but cloud computing — you had large mainframe computers (the 'cloud' of the day) attached to dumb terminals so people could access these computing resources.

    Then the 1980s and 1990s came along and the PC revolution happened — people had their own PCs that weren't connected to any networks at all, or to a very basic file-sharing system at most. All computing was done locally.

    Then the '00s and '10s — the cloud came back in the form of the Internet. People still did local computing, of course, but offloaded more and more of their computing and storage to the cloud, as our devices became smaller and, interestingly enough, less powerful, because we traded power for mobility.

    Now, as we reach the '20s and beyond, we go back to "local" computing resources, pulling away from the cloud and doing more computing on our own hardware — relying less and less on remote hardware.

    It's fun to watch the pendulum swing back and forth.

  39. An interesting video thanks Chris. Are you planning on showing some actual edge computing demonstrations on some SBCs? I'd like to see some more.

  40. This sounds basically like a thin client but with AI. I'm assuming it's more to do with how Edge systems interface with cloud and local.

    The latency thing interests me from a cloud gaming point of view. Google's bullsh*t 'negative latency' is guaranteed to lead Stadia to flop with hardcore gamers, but the option of AI that can handle aspects locally wouldn't be as much of an issue in bridging the gap between full cloud gaming (which will always suffer latency) and local processing (which will always be limited to the hardware available).

    In this way you can make use of say, powerful local gaming hardware from a graphical standpoint, but use cloud servers to offload AI-centric aspects through an Edge type interface…

  41. Nothing has changed in the last 50 years: It's the continuing battle of those who want to centralize computing under their control (mainframes) and those who want to decentralize computing (personal computers).

  42. Thanks, Christopher! Watching the programs you make, in particular, the ones on your AMD build, gave me the confidence to build my own AMD system. I love learning from you, please keep teaching.

  43. Proud Nottingham Business School alumnus. I remember your lectures on cloud computing eons before it was mainstream.

  44. Hi Chris, nice video. Currently, I feel that Edge, Cloud and Fog computing are not in my future. Interesting video though.

    On a second note, going back to the SCSI situation I mentioned to you before, I had a thought that I want to share with you. What would you think of slapping a SCSI card (I'm told that they still exist) in a desktop and then networking it, so that you could park it in your basement or closet and talk to it via your Pi? Let me know what you think. Cheers.

  45. If the technological singularity is not implemented in our phones themselves, we will lose the future to evil giants like Google. We need a distributed AI, not a single point of machine intelligence.
