On The Edge: Technologies Driving Edge Computing And Challenges Ahead

March 11, 2019

Weisong Shi leads both the Mobile and Internet SysTems Laboratory (MIST) and the Wireless Health Initiative (WHI) at Wayne State University. He’s interested in edge computing applications in a host of industries, particularly autonomous vehicles, though he sees a great deal of opportunity in healthcare applications as well.

Edge computing, he says, refers to the enabling technologies that allow computation to be performed at the edge of a network, on data moving both into and out of the cloud. The edge is a continuum: it includes any computing and network resources along the path between the data sources and the cloud data centers.

But just where that edge is—that’s the tricky part. You can’t just ask, “Where is the edge?” Shi explains. It depends on the application. For example, a smartphone might be the centralized point for a chronically ill patient who is combining medical records, biosensor data, and patient-reported outcomes. “The smartphone is a centralized point, connecting all this information and then talking to the cloud directly,” he explains. For an autonomous vehicle, roadside units—which take input from the car and send back warnings—might be the edge.

On behalf of Bio-IT World, Hannah Loss spoke with Shi about the technologies supporting edge computing and the challenges ahead of us.

Editor’s note: Loss, a conference producer at Cambridge Healthtech Institute, is planning a track dedicated to Edge at the upcoming Bio-IT World Conference & Expo in Boston, April 16-18. Shi is a featured speaker on the program, discussing how edge computing addresses response time requirements, battery life constraints, bandwidth cost savings, and data safety and privacy. He will also explore how edge and artificial intelligence will interact in the next 5-10 years. Their conversation has been edited for length and clarity.

Bio-IT World: What technology supports edge computing?

Weisong Shi: In the last decade, at least seven related technologies have been developed that enable edge computing. Widely deployed networking techniques such as 4G, 5G, and LTE communication are very important for edge computing, as are isolation techniques.

For example, there are a lot of servers on the cloud. You can isolate a portion of that cloud to support different types of applications. Edge computing definitely needs isolation: when managing an edge server, you have multiple players—stakeholders—each with something running there. You need isolation between them.
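In practice, this kind of isolation is often achieved with lightweight containers. As a rough, hypothetical sketch (the app name and base image are illustrative, not from the interview), each stakeholder’s workload on a shared edge server could be packaged separately:

```dockerfile
# Hypothetical: one stakeholder's analytics workload, packaged so it can
# run on a shared edge server without interfering with other tenants.
FROM python:3.11-slim
COPY analytics_app/ /opt/analytics_app/
RUN pip install --no-cache-dir -r /opt/analytics_app/requirements.txt
CMD ["python", "/opt/analytics_app/main.py"]
```

Launching such a container with resource caps, e.g. `docker run --cpus=1 --memory=512m`, keeps one tenant from starving the others on the same box.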

The advance of computer architecture in the last three to five years has led us to a third technique: the accelerator. Hardware AI accelerators are enabling the edge, putting more and more intelligence there; this is what makes edge devices really useful.

The fourth technology is the operating system of the edge. You need an operating system here to manage different resources at the edge. In addition to managing the resources at the edge, an edge operating system also needs to interact with the cloud and a massive number of IoT devices.

In addition to an operating system, you also need an execution framework at the edge. AI packages such as PyTorch from Facebook and TensorFlow from Google were designed to run on the cloud, where you have very powerful machines. Most recently, both have come out with lite versions, so that users can easily download them, deploy models at the edge, and focus on business intelligence.

The sixth technique is security and privacy. The edge is the ideal place to handle privacy, since some data should never leave its physical location. It is a natural place to apply privacy-preserving mechanisms and to run security apps.
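As a toy illustration of a privacy-preserving mechanism that could run at the edge, the sketch below adds Laplace noise to a sensor reading before upload, in the style of local differential privacy. The function names and parameters are illustrative, not from the interview:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-transform sampling for Laplace(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(reading: float, sensitivity: float, epsilon: float) -> float:
    # Add noise calibrated to sensitivity / epsilon on the device, so the
    # raw value never crosses the network in the clear.
    return reading + laplace_noise(sensitivity / epsilon)
```

For example, `privatize(72.0, sensitivity=1.0, epsilon=0.5)` would upload the reading plus Laplace noise of scale 2; averaged over many readings the signal survives, but any single raw value stays on the device.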

While the last technique could be related to the execution framework, I would like to address it separately: the data processing platform. A large amount of data will be generated at the edge, and managing that data itself becomes a challenge. In the pre-edge-computing era, people collected data and sent it to the cloud. When working on connected vehicles, you have all kinds of data, such as camera data, LiDAR [Light Detection and Ranging] data, and driver behavior data. The data processing platform for it is missing. One of our ongoing projects is building an open data processing platform for future vehicles.

Those are the seven technologies needed for edge computing.

It's interesting that the human aspect, or the user aspect, is really a challenge. What challenges do you see in your own work and in the field of edge computing overall?

Edge computing is really hitting the market in 2019, not just in academia but also in industry. We’ve come a long way in the last several years. People are realizing, "Okay. Edge computing is here. It's important." But there are still some challenges.

The first is “what is the programming model for edge computing?” In one sense, cloud computing is fairly mature today: if you want to write code, you have many existing programming models. In edge computing, it's application-specific. We have different application scenarios. We need a good programming model so that we can tell industry practitioners or graduate students, "You can download this and write code for the edge." Different companies are trying to do this—Apache and Microsoft both have edge initiatives. They are coming out with something, but right now they are still sticking with their own products. A general programming model is still missing.

The second challenge is how to select software and hardware for a particular application scenario. Like we discussed earlier, the edge itself is not fixed. For example, people want to use edge computing within the health domain, but it really depends on the scenario. If you are looking at chronic illness management, the edge is probably the gateway to use, as there is a lot of information you need to collect. What kind of hardware is the best fit for these purposes? What about communication, cost, and so on?

The third challenge is benchmarking, which functions as a standard. In systems research in computer science, we determine a suite of benchmarks so that we can compare performance: which approach is good and which is bad. This ends up being related to the second challenge—application management. How do you select the best application? We need benchmarks for specific areas, so we can say, "Look, if you do well on these benchmarks, then your product is good." Right now, we don't have that. Everybody is trying to claim they are the best, but we don't have any standards to compare against.

The fourth challenge is dynamic scheduling between the cloud and the edge. How do you partition applications? In other words, which part is going to run on the edge? Which part is going to run in the cloud? Can this be dynamic? Networking conditions change, so given the scenario, you probably need to make these choices in real time. Right now those decisions are mostly ad hoc: many people make them manually, deciding which part runs where. In an ideal case, the system should be able to adapt automatically.
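A minimal sketch of such a placement decision, assuming we can estimate the task size, device speeds, and uplink bandwidth (all names and numbers here are illustrative, not Shi’s system):

```python
def place_task(input_bytes: int, task_flops: float, edge_flops: float,
               cloud_flops: float, uplink_bps: float) -> str:
    # Compare the estimated end-to-end latency of running the task locally
    # versus shipping its input to the cloud and running it there.
    edge_latency = task_flops / edge_flops
    cloud_latency = input_bytes * 8 / uplink_bps + task_flops / cloud_flops
    return "edge" if edge_latency <= cloud_latency else "cloud"
```

A real scheduler would re-evaluate this continuously as bandwidth and load change, which is exactly the dynamic, automatic adaptation Shi describes; this sketch makes a single static estimate.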

I think that the fifth challenge is vertical application domains. For example, if you are working on connected autonomous vehicles, you need to be working with domain experts. I recently heard that of the cameras on the market right now, only one or two vendors’ cameras are automobile-grade. Many of the LiDAR [Light Detection and Ranging] products on the market are not safe or reliable enough to be used in real vehicles. That is what working with vertical applications involves. If you want to use edge in the health domain, you need to consider GDPR and other requirements before this can really work in real life. The technology people really need to sit down with the application people so that there is a better chance to understand each other. This is really a challenge.

The last challenge is not necessarily technical or technology-driven: Can edge computing really help to make money? Who’s going to be willing to deploy this? Companies—I have colleagues in some of them—have to see a very clear path. They must be able to make more revenue, otherwise they are not willing to invest in the edge. In the U.S., AT&T and Verizon are talking about deploying some edge gateways now, but there is still a long way to go.

Which applications are best supported by edge?

I think that in order to determine if a technology is successful, you need to answer the question, “Do you really have killer applications?” In the last several years, killer applications have been developing very well. Health is a huge market for the edge. For example, edge computing can help a lot with chronic illness management. With edge, you can record sensor health data at home. The Amazon Echo system has an SDK [software development kit], so you can read information from it and then do some computing, such as fall detection, or monitor whether people are getting enough activity at home. I think this could be very useful, with edge putting more and more AI intelligence in homes.
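As one sketch of the kind of on-device computation he mentions, a naive fall detector can threshold accelerometer magnitude. Real products use learned models; the threshold and data layout here are illustrative assumptions:

```python
import math

def detect_fall(samples, threshold_g=2.5):
    # samples: (x, y, z) accelerometer readings in units of g.
    # Return the index of the first sample whose total magnitude exceeds
    # the threshold (a crude impact signature), or -1 if none does.
    for i, (x, y, z) in enumerate(samples):
        if math.sqrt(x * x + y * y + z * z) >= threshold_g:
            return i
    return -1
```

The point of running this at the edge is that raw motion data never needs to leave the home; only an alert does.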

A second type of application I see as very important is virtual reality. Virtual reality has huge computing requirements. Last month, Microsoft released HoloLens version two. Imagine HoloLens-style glasses, where you have a lot of computing requirements on the device itself. Ideally the headset is going to work with edge servers to do its computing quickly, enabling a lot of potential applications. Remote surgery is a perfect example. This was a long-time dream, right? Now, with virtual reality becoming much more mature, you can do things even more realistically. Colleagues at Wayne State University have used virtual reality for patient recovery and for kids who have autism. This is a very good application for edge computing in the health domain.

Another related application is what I call the collaborating edge. Right now, in many hospitals, there are hundreds of systems with a lot of data. For example, Henry Ford Health System has six hospitals in the metro Detroit area, with data stored at each of them. One of my students, who works at Henry Ford, is using machine learning on medical images to detect prostate cancer. You can then use collaborative machine learning, sometimes called federated machine learning, to use related data from multiple sites. This is an example where you have to compute at the edge, because the data is never going to leave the hospital.
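The aggregation step of such collaborative (federated) learning can be sketched in a few lines. This is a FedAvg-style weighted average over per-site model weights; the function name and flat-list representation are illustrative, not from Shi’s project:

```python
def federated_average(site_weights, site_sizes):
    # site_weights: one flat list of model parameters per hospital site.
    # site_sizes: number of local training examples at each site.
    # Only these weights cross the network; the raw images stay on site.
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(n_params)
    ]
```

Each site trains locally for a round, sends its weights, receives the average back, and repeats, so the pooled model benefits from all six hospitals without any patient image leaving its hospital.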

The last application related to health that I think could be useful is real-time EMS. We have been working with the Detroit Fire Department in this regard, and I think it could revolutionize how EMS works. Today, most EMS services use advanced life support (ALS) or basic life support (BLS). In short, when someone calls 9-1-1, paramedics go there, perform basic support, and move the patient to the hospital. This usually takes half an hour, with not much being done. We can put an edge server on the ambulance. For example, you can take a short video, or even simply an image of the patient, and immediately send it back to the hospital ER. That way the doctor can see what is really happening and can help. This can be very useful for EMS staff. In addition, once you put the patient in the ambulance, you can continuously monitor them using streaming video. People watching at the hospital can be better prepared before the patient reaches the ER.