Differentiated Granularity Support for Service and Data Handoff in Fog/MEC 5G Scenarios: Tradeoffs, Lessons Learnt, and Experimental Validation

Paolo Bellavista, Antonio Corradi, Luca Foschini, Domenico Scotece

The Multi-access Edge Computing (MEC) and Fog Computing paradigms enable the opportunity to have middle-boxes, either statically or dynamically deployed at suitable network edges, acting as local proxies, with virtualized resources, for supporting and enhancing service provisioning in edge localities. Among other advantages, this makes possible better scalability and better reactivity in the interaction with mobile nodes, whenever local control decisions and actuation operations are applicable. One crucial and open technical challenge in this new, emerging, and fully-integrated 5G scenario is how to ensure that mobile users always receive the requested performance (quality of available resources/services) independently of their runtime mobility across different edge localities. Novel approaches and frameworks for performance-aware data/service migration across edge nodes are core enablers to achieve this goal. In this lively area, this paper proposes an original framework for migrating data/services with different granularity levels (whole service plus its data, service and data separately, only the needed and complementary data layers) across resource-limited edge nodes that host virtualized resources. Innovative elements of technical contribution of this framework, compared with the existing literature, include i) the possibility to select either an application-agnostic or an application-aware approach, ii) the possibility to choose either more traditional virtualization techniques or container-based ones, and iii) in-the-field experimental results about the performance achieved by our solution over rapidly deployable environments with resource-limited edge nodes such as Raspberry Pi devices.

Project

5G networks will be the first generation to benefit from location information that is sufficiently precise to be leveraged in wireless network design and optimization, as well as for local service execution and better scalability.

Figure 1 depicts a typical and basic example of service/data handoff management in MEC/fog-enabled 5G networks: user1 is initially connected to edge node1 and has some of her service/data components hosted on virtualized resources on it; the single-hop local connection between user1 and edge node1 favors low latency and better scalability through localized provisioning. However, after some time and during service provisioning, user1 may move to a location that has direct connectivity to edge node2 (node2 locality). In several application cases, it can be beneficial to migrate user1’s service/data components from edge node1 to edge node2, in order to continue to serve user1 with virtualized resources in her current locality.

Figure 1. High-level overview of MEC/fog edge-enabled architecture for handoff management.

In this project, handoff-related migration can involve service components, data components, or both. In our proposal, we distinguish between so-called layered services and monolithic services. Layered services consist of various layers, such as service and data parts, that can be managed as separate blocks; in contrast, monolithic services must be handled by our framework as one single block, which internally includes service components, data components, and all the associated resources.
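The layered-versus-monolithic distinction above can be sketched as a minimal data model; the class and field names below are illustrative assumptions, not the framework's actual types:

```python
# Minimal sketch: a layered service exposes separately migratable blocks,
# while a monolithic service is migrated as one single block.
from dataclasses import dataclass, field


@dataclass
class LayeredService:
    service_layer: str            # e.g., image of the application logic
    data_sw_layer: str            # e.g., image of the data-management software
    data_state: dict = field(default_factory=dict)  # runtime data state

    def migratable_blocks(self):
        # Each block can be migrated (and started) independently.
        return ["service_layer", "data_sw_layer", "data_state"]


@dataclass
class MonolithicService:
    vm_image: str                 # service, data, and resources in one VM image

    def migratable_blocks(self):
        # Only one block: the whole VM must move together.
        return ["vm_image"]
```

This is why layered services pair naturally with container-based techniques (per-layer migration) while monolithic ones map to VM migration.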

Our framework enables handoff management for both monolithic and layered services, by suggesting the integration with VM techniques for the former case and by integrating with Docker-based containerization for the latter, with significant advantages in the case of deployment constraints associated with resource-limited edge nodes.

Protocol

We defined three different protocols: basic protocol (reactive application-agnostic handoff), proactive application-agnostic handoff, and proactive application-aware handoff procedure.

Reactive Handoff Procedure

Typically, the reactive handoff procedure starts when the mobile node loses connection with its old edge node and sends a handoff request message to the target edge (step 1). Upon reception of the handoff request message (step 2), the old edge starts the migration of the needed service/data software layers towards the target edge (steps 3 and 5) and subsequently the target carries out the service/data software startup (steps 4 and 6). Then, the old edge prepares the data backup and sends it to the target (steps 7-8). Finally, when the target edge receives the data, it performs the data startup (step 9) to restore the user's application session, and the handoff procedure ends (step 10).
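The step sequence above can be sketched as follows; the `Edge` class, its in-memory layer store, and the method names are illustrative assumptions, not the framework's actual API:

```python
# Sketch of the reactive handoff steps 1-10: everything happens only after
# the mobile node has already lost its old edge, hence the interruption.

class Edge:
    def __init__(self, name):
        self.name = name
        self.layers = {}         # software layers installed on this edge
        self.running = set()     # layers/parts that have been started
        self.data_backup = None  # user's data state, once received

    def install(self, kind, payload):   # receive a migrated software layer
        self.layers[kind] = payload

    def start(self, kind):              # start a received layer or the data part
        self.running.add(kind)


def reactive_handoff(old, target):
    # Steps 1-2: mobile node lost the old edge and asked the target for handoff.
    # Steps 3-4: migrate and start the service software layer.
    target.install("service", old.layers["service"])
    target.start("service")
    # Steps 5-6: migrate and start the data software layer.
    target.install("data_sw", old.layers["data_sw"])
    target.start("data_sw")
    # Steps 7-8: old edge prepares the data backup and sends it.
    target.data_backup = dict(old.data_backup)
    # Steps 9-10: target restores the user's session; handoff completes.
    target.start("data_state")
    return "handoff_completed"
```

Because all migrations start only at step 1, the full chain contributes to the service interruption that the proactive variants below try to shorten.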

Figure 2. Baseline reactive handoff procedure.

Let us observe that Figure 2 describes the basic handoff protocol (reactive handoff), which entails a service interruption (from step 1 to step 10) similar to that typically suffered by monolithic services (where data and service parts are usually both contained in the same VM). Note that, for the above reasons, we also implemented this protocol as a baseline for comparison in our experimental evaluation (see below).

Of course, our framework would also allow optimizing this reactive handoff management, e.g., by leveraging service/data software layering to avoid migrations (if the needed layers are already available at the target edge) and, similarly, by applying application-aware data management where possible.

Proactive application-agnostic Handoff Procedure

Figure 3 shows the proactive application-agnostic handoff procedure, which leverages both long- and short-term predictions to achieve the maximum possible proactivity; this procedure has proven suitable also for long-term handoff management operations.

Figure 3. Proactive application-agnostic handoff.

The proactive application-agnostic handoff procedure begins when our history-based mobility prediction module triggers (steps 1-2) the proactive migration of the service (steps 3-4) and data software (steps 5-6) layers. In the application-agnostic case, we treat the entire data state as a black box, with no possibility to isolate only the parts that have changed recently (as in the application-aware approach). Therefore, steps 5-6 send only the data software layer, while we postpone the request for data backup migration until the mobile node loses connection with the old edge, so as to receive a consistent data state that includes all changes made at the old edge node (step 7). Accordingly, the old edge prepares the data for the migration step (step 8) and, once data migration is completed (step 9), the target edge restarts the data part with the received user's data backup (step 10) and sends the handoff-completed signal to the mobile node (step 11).

Note that this form of application-agnostic proactive handoff reduces the service interruption time compared to reactive handoff and can be performed without any application-specific knowledge or requirements.
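The procedure splits naturally into a predicted phase and a disconnect-time phase, which can be sketched as below; the function names and dict-based edge state are illustrative assumptions:

```python
# Sketch of the proactive application-agnostic protocol as two phases.

def proactive_phase(old_layers, target):
    """Steps 1-6: on a long-term mobility prediction, pre-stage the service
    and data *software* layers on the target edge; the data *state* is a
    black box here and cannot be sent incrementally, so it stays behind."""
    for kind in ("service", "data_sw"):
        target[kind] = old_layers[kind]
    return target


def disconnect_phase(old_data_state, target):
    """Steps 7-11: only when the mobile node loses the old edge is the data
    state backed up and migrated, guaranteeing all late changes are included;
    only this (hopefully short) phase contributes to service interruption."""
    target["data_state"] = dict(old_data_state)  # final, consistent backup
    return "handoff_completed"
```

The interruption shrinks because the (typically large) software layers are already in place when the disconnection occurs.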

Proactive application-aware Handoff Procedure

Figure 4 shows how our optimized proactive application-aware handoff procedure can further reduce this service interruption time by leveraging the plug-in module, which contains application-specific mechanisms to create and move the proper (latest) data changes (only those data chunks changed in the period of interest).

Figure 4. Proactive application-aware handoff.

As in the previous case, our handoff management procedure starts when our mobility prediction algorithm emits a long-term user mobility prediction and the target edge requests service/data migration from the old edge (steps 1-2). Let us note that, compared to application-agnostic handoff, application-aware handoff allows us to proactively move not only the service and data software layers, but also the whole data state part. Thus, the old edge migrates the service (steps 3-4) and the data parts (steps 5-7), where the data is a snapshot of the current data part (both software and state); in this time interval, the mobile node continues to be provisioned through the old edge. Then, our procedure introduces a periodic data reconciliation phase, triggered by fine-grained mobility prediction (steps 8-9'), to reduce the service interruption interval by limiting data state migration to those data chunks that align the initially migrated data state snapshot with the latest modifications at the old edge. This periodic data injection phase (managed by our framework through its application-aware plug-in) terminates when the mobile node loses connection with the old edge: our protocol guarantees data consistency by sending a final data chunk update from the old to the target edge, which completes the whole handoff procedure (steps 9''-10).
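The core of the reconciliation phase is computing and applying the delta between the migrated snapshot and the current state at the old edge. A minimal sketch, assuming a chunk-keyed dictionary as a stand-in for the plug-in's data model:

```python
# Sketch of the periodic data reconciliation (steps 8-9'): only the chunks
# modified since the initially migrated snapshot are re-sent to the target.

def changed_chunks(snapshot, current):
    """Return only the chunks that differ from the migrated snapshot
    (new chunks or chunks whose content changed at the old edge)."""
    return {k: v for k, v in current.items()
            if k not in snapshot or snapshot[k] != v}


def reconcile(target_state, delta):
    """Apply a delta on the target edge, aligning its copy of the data
    state with the latest modifications at the old edge."""
    target_state.update(delta)
    return target_state
```

The final update sent at disconnection time (steps 9''-10) is just one last `changed_chunks`/`reconcile` round, which is why only that small delta contributes to the interruption.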

Framework

Figure 5 shows the framework architecture, which we organized in two layers: the edge layer and the device layer. The framework consists of a set of components that are deployed at the two layers and enable our differentiated granularity handoff process.

Figure 5. Overall architecture of our framework.

APIs. This component offers a set of common APIs that enable the interactions of our handoff protocol among all the involved distributed entities at the two layers. It exposes several methods and features related to our migration-enabled edge handoff procedure, such as handoff request, start, stop, and so on.
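The kind of interface this component could expose can be sketched as below; the class, method signatures, and in-memory session store are assumptions for illustration, not the framework's actual API:

```python
# Hedged sketch of a handoff API surface between distributed entities,
# with a trivial in-memory implementation for illustration.
from abc import ABC, abstractmethod


class HandoffAPI(ABC):
    @abstractmethod
    def handoff_request(self, user_id: str, source_edge: str) -> bool: ...

    @abstractmethod
    def start(self, user_id: str) -> None: ...

    @abstractmethod
    def stop(self, user_id: str) -> None: ...


class InMemoryHandoffAPI(HandoffAPI):
    def __init__(self):
        self.sessions = {}  # user_id -> handoff session metadata

    def handoff_request(self, user_id, source_edge):
        # Register an incoming handoff request from the source edge.
        self.sessions[user_id] = {"from": source_edge, "state": "requested"}
        return True

    def start(self, user_id):
        self.sessions[user_id]["state"] = "running"

    def stop(self, user_id):
        self.sessions[user_id]["state"] = "stopped"
```

Keeping the API abstract lets the same protocol run over different transports between edge nodes.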

Prediction. The goal of this component is to predict an upcoming change of edge node due to user mobility. As outlined in the previous section, we claim the importance of distinguishing two kinds of mobility prediction: a coarse-grained history-based mobility prediction and a fine-grained RSSI-based mobility prediction. The first is based on the history of user movements: edge nodes, possibly coordinating with the mobile node (to gather mobility traces, such as GPS positions) and the global cloud layer (to process those traces), track user movement to enable long-term predictions of user mobility habits, e.g., during working days, during weekends, and so forth. When this coarse-grained mobility prediction component is available, it is possible to proactively execute long-running operations such as the migration of (static) service parts towards the target edge. The second, namely fine-grained mobility prediction, evaluates handoff decisions by using the edge-to-mobile RSSI values monitored through heterogeneous short-range wireless technologies, such as Wi-Fi and Bluetooth. As widely recognized in the literature, this kind of prediction is expensive and typically works on shorter time intervals, but gives more accurate information about when to trigger the handoff and, consequently, the migration of more dynamic data parts. Finally, the availability of both prediction modes enables higher flexibility for handoff management.
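A minimal sketch of how the fine-grained RSSI-based trigger could work, assuming exponential smoothing and a hysteresis margin (the smoothing factor and threshold values are illustrative assumptions):

```python
# Illustrative fine-grained trigger: suggest handoff when the smoothed RSSI
# toward a candidate edge exceeds that of the current edge by a hysteresis
# margin, to damp fading noise and avoid ping-pong handoffs.

def smooth(samples, alpha=0.5):
    """Exponentially smooth raw RSSI samples (dBm, most recent last)."""
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est
    return est


def should_handoff(current_rssi, candidate_rssi, hysteresis=5.0):
    """Trigger only if the candidate edge is clearly stronger (in dB)."""
    return smooth(candidate_rssi) > smooth(current_rssi) + hysteresis
```

This short-interval signal is what triggers the migration of the more dynamic data parts in the protocols above, while the history-based predictor triggers the long-running ones.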

Plug-in. The goal of this component is to enable application-aware handoff by embedding the application-specific knowledge needed to manage finer-grained data migrations. In other words, our framework leverages the plug-in to control and perform application-specific operations, namely, to extract the data on one edge node and to restore the same data on the new target edge.
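The extract/restore contract can be sketched as a small interface; the class names, the version-tagged key-value example, and the `since` parameter are illustrative assumptions:

```python
# Sketch of an application-aware plug-in contract: the framework calls
# extract() on the old edge and restore() on the target edge.

class DataPlugin:
    def extract(self, since):
        """Export only the application data changed after version `since`."""
        raise NotImplementedError

    def restore(self, payload):
        """Re-inject an extracted payload into the target edge's data part."""
        raise NotImplementedError


class KVStorePlugin(DataPlugin):
    """Example plug-in for a versioned key-value store: {key: (value, version)}."""

    def __init__(self, store):
        self.store = store

    def extract(self, since):
        return {k: v for k, (v, ver) in self.store.items() if ver > since}

    def restore(self, payload):
        for k, v in payload.items():
            self.store[k] = (v, 0)  # restored entries start from version 0
```

Because only the plug-in knows the application's data model, the framework core stays application-agnostic while still enabling chunk-level migration.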

Connection Manager. This component contains the functions to manage heterogeneous wireless connectivity resources together in a synergic way. Its focus is to manage different wireless technologies, such as Wi-Fi and Bluetooth, and to offer to the components sitting atop it a unified connectivity view of the underlying device layer.

Implementation

We extensively assessed and validated the feasibility of our solution; this section presents our framework implementation.

Figure 6. Implementation of our framework.

Experimental Results