The Mobile Edge Computing (MEC) and Fog Computing paradigms enable middle-boxes, either statically or dynamically deployed at suitable network edges, to act as local proxies with virtualized resources that support and enhance service provisioning in edge localities. Among other advantages, this makes possible better scalability and better reactivity in the interaction with mobile nodes, whenever local control decisions and actuation operations are applicable. One crucial and open technical challenge in this new, emerging, and fully-integrated 5G scenario is how to ensure that mobile users always receive the requested performance (quality of available resources/services) independently of their runtime mobility across different edge localities. Novel approaches and frameworks for performance-aware data/service migration across edge nodes are core enablers to achieve this goal. In this lively area, this paper proposes an original framework for migrating data/services with different granularity levels (whole service plus its data, service and data separately, only the needed and complementary data layers) across resource-limited edge nodes that host virtualized resources. Innovative elements of technical contribution of this framework, compared with the existing literature, include i) the possibility to select either an application-agnostic or an application-aware approach, ii) the possibility to choose either more traditional virtualization techniques or container-based ones, and iii) in-the-field experimental results about the performance achieved by our solution over rapidly deployable environments with resource-limited edge nodes such as Raspberry Pi devices.
5G networks will be the first generation to benefit from location information that is sufficiently precise to be leveraged in wireless network design and optimization, as well as for local service execution and better scalability.
Figure 1 depicts a typical and basic example of service/data handoff management in MEC/fog-enabled 5G networks: user1 is initially connected to edge node1 and has some of her service/data components hosted on virtualized resources on it; the single-hop local connection between user1 and edge node1 favors low latency and better scalability through localized provisioning. However, after some time and during service provisioning, user1 may move to a location that has direct connectivity to edge node2 (node2 locality). In several application cases, it can be beneficial to migrate user1’s service/data components from edge node1 to edge node2, in order to continue to serve user1 with virtualized resources in her current locality.
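Whether such a migration is worthwhile depends on the trade-off between its one-off cost and the latency saved by serving user1 locally for the rest of her session. The following minimal sketch illustrates one possible decision policy; the function name, parameters, and thresholds are illustrative assumptions, not the decision algorithm actually used by our framework.

```python
def should_migrate(latency_current_ms, latency_candidate_ms,
                   migration_cost_ms, remaining_session_s, req_per_s):
    """Illustrative handoff policy: migrate user1's components from
    edge node1 to edge node2 when the cumulative latency saved over
    the remaining session outweighs the one-off migration cost."""
    saved_per_req = latency_current_ms - latency_candidate_ms
    total_saved_ms = saved_per_req * req_per_s * remaining_session_s
    return total_saved_ms > migration_cost_ms

# user1 has moved into node2's locality: requests now take extra hops
# back to node1 (40 ms) instead of a single local hop to node2 (5 ms).
assert should_migrate(40, 5, 3000, 60, 10)       # large saving, migrate
assert not should_migrate(40, 38, 3000, 10, 10)  # saving too small, stay
```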
In this project, handoff-related migration can involve service components, data components, or both. In our proposal, we distinguish between so-called layered services and monolithic services. Layered services consist of various layers, such as service and data parts, that can be managed as separate blocks; by contrast, monolithic services have to be treated by our framework as one single block, which internally includes service components, data components, and all the associated resources.
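The distinction above can be modeled roughly as follows; the class and field names are illustrative assumptions for exposition, not identifiers from the framework itself.

```python
from dataclasses import dataclass

@dataclass
class Block:
    """A separately migratable unit (a service part or a data part)."""
    name: str
    size_mb: int

@dataclass
class MonolithicService:
    """Must be handled as one single block, e.g. a whole VM image
    bundling service code, data, and associated resources."""
    name: str
    total_size_mb: int

    def migration_units(self):
        return [Block(self.name, self.total_size_mb)]

@dataclass
class LayeredService:
    """Service and data layers can be managed (and migrated) separately."""
    name: str
    layers: list  # list of Block

    def migration_units(self):
        return list(self.layers)

# A layered service exposes multiple units to the migration manager,
# while a monolithic service always moves in full.
layered = LayeredService("webapp", [Block("service", 120), Block("data", 30)])
mono = MonolithicService("legacy", 150)
assert len(layered.migration_units()) == 2
assert len(mono.migration_units()) == 1
```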
Our framework enables handoff management for both monolithic and layered services, by integrating with VM-based techniques for the former and with Docker-based containerization for the latter, with significant advantages when deployment constraints impose resource-limited edge nodes.
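The advantage of the container-based path on resource-limited nodes stems from the layered structure of container images: only the layers that the destination edge node does not already cache need to be transferred, whereas a monolithic VM image must move in full. The sketch below illustrates this delta computation with hypothetical layer identifiers and sizes, not values from our experiments.

```python
def layers_to_transfer(image_layers, dest_cache):
    """Return only the (layer_id, size_mb) pairs absent from the
    destination node's local cache, mirroring how container image
    distribution skips base layers already present there."""
    return [(lid, size) for lid, size in image_layers if lid not in dest_cache]

# Hypothetical image: base OS, runtime, application, and data layers.
image = [("base", 80), ("runtime", 40), ("app", 15), ("data", 5)]
dest_cache = {"base", "runtime"}  # already cached on edge node2

delta = layers_to_transfer(image, dest_cache)
moved = sum(size for _, size in delta)
full = sum(size for _, size in image)
assert delta == [("app", 15), ("data", 5)]
assert (moved, full) == (20, 140)  # 20 MB sent instead of 140 MB
```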