EDGE SERVER OFFLOADING FOR MACHINE LEARNING WEB APPS
Keywords:
Cloud computing, edge computing, web application, machine learning, neural network, TensorFlow, computation offloading
Abstract
Machine learning applications, particularly those based on deep neural networks (DNNs), require heavy computation, so embedded devices with limited hardware cannot execute such applications on their own. To overcome this issue, DNN computations can be offloaded from the client to a nearby edge server. Existing approaches to DNN offloading with edge servers either customize the edge server for a single, specific application, or adapt the edge server to varied applications by migrating large VM images that contain the client's back-end software. In this project, the authors propose a new offloading method in the framework of web applications, so that the edge server can perform the DNN computation with its powerful hardware. The current execution state of the web application is transferred from the client to the edge server just before the DNN computation starts. Afterwards, the new execution state is transferred from the edge server back to the client so that the client can continue executing the application. Asynchronous calls are used to transfer the execution state with minor overhead. The authors also address several problems linked to offloading DNN applications, such as how to send the DNN model and how to protect the confidentiality of user data. Experiments with real DNN-based web applications show that asynchronous offloading achieves a promising outcome compared to executing the application entirely on the server.
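To make the offloading flow described above concrete, the following is a minimal sketch of the client side of asynchronous DNN offloading, assuming a web application written in TypeScript. The endpoint URL (https://edge.example.com/offload), the shape of the execution state, and the helper names (ExecutionState, offloadToEdge) are illustrative assumptions, not the paper's actual implementation.

```typescript
// Hypothetical snapshot of the web app's execution state at the offloading point.
interface ExecutionState {
  appVariables: Record<string, unknown>; // serializable application variables
  dnnInput: number[];                    // input tensor, flattened for transport
}

// Result returned by the edge server: the new execution state after the DNN
// computation, which the client uses to resume the application.
interface OffloadResult {
  appVariables: Record<string, unknown>;
  dnnOutput: number[];
}

// Send the current execution state to the edge server and resume with the
// state it returns. The call is asynchronous, so the client stays responsive
// while the server runs the DNN on its more powerful hardware.
async function offloadToEdge(state: ExecutionState): Promise<OffloadResult> {
  const response = await fetch("https://edge.example.com/offload", { // assumed endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(state),
  });
  if (!response.ok) {
    throw new Error(`Offloading failed: HTTP ${response.status}`);
  }
  return (await response.json()) as OffloadResult;
}

// Example usage: snapshot the state before the DNN step, offload it, then
// continue executing the application with the returned state.
async function runInferenceStep(): Promise<void> {
  const state: ExecutionState = {
    appVariables: { frameId: 42 },
    dnnInput: [0.1, 0.2, 0.3],
  };
  const result = await offloadToEdge(state);
  console.log("Resuming client execution with DNN output:", result.dnnOutput);
}

runInferenceStep().catch(console.error);
```

In this sketch the state transfer is a single JSON round trip; the paper's approach additionally covers how the DNN model itself is delivered to the edge server and how user data confidentiality is preserved, which are outside the scope of this illustration.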