R_RESTApi - Inference as a Service in R

This project was set up by Sonja Gassner and Veronica Pohl to explore options for putting a trained R model into production. We focus on the "Inference as a Service" approach: the machine learning (ML) model is deployed from R, and an inference service is made available as a RESTful API.

Project structure

The project is split into three sub-projects, each a different way to provide inference as a service with R models:

All sub-projects use the MNIST data set of handwritten digits and train models using random forests.

There are three models involved in the prediction:

  • empty model: This model ignores the input data and always predicts the digit 0.
  • small model: This model uses the input data. It is a random forest trained with 50 trees on 60,000 observations.
  • large model: This model uses the input data. It is a random forest trained with 500 trees on 60,000 observations.

The R script used for training the small and large models, as well as the resulting models, is provided in the models directory.

The docs directory contains detailed information about the three projects.

The three projects were deployed on an Azure Linux Virtual Machine (VM). Details about the configuration of the VM can be found in the file Configure_Azure_Linux_VM.md in the docs directory.

Initial information about the three projects:

Plumber

Requirements

  • Installed R (version >= 3.0.0) and an integrated development environment (IDE) for R, such as RStudio.
  • Installed Docker

Getting started

This assumes you have already cloned at least the "R_RESTApi" repository and installed the requirements listed above.

1. Create the Docker image

For Windows in PowerShell:

  1. Change to the directory containing the Dockerfile: cd ~\R_RESTApi\plumber
  2. Run docker build . (the last line of the output reports the image ID, e.g. Successfully built 9f6825b856aa, so in this example <image ID> = 9f6825b856aa).
  3. Run docker run -p 8080:8080 --name plumber <image ID>. Plumber now runs on port 8080!

ℹ️ For Linux

Same procedure as on Windows, except that every docker command must be prefixed with sudo to run it with administrator privileges.

2. Make requests

Once the log shows "Starting server to listen on port 8080", you can test the port and make GET/POST requests. The status "200 OK" means that the request has succeeded. You can make the requests directly from R, with Postman, from Python, or from other languages. The URL takes one of two forms:

  • Local
  • On a virtual machine

Examples of requests can be found in the repository "IndustrialML/mlbenchmark" (Python), especially in Make_Requests.md in docs, and in ../plumber/post_request_to_RESTApi.R (R).
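As a minimal Python sketch of such a POST request to the plumber service: the endpoint name ("predict") and the payload field ("img") are assumptions for illustration; the actual ones are defined in the plumber sub-project (see post_request_to_RESTApi.R).

```python
# Sketch of a POST request to the plumber container from Python.
# Endpoint name and payload field are placeholders, not the real API.
import json
import urllib.request

def build_request(host, image_pixels, port=8080, endpoint="predict"):
    """Assemble the URL and JSON body for one 784-pixel MNIST image."""
    url = f"http://{host}:{port}/{endpoint}"
    body = json.dumps({"img": image_pixels}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})

# Build a request for an all-black test image; actually sending it with
# urllib.request.urlopen(req) requires the container to be running.
req = build_request("localhost", [0.0] * 784)
print(req.full_url)  # http://localhost:8080/predict
```

A "200 OK" response would then carry the predicted digit in the body.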

Inference as a Service

To get started with deploying an ML model from R and making an inference service available as a RESTful API via plumber, see Plumber_in_Docker.md.

OpenCPU

Requirements

  • Installed R and an integrated development environment (IDE) for R, such as RStudio.
  • Installed Docker

Getting started

This assumes you have already cloned at least the "R_RESTApi" repository and installed the requirements listed above.

1. Create the Docker image

For Windows in PowerShell:

  1. Change to the directory containing the Dockerfile: cd ~\R_RESTApi\openCPU
  2. Run docker build . (the last line of the output reports the image ID, e.g. Successfully built 9f6825b856aa, so in this example <image ID> = 9f6825b856aa).
  3. Run docker run -p 80:80 --name opencpu <image ID>. OpenCPU now runs on port 80!

ℹ️ For Linux

Same procedure as on Windows, except that every docker command must be prefixed with sudo to run it with administrator privileges.

2. Make requests

Once the log shows "OpenCPU cloud server ready", you can test the port and make GET/POST requests. The status "200 OK" means that the request has succeeded. You can make the requests directly from R, with Postman, from Python, or from other languages. The URL takes one of two forms:

  • Local
  • On a virtual machine

Examples of requests can be found in the repository "IndustrialML/mlbenchmark" (Python), especially in Make_Requests.md, and in ../openCPU/performenceTest.R (R).
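For comparison with plumber, OpenCPU exposes R functions under a fixed URL scheme, /ocpu/library/&lt;package&gt;/R/&lt;function&gt;, and appending /json makes the server return the result directly as JSON. A hedged Python sketch of building such a request; the package and function names below are placeholders, not the actual ones used in the openCPU sub-project:

```python
# Sketch of a POST to an R function served by OpenCPU. Package and
# function names are placeholders for illustration only.
import json
import urllib.request

def build_opencpu_request(image_pixels, host="localhost", port=80,
                          package="digitrec", func="predict_digit"):
    # OpenCPU URL scheme: /ocpu/library/<package>/R/<function>/json
    url = f"http://{host}:{port}/ocpu/library/{package}/R/{func}/json"
    body = json.dumps({"img": image_pixels}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})

req = build_opencpu_request([0.0] * 784)
print(req.full_url)  # http://localhost:80/ocpu/library/digitrec/R/predict_digit/json
```

Note that a successful call comes back with status 201 rather than 200, as explained below.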

ℹ️ Status code

Unlike most HTTP services, OpenCPU responds with status code "201", which means the request has been fulfilled and has resulted in one or more new resources being created. This status code should therefore also be accepted, e.g. in Python in ../test/test_mnist.py:

def call(self, data):
    response = requests.post(self.url,
                             headers=self.headers,
                             json=self.preprocess_payload(data))

    # OpenCPU answers 201 (Created) in addition to the usual 200 (OK)
    if response.status_code in (200, 201):
        return self.preprocess_response(response)
    else:
        return None

Inference as a Service

To get started with deploying an ML model from R and making an inference service available as a RESTful API via OpenCPU, see OpenCPU_in_Docker.md.

Microsoft Machine Learning Server

Requirements

  • Installed R and an integrated development environment (IDE) for R, such as RStudio.
  • Read through documentation provided in MLserver.md.
  • MS R Client installed on local machine
  • MS ML Server installed on remote machine

Getting started

Please read carefully through the documentation provided in MLserver.md. The R code for deploying an ML model trained in R and making an inference service available as a RESTful API via Microsoft ML Server is given in the two R scripts ms_rclient_mlserver.R and ms_rclient_mlserver_realtime.R in MS_MLserver.

Making Requests

For making requests in R, Postman, and Python, we refer to Make_Requests.md.
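As a rough Python sketch of that request flow: ML Server's operationalization API first issues a bearer token via POST /login, and the published service is then called under /api/&lt;service&gt;/&lt;version&gt;. The host assumes the default operationalization port 12800; the service name, version, and payload field below are placeholders, not the names actually published by the scripts above.

```python
# Sketch of calling a web service published on MS ML Server. A token is
# obtained via /login, then sent as a Bearer header with each request.
# Service name, version, and payload field are placeholders.
import json
import urllib.request

HOST = "http://localhost:12800"  # default operationalization port

def build_login_request(username, password):
    body = json.dumps({"username": username, "password": password}).encode("utf-8")
    return urllib.request.Request(f"{HOST}/login", data=body,
                                  headers={"Content-Type": "application/json"})

def build_predict_request(token, image_pixels,
                          service="mnistService", version="v1.0.0"):
    body = json.dumps({"img": image_pixels}).encode("utf-8")
    return urllib.request.Request(
        f"{HOST}/api/{service}/{version}", data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"})

req = build_predict_request("dummy-token", [0.0] * 784)
print(req.full_url)  # http://localhost:12800/api/mnistService/v1.0.0
```

The exact payload shape expected by a published service depends on how it was defined at publish time; Make_Requests.md shows the concrete requests.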
