Issue
I have used TensorFlow-GPU for object detection on my laptop. Now the management team wants to try it through a URL from their own machines. I have never published/deployed the model on the web, since I am not a Python developer, but now I have to. I went through some online tutorials for Flask, but they weren't that helpful.
How can I publish the model using a Flask API? Is there any guidance/blog/video on deploying an object detection model at a URL using Flask?
My project structure is something like this:
Solution
You can write a Flask-RESTful API that can be used with any other service.
For image-based tasks, it is convenient to send base64-encoded images in the request, since the image travels as a plain string inside the request body.
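For example, a client could prepare the base64 string from raw uint8 pixels like this (a minimal sketch; the file name and the 416-pixel width are just illustrations matching the reshape used in the template below):

import base64
import numpy as np
from PIL import Image

# read the image, force RGB, and resize so the width matches what the server expects
img = np.array(Image.open('sample.jpg').convert('RGB').resize((416, 416)), dtype=np.uint8)

# base64-encode the raw pixel bytes; this string goes into the request body
imgb64 = base64.b64encode(img.tobytes()).decode('utf-8')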
Here is the dummy template I use while prototyping very simple ML/DL models to test them behind a REST API. It has a simple test route that checks whether the server is live, and another route that handles POST requests carrying base64 images and converts the base64 image to a numpy array (convenient for passing to ML models).
You can change the intermediate parts to make it work for you.
ml_app.py
from flask import Flask
from flask_restful import Resource, Api, reqparse
from werkzeug.datastructures import FileStorage
import json
import numpy as np
import base64


class NumpyEncoder(json.JSONEncoder):  # useful for sending numpy arrays
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return json.JSONEncoder.default(self, obj)


app = Flask(__name__)
api = Api(app)

parser = reqparse.RequestParser()
parser.add_argument('file', type=FileStorage, location='files')
parser.add_argument('imgb64')
# add other arguments if needed


# test response, check if live
class Test(Resource):
    def get(self):
        return {'status': 'ok'}


class PredictB64(Resource):  # for detecting from base64 images
    def post(self):
        data = parser.parse_args()
        if data['imgb64'] == "":
            return {
                'data': '',
                'message': 'No file found',
                'status': 'error'
            }
        img = data['imgb64']
        br = base64.b64decode(img)
        # width is always 416 here, which is generally the bigger dimension;
        # reshape with the actual dimensions of your image
        im = np.frombuffer(br, dtype=np.uint8).reshape(-1, 416, 3)
        if img:
            r = run_detector(im)  # hypothetical helper: call your own model here (see the sketch below)
            return json.dumps({
                'data': json.dumps(list(r), cls=NumpyEncoder),  # may change based on your output, could be a string too
                'message': 'darknet processed',
                'status': 'success'
            }, cls=NumpyEncoder)
        return {
            'data': '',
            'message': 'Something went wrong',
            'status': 'error'
        }


api.add_resource(Test, '/test')
api.add_resource(PredictB64, '/predict_b64')

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000, threaded=True)
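The run_detector call above is just a placeholder for your own model. A minimal sketch of what it could look like for a TensorFlow object detection model exported as a SavedModel (the export path and the Object Detection API output keys are assumptions, not part of the original answer; adjust them to your model's signature):

import numpy as np
import tensorflow as tf

# load the exported SavedModel once at startup (path is an assumption, point it at your export)
detect_fn = tf.saved_model.load('exported_model/saved_model')

def run_detector(im):
    # the TF Object Detection API expects a batched uint8 tensor of shape (1, height, width, 3)
    input_tensor = tf.convert_to_tensor(im[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)
    # typical output keys for exported detection models
    boxes = detections['detection_boxes'][0].numpy()
    scores = detections['detection_scores'][0].numpy()
    classes = detections['detection_classes'][0].numpy()
    return boxes, scores, classes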
To run, simply do:
python ml_app.py
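Once it is running, you could test both routes with the requests library, for example (a minimal sketch; localhost, port 5000, and sample.jpg are assumptions):

import base64
import requests
import numpy as np
from PIL import Image

# check if the server is live
print(requests.get('http://localhost:5000/test').json())

# prepare the base64 payload (same shape convention as the server's reshape)
img = np.array(Image.open('sample.jpg').convert('RGB').resize((416, 416)), dtype=np.uint8)
imgb64 = base64.b64encode(img.tobytes()).decode('utf-8')

# send it to the detection route
resp = requests.post('http://localhost:5000/predict_b64', data={'imgb64': imgb64})
print(resp.text)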
more examples: https://github.com/zabir-nabil/flask_restful
darknet/yolo: https://github.com/zabir-nabil/tf-model-server4-yolov3
Answered By - Zabir Al Nazi