
Unable to export and run on Android a custom-class model trained with mask_rcnn_fbnetv3a_C4.yaml #489

Open
VaibhavAsare opened this issue Mar 2, 2023 · 10 comments
Labels
documentation Improvements or additions to documentation

Comments

@VaibhavAsare

Hello Everyone,
I have trained a custom-class model using mask_rcnn_fbnetv3a_C4.yaml and exported it following the demo beginner notebook in the repository, but the model is not working on Android.
How can I solve this issue, and
how can I check that the exported model works correctly in my own environment?

VaibhavAsare added the documentation label Mar 2, 2023
@wat3rBro
Contributor

wat3rBro commented Mar 2, 2023

Could you provide more context?

@VaibhavAsare
Author

I have trained a custom-class model using mask_rcnn_fbnetv3a_C4.yaml and exported it using the following code:

import copy
import logging
import os

import torch
from detectron2.data import build_detection_test_loader
from d2go.export.exporter import convert_and_export_predictor
from d2go.export.d2_meta_arch import patch_d2_meta_arch
from d2go.runner import GeneralizedRCNNRunner

# disable all the warnings
previous_level = logging.root.manager.disable
logging.disable(logging.INFO)

patch_d2_meta_arch()

torch.backends.quantized.engine = 'qnnpack'
runner = GeneralizedRCNNRunner()
cfg = runner.get_default_cfg()  # cfg was never defined in the original snippet; assuming the runner's default config here
cfg.merge_from_file("/home/ubuntu/d2go/codes/trail11/config1.yml")
cfg.QUANTIZATION.BACKEND = 'qnnpack'
model = runner.build_model(cfg, eval_only=True)
model.cpu()

# calibration data loader built from the training dataset
datasets = cfg.DATASETS.TRAIN[0]
data_loader = runner.build_detection_test_loader(cfg, datasets)

# post-training quantize and export the model to TorchScript (int8, traced)
predictor_path = convert_and_export_predictor(
    copy.deepcopy(cfg),
    copy.deepcopy(model),
    "torchscript_int8@tracing",
    './',
    data_loader,
)

# recover the logging level
logging.disable(previous_level)

with open('config2.yml', 'w') as f:
    f.write(cfg.dump())

# for Android
from typing import List, Dict

class Wrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model
        coco_idx_list = [1, 2, 3, 4, 5, 6]
        self.coco_idx = torch.tensor(coco_idx_list)

    def forward(self, inputs: List[torch.Tensor]):
        x = inputs[0].unsqueeze(0) * 255
        scale = 320.0 / min(x.shape[-2], x.shape[-1])
        x = torch.nn.functional.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=True, recompute_scale_factor=True)
        out = self.model(x[0])
        res: Dict[str, torch.Tensor] = {}
        res["boxes"] = out[0] / scale
        res["labels"] = torch.index_select(self.coco_idx, 0, out[1])
        res["scores"] = out[2]
        return inputs, [res]

orig_model = torch.jit.load(os.path.join(predictor_path, "model.jit"))
wrapped_model = Wrapper(orig_model)
scripted_model = torch.jit.script(wrapped_model)
scripted_model.save("d2go.pt")

metrics = runner.do_test(cfg, model)

print(metrics)

Both exported models work fine when running inference in a Python notebook, but the .pt model for Android is not working. How can I resolve this issue?
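
For reference, this is how I verified the TorchScript model in Python (a minimal sanity-check sketch; the dummy 320x320 input only confirms the scripted module executes, it does not guarantee the model will load on Android):

import torch

# load the scripted wrapper saved above and run it on a dummy CHW image in [0, 1]
model = torch.jit.load("d2go.pt")
model.eval()
dummy = torch.rand(3, 320, 320)
with torch.no_grad():
    inputs, outputs = model([dummy])
print(outputs[0]["boxes"].shape, outputs[0]["labels"].shape, outputs[0]["scores"].shape)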

@dipu0

dipu0 commented Mar 9, 2023

Did you find any way to run a custom model on Android?

@labs10

labs10 commented Mar 14, 2023

You have to optimize the model and use the .ptl format to run it on Android.
Add this to your code and it will work:

from torch.utils.mobile_optimizer import optimize_for_mobile

optimized_model = optimize_for_mobile(orig_model)
# os.path.join fixes the original string concatenation, which dropped the path separator
optimized_model._save_for_lite_interpreter(os.path.join(predictor_path, "d2go.ptl"))
# .pt model
orig_model.save(os.path.join(predictor_path, "d2go.pt"))
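
Before pushing the .ptl to the app, you can confirm it is at least loadable with PyTorch's lite-interpreter loader on the Python side (a minimal sketch; Android's LiteModuleLoader consumes the same file, and loading without error is the main check here, since actually running it requires an input matching the traced model's signature):

import os
from torch.jit.mobile import _load_for_lite_interpreter

# load the lite-interpreter model in Python to verify the export is readable
lite_model = _load_for_lite_interpreter(os.path.join(predictor_path, "d2go.ptl"))
print(type(lite_model))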

@VaibhavAsare
Author

Hello @labs10, I have implemented the solution you provided, but it gives me an error.
I have shared the error message in the screenshot below.
[screenshot of the error message]

@labs12

labs12 commented Mar 17, 2023

@VaibhavAsare d2go provides mobile-optimized tracing; use:

predictor_path = convert_and_export_predictor(
    copy.deepcopy(cfg),
    copy.deepcopy(model),
    "torchscript_mobile_int8@tracing",
    './',
    data_loader,
)

This will produce weights in .ptl format that you can use on Android.
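
If you want to double-check what the exporter produced before moving to the app, list the output directory first (a short sketch; file names can differ between d2go versions, so don't hard-code them):

import os

# inspect the exported files and pick out the .ptl model from here
for name in os.listdir(predictor_path):
    print(name)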

@VaibhavAsare
Author

@labs10 This is also not working on Android. Is this issue specific to instance segmentation with mask_rcnn_fbnetv3a_C4.yaml?

@nkhlS141

nkhlS141 commented Oct 5, 2023

[Quotes @VaibhavAsare's export code from the comment above.]

@VaibhavAsare I think the forward function must be changed, since this is a Mask R-CNN model. A Faster R-CNN model has three outputs, while Mask R-CNN has an extra one, the masks.
So the change in the forward function should be:
res["boxes"] = out[0] / scale
res["labels"] = torch.index_select(self.coco_idx, 0, out[1])
res["masks"] = out[2]
res["scores"] = out[3]

No?
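
In context, the whole forward would then look like this (a sketch assuming the traced Mask R-CNN returns boxes, labels, masks, scores in that order; printing len(out) on a sample input first would confirm the arity):

def forward(self, inputs: List[torch.Tensor]):
    x = inputs[0].unsqueeze(0) * 255
    scale = 320.0 / min(x.shape[-2], x.shape[-1])
    x = torch.nn.functional.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=True, recompute_scale_factor=True)
    out = self.model(x[0])
    res: Dict[str, torch.Tensor] = {}
    res["boxes"] = out[0] / scale
    res["labels"] = torch.index_select(self.coco_idx, 0, out[1])
    # assumed output order for the traced Mask R-CNN: boxes, labels, masks, scores
    res["masks"] = out[2]
    res["scores"] = out[3]
    return inputs, [res]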

@nkhlS141

nkhlS141 commented Oct 5, 2023

@VaibhavAsare please let me know if this works.

@VaibhavAsare
Author

@nkhlS141 No, it is not working.
Have you found any other way to fix this issue?
