Unexpected output from inference mode. #1

Open
hasoweh opened this issue Jul 29, 2019 · 0 comments
hasoweh commented Jul 29, 2019

I am working with remote sensing imagery and am getting very strange outputs from the model in inference mode. I am using weights from the crowdai building segmentation challenge, which I fine-tuned on my own data. When I loaded those weights into the model and ran inference on a few images, I got the following:

[screenshot: inference output showing masks in many unexpected class colors]

My classification uses only 2 classes (background and one target class). Any idea why the output shows so many other classes?
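One quick sanity check is to look at the class IDs the model actually returns. A minimal numpy sketch, assuming a Matterport-style results dict with a `class_ids` array (the values below are made up for illustration):

```python
import numpy as np

# Hypothetical detection result in the Matterport-style results dict
# (keys like "rois", "class_ids", "scores", "masks"); values are made up.
r = {"class_ids": np.array([1, 1, 3, 7, 3])}

# With NUM_CLASSES = 1 + 1, every predicted class ID should be 1 (the
# single foreground class). Any other ID suggests the loaded weights
# still carry a multi-class head, e.g. from the crowdai checkpoint.
unexpected = sorted(set(r["class_ids"].tolist()) - {1})
print(unexpected)  # → [3, 7]
```

If this list is non-empty on your real detections, the class head in the checkpoint and your config likely disagree.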

Here is the code I use to run inference. The weights are not great yet (val_loss of 2.96), but this was mainly to see how the network performs before I try to tune the hyperparameters.

# inference code
import os

d_cnn = HedgeCNN(mode="inference", config=InfHedgeConfig(),
                 model_dir="/mnt/dataDL/ahls_st/Data/Data3/Scripts/logs/")
d_cnn.load_weights(os.path.join(root, "weights.03-2.96.hdf5"), by_name=True)

output_dir = os.path.join(root, "output_data")
dataset_dir = root
subset = "testing"

detect(d_cnn, dataset_dir, subset, output_dir)
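Since the model should only ever predict one foreground class, one way to inspect the results is to collapse the per-instance masks into a single binary map. A small numpy sketch, assuming the Matterport `[H, W, N]` mask layout (the toy shapes below are illustrative):

```python
import numpy as np

# Two toy 4x4 instance masks in the assumed [H, W, N] layout.
masks = np.zeros((4, 4, 2), dtype=bool)
masks[0:2, 0:2, 0] = True   # instance 1
masks[2:4, 2:4, 1] = True   # instance 2

# For a single foreground class, collapse instances into one binary map:
binary = masks.any(axis=-1).astype(np.uint8)
print(binary.sum())  # → 8 foreground pixels
```

Plotting `binary` instead of the raw per-class visualization makes it easier to judge the segmentation itself, independent of any class-labeling problem.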

And here is my config file:

import numpy as np

# Train on 1 GPU and 2 images per GPU. We can put multiple images on each
# GPU because the images are small. Batch size is 2 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 2

# Number of classes (including background)
NUM_CLASSES = 1 + 1  # background + hedge

# Use small images for faster training. Set the limits of the small side
# and the large side, and that determines the image shape.
IMAGE_MIN_DIM = 320
IMAGE_MAX_DIM = 320

# Use smaller anchors because our image and objects are small
RPN_ANCHOR_SCALES = (4, 8, 16, 28, 40)  # anchor side in pixels

# Number of training images / batch size (plus a few extra steps, since
# data augmentation effectively creates a few more images)
STEPS_PER_EPOCH = 70

# Number of validation images / batch size
VALIDATION_STEPS = 30

# Can tune this to see what gives the best accuracy
DETECTION_MIN_CONFIDENCE = 0.8

#todo
RPN_ANCHOR_RATIOS = [0.33, 1, 14]

# True is good when using high-resolution images. Planet isn't that high?
USE_MINI_MASK = False

# Mean pixel values for each band
MEAN_PIXEL = np.array([241, 511, 477])

MAX_GT_INSTANCES = 50

DETECTION_MAX_INSTANCES = 50

# Keep a positive:negative ratio of 1:3. You can increase the number of
# proposals by adjusting the RPN NMS threshold.
TRAIN_ROIS_PER_IMAGE = 400

LEARNING_RATE = 0.0001
LEARNING_MOMENTUM = 0.8

# Weight decay regularization
WEIGHT_DECAY = 0.0001
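MEAN_PIXEL should match the per-band means of the actual training imagery. A quick way to derive those values, sketched here on a synthetic image stack (the `(N, H, W, bands)` shape is an assumption about how the chips are stacked):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a stack of training chips: (N, H, W, 3 bands),
# with values in a Planet-like 0-1023 range.
images = rng.integers(0, 1024, size=(10, 320, 320, 3)).astype(np.float64)

# Per-band mean over all images and pixels -> the MEAN_PIXEL values
mean_pixel = images.mean(axis=(0, 1, 2))
print(mean_pixel.shape)  # → (3,)
```

Running the same reduction over the real training chips gives the three numbers to put in `MEAN_PIXEL`.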

I made a few other changes because I was having trouble with NaN losses: I switched the optimizer to Adam and added small epsilons to the RPN losses. If anyone wants the details of those changes I can dig up the code, but I don't think they are related to this issue.
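For context, the epsilon trick usually amounts to clipping predicted probabilities away from 0 and 1 before taking the log. A minimal numpy sketch (the function name and eps value are illustrative, not the repo's actual code):

```python
import numpy as np

def safe_binary_log_loss(y_true, y_pred, eps=1e-7):
    # Clip predictions away from exact 0/1 so log() never returns -inf,
    # which would otherwise propagate NaNs/infs through the loss.
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1.0 - y_true) * np.log(1.0 - y_pred))

# A confident-but-wrong prediction of exactly 0.0 would make the
# unclipped loss infinite; with clipping it stays finite.
loss = safe_binary_log_loss(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(np.isfinite(loss))  # → True
```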

Any help would be great!

Thanks!
