I am working with remote sensing imagery and am getting very strange outputs from the model in inference mode. I am using weights from the CrowdAI building segmentation challenge, which I fine-tuned on my own data. When I loaded those weights into the model and ran inference on a few images, I got the following:
My classification only uses 2 classes (background and one target class). Any idea why the output image shows so many other classes?
Here is the code I use to run the inference. The weights I used were not that great (val_loss of 2.96), but this was more just to see how the network was performing before I try to fine-tune the network hyperparameters.
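(Roughly, my setup follows the standard matterport Mask R-CNN inference pattern; a minimal sketch of what I mean is below, where the paths are placeholders and `HedgeConfig` stands in for my training config class, whose settings follow.)

```python
import skimage.io
import mrcnn.model as modellib
from mrcnn import visualize

# Inference config: same settings as training, but one image at a time.
# HedgeConfig is a stand-in name for my training config class (settings below).
class InferenceConfig(HedgeConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

inference_config = InferenceConfig()

# Build the model in inference mode and load the fine-tuned weights
model = modellib.MaskRCNN(mode="inference", config=inference_config,
                          model_dir="logs")
model.load_weights("path/to/fine_tuned_weights.h5", by_name=True)  # placeholder path

# Run detection on one image and display the result
image = skimage.io.imread("path/to/test_image.tif")  # placeholder path
results = model.detect([image], verbose=1)
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            ['BG', 'hedge'], r['scores'])
```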
And here is my config file:

```python
import numpy as np

# Train on 1 GPU and 2 images per GPU. We can put multiple images on each
# GPU because the images are small. Batch size is 2 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 2

# Number of classes (including background)
NUM_CLASSES = 1 + 1  # background + hedge

# Use small images for faster training. Set the limits of the small side and
# the large side, and that determines the image shape.
IMAGE_MIN_DIM = 320
IMAGE_MAX_DIM = 320

# Use smaller anchors because our images and objects are small
RPN_ANCHOR_SCALES = (4, 8, 16, 28, 40)  # anchor side in pixels

# Number of training images / batch size (I add a few more steps because
# data augmentation creates a few more images?)
STEPS_PER_EPOCH = 70
# Number of validation images / batch size
VALIDATION_STEPS = 30

# Can play with this to see what gives the best accuracy
DETECTION_MIN_CONFIDENCE = 0.8

# TODO
RPN_ANCHOR_RATIOS = [0.33, 1, 14]

# True is good when using high-resolution images. Planet isn't that high?
USE_MINI_MASK = False

# Mean pixel values for each band
MEAN_PIXEL = np.array([241, 511, 477])

MAX_GT_INSTANCES = 50
DETECTION_MAX_INSTANCES = 50

# Number of ROIs per image to feed to the classifier/mask heads. Keep a
# positive:negative ratio of 1:3. You can increase the number of proposals
# by adjusting the RPN NMS threshold.
TRAIN_ROIS_PER_IMAGE = 400

LEARNING_RATE = 0.0001
LEARNING_MOMENTUM = 0.8
# Weight decay regularization
WEIGHT_DECAY = 0.0001
```
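(On the STEPS_PER_EPOCH / VALIDATION_STEPS comments above: I am just dividing image counts by the effective batch size. A quick sketch of that arithmetic, with placeholder dataset sizes:)

```python
import math

# Effective batch size = GPU_COUNT * IMAGES_PER_GPU = 1 * 2
batch_size = 1 * 2

# Placeholder image counts, just to show the arithmetic
num_train_images = 130
num_val_images = 60

steps_per_epoch = math.ceil(num_train_images / batch_size)  # 65; I round up a bit (70) for augmentation
validation_steps = math.ceil(num_val_images / batch_size)   # 30
```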
I made a few other changes because I was having trouble with NaN losses: I switched the optimizer to Adam and added some epsilons to the RPN losses. If anyone is interested in exactly what I changed I can dig up the code, but I don't think it is related to my current issue.
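(For the optimizer change, it was basically just swapping SGD for Adam in MaskRCNN.compile(); a rough sketch of the kind of edit I mean, not my exact diff:)

```python
# In mrcnn/model.py, inside MaskRCNN.compile(learning_rate, momentum):
import keras

# original:
# optimizer = keras.optimizers.SGD(
#     lr=learning_rate, momentum=momentum,
#     clipnorm=self.config.GRADIENT_CLIP_NORM)

# swapped for Adam (the momentum argument is dropped):
optimizer = keras.optimizers.Adam(
    lr=learning_rate,
    clipnorm=self.config.GRADIENT_CLIP_NORM)
```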
Any help would be great!!!!
Thanks!