Hi Serban,
When you train and validate the SqueezeSegV2 network on your own dataset and get "NaN" values for 'MeanAccuracy' and 'MeanIoU', several factors might be contributing to the issue. Here's a breakdown of potential causes and solutions:
1. Vertical Field of View (FoV) Adjustment: Changing the VerticalFoV to [26.8 -24.8] is fine as long as those limits accurately represent your sensor's field of view. The FoV determines how each point's elevation angle maps to a row of the organized range image, so incorrect limits leave rows empty or drop points outside the image, and the network then sees data that does not reflect what it will encounter in deployment. Make sure the conversion from unorganized to organized point clouds is done correctly and consistently; a rough sketch of that projection is shown below.
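A minimal sketch of such a projection, assuming a SqueezeSeg-style spherical (range-image) layout; the FoV limits, the 64 x 1024 image size, and the channel order are assumptions you would replace with your sensor's values and whatever input format your network expects:

```python
import numpy as np

def organize_point_cloud(points, fov_up_deg=26.8, fov_down_deg=-24.8,
                         n_rows=64, n_cols=1024):
    """points: (N, 3) array of x, y, z coordinates in the sensor frame."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8          # range of each point
    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                           # elevation angle

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Azimuth -> columns, elevation -> rows. Wrong FoV limits here push
    # points outside the image, leaving empty rows or dropping points.
    cols = ((yaw + np.pi) / (2 * np.pi) * n_cols).astype(int)
    rows = ((fov_up - pitch) / fov * n_rows).astype(int)

    valid = (rows >= 0) & (rows < n_rows) & (cols >= 0) & (cols < n_cols)
    image = np.zeros((n_rows, n_cols, 4), dtype=np.float32)  # x, y, z, range
    image[rows[valid], cols[valid], :3] = points[valid]
    image[rows[valid], cols[valid], 3] = r[valid]
    return image

# quick check with random points
pts = np.random.uniform(-20, 20, size=(100000, 3))
print(organize_point_cloud(pts).shape)   # (64, 1024, 4)
```

A useful sanity check is to visualize the resulting range channel as an image: with the right FoV it should look like a coherent panoramic depth map, not a band of empty rows.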
2. Lack of Intensity Information: The absence of intensity information can matter, especially if the SqueezeSegV2 model you are using was pretrained or designed to expect an intensity channel, since the network may rely on it for some of its learned features. You can try a few approaches here:
- Add Intensity as a Feature: If possible, add real intensity information to your dataset. If it is not available, consider filling the channel with a placeholder (a constant other than 0, or a proxy such as the normalized range) rather than dropping it; see the sketch after this list.
- Modify the Network: If adding intensity data is not feasible, consider modifying the network architecture to work without intensity data. This would likely require retraining the network from scratch.
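A minimal sketch of the placeholder idea, assuming a 5-channel (x, y, z, intensity, range) input as in the original SqueezeSeg papers; the constant 0.5 and the range-based fallback are assumptions to experiment with, not values the pretrained network is known to prefer:

```python
import numpy as np

def add_placeholder_intensity(image_xyzr, mode="constant"):
    """image_xyzr: (H, W, 4) organized image with x, y, z, range channels."""
    h, w, _ = image_xyzr.shape
    if mode == "constant":
        # constant mid-range value instead of all zeros
        intensity = np.full((h, w, 1), 0.5, dtype=np.float32)
    else:
        # reuse the normalized range channel as a stand-in signal
        rng = image_xyzr[:, :, 3:4]
        intensity = rng / (rng.max() + 1e-8)
    # assemble a 5-channel x, y, z, intensity, range input
    return np.concatenate([image_xyzr[:, :, :3], intensity,
                           image_xyzr[:, :, 3:]], axis=2)

print(add_placeholder_intensity(np.zeros((64, 1024, 4), np.float32)).shape)  # (64, 1024, 5)
```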
3. Data Preprocessing: Ensure that your data preprocessing steps (normalization, scaling, etc.) are applied correctly and consistently with what the network expects. Incorrect preprocessing can itself produce NaNs, for example when a constant channel (such as an all-zero intensity placeholder) is divided by a zero standard deviation; a guarded normalization sketch follows.
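A minimal sketch of per-channel normalization with a zero-variance guard; the per-frame statistics are only for illustration, in practice you would use means and standard deviations computed over the whole training set:

```python
import numpy as np

def normalize_channels(image, eps=1e-6):
    """image: (H, W, C) organized lidar image; returns per-channel z-scores."""
    flat = image.reshape(-1, image.shape[2])
    mean = flat.mean(axis=0)
    std = flat.std(axis=0)
    std = np.where(std < eps, 1.0, std)   # guard: never divide by ~0
    return (image - mean) / std
```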
4. Labeling and Annotation Quality: The way you convert the 'xlsx' files to PNG labels is crucial. The PNG must store integer class indices (not scaled or interpolated color values) that map one-to-one to the classes the network was configured with, and it must have the same row/column layout as the organized point cloud so every pixel is paired with the right point. Any mismatch in labeling can lead to poor training outcomes; a small conversion sketch follows.
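A minimal sketch of building such a label image, assuming you have already read per-point class IDs from the xlsx files (for example with pandas) and reuse the same projection indices as for the organized point cloud; the class IDs, image size, and the "unlabeled" value 0 are assumptions that must match your network's configuration:

```python
import numpy as np

def labels_to_image(rows, cols, class_ids, n_rows=64, n_cols=1024,
                    unlabeled_id=0):
    """rows/cols: projection indices of each point (same ones used for the
    organized point cloud); class_ids: (N,) integer class IDs per point."""
    label_img = np.full((n_rows, n_cols), unlabeled_id, dtype=np.uint8)
    valid = (rows >= 0) & (rows < n_rows) & (cols >= 0) & (cols < n_cols)
    label_img[rows[valid], cols[valid]] = class_ids[valid]
    # save as an indexed PNG without any scaling or interpolation, e.g.
    # imageio.imwrite("frame_0001.png", label_img)
    return label_img
```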
5. Training Hyperparameters: Check your training hyperparameters. A learning rate that is too high is the most common reason a loss diverges to NaN; the choice of optimizer and the absence of gradient clipping can also contribute. A sketch with the usual guards follows.
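A minimal sketch of those guards, written in PyTorch purely for illustration (the framework, the stand-in model, and the specific values are assumptions; the same ideas apply in whatever environment you train):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(5, 4, kernel_size=3, padding=1)           # stand-in for the real net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # lower this if the loss explodes
criterion = nn.CrossEntropyLoss(ignore_index=0)              # ignore the "unlabeled" class

def train_step(inputs, targets):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    if torch.isnan(loss):
        # stop immediately so the offending batch can be inspected
        raise RuntimeError("loss is NaN: check this batch's inputs and labels")
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clip gradients
    optimizer.step()
    return loss.item()

# example call with randomly generated 5-channel organized lidar data
x = torch.randn(2, 5, 64, 1024)
y = torch.randint(0, 4, (2, 64, 1024))
print(train_step(x, y))
```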
6. Network Initialization: If you're using a pretrained model, ensure that it's appropriately adapted to your dataset; in particular, the final classification layer must output exactly your number of classes. If training from scratch, ensure the network is initialized correctly.
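A minimal sketch of that adaptation, again in PyTorch with a toy stand-in network (the layer names and sizes are assumptions; the point is only that the pretrained backbone is kept while the classification head is replaced to match the new label set):

```python
import torch
import torch.nn as nn

class ToySegNet(nn.Module):
    """Toy stand-in for a pretrained segmentation network."""
    def __init__(self, n_classes=20):
        super().__init__()
        self.backbone = nn.Conv2d(5, 32, kernel_size=3, padding=1)
        self.classifier = nn.Conv2d(32, n_classes, kernel_size=1)
    def forward(self, x):
        return self.classifier(torch.relu(self.backbone(x)))

model = ToySegNet(n_classes=20)              # "pretrained" with 20 classes
model.classifier = nn.Conv2d(32, 4, 1)       # replace the head for 4 custom classes
print(model(torch.randn(1, 5, 64, 1024)).shape)   # torch.Size([1, 4, 64, 1024])
```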
Debugging Strategy:
- Start with a Small Dataset: Try training and validating on a small, well-understood subset of your data where you can manually verify the inputs and expected outputs.
- Monitor Gradients and Losses: During training, monitor the gradients and loss values to check for exploding or vanishing gradients.
- Use a Validation Set: A validation set that the network has not seen during training gives a better picture of how well it is generalizing; it is also where NaN metrics typically surface, as the sketch below illustrates.
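One frequent and easy-to-miss source of the exact symptom you describe: if a class the network is configured with never appears in the validation ground truth (or in the predictions), its per-class accuracy/IoU has a zero denominator, the per-class score is NaN, and a plain mean over the classes is then NaN as well. A minimal NumPy sketch of the computation, with made-up data, shows the effect:

```python
import numpy as np

def per_class_metrics(pred, target, n_classes):
    """pred/target: integer label images of identical shape."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (target.ravel(), pred.ravel()), 1)     # confusion matrix
    tp = np.diag(cm).astype(float)
    gt = cm.sum(axis=1).astype(float)                     # ground-truth points per class
    union = gt + cm.sum(axis=0) - tp
    with np.errstate(divide="ignore", invalid="ignore"):
        acc = tp / gt        # NaN for classes with no ground-truth points
        iou = tp / union     # NaN for classes absent from both pred and target
    return acc, iou, acc.mean(), iou.mean()

pred = np.random.randint(0, 2, (64, 1024))     # only classes 0 and 1 ever occur
target = np.random.randint(0, 2, (64, 1024))
acc, iou, mean_acc, mean_iou = per_class_metrics(pred, target, n_classes=4)
print(mean_acc, mean_iou)                      # both NaN: classes 2 and 3 never appear
```

If this is what is happening in your setup, make sure every configured class actually occurs in your labels, or exclude the unused classes when averaging the metrics.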
In summary, the issue might not necessarily be due to the FoV parameters or the absence of intensity information alone. It's often a combination of factors related to data preparation, network architecture, and training process. Careful examination and systematic debugging of each component should help in identifying and resolving the issue.
Hope this helps!