Description
The performance of the pre-trained model is slightly worse than that described in the paper, especially the mean accuracy. Below are the results of running the pre-trained model on the validation set.

- Am I doing something wrong?
- Is there a better pre-trained model?

Evaluation code reference: https://github.com/HCPLab-SYSU/ATEN/blob/master/evaluate/test_parsing.py
==================================================
overall accuracy 0.8625252479197769
Accuracy for each class (pixel accuracy):
background : 0.935942
hat : 0.784895
hair : 0.801849
sun-glasses : 0.416315
upper-clothes : 0.240663
dress : 0.825752
coat : 0.344182
socks : 0.680521
pants : 0.513359
gloves : 0.847266
scarf : 0.322934
skirt : 0.158371
torso-skin : 0.329161
face : 0.858112
right-arm : 0.730865
left-arm : 0.752589
right-leg : 0.695891
left-leg : 0.708378
right-shoe : 0.583139
left-shoe : 0.589186
mean accuracy 0.6059685085404918
IoU for each class:
background : 0.862065
hat : 0.635582
hair : 0.693225
sun-glasses : 0.340043
upper-clothes : 0.212232
dress : 0.678585
coat : 0.289160
socks : 0.548446
pants : 0.421071
gloves : 0.717109
scarf : 0.250269
skirt : 0.142373
torso-skin : 0.238828
face : 0.730600
right-arm : 0.619979
left-arm : 0.634775
right-leg : 0.572062
left-leg : 0.571267
right-shoe : 0.441749
left-shoe : 0.446395
mean IU 0.5022908042317561
fwavacc 0.7652094827965213
==================================================
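
For reference, the metrics above follow the standard confusion-matrix definitions used by most semantic-segmentation evaluation scripts. Below is a minimal sketch of how overall accuracy, per-class/mean accuracy, mean IoU, and fwavacc are typically computed (this assumes the common FCN-style `fast_hist` approach and may differ in detail from `test_parsing.py`):

```python
import numpy as np

def fast_hist(label, pred, num_classes):
    # Accumulate a confusion matrix: rows = ground truth, cols = prediction.
    mask = (label >= 0) & (label < num_classes)
    return np.bincount(
        num_classes * label[mask].astype(int) + pred[mask],
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def summarize(hist):
    # Overall (pixel) accuracy: correctly labelled pixels / all pixels.
    overall_acc = np.diag(hist).sum() / hist.sum()
    # Per-class pixel accuracy and its mean.
    per_class_acc = np.diag(hist) / hist.sum(axis=1)
    mean_acc = np.nanmean(per_class_acc)
    # Per-class IoU and its mean.
    iou = np.diag(hist) / (hist.sum(axis=1) + hist.sum(axis=0) - np.diag(hist))
    mean_iou = np.nanmean(iou)
    # Frequency-weighted IoU (fwavacc).
    freq = hist.sum(axis=1) / hist.sum()
    fwavacc = (freq[freq > 0] * iou[freq > 0]).sum()
    return overall_acc, per_class_acc, mean_acc, iou, mean_iou, fwavacc
```

Accumulating `fast_hist` over all validation images and then calling `summarize` once should reproduce numbers in the format shown above, so differences from the paper would come from the model or data rather than the metric definitions.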