I'm trying to compare two sets of data containing ratios to see whether there are significant differences, and I would like your opinion on which test to apply.

I want to compare two sets of satellite image data (one collected over burned areas, the other over unburned areas for reference):

- Set A contains pixel values of burned areas from date 1 and date 2 (same pixels, two months apart). The pixel values correspond to the red band of the satellite (sensitive to changes in vegetation caused by forest fires); we created the set by photointerpreting the satellite image.

- Set B contains pixel values of unburned areas from date 1 and date 2, for reference. There can be some variation in pixel values even though no fire occurred, due to seasonal changes in vegetation (changes in temperature, rainfall, etc. between those two months).

I have calculated the ratio date1/date2 for all pixels in set A and all pixels in set B.
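For concreteness, here is a minimal R sketch of that step. The vector names (`burned_d1`, `burned_d2`, `unburned_d1`, `unburned_d2`) are hypothetical placeholders for the extracted red-band pixel values, and the simulated numbers only serve to make the snippet runnable:

```r
# Hypothetical placeholder data so the snippet runs; in practice these
# vectors would hold the red-band pixel values extracted from the images.
set.seed(1)
burned_d1   <- runif(2000, 80, 120)   # set A, date 1
burned_d2   <- runif(2000, 40, 80)    # set A, date 2 (post-fire)
unburned_d1 <- runif(2000, 80, 120)   # set B, date 1
unburned_d2 <- runif(2000, 75, 115)   # set B, date 2 (seasonal drift only)

# Per-pixel date1/date2 ratio for each set
ratio_A <- burned_d1 / burned_d2
ratio_B <- unburned_d1 / unburned_d2
```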

I want to see whether there is a statistically significant difference between the distribution of the date1/date2 ratio for set A and the corresponding distribution for set B. If so, I would use this ratio on other areas (i.e., other pixels) to detect burned areas in a satellite image.

I have tested the normality of both distributions with a Shapiro-Wilk test; both distributions are non-normal. The number of observations (pixels) in each set is around 2000.
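In R, that check looks something like the following, using the `ratio_A` and `ratio_B` vectors from the sketch above (note that `shapiro.test()` accepts between 3 and 5000 observations, so ~2000 pixels is within its range):

```r
# Shapiro-Wilk normality test on each ratio distribution;
# a small p-value indicates departure from normality.
shapiro.test(ratio_A)
shapiro.test(ratio_B)

# A visual check is often informative as well at n ~ 2000:
qqnorm(ratio_A); qqline(ratio_A)
```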

Is it correct to apply a Wilcoxon test in this situation (in R, with the paired parameter set to FALSE)?
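That is, the call would be something like this sketch; with `paired = FALSE`, `wilcox.test()` performs the two-sample Wilcoxon rank-sum test (also known as the Mann-Whitney U test) on the two independent samples:

```r
# Unpaired two-sample Wilcoxon rank-sum (Mann-Whitney) test
wilcox.test(ratio_A, ratio_B, paired = FALSE)
```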

Any ideas are welcome. Thanks!