BRDF correction with R


NB: This exercise is work in progress

The BRDF correction can be done using an approximation of the sun coordinate system and registered radiance. In this exercise, we use oft-calc, oft-extr and R.

To compute the correction function, one needs to know the relation between the registered pixel values and the location of the pixels within the image. Furthermore, the location information needs to be known in relation to the location of the sun. Here we do not take the sun angle into account, only the location of the pixels in relation to the scene centre.
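In practice, the model fitted in the R step further below amounts to a plane through the deviation of each band from its mean, expressed as a function of the relative coordinates:

 b_i(x, y) - mean(b_i) = a_i + cx_i * x + cy_i * y

where x and y are the pixel coordinates relative to the scene centre, and a_i, cx_i and cy_i are the parameters estimated separately for each band i.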


  • First, let us compute the relative pixel coordinates:
 oft-suncoor.bash <input> sun_crd.tif 
  • Then, let us generate a 1000 x 1000 m grid of coordinates to build the model
 oft-gengrid.bash <input> 1000 1000 grid.txt 
  • And extract the sun coordinates for every coordinate point
 oft-extr -ws 1 -o grid_crd.txt grid.txt sun_crd.tif
  • and repeat the same step for the actual image using the previous output
 oft-extr -ws 1 -o grid_crd_image.txt grid_crd.txt myimage.tif
  • Remember that it is advisable to use a Dark Dense Vegetation (DDV) or another mask to select invariant features for the model building; otherwise the results may be questionable. In case you have such a mask, you can use an additional oft-extr step to add the mask values to your file.

oft-extr -ws 1 -o grid_crd_image_mask.txt grid_crd_image.txt <input> 

and then, in case the valid areas in your mask carry the value 1, use the following awk script to exclude all other observations from the resulting file

awk '{if($NF == 1) print $0}' grid_crd_image_mask.txt > training.txt
  • now, you can analyze the content of your training file using
oft-ascstat.bash training.txt
  • And now you can use the column numbers and awk to clean your data. For example, with my dataset I got the following output:

            Col             Min             Max             Avg
              1        0.000000    46535.000000    23268.000000
              2        0.000000   393996.000000   280996.000000
              3  -101789.000000   102211.000000      211.000000
              4        0.000000     7663.000000     3896.665198
              5        0.000000      133.000000     3533.000000
              6        0.000000      456.000000     1458.616031
              7        0.000000     3372.000000     3776.585065
              8        0.000000      247.000000     1452.812507
              9        0.000000      600.000000     1540.655399
             10        0.000000     3641.000000     3885.456366
             11        0.000000     1006.000000     1620.918019
             12    -9999.000000      192.000000    -3960.842849
             13    -9999.000000      310.000000    -3894.623552
             14    -9999.000000      219.000000    -3948.395853
             15    -9999.000000     2693.000000    -2559.783217
             16    -9999.000000     1192.000000    -3391.098270
             17    -9999.000000      438.000000    -3895.197335
             18        0.000000     7663.000000     3896.665198
             19        0.000000      133.000000     3533.000000
             20   -11418.000000       49.000000    -3885.088106
             21   -10341.000000     3260.000000    -3540.760976
  • to get rid of observations containing the NoData value -9999 we use the following awk script
# remove all lines in which the value of column 12 is equal to -9999
# and print the rest into the file mydata.txt
 
awk '{if($12 != -9999) print $0}' grid_crd_image.txt > mydata.txt
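If you prefer to do this step in R rather than awk, the same NoData filtering can be sketched like this (assuming the whitespace-separated columns produced by oft-extr, read in as V1...V21); as with the awk version, you still need to reduce the file to the id, band and coordinate columns before the model step:

 # read the extracted points, drop rows where column 12 is NoData (-9999)
 # and write the cleaned file that is imported in the next step
 d <- read.table("grid_crd_image.txt")
 d <- d[d$V12 != -9999, ]
 write.table(d, "mydata.txt", row.names = FALSE, col.names = FALSE)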
  • finally, you can import this dataset into R and build a correction model
# remove existing models.txt file and start R
data<-read.table("mydata.txt")

# here, I cleaned the data and my file contains only an id, the band values and the relative coordinates
names(data)<-c("id","b1","b2","b3","b4","b5","b7","x","y")

# for each band: compute its mean and fit a linear model of the
# deviation from that mean against the relative coordinates x and y
for ( i in c(1,2,3,4,5,7)) { 
eq=paste("data$meanb",i,"<-mean(data$b",i,")",sep="");
eval(parse(text=eq));
eq=paste("f",i,"<-lm((data$b",i,"- data$meanb",i,") ~ data$x + data$y)",sep=""); 
print(eq) ; 
eval(parse(text=eq));
name=paste("f",i,sep="");
out<- capture.output(eval(parse(text=name)));
cat(out,file="models.txt",sep="\n",append=TRUE);
}
# QUIT R BEFORE PROCEEDING
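As a side note, the three parameters per band could also be written out directly from within R with coef(), which would make the grep/awk cleanup below unnecessary; a minimal sketch using the same data frame:

 # alternative: write intercept, x and y coefficients directly, one line per band
 for (i in c(1, 2, 3, 4, 5, 7)) {
   dev <- data[[paste0("b", i)]] - mean(data[[paste0("b", i)]])  # deviation from band mean
   fit <- lm(dev ~ data$x + data$y)
   cat(sprintf("%10f %10f %10f\n", coef(fit)[1], coef(fit)[2], coef(fit)[3]),
       file = "models.txt", append = TRUE)
 }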
 
  • The final step is to save the parameters in a file
 
grep -v -e Coeff -e Inter -e Call -e formula  models.txt|\
awk '{if(length($0) > 0) printf("%10f %10f %10f\n",$1,$2,$3)}' > models2.txt

mv models2.txt models.txt
  • The really final step is of course to create the oft-calc equations to compute the result. Honestly. But please note that here we are assuming an 8-band input: the first six channels are the image bands and the last two are the relative coordinates (the equations pair band 7 with the x coefficient and band 8 with the y coefficient). The awk command below writes the number of output bands on the first line of eq.txt and one correction equation, in oft-calc's reverse Polish notation, per line of models.txt.
bands=`cat models.txt|wc -l`
awk -v bands=$bands 'BEGIN{print bands }{print "#7 "$2" * #8 "$3" * + "$1" + #"NR" +"}' models.txt > eq.txt
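Reading the generated reverse Polish expressions back into ordinary notation, each output band i works out to

 out_i = b_i + a_i + cx_i * x + cy_i * y

with the intercept a_i and the coefficients cx_i, cy_i taken from the corresponding line of models.txt, and x and y read from input bands 7 and 8.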
  • right, now you need to run the correction. Here we do it with a 16-bit (Int16) output type
oft-calc -ot Int16 <input> <output> < eq.txt
  • Done.



Back to Open Foris Toolkit Main Page

Back to Tools & Exercises


