Temperature Statistics

#1
Hi all, I've been posed a problem and was wondering about the best way to approach it. I have limited knowledge of statistics but have recently become interested in growing that knowledge. The problem: theoretically, if I take a temperature reading at my home, another at my friend John's house 5 miles down the road, and another at a local park 15 miles down the road, every day for a year, what is my best option for using that information to show that the readings at John's house and at the park are redundant, i.e. that we would only need the reading at my house to know the temperature for our city? I've been attempting confidence intervals and ANOVA methods but have been coming up short on an answer. I appreciate any help y'all have for me!
 
#2
It's simple: create 3 data columns with 365 readings each for the 3 places you mentioned, say c1, c2 and c3. Now take d1 = c2 - c1 and d2 = c3 - c2. You can check that d1 and d2 are approximately normally distributed, and with a two-sample KS test you can show that d1 and d2 are not significantly different. That supports the claim that any one measurement is good enough to represent the temperature.
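For what it's worth, here is a minimal sketch of that procedure in Python; the placeholder data and column names are my own assumptions, so swap in your real readings:

```python
import numpy as np
from scipy import stats

# Illustrative placeholder data: 365 daily readings at each of the three sites.
# Replace these with the real columns c1, c2, c3.
rng = np.random.default_rng(0)
seasonal = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi, 365))
c1 = seasonal + rng.normal(0, 1, 365)   # home
c2 = seasonal + rng.normal(0, 1, 365)   # John's house
c3 = seasonal + rng.normal(0, 1, 365)   # park

# Differences between sites, as suggested above
d1 = c2 - c1
d2 = c3 - c2

# Two-sample Kolmogorov-Smirnov test: are d1 and d2 drawn from the same distribution?
res = stats.ks_2samp(d1, d2)
print(f"KS statistic = {res.statistic:.3f}, p-value = {res.pvalue:.3f}")
# A large p-value means the test finds no evidence that d1 and d2 differ in
# distribution; it does not by itself prove the readings are interchangeable.
```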
 
#3
Could this approach similarly be used if, instead of 3 places, we had, say, 100+ locations? Can we continue to follow the d(i) = c(i+1) - c(i) formula [i = 1, ..., 99] for our d columns? Or does it only work here because d1 and d2 in this example share the common temperature c2 in the equation?
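Just to make the construction I'm asking about concrete (a sketch with placeholder data; whether consecutive differences are the right generalization is exactly my question):

```python
import numpy as np

# Placeholder: 365 daily readings at 100 locations, one column per site
rng = np.random.default_rng(1)
readings = rng.normal(15, 5, size=(365, 100))

# Consecutive differences d_i = c_{i+1} - c_i, which gives 99 columns, not 100
d = np.diff(readings, axis=1)
print(d.shape)   # (365, 99)
```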