Normalization Question

Hello all,

I am data mining Wikipedia to discern which titles are edited in the most countries, by geolocating edits made from IP addresses. I am only interested in the top 100 titles edited in the most countries. I am arguing that these titles represent global ideas because their edits are the most spatially widespread. With these counts, I can then measure, per country, how many of these global titles are edited in that particular country. This can then be used to create a type of globalization index per country (e.g., Germany edited 95 of the titles edited in the most countries). I would eventually like to correlate this index with a well-established globalization index that relies on counting objects crossing borders (e.g., imports/exports). My argument is that the higher the connectivity of a country, the higher its globalized-title index. I am only interested in the subject matter and discourses in the top 100 titles, so I need my sample to be manageable.
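
To make the computation concrete, here is a rough sketch in Python of what I am doing. The record structure and values are just illustrative, not my actual pipeline:

from collections import defaultdict

# Hypothetical input: one (title, country) record per geolocated IP edit.
# Structure and values are illustrative only.
edits = [
    ("Globalization", "DE"),
    ("Globalization", "FR"),
    ("Climate change", "DE"),
]

# Count the number of distinct countries in which each title is edited.
countries_per_title = defaultdict(set)
for title, country in edits:
    countries_per_title[title].add(country)

# Keep the 100 titles edited in the most countries (the "global" titles).
top_100 = set(sorted(countries_per_title,
                     key=lambda t: len(countries_per_title[t]),
                     reverse=True)[:100])

# Per country, count how many of the top-100 global titles it edits.
global_titles_per_country = defaultdict(set)
for title, country in edits:
    if title in top_100:
        global_titles_per_country[country].add(title)

raw_index = {c: len(titles) for c, titles in global_titles_per_country.items()}
# e.g., with real data raw_index["DE"] might come out to 95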

My question is regarding the normalization of the data. The size of the population does affect the number of titles edited per country. However, this is not a standard per capita situation such as a murder rate, which is all murders/population. I am arbitrarily selecting only the top 100 titles from a list ranked by the number of countries in which they are edited. It would be analogous to computing a murder rate as the 100 most gruesome murders/population. A title ranked 101st on the list could still be considered global in this respect; it just didn't make the top 100. So, I am uneasy about normalizing the data.

What would be the best way to normalize these data, whether by population size or by the number of Wikipedia edits per country, given that the numerator is an arbitrarily delimited subset of a phenomenon?
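
Concretely, the two candidate denominators I have been weighing look like this (raw_index is from the sketch above; all numbers are placeholders):

# raw_index from the sketch above, with placeholder values
raw_index = {"DE": 95, "FR": 91}

# Two candidate denominators, with made-up numbers purely for illustration:
population = {"DE": 83_000_000, "FR": 68_000_000}   # country population
total_edits = {"DE": 1_200_000, "FR": 900_000}      # all geolocated Wikipedia edits per country

per_capita_index = {c: raw_index[c] / population[c] for c in raw_index}
per_edit_index = {c: raw_index[c] / total_edits[c] for c in raw_index}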

Your help is greatly appreciated,
Tom