The most likely cause is an elevated family-wise error rate from making multiple comparisons. Even when you correct for this, there is still some chance of a false positive. A true difference should also have shown up in the ANOVA. Another possibility is a borderline scenario where one p-value lands just above alpha while the other lands just below.
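To put a number on the family-wise inflation: for k independent tests each at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^k. A minimal sketch in plain Python (nothing here is specific to ANOVA, it is just the standard formula):

```python
def fwer(k, alpha=0.05):
    """Chance of at least one false positive across k independent tests
    each run at significance level alpha."""
    return 1 - (1 - alpha) ** k

# One test keeps the nominal 5% rate, but with three pairwise
# comparisons the family-wise rate is already ~14%, and with ten
# it climbs to ~40%.
print(fwer(1))   # ~0.05
print(fwer(3))   # ~0.1426
print(fwer(10))  # ~0.4013
```

So even a modest number of uncorrected pairwise tests makes a stray "significant" result fairly likely, which is exactly why a lone pairwise hit with a non-significant omnibus is suspect.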
It just feels like grasping at straws. How big and certain can the pairwise difference really be if it was not picked up by the omnibus test? And would it really be repeatable in a new sample of the same size? I feel like there are always ways to squeak something out, if desired.
I saw that, but as I commented and @hlsmith commented, it cannot be a big effect if the omnibus did not detect it. I suspect the difference comes down to a false positive, or to marginal cases where the two tests differ slightly in power.
There are also so many corrections for pairwise comparisons, how can we be sure they all preserve power? Hochberg (the step-up one where you order the p-values) has always seemed shady to me. I just usually go Bonferroni, and if I miss something, I don't consider it an issue!
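For what it's worth, Hochberg's step-up procedure is not really shady, just less conservative than Bonferroni: sort the p-values ascending and reject the k smallest, where k is the largest index with p_(k) <= alpha / (m - k + 1). A quick sketch of both procedures (the p-values are made up, just to show the mechanics):

```python
def bonferroni_reject(pvals, alpha=0.05):
    """Reject any hypothesis whose p-value clears the Bonferroni bound alpha/m."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def hochberg_reject(pvals, alpha=0.05):
    """Hochberg step-up: sort p-values ascending, find the largest k with
    p_(k) <= alpha / (m - k + 1), and reject the k smallest hypotheses."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, ascending p
    reject = [False] * m
    for k in range(m, 0, -1):  # step up from the largest p-value
        if pvals[order[k - 1]] <= alpha / (m - k + 1):
            for i in order[:k]:
                reject[i] = True
            break
    return reject

pv = [0.01, 0.011, 0.012, 0.013]
print(sum(bonferroni_reject(pv)))  # 3 rejections (0.013 misses alpha/4 = 0.0125)
print(sum(hochberg_reject(pv)))    # 4 rejections (largest p clears alpha/1)
```

With this cluster of small p-values Bonferroni drops the largest one while Hochberg keeps all four, which is the sense in which the step-up version "preserves power" better, at the cost of an extra assumption (it is valid under independence or positive dependence among the tests, whereas Bonferroni needs no such assumption).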