Yes, there is nothing wrong with CIs. They are just worth next to nothing (IMHO), and might be even more misleading (if possible) than p-values. As for the assessment of precision: this is one thing I do not understand, since a CI is not a tool for assessing the precision of an estimate, as far as I know, and I have never read a discussion that used CIs as an assessment of precision, except those that misinterpreted them. But it might be possible to use CIs that way. I simply don't know.
With kind regards
Karabiner
The standard error of any sample statistic is often referred to, and can be interpreted as, a measure of precision. Because the standard error is incorporated into the confidence interval, the interval allows for an assessment of the precision of that estimate: a wider interval, all else constant, indicates less precision in the estimated parameter, and a narrower interval, all else constant, indicates more precision.

Remember that precision is about the "tightness" or "spread" of the estimates (a lower SE means less variability, hence higher precision), whereas accuracy is about hitting the mark (think of a biased vs. unbiased estimator). The easiest way to think of this is that as n approaches infinity, the SE approaches zero, indicating practically no variation among sample statistics computed from samples that large. That is precise, but it tells us nothing about accuracy: you can have a precise and unbiased estimate, a precise and biased estimate, or simply lack precision altogether. Some people also use the target-practice analogy: irrespective of where I hit the target, a tight grouping indicates more precision in the shot (less variability) and a wider grouping indicates less precision (more variability). A quick simulation of this is sketched below.
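To make that concrete, here is a minimal simulation sketch of my own (not from the thread); the population mean, SD, sample sizes, and the 0.5 bias are made-up values purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
true_mu, sigma = 10.0, 2.0  # hypothetical population parameters

for n in (10, 100, 10_000):
    # Draw many samples of size n and compute the sample mean for each,
    # plus a deliberately biased version of the same estimator.
    means = np.array([rng.normal(true_mu, sigma, n).mean() for _ in range(5_000)])
    biased = means + 0.5  # precise but biased: same spread, shifted off target

    # The spread of the sampling distribution (empirical SE) measures precision;
    # the distance of the average estimate from true_mu measures accuracy (bias).
    print(f"n={n:>6}  SE ~ {means.std(ddof=1):.4f}  "
          f"bias(unbiased) ~ {means.mean() - true_mu:+.4f}  "
          f"bias(biased) ~ {biased.mean() - true_mu:+.4f}")

The empirical SE shrinks as n grows (more precision), while the biased estimator stays about 0.5 off target at every n (accuracy is a separate question).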
At least, that's what I've heard from many places (and I don't know if I've actually heard it from an actual statistician [Ph.D. or MS in stats/biostats]). I could be wrong. Any chance you could entertain some more thoughts on this?
Edit: this is interesting
http://stats.stackexchange.com/questions/204530/what-do-confidence-intervals-say-about-precision-if-anything?rq=1 I'm somewhat discounting the point from the author of the paper (Morey), because we already know his viewpoint. Some other posters noted that there isn't a necessary connection between precision and a CI, but there nearly always is one between standard errors and precision. That matches what I had thought after the first few times I heard a CI described as indicating precision: the connection seemed natural in my mind, but I never gave it much serious thought past that. Definitely interesting.
Edit2: I usually consider Minitab a generally credible source, since they employ statisticians, so I'll include this link too (it describes a narrower confidence interval as more precise).
http://support.minitab.com/en-us/minitab/17/topic-library/basic-statistics-and-graphs/introductory-concepts/confidence-interval/make-ci-more-precise/
Edit3: I spoke with a statistician today, and he more or less confirmed that the width of a CI does not necessarily indicate precision. He agreed with the Stack Exchange comments about empty or infinite intervals, and said that in general you can't say a narrow confidence interval indicates precision in the estimate. He also said people should be clearer about what they mean by "precision." If you compared standard errors, though, you could talk about precision.
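One way to see the distinction (my own illustration, not the statistician's example, with made-up parameters): for a fixed sample size the realized width of a t-interval is itself random, so two samples from the same population with the same true SE can produce quite different widths.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_mu, sigma, n = 10.0, 2.0, 15
true_se = sigma / np.sqrt(n)  # the procedure's precision, fixed for this n

widths = []
for _ in range(10_000):
    x = rng.normal(true_mu, sigma, n)
    half = stats.t.ppf(0.975, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
    widths.append(2 * half)

widths = np.array(widths)
print(f"true SE of the mean: {true_se:.3f}")
print(f"95% CI width: min={widths.min():.3f}, median={np.median(widths):.3f}, "
      f"max={widths.max():.3f}")

The widths span a fairly wide range even though the true SE never changes, so an unusually narrow realized interval reflects a small sample SD in that particular draw, not a more precise procedure; comparing standard errors is the cleaner way to talk about precision.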