I mentioned earlier that no individual meta-analysis should ever be treated as the last word. Rather, it is best to treat a meta-analytic study as a tentative assessment of the state of a particular research literature at that particular moment. One obvious reason for my stance comes down to the sample of studies testing a particular hypothesis that happens to be available at any given time. Presumably, over time, more studies that attempt to replicate the hypothesis test in question will be conducted and, ideally, reported. In addition, search engines are much better at detecting unpublished studies (what one of my mentors referred to as the "fugitive literature") than they once were. That's partially due to technological advances and partially due to individuals making their unpublished work (especially null findings) available for public consumption to a greater degree. To the extent that is the case, we would want to see periodic updated meta-analyses to account for these newer studies.
The second obvious reason is that meta-analysis itself is evolving. The techniques for synthesizing studies addressing a particular hypothesis are much more sophisticated than when I began my graduate studies, and they are bound to become more sophisticated still. The techniques for estimating mean effect sizes have improved, as have the techniques for estimating the impact of publication bias and outlier effects. If anything, recent meta-analyses are alerting us to what should have been obvious a long time ago: we have a real file drawer problem, and the failure to publish null findings, or findings that are no longer considered "interesting," is giving us a more rose-colored view of our various research literatures than is warranted. Having said that, it is also very obvious that we cannot quite agree among ourselves as to which publication bias analyses are adequate, and since these techniques can themselves yield divergent estimates of publication bias, it is best to use some battery of publication bias estimation techniques for the time being.
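To make the ideas above concrete, here is a minimal sketch of two of the techniques mentioned: estimating a mean effect size under a random-effects model (the DerSimonian-Laird estimator) and one common entry in a publication bias battery (Egger's regression intercept, which checks for small-study funnel asymmetry). The study data are hypothetical, invented purely for illustration; real meta-analytic software (e.g., R's metafor) implements these and many more diagnostics far more carefully.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects mean effect size via the DerSimonian-Laird estimator.
    y: per-study effect sizes; v: per-study sampling variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect mean
    q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance estimate
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se_re, tau2

def egger_intercept(y, v):
    """Egger's regression: standardized effects regressed on precision.
    An intercept far from zero suggests small-study (funnel) asymmetry,
    one possible symptom of publication bias."""
    se = np.sqrt(np.asarray(v, float))
    z = np.asarray(y, float) / se                # standardized effect sizes
    precision = 1.0 / se
    slope, intercept = np.polyfit(precision, z, 1)
    return intercept

# Hypothetical data: six study effect sizes (e.g., Cohen's d) and variances
y = [0.30, 0.45, 0.12, 0.50, 0.28, 0.60]
v = [0.04, 0.09, 0.02, 0.12, 0.03, 0.15]
mu, se, tau2 = dersimonian_laird(y, v)
print(f"RE mean = {mu:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
print(f"Egger intercept = {egger_intercept(y, v):.3f}")
```

The point of pairing the two functions is the "battery" idea in the text: no single diagnostic settles whether a literature is biased, so one reports several and looks for convergence.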
Finally, there is the nagging concern I have that once a meta-analysis gets published, if it is treated as the last word, future research pertaining to that particular research question has the potential to effectively cease. Yes, some isolated investigators will continue to conduct research, but with much less hope of their work being given its due than it might have otherwise. I suspect that we can look at research areas where a meta-analysis has indeed become the proverbial "last word" and find evidence that this is exactly what transpired. Given reasons one and two above, that would be concerning, to say the least. There is at least one research literature with which I am intimately familiar where I suspect one very important facet of that literature effectively halted after what became a classic meta-analysis was published. At some point in the near future, I will turn to that research literature.