Dear Dr. Taylor-Robinson, Dr. Maayan, Dr. Soares-Weiser, Dr. Donegan, and Dr. Garner:
We are writing to clarify several points that you raise in your recent Cochrane review of deworming regarding our paper "Worms: Identifying impacts on education and health in the presence of treatment externalities" in Econometrica.
In particular, we have four main concerns about the discussion of our piece in the recent review, and believe that they could change the assessment of the quality of the evidence presented in our paper. We list these points here in the letter below, with a brief discussion of each point. We then discuss several additional points in the attached document below, following this letter. We hope that these detailed responses to your review will start a productive discussion about the interpretation of the evidence in the Miguel and Kremer () paper.
(All page numbers listed below refer to the July version of your review, with "assessed as up-to-date" as May 31, .)
We recognize that writing a Cochrane review is a major undertaking, and we appreciate the time you have taken to read our paper, and the dozens of other papers covered in the review. We hope that this note can serve as the starting point for discussion, both in writing and via , if appropriate.
Our four points all relate to the claim made on page 6 of your review, and repeated throughout the review, about the Miguel and Kremer () paper:
"Miguel (Cluster) has a high risk of bias for sequence generation, allocation concealment, blinding, incomplete outcome data and baseline imbalance."
We have serious concerns about the claims you make about the risk of bias for baseline imbalance, incomplete outcome data, and sequence generation. We discuss these in turn below.
Point (1): A leading issue is your current assessment of the quality of evidence on school attendance and participation, which is the main outcome measure in the Miguel and Kremer () trial. Several concerns are raised, including: a lack of baseline values for these measures (leading to a risk of baseline imbalance), and statistically significant impacts for only one of the comparisons considered. The quotes from your review are as follows:
[p. 21] "For school attendance (days present at school): (Miguel (Cluster) Table 6; Analysis 5.4) reported on end values for attendance rates of children (, Group 1 versus Group 3), and found no significant effect (mean difference 5%, 95% CI 0.5 to 10.5). No baseline values were given so there is potential for any random differences between the groups to confound the end values."
[p. 24] "Similarly, for school attendance, the GRADE quality of the evidence was very low. One quasirandomized trial (Miguel (Cluster) reported an effect, which was apparent in only one of the two comparisons in up to a year of follow up, and not apparent in the one comparison after one year. Miguel (Cluster) measured attendance outcomes directly, unlike the other two trials (Simeon ; Watkins ) which measured attendance using school registers, which may be inaccurate in some settings. However, in Miguel (Cluster), the values for school attendance were end values and not corrected for baseline. Thus random differences in baseline attendance between the two groups could have confounded any result."
We feel that these concerns are misplaced, and explain why here. We first discuss concerns about "baseline imbalance".
First, we in fact do have baseline data on school participation (our preferred measure) for one of the comparisons that you focus on. The authors of the Cochrane review appear to have missed this data in our paper. In Table VIII, Panel A, there is a comparison of school participation for both Group 2 and Group 3, when both were control schools. There is no statistically significant difference in school participation across Group 2 and Group 3 in , and if anything school participation is slightly lower in Group 2 (0.037, s.e. 0.036). This makes the difference between Group 2 and Group 3 in (0.055, s.e. 0.028), when Group 2 had become a treatment school, even more impressive, since at baseline Group 2 had slightly lower school participation. We respectfully request that the authors of the Cochrane review include this data as evidence of baseline balance in our key outcome measure, school participation, and that they edit their claim that we do not have any such evidence.
It is interesting to note that, if we take the difference between Group 2 and Group 3 at baseline seriously, then the overall effect for this "year 1" comparison is 3.7 + 5.5 = 9.2 percentage points. This is almost exactly the same as the 9.3 percentage point effect in the other "year 1" comparison that the Cochrane authors focus on (Group 1 versus Groups 2 and 3 in ). Taken together, this is quite striking evidence that the first year of deworming treatment significantly improves school participation. The Cochrane authors' repeated concerns in their review about baseline balance being critical in randomized experiments suggest (to us) that they might find it methodologically preferable to use a "difference-in-differences" design that explicitly controls for any baseline differences across treatment groups, rather than the standard unbiased "endline" comparison across treatment groups. If this is in fact the case, then the relevant year 1 deworming treatment effect for the Group 2 versus Group 3 comparison (for which we have baseline data, as noted above) is the 9.2 percentage point estimate, which we note is significant at 99% confidence.
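As a purely illustrative aid, the difference-in-differences arithmetic above can be written out in a few lines (a minimal sketch in Python, using only the point estimates quoted in this letter; the variable names are ours, not those used in the paper):

    # Difference-in-differences using the estimates quoted above (percentage points).
    baseline_gap = -3.7   # Group 2 minus Group 3 school participation, before Group 2 was treated
    endline_gap = 5.5     # Group 2 minus Group 3 school participation, after Group 2 began treatment
    did_effect = endline_gap - baseline_gap
    print(did_effect)     # 9.2 percentage points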
Second, regarding baseline data on school attendance, we note that there is indeed evidence from school registers that recorded attendance is indistinguishable in the three groups of schools in early (in Table I). While the register data has its weaknesses (precisely the reason we developed the much more rigorous approach of unannounced school participation checks, combined with tracking of school transfers and dropouts), it is used in other trials, and in fact the Cochrane review considers school register data sufficiently reliable to include a trial (Watkins ) that uses it in the meta-analysis of school attendance.
We are puzzled as to why the evidence in the Watkins () trial is included at all in the Cochrane review if similar register data is considered unreliable when Miguel and Kremer () use it. If school register data is considered (largely) unreliable, then the Watkins () article should be excluded from the review, in which case the "meta-analysis" of school attendance and participation impacts will yield estimated effects that are much larger and statistically significant (since the Watkins impact estimates are close to zero). If the register data is considered (largely) reliable, then the Watkins () trial should be included in the review, but the baseline register data in Miguel and Kremer () should be considered as evidence that we do in fact have baseline balance on school participation. But there is an inconsistency in how register data is considered across the two trials. This seemingly inconsistent approach taken by the authors raises questions about the evenhandedness of the Cochrane review.
In fact, the appropriate use of school register data is more subtle than the Cochrane authors currently consider, since its use as baseline data may in fact be appropriate even if it is inappropriate for use as outcome data. There are at least two reasons why. First, one of the major weaknesses of the school register data used in Watkins () is that it excludes any students who have dropped out, potentially giving a misleading picture about school participation over time. However, this concern about dropouts is irrelevant when we use school register data at baseline, since the universe of students considered in the Miguel and Kremer () article was restricted to those currently enrolled in school in January (at the start of the school year), and thus the exclusion of dropouts is not a concern. Note that our use of the school register data at the start of the school year is a likely explanation for why the baseline average attendance rates we obtain using this data are much higher than the average school participation rate that we estimate over the course of the entire school year.
A second related issue is the quality of measured school attendance data conditional on student enrollment in school. Note that to the extent that differences in attendance record-keeping prior to the introduction of the program are random across schools, they will not bias estimates of treatment impact, and any "noise" in these measures will be correctly captured by reported standard errors. However, there are plausible concerns about the quality of school register data collected in treatment versus control schools in the context of an experimental evaluation, with a leading concern being that school officials could erroneously inflate figures in the treatment group. Yet once again these concerns are irrelevant in the Miguel and Kremer () trial context, since the baseline school register data that we present (in Table I, Panel B) was collected before any interventions had even been carried out in the sample schools, making the baseline school register data potentially more reliable than school register data used as an outcome.
While the data and measurement issues here are somewhat subtle, if anything they argue in favor of including the baseline school register data in assessing the baseline balance in the Miguel and Kremer () paper, while excluding the school register outcome data in Watkins () as potentially unreliable. Instead, the Cochrane authors completely dismiss the baseline register data in Miguel and Kremer () as unreliable evidence for baseline balance, while including the Watkins () data in their meta-analysis of school participation impacts, giving it equal weight with the Miguel and Kremer () school participation impact evidence (which uses more rigorous outcome data). Once again, the seemingly selective approach taken by the authors raises questions about the evenhandedness of the Cochrane review.
An important final point has to do with the claim that there might have been "random differences" across groups. Given the randomized design of Miguel and Kremer (), there is no systematic reason to expect such differences. The endline comparison of outcomes across treatment groups yields unbiased treatment effect estimates. The remarkable balance across the three groups in terms of dozens of academic, nutritional, and socioeconomic outcomes at baseline (Table I) makes it even more unlikely that there were large differences in school participation solely by chance. If the Cochrane authors would like to consider other characteristics (other than school participation) to gauge the likelihood that Groups 1, 2 and 3 in our trial are in fact balanced at baseline, they should look at the whole range of outcomes presented in Table I of Miguel and Kremer (). The lack of significant differences in baseline academic test scores across Groups 1, 2 and 3 in our sample (Table I, Panel C), for instance, is particularly good evidence that schooling outcomes were in fact balanced at baseline. It is not clear to us why the Cochrane authors remain so concerned about baseline imbalance issues given the experimental design (which leads to unbiased estimates) and the remarkable balance we observe along so many characteristics in Table I of Miguel and Kremer (), and their review does not provide compelling justification for their concerns.
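For concreteness, a balance check of this kind can be carried out with a standard two-sample test on cluster-level baseline covariates. The sketch below is hypothetical: the data frame and column names (baseline, group, participation) are ours for illustration only, and are not the variables or code used in Miguel and Kremer ().

    # Hypothetical sketch of a baseline balance check across treatment groups.
    import pandas as pd
    from scipy import stats

    def balance_test(baseline: pd.DataFrame, covariate: str, group_a: int, group_b: int):
        """Welch two-sample t-test of a baseline covariate between two treatment groups."""
        a = baseline.loc[baseline["group"] == group_a, covariate].dropna()
        b = baseline.loc[baseline["group"] == group_b, covariate].dropna()
        return stats.ttest_ind(a, b, equal_var=False)

    # Example call: balance_test(baseline, "participation", 2, 3)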
Moreover, in the standard statistical methods that we use, only those differences across groups that are too large to have been generated "by chance" are considered statistically significant impacts. In other words, the standard errors generated in the analysis itself are precisely those that address the risk of imbalance "by chance" given our research design and sample size. Of course, random variation that is orthogonal to treatment assignment does not alone generate bias.
Speculating about the possibility that there were simply positive impacts "by chance" in order to cast doubt on one set of results, but not doing the same when there are zero estimated impacts, again raises questions about the evenhandedness of the Cochrane review. (For instance, perhaps the zero impacts on Hb outcome measures in our sample were zero simply "by chance", when the real point estimates are in fact strongly positive, like the large school participation impacts we estimate. Yet this possibility is not mentioned in the Cochrane review.) In our view, the Cochrane authors do not provide sufficient justification for their fears about imbalance "by chance" in our sample, and we feel further concrete details about these concerns are needed to substantiate their assertions.
Taken together, the Cochrane review's claim that there is a "high risk of bias for baseline imbalance" (the claim made on p. 6 and p. 136, and throughout the review) appears highly misleading to us, given: the balance in school participation we observe between Group 2 and Group 3 in ; the balance in school attendance based on register data across Groups 1, 2 and 3 at baseline; the balance in other measures of academic performance (including academic test scores) as well as multiple socioeconomic and nutritional characteristics at baseline; and, most importantly, the randomized experimental design, which implies that there is no systematic reason why the three treatment groups would differ significantly along unobservable dimensions.
We respectfully request that the authors of the review consider these factors and reconsider their assessment regarding the claimed "high risk of bias for baseline imbalance" in Miguel and Kremer ().
Point (2): There is also an important methodological point to make regarding how the authors of the Cochrane review assess the school participation evidence. At several points they note that only some of the school participation comparisons are statistically significant at 95% confidence. To be specific, the comparisons they focus on have the following estimated impacts and standard errors (from p. 130-131 of their review):
School participation outcomes measured ≤ 1 year:
9.3 percentage point gain (s.e. 3.1 percentage points)
5.5 percentage point gain (s.e. 2.8 percentage points)
School participation outcomes measured > 1 year:
5.0 percentage point gain (s.e. 2.8 percentage points)
It is unclear to us why the reviewers separate out the three comparisons, rather than combining the groups in a single analysis using standard analytical methods, as their principal assessment of the impact of deworming on school participation. They give no clear methodological justification for this separation. Pooling data from three valid and unbiased "comparisons" still yields an unbiased treatment effect estimate, but with much greater statistical precision, and is thus a methodologically preferable approach. At a minimum, the Cochrane authors should discuss the pooled estimates (which are the focus of Miguel and Kremer ) in addition to the three separate comparisons.
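To illustrate the precision gained from combining comparisons, here is a rough fixed-effect (inverse-variance) pooling of the three estimates quoted above. We stress that this is only an illustration, not the analysis in Miguel and Kremer (), which pools the underlying data in a panel regression; it also treats the three comparisons as if they were independent, which they are not, since the comparison groups overlap.

    # Illustrative inverse-variance pooling of the three quoted estimates (percentage points).
    import math

    estimates = [9.3, 5.5, 5.0]
    std_errs = [3.1, 2.8, 2.8]

    weights = [1.0 / se ** 2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    print(round(pooled, 1), round(pooled_se, 1), round(pooled / pooled_se, 1))
    # roughly a 6.4 percentage point gain, s.e. 1.7, z-statistic near 3.8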
One simple approach to doing so that maintains the "comparisons" above, and at least goes part of the way towards using the full sample, would be to pool and data for the Group 1 versus Group 3 comparison, since Group 1 is treatment during this entire period and Group 3 is control for the entire period. The distinction between < 1 year and > 1 year outcomes seems rather artificial to us, as discussed further below. It is unclear to us why the Cochrane authors never present this comparison of Group 1 versus Group 3 for and pooled together.
The preferred analysis in the Miguel and Kremer () paper pools multiple years of data, and all groups, to arrive at the most statistically precise estimated impact of deworming on schooling outcomes. This includes both school participation outcomes, as well as academic test score outcomes (which the Cochrane authors currently exclude since in the paper we only present these pooled test score results, rather than the simple differences across treatment groups). If the Cochrane authors would like to see the simple differences across treatment groups for the academic test scores, we would be delighted to share the data with them. (To be clear, the test score impact estimates in Miguel and Kremer () come from a regression analysis that relies on the experimental comparison between the treatment and control groups, and is not a retrospective analysis based on nonexperimental data.)
In our view, the Cochrane authors do not provide adequate statistical justification for splitting results into the different "comparisons", or into "year 1" versus "year 2" impacts. "Pooling" these different comparisons, as we do in the Miguel and Kremer () paper, is standard in longitudinal (panel) data analysis with multi-year panels, and is appropriate for those who care about deworming impacts over multiple time frames, i.e., at less than one year and at more than one year of treatment. Use of our full sample would immediately lead to the conclusion that there are in fact positive impacts of deworming on school participation in our sample, with very large impact magnitudes and high levels of statistical significance. This is the conclusion of the Miguel and Kremer () paper, and a quick look at the comparisons presented above also indicates that there are strong impacts: all three of the comparisons have large impact estimates and all three are statistically significant at over 90% confidence, with one significant at over 99% confidence and another nearly significant at 95% confidence (despite the data being split up into the three different comparisons). By treating each comparison independently and in isolation, the authors are reaching inappropriate conclusions, in our view.
To illustrate why the approach taken by the current version of the Cochrane review is inappropriate, imagine the simple thought experiment of splitting up the data from Miguel and Kremer () into "quarters" (three month intervals) rather than years of treatment. There is no obvious a priori reason why this should not be as valid an alternative approach as the >1 year and <1 year approach in the Cochrane review, as some other reviewers might instead have been interested in the impact of deworming treatment over intervals shorter than one year. Then we would have 2 comparisons in quarter 1 of treatment (Group 1 versus Groups 2 and 3 in early , and Group 2 versus Group 3 in early ), 2 comparisons in quarter 2 of treatment, 2 comparisons in quarter 3, 2 comparisons in quarter 4, and 1 comparison in each quarter from 5 through 8 (Group 1 versus Group 3 in ). This approach would generate 12 valid "comparisons" of treatment and control schools over multiple time periods, but by slicing up the data ever more finely and reducing the sample size considered in each comparison, it is almost certain that none of these comparisons would yield statistically significant impacts of deworming on school participation at 95% confidence, even though the average estimated effect sizes would remain just as large. This would clearly not be an attractive methodological approach. You could even imagine considering a month-by-month treatment effect estimate, which would yield 36 different comparisons, all of which would be severely underpowered statistically.
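The scaling behind this thought experiment can be sketched with stylized numbers (hypothetical, chosen only to illustrate the argument, and not taken from the data): splitting the same data into k equal, non-overlapping slices leaves the effect size unchanged but inflates each slice's standard error by roughly the square root of k, pushing every per-slice t-statistic below conventional significance thresholds.

    # Stylized illustration of the power lost by slicing the data into k comparisons.
    import math

    effect = 7.0       # hypothetical effect, in percentage points
    se_full = 2.0      # hypothetical full-sample standard error
    for k in (1, 3, 12, 36):               # full sample, 3 comparisons, quarters, months
        se_slice = se_full * math.sqrt(k)  # approximate s.e. when each slice has 1/k of the data
        print(k, round(effect / se_slice, 2))
    # t-statistics fall from 3.5 (k = 1) to about 0.6 (k = 36) with the same effect size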
However, we view the Cochrane review's slicing of our full dataset into three comparisons (two for year 1 treatment, and one for year 2), rather than conducting the analysis on the full dataset, in much the same way. As we show in Miguel and Kremer (), when the data from all valid comparisons is considered jointly, in order to maximize statistical precision using standard longitudinal (panel) data regression methods, the estimated impacts are large and highly statistically significant. Just to be clear, we do not use any controversial statistical methods, and our results do not rely on any non-experimental comparisons. The regression analyses in our paper rely entirely on the variation in treatment status induced by the experimental design of the trial, and thus are just as appropriate analytically as the simple "treatment minus control" differences that the Cochrane authors focus on. In our view, the most robust analytical approach should use our full dataset, rather than the (in our view) more fragmented way of presenting the results in Table 6 of your review, which leads to less statistical precision and no greater insight.
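To make the structure of such a pooled analysis concrete, here is a minimal sketch of a panel regression of school participation on an experimentally assigned treatment indicator, with standard errors clustered at the school level. The data below are entirely synthetic and the variable names (panel, treated, participation, school_id) are ours for illustration; this is not the actual specification in Miguel and Kremer (), which includes additional controls and externality terms.

    # Synthetic example of a pooled panel regression with school-clustered standard errors.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    schools = np.arange(75)
    group = np.repeat([1, 2, 3], 25)           # 25 schools per treatment group
    rows = []
    for year in (1, 2):
        # Group 1 treated in both years, Group 2 from year 2, Group 3 untreated.
        treated = ((group == 1) | ((group == 2) & (year == 2))).astype(int)
        participation = 0.70 + 0.06 * treated + rng.normal(0, 0.05, size=75)
        rows.append(pd.DataFrame({"school_id": schools, "year": year,
                                  "treated": treated, "participation": participation}))
    panel = pd.concat(rows, ignore_index=True)

    result = smf.ols("participation ~ treated + C(year)", data=panel).fit(
        cov_type="cluster", cov_kwds={"groups": panel["school_id"]})
    print(result.params["treated"], result.bse["treated"])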
If the Cochrane authors feel that there is a strong a priori reason to focus on year 1 treatment results separately from year 2 treatment results, then at a minimum they should consider both of the year 1 "comparisons" that they focus on jointly (i.e., Group 1 versus Groups 2 and 3 in , and Group 2 versus Group 3 in ), in order to improve statistical precision and thus generate impact estimates with tighter confidence intervals. If they wish to strictly employ the same exact "comparison" groups over time, then they should at a minimum pool the and data and focus on the Group 1 versus Group 3 comparison. Doing either would yield an unambiguous positive and statistically significant impact of deworming on school participation in our sample.
We respectfully request that the authors of the review consider these suggestions and reconsider their assessment regarding the claimed lack of statistically significant school participation impacts in Miguel and Kremer ().
Point (3): The Cochrane review concludes that our trial has a "high risk of bias for incomplete outcome data" (p. 90). We believe this point is simply incorrect when applied to our school participation data, as we explain here. The review authors focus on the lack of detail in Miguel and Kremer () regarding the collection of Hb data, but then unfairly use this lack of clarity to downgrade the reliability of all data in the trial, including the school participation data. The exact quote from the review is as follows:
[p. 15] However, results for health outcomes were presented for the comparison of Group 1 (25 schools) versus Group 2 (25 schools). Details of the outcomes we extracted and present are:
Haemoglobin. This was measured in 4% of the randomized population (778/20,000). It was unclear how the sample were selected.
The Hb sample was a random (representative) subsample of the full sample, chosen by a computer random number generator. Appendix Table AI of the Miguel and Kremer () paper does discuss how the parasitological and Hb surveys were collected jointly in early . Table V mentions that the parasitological data in was collected for a random subsample. A random subset of those individuals sampled for parasitological tests also had Hb data collected; this was not explicitly stated but should have been. The reason for the relatively small sample for Hb testing was simply that a random (representative) subsample was selected for this testing. For both Hb and parasitological tests, the time and expense of testing the entire sample of over 30,000 school children was prohibitive, hence the decision to draw a representative subsample. Collection of this data for a representative sample should reduce concerns about bias due to incomplete outcome data and selective attrition.
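As an aside, the kind of computer-generated random subsampling described above can be sketched in a few lines; the identifiers, seed, and sample size below are placeholders for illustration only, not the actual sampling code used in the trial.

    # Minimal sketch of drawing a representative random subsample for testing.
    import random

    pupil_ids = list(range(30000))             # placeholder universe of enrolled pupils
    rng = random.Random(12345)                 # fixed seed so the draw is reproducible
    hb_subsample = rng.sample(pupil_ids, 778)  # representative subsample for Hb testing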
[p. 15] Weight and height. This was measured in an unknown sample of the 20,000 children. No sampling method was given.
Section 3.1 of Miguel and Kremer () does state explicitly that the anthropometric data was collected during pupil questionnaires at school during and . These were collected in standards (grades) 3-8, rather than in all grades, and for that reason there is only data on a subset of the full sample. Height and weight data was collected on all individuals in standards 3-8.
We acknowledge that the discussion of sampling for hemoglobin outcomes was unclear in Miguel and Kremer (). However, the fact that we only have Hb data for a random subset in no way affects the attrition rate for school participation data, which was collected for the entire sample. There is no problem with attrition in the main outcome measure in the Miguel and Kremer () trial, namely, school participation. In fact the school participation data is unusually rigorous. We tracked individuals as they transferred across schools, or dropped out of schools, and collected school attendance on unannounced visit days to get a more representative picture of actual school participation. This is in sharp contrast to most other trials.
For instance, Watkins (), which shows smaller school attendance impacts than Miguel and Kremer (), only considers school attendance based on register data, among those attending school regularly, missing out on school dropouts and transfers entirely. Yet that trial surprisingly received equal weight with Miguel and Kremer () in the meta-analysis of school attendance carried out in this Cochrane review.
Taken together, the claim that there is a "high risk of bias for incomplete outcome data" (the claim made on p. 6 and p. 136, and throughout the review) appears incorrect to us, given the remarkably high quality of follow-up data for school participation, which serves as the main outcome of the trial, and the collection of a representative subsample for both Hb and nutritional measures.
We respectfully request that the authors of the review consider these factors and reconsider their assessment regarding the claimed high risk of bias for incomplete outcome data in Miguel and Kremer (), especially in regards to the school participation data.
(One small point: In the summary of findings table on page 5, it is stated that we only have school participation data for 50 clusters, rather than 75 clusters. This is incorrect, since even using the Cochrane authors' three "comparisons", there are 75 distinct clusters that contribute to the year 1 evidence for Group 1 versus Groups 2 and 3 in , for instance.)
Point (4): The Cochrane review also considers the Miguel and Kremer () trial to have a high risk of bias for sequence generation [p. 6].
In particular, it discusses the quasi-random allocation of the 75 clusters:
[p. 14] "Eight trials were cluster randomized (Alderman (Cluster); Awasthi (Cluster); Awasthi (Cluster); DEVTA (unpublished); Hall (Cluster); Rousham (Cluster); Stoltzfus (Cluster)), one was a trial with quasirandom allocation of the 75 clusters (Miguel (Cluster))".
It is never clearly specified why the randomization approach makes the trial "quasi-randomized". It may be due to the use of an alphabetical "list randomization" approach, rather than a computer random number generator, but if so, this is never laid out explicitly by the Cochrane authors. The remarkable baseline balance on a wide range of characteristics (educational, nutritional, socioeconomic, etc., shown in Table I of Miguel and Kremer ) across 75 clusters and over 30,000 individuals surely helps alleviate these concerns. We would like to obtain more detailed information from the Cochrane authors on why the research design in Miguel and Kremer () is considered to have a "high risk of bias". This is never explicitly discussed in the review.
We respectfully request that the authors of the review consider these factors and reconsider their assessment regarding the claimed "high risk of bias for sequence generation" in Miguel and Kremer ().
We carefully read through the entire document and note below (following this letter) additional instances where we had questions and concerns, along with the relevant page numbers in your review.
Finally, we also would like to briefly mention two working papers that we believe could usefully be incorporated into future versions of the Cochrane review on deworming. One working paper (Baird et al.) studies the long-term impacts of deworming treatment on labor market outcomes. We are both coauthors on this paper. We are currently finishing the write-up of this paper and hope to submit it to a working paper series and a journal in , and at that point we will share that paper with your group. That study shows very large long-run impacts of deworming treatment on labor market outcomes, up to ten years after the start of the primary school deworming project that we study. The second is a working paper by Dr. Owen Ozier of the World Bank, which examines long-run educational impacts on individuals who were very young children at the start of the Kenya deworming project, and finds large positive test score effects. One advantage of Ozier's study is his ability to compare outcomes across schools and across birth cohorts within those school communities, allowing him to include "school fixed effects" that control for any baseline differences across schools. This methodological approach addresses any lingering concerns about baseline "imbalance" across treatment groups.
We look forward to starting a discussion of these issues with your team, and we thank you for the time you have taken to consider them. We realize that this is an extremely time-consuming process for your entire team, given the detailed reading you need to carry out for literally dozens of trials, and we appreciate your willingness to consider these points.
Additional comments on the Cochrane review: (Cochrane text noted in italics, page numbers noted)
The Cochrane authors have the following discussion of the exam score data and school sample:
[p. 67] "Participants Number analysed for primary outcome: Unclear for exam performance and cognitive tests Inclusion criteria: none explicitly stated. Nearly all rural primary schools in Busia district, Kenya, involved in a NGO deworming programme were studied, with a total enrolment of 30,000 pupils aged six to eighteen. Exclusion criteria: girls > 13 years old".
The claim that no explicit inclusion criteria were stated in the paper for the exam data appears inaccurate. Section 7.2 of Miguel and Kremer () discusses our attempts to test all students, including efforts to administer exams even to those students who had since dropped out of school (see footnote 52).
In terms of the inclusion of schools in the sample, there were a total of 92 primary schools in the trial area of Budalangi and Funyula divisions in January . Seventy-five of these 92 schools were selected to participate in the deworming program, and they form the analysis sample here. The 17 schools excluded from the program (and thus the analysis) include: town schools that were quite different from other local schools in terms of student socioeconomic background; single-sex schools; a few schools located on islands in Lake Victoria (posing severe transportation difficulties); and those few schools that had in the past already received deworming and other health treatments under an earlier small-scale ICS (NGO) program.
The Cochrane authors make the following point about worm infection rates, which relates to potential baseline imbalance across treatment groups:
[p. 68] "Group 1 schools have an overall prevalence of 38% heavy/moderate worm infection in , compared to the initial survey in control schools in , where it was 52%."
This is a misleading comparison. The comparison of Group 1 worm infection in versus Group 2 worm infection in is simply inappropriate, given the well-known variability across seasons and years in worm infection rates (as a function of local weather, precipitation, temperature, etc.). There is abundant health and nutritional data from pupil surveys for Groups 2 and 3 at baseline in , and they indicate that these groups appear very similar to Group 1 at baseline (see Table I of Miguel and Kremer ); however, no parasitological data was collected for Groups 2 and 3 in , nor for Group 3 in , since it was considered unethical to collect detailed worm infection data in a group that was not scheduled to receive deworming treatment in that year. Once again, standard errors for the comparison of outcomes among different treatment groups take into account the possibility of random differences at baseline, and thus statistical significance levels already reflect the possibility that there is some random baseline variation across schools, but this variation alone of course does not cause bias.
The Cochrane authors have the following discussion of our health data:
[p. 68] "However, in a personal correspondence the authors state that there is no health data for Group 3 schools for ."
This claim is not entirely accurate, and must be the result of a misunderstanding. There is abundant health and nutritional data from pupil surveys for Group 3 in , but no parasitological data was collected for Group 3 in , since it was considered unethical to collect detailed worm infection data in a group that was not scheduled to receive deworming treatment in that year.
[p. 68] 27/75 schools were involved in other NGO projects which consisted of financial assistance for textbook purchase and classroom construction, and teacher performance incentives. The distribution of these other interventions is not clear, but the authors state that these schools were stratified according to involvement in these other programmes.
[p. 70] "The intervention was a package including deworming drugs for soil transmitted helminths, praziquantel to treat schistosomiasis in schools with > 30% prevalence, and health promotion interventions. In addition 27/75 schools were involved in other NGO projects which consisted of financial assistance for textbook purchase and classroom construction, and teacher performance incentives. The distribution of the latter interventions is not clear. These co-interventions confound the potential effects of deworming drugs to treat STHs. However, the authors kindly provided a reanalysis of their data, with the praziquantel treated schools removed from the analysis. This represents a subgroup analysis of the original quasi-randomized comparison".
Given that these other interventions had no measurable impacts on educational outcomes (as reported in several other articles), and that they are balanced across our treatment groups, these prior interventions are not a major concern for the analysis.
Sincerely,
Ted Miguel and Michael Kremer
I agree with the conflict of interest statement below:
I certify that we have no affiliations with or involvement in any organization or entity with a financial interest in the subject matter of our feedback.
Want more information on Praziquantel to Treat Eczema? Feel free to contact us.
Dear Dr. TaylorRobinson, Dr. Maayan, Dr. SoaresWeiser, Dr. Donegan, and Dr. Garner:
We are writing to clarify several points that you raise in your recent Cochrane review of deworming regarding our paper "Worms: Identifying impacts on education and health in the presence of treatment externalities" in Econometrica.
In particular, we have four main concerns about the discussion of our piece in the recent review, and believe that they could change the assessment of the quality of the evidence presented in our paper. We list these points here in the letter below, with a brief discussion of each point. We then discuss several additional points in the attached document below, following this letter. We hope that these detailed responses to your review will start a productive discussion about the interpretation of the evidence in the Miguel and Kremer () paper.
(All page numbers listed below refer to the July version of your review, with "assessed as uptodate" as May 31, .)
We recognize that writing a Cochrane review is a major undertaking, and we appreciate the time you have taken to read our paper, and the dozens of other papers covered in the review. We hope that this note can serve as the starting point for discussion, both in writing and via , if appropriate.
Our four points all relate to the claim made on page 6 of your review, and repeated throughout the review, about the Miguel and Kremer () paper:
"Miguel (Cluster) has a high risk of bias for sequence generation, allocation concealment, blinding, incomplete outcome data and baseline imbalance."
We have serious concerns about the claims you make about the risk of bias for baseline imbalance, incomplete outcome data, and sequence generation. We discuss these in turn below.
Point (1): A leading issue is your current assessment of the quality of evidence on school attendance and participation, which is the main outcome measure in the Miguel and Kremer () trial. Several concerns are raised, including: a lack of baseline values for these measures (leading to a risk of baseline imbalance), and statistically significant impacts for only one of the comparisons considered. The quotes from your review are as follows:
[p. 21] "For school attendance (days present at school): (Miguel (Cluster) Table 6; Analysis 5.4) reported on end values for attendance rates of children (, Group 1 versus Group 3), and found no significant effect (mean difference 5%, 95% CI 0.5 to 10.5). No baseline values were given so there is potential for any random differences between the groups to confound the end values."
[p. 24] "Similarly, for school attendance, the GRADE quality of the evidence was very low. One quasirandomized trial (Miguel (Cluster) reported an effect, which was apparent in only one of the two comparisons in up to a year of followup, and not apparent in the one comparison after one year. Miguel (Cluster) measured attendance outcomes directly, unlike the other two trials (Simeon ; Watkins ) which measured attendance using school registers, which may be inaccurate in some settings. However, in Miguel (Cluster), the values for school attendance were end values and not corrected for baseline. Thus random differences in baseline attendance between the two groups could have confounded any result."
We feel that these concerns are misplaced, and explain why here. We first discuss concerns about "baseline imbalance".
First, we in fact do have baseline data on school participation (our preferred measure) for one of the comparisons that you focus on. The authors of the Cochrane appear to have missed this data in our paper. In Table VIII, Panel A, there is a comparison of school participation for both Group 2 and Group 3, when both were control schools. There is no statistically significant difference in school participation across Group 2 and Group 3 in , and if anything school participation is slightly lower in Group 2 (0.037, s.e. 0.036). This makes the difference between Group 2 and Group 3 in (0.055, s.e. 0.028), when Group 2 had become a treatment school, even more impressive, since at baseline Group 2 had slightly lower school participation. We respectfully request that the authors of the Cochrane review include this data as evidence of baseline balance in our key outcome measure, school participation, and that they edit their claim that we do not have any such evidence.
It is interesting to note that, if we take the difference between Group 2 and Group 3 at baseline seriously, then the overall effect for this "year 1" comparison is 3.7 + 5.5 = 9.2 percentage points. This is almost exactly the same as the 9.3 percentage point effect in the other "year 1" comparison that the Cochrane authors focus on (Group 1 versus Groups 2 and 3 in ). Taken together, this is quite striking evidence that the first year of deworming treatment significantly improves school participation. The Cochrane authors' repeated concerns in their review about baseline balance being critical in randomized experiments suggests (to us) that they might find it methodologically preferable to use a "differenceindifference" design that explicitly controls for any baseline differences across treatment groups, rather than the standard unbiased "endline" comparison across treatment groups. If this is in fact the case, then the relevant year 1 deworming treatment effect for the Group 2 versus Group 3 comparison (for which we have baseline data, as noted above) is the 9.2 percentage point estimate, which we note is significant at 99% confidence.
Second, regarding baseline data on school attendance, we discuss that there is indeed evidence from school registers that recorded attendance is indistinguishable in the three groups of schools in early (in Table I). While the register data has its weaknesses precisely the reason we developed the much more rigorous approach of unannounced school participation checks, combined with tracking of school transfers and dropouts it is used in other trials, and in fact the Cochrane review considers school register data sufficiently reliable to include a trial (Watkins ) that uses it in their metaanalysis of school attendance.
We are puzzled as to why the evidence in the Watkins () trial is included at all in the Cochrane review if similar register data is considered unreliable when Miguel and Kremer () use it. If school register data is considered (largely) unreliable, then the Watkins () article should be excluded from the review, in which case the "metaanalysis" of school attendance and participation impacts will yield estimated effects that are much larger and statistically significant (since the Watkins impact estimates are close to zero). If the register data is considered (largely) reliable, then the Watkins () trial should be included in the review, but the baseline register data in Miguel and Kremer () should be considered as evidence that we do in fact have baseline balance on school participation. But there is an inconsistency in how register data is considered across the two trials. This seemingly inconsistent approach taken by the authors raises questions about the evenhandedness of the Cochrane review.
In fact, the appropriate use of school register data is more subtle than the Cochrane authors currently consider, since its use as baseline data may in fact be appropriate even if it is inappropriate for use as outcome data. There are at least two reasons why. First, one of the major weaknesses of the school register data used in Watkins () is that it excludes any students who have dropped out, potentially giving a misleading picture about school participation over time. However, this concern about dropouts is irrelevant when we use school register data at baseline, since the universe of students considered in the Miguel and Kremer () article was restricted to those currently enrolled in school in January (at the start of the school year), and thus the exclusion of dropouts is not a concern. Note that our use of the school register data at the start of the school year is a likely explanation for why the baseline average attendance rates we obtain using this data are much higher than the average school participation rate that we estimate over the course of the entire school year.
A second related issue is the quality of measured school attendance data conditional on student enrollment in school. Note that to the extent that differences in attendance recordkeeping prior to the introduction of the program are random across schools, they will not bias estimates of treatment impact and any "noise" in these measures will be correctly captured by reported standard errors. However, there are plausible concerns about the quality of school register data collected in treatment versus control schools in the context of an experimental evaluation, with a leading concern being that school officials could erroneously inflate figures in the treatment group. Yet once again these concerns are irrelevant in the Miguel and Kremer () trial context since the baseline school register data that we present (in Table I, Panel B) was collected before any interventions had even been carried out in the sample schools, once again making the baseline school register data potentially more reliable than school register data used as an outcome.
While the data and measurement issues here are somewhat subtle, if anything they argue in favor of including the baseline school register data in assessing the baseline balance in the Miguel and Kremer () paper, while excluding the school register outcome data in Watkins () as potentially unreliable. Instead, the Cochrane authors completely dismiss the baseline register data in Miguel and Kremer () as unreliable evidence for baseline balance, while including the Watkins () data in their metaanalysis of school participation impacts, giving it equal weight with the Miguel and Kremer () school participation impact evidence (which uses more rigorous outcome data). Once again, the seemingly selective approach taken by the authors raises questions about the evenhandedness of the Cochrane review.
An important final point has to do with the claim that there might have been "random differences" across groups. Given the randomized design of Miguel and Kremer (), there is no systematic difference to expect there to have been such random differences. The endline comparison of outcomes across treatment groups yields unbiased treatment effect estimates. The remarkable balance across the three groups in terms of dozens academic, nutritional, and socioeconomic outcomes at baseline (Table I) makes it even more unlikely that there were large differences in school participation solely by chance. If the Cochrane authors would like to consider other characteristics (other than school participation) to gauge the likelihood that Groups 1, 2 and 3 in our trial are in fact balanced at baseline they should look at the whole range of outcomes presented in Table I of Miguel and Kremer (). The lack of significant baseline academic test scores across Groups 1, 2 and 3 in our sample (Table 1, Panel C) is particularly good evidence that schooling outcomes were in fact balanced at baseline, for instance. It is not clear to us why the Cochrane authors remain so concerned about baseline imbalance issues given the experimental design (which leads to unbiased estimates) and the remarkable balance we observe along so many characteristics in Table I of Miguel and Kremer (), and their review does not provide compelling justification for their concerns.
Moreover, in the standard statistical methods that we use, only those differences across groups that are too large to have been generated "by chance" are considered statistically significant impacts. In other words, the standard errors generated in the analysis itself are precisely those that address the risk of imbalance "by chance" given our research design and sample size. Of course, random variation that is orthogonal to treatment assignment does not alone generate bias.
Speculating about the possibility that there were simply positive impacts "by chance" in order to cast doubt on one set of results, but not doing the same when there are zero estimated impacts, again raises questions about the evenhandedness of the Cochrane review. (For instance, perhaps the zero impacts on Hb outcome measures in our sample were zero simply "by chance", when the real point estimates are in fact strongly positive, like the large school participation impacts we estimate. Yet this possibility is not mentioned in the Cochrane review.) In our view, the Cochrane authors do not provide sufficient justification for their fears about imbalance "by chance" in our sample, and we feel further concrete details about these concerns are needed to substantiate their assertions.
Taken together, the Cochrane review's claim that there is a "high risk of bias for baseline imbalance" (the claim made on p. 6 and p. 136, and throughout the review) appears highly misleading to us, given the: balance in school participation we observe between Group 2 and Group 3 in ; the balanced school attendance based on register data across Groups 1, 2 and 3 at baseline; the balance in other measures of academic performance (including academic test scores) as well as multiple socioeconomic and nutritional characteristics at baseline; and most importantly given the randomized experimental design, which implies that there is no systematic reason why the three treatment groups would differ significantly along unobservable dimensions.
We respectfully request that the authors of the review consider these factors and reconsider their assessment regarding the claimed "high risk of bias for baseline imbalance" in Miguel and Kremer ().
Point (2): There is also an important methodological point to make regarding how the authors of the Cochrane review assess the school participation evidence. At several points they note that only some of the school participation comparisons are statistically significant at 95% confidence. To be specific, the comparisons they focus on have the following estimated impacts and standard errors (from p. 130131 of their review):
School participation outcomes measured £ 1 year:
9.3 percentage point gain (s.e. 3.1 percentage points)
5.5 percentage point gain (s.e. 2.8 percentage points) School participation outcomes measured > 1 year:
5.0 percentage point gain (s.e. 2.8 percentage points)
It is unclear to us why the reviewers separate out the three comparisons, rather than combining the groups in a single analysis using standard analytical methods, as their principal assessment of the impact of deworming on school participation. They give no clear methodological justification for this separation. Pooling data from three valid and unbiased "comparisons" still yields an unbiased treatment effect estimate, but with much greater statistical precision, and is thus a methodologically preferable approach. At a minimum, the Cochrane authors should discuss the pooled estimates (which are the focus of Miguel and Kremer ) in addition to the three separate comparisons.
One simple approach to doing so that maintains the "comparisons" above, and at least goes part of the way towards using the full sample, would be to pool and data for the Group 1 versus Group 3 comparison, since Group 1 is treatment during this entire period and Group 3 is control for the entire period. The distinction between < 1 year and > 1 year outcomes seems rather artificial to us, as discussed further below. It is unclear to us why the Cochrane authors never present this comparison of Group 1 versus Group 3 for and pooled together.
The preferred analysis in the Miguel and Kremer () paper pools multiple years of data, and all groups, to arrive at the most statistically precise estimated impact of deworming on schooling outcomes. This includes both school participation outcomes, as well as academic test score outcomes (which the Cochrane authors currently exclude since in the paper we only present these pooled test score results, rather than the simple differences across treatment groups). If the Cochrane authors would like to see the simple differences across treatment groups for the academic test scores, we would be delighted to share the data with them. (To be clear, the test score impact estimates in Miguel and Kremer () come from a regression analysis that relies on the experimental comparison between the treatment and control groups, and is not a retrospective analysis based on nonexperimental data.)
In our view, the Cochrane authors do not provide adequate statistical justification for splitting results into the different "comparisons", or into "year 1" versus "year 2" impacts. "Pooling" these different comparisons, as we do in the Miguel and Kremer () paper, is standard with longitudinal (panel) data analysis with multiyear panels, and is appropriate for those that care about deworming impacts at multiple time frames, ie at less than one year and at more than one year of treatment. Use of our full sample would immediately lead to the conclusion that there are in fact positive impacts of deworming on school participation in our sample, with very large impact magnitudes and high levels of statistical significance. This is the conclusion of the Miguel and Kremer () paper, and a quick look at the comparisons presented above also indicate that there are strong impacts: all three of the comparisons have large impact estimates and all three are statistically significant at over 90% confidence, with one significant at over 99% confidence and another nearly significant at 95% confidence (despite the data being split up into the three different comparisons). By treating each comparison independently and in isolation, the authors are reaching inappropriate conclusions, in our view.
To illustrate why the approach taken by the current version of the Cochrane review is inappropriate, imagine the simple thought experiment of splitting up the data from Miguel and Kremer () into "quarters" (three month intervals) rather than years of treatment. There is no obvious a priori reason why this should not be as valid an alternative approach as the >1 year and <1 year approach in the Cochrane review, as some other reviewers might instead have been interested in the impact of deworming treatment over intervals shorter than one year. Then we would have 2 comparisons in quarter 1 of treatment (Group 1 versus Groups 2 and 3 in early , and Group 2 versus Group 3 in early ), 2 comparisons in quarter 2 of treatment, 2 comparisons in quarter 3, 2 comparisons in quarter 4, and 1 comparison in each quarter from 5 through 8 (Group 1 versus Group 3 in ). This approach would generate 12 valid "comparisons" of treatment and control schools over multiple time periods, but by slicing up the data ever more finely and reducing the sample size considered in each comparison, it is almost certain that none of these comparisons would yield statistically significant impacts of deworming on school participation at 95% confidence, even though the average estimated effect sizes would remain just as large. This would clearly not be an attractive methodological approach. You could even imagine considering a month by month treatment effect estimate, which would yield 36 different comparisons, all of which would be severely underpowered statistically.
However, we view the Cochrane review's slicing of our full dataset into three comparisons (two for year 1 treatment, and one for year 2), rather than conducting the analysis in the full dataset in much the same way. As we show in Miguel and Kremer (), when the data from all valid comparisons is considered jointly, in order to maximize statistical precision using standard longitudinal (panel) data regression methods, the estimated impacts are large and highly statistically significant. Just to be clear, we do not use any controversial statistical methods, and our results do not rely on any nonexperimental comparisons. The regression analyses in our paper rely entirely on the variation in treatment status induced by the experimental design of the trial, and thus are just as appropriate analytically as the simple "treatment minus control" differences that the Cochrane authors focus on. In our view, the most robust analytical approach should use our full dataset, rather than the (in our view) more fragmented way of presenting the results in Table 6 of your review, which leads to less statistical precision and no greater insight.
If the Cochrane authors feel that there is a strong a prior reason to focus on year 1 treatment results separately from year 2 treatment results, then at a minimum they should consider both of the year 1 "comparisons" that they focus on jointly (ie Group 1 versus Groups 2 and 3 in , and Group 2 versus Group 3 in ), in order to improve statistical precision and thus generate impact estimates with tighter confidence intervals. If they wish to strictly employ the same exact "comparison" groups over time, then they should at a minimum pool the and data and focus on the Group 1 versus Group 3 comparison. Doing either would yield an unambiguous positive and statistically significant impact of deworming on school participation in our sample.
We respectfully request that the authors of the review consider these suggestions and reconsider their assessment regarding the claimed lack of statistically significant school participation impacts in Miguel and Kremer ().
Point (3): The Cochrane review concludes that our trial has a "high risk of bias for incomplete outcome data" (p. 90). We believe this point is simply incorrect when applied to our school participation data, as we explain here. The review authors focus on the lack of detail in Miguel and Kremer () regarding the collection of Hb data, but then unfairly use this lack of clarity to downgrade the reliability of all data in the trial, including the school participation data. The exact quote from the review is as follows:
[p. 15] However, results for health outcomes were presented for the comparison of Group 1 (25 schools) versus Group 2 (25 schools). Details of the outcomes we extracted and present are:
Haemoglobin. This was measured in 4% of the randomized population (778/20,000). It was unclear how the sample were selected.
The Hb sample was a random (representative) subsample of the full sample, chosen by a computer random number generator. Appendix Table AI of the Miguel and Kremer () paper does discuss how the parasitological and Hb surveys were collected jointly in early . Table V mentions that the parasitological data in was collected for a random subsample. A random subset of those individuals sampled for parasitological tests also had Hb data collected; this was not explicitly stated but should have been. The reason for the relatively small sample for Hb testing was simply that a random (representative) subsample was selected for this testing. For both Hb and parasitological tests, the time and expense of testing the entire sample of over 30,000 school children was prohibitive, hence the decision to draw a representative subsample. Collection of this data for a representative sample should reduce concerns about bias due to incomplete outcome data and selective attrition.
[p. 15] Weight and height. This was measured in an unknown sample of the 20,000 children. No sampling method was given.
Section 3.1 of Miguel and Kremer () does state explicitly that the anthropometric data was collected during pupil questionnaires at school during and . These were collected in standards (grades) 3-8, rather than in all grades, and for that reason there is only data on a subset of the full sample. Height and weight data was collected on all individuals in standards 3-8.
We acknowledge that the discussion of sampling for hemoglobin outcomes was unclear in Miguel and Kremer (). However, the fact that we only have Hb data for a random subset in no way affects the attrition rate for school participation data, which was collected for the entire sample. There is no problem with attrition in the main outcome measure in the Miguel and Kremer () trial, namely, school participation. In fact the school participation data is unusually rigorous. We tracked individuals as they transferred across schools, or dropped out of schools, and collected school attendance on unannounced visit days to get a more representative picture of actual school participation. This is in sharp contrast to most other trials.
For instance, Watkins (), which shows smaller school attendance impacts than Miguel and Kremer (), only considers school attendance based on register data, among those attending school regularly, missing out on school dropouts and transfers entirely. Yet that trial surprisingly received equal weight with Miguel and Kremer () in the meta-analysis of school attendance carried out in this Cochrane review.
Taken together, the claim that there is a "high risk of bias for incomplete outcome data" (the claim made on p. 6 and p. 136, and throughout the review) appears incorrect to us, given the remarkably high quality of follow-up data for school participation, which serves as the main outcome of the trial, and the collection of a representative subsample for both Hb and nutritional measures.
We respectfully request that the authors of the review consider these factors and reconsider their assessment regarding the claimed high risk of bias for incomplete outcome data in Miguel and Kremer (), especially in regards to the school participation data.
(One small point: In the summary of findings table on page 5, it is stated that we only have school participation data for 50 clusters, rather than 75 clusters. This is incorrect, since even using the Cochrane authors' three "comparisons", there are 75 distinct clusters that contribute to the year 1 evidence for Group 1 versus Groups 2 and 3 in , for instance.)
Point (4): The Cochrane review also considers the Miguel and Kremer () trial to have a high risk of bias for sequence generation [p. 6].
In particular, it discusses the quasi-random allocation of the 75 clusters:
[p. 14] "Eight trials were cluster randomized (Alderman (Cluster); Awasthi (Cluster); Awasthi (Cluster); DEVTA (unpublished); Hall (Cluster); Rousham (Cluster); Stoltzfus (Cluster)), one was a trial with quasirandom allocation of the 75 clusters (Miguel (Cluster))".
It is never clearly specified why the randomization approach makes the trial "quasi-randomized". It may be due to the use of an alphabetical "list randomization" approach, rather than a computer random number generator, but if so, this is never laid out explicitly by the Cochrane authors. The remarkable baseline balance on a wide range of characteristics (educational, nutritional, socioeconomic, etc.) shown in Table I of Miguel and Kremer (), across 75 clusters and over 30,000 individuals, surely helps alleviate these concerns. We would like to obtain more detailed information from the Cochrane authors on why the research design in Miguel and Kremer () is considered to have a "high risk of bias"; this is never explicitly discussed in the review.
We respectfully request that the authors of the review consider these factors and reconsider their assessment regarding the claimed "high risk of bias for sequence generation" in Miguel and Kremer ().
We carefully read through the entire document and note below (following this letter) additional instances where we had questions and concerns, along with the relevant page numbers in your review.
Finally, we also would like to briefly mention two working papers that we believe could usefully be incorporated into future versions of the Cochrane review on deworming. The first working paper (Baird et al.), on which we are both co-authors, examines long-term impacts of deworming treatment on labor market outcomes. We are currently finishing the write-up of this paper and hope to submit it to a working paper series and a journal in , and at that point we will share that paper with your group. That study shows very large long-run impacts of deworming treatment on labor market outcomes, up to ten years after the start of the primary school deworming project that we study. The second is a working paper by Dr. Owen Ozier of the World Bank, which examines long-run educational impacts on individuals who were very young children at the start of the Kenya deworming project, and finds large positive test score effects. One advantage of Ozier's study is his ability to compare outcomes across schools and across birth cohorts within those school communities, allowing him to include "school fixed effects" that control for any baseline differences across schools. This methodological approach addresses any lingering concerns about baseline "imbalance" across treatment groups.
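For readers less familiar with the term, the following is a purely illustrative sketch (with made-up numbers, not drawn from Ozier's paper) of what a school fixed effects comparison across birth cohorts looks like: the school dummies absorb any fixed baseline differences across school communities, so the exposure effect is identified only from within-school cohort contrasts:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    rows = []
    for school in range(75):
        school_level = rng.normal(0, 0.5)         # fixed school-specific baseline
        for cohort in ("exposed", "unexposed"):   # hypothetical birth cohorts
            exposed = int(cohort == "exposed")
            for _ in range(30):
                score = 0.2 * exposed + school_level + rng.normal(0, 1.0)
                rows.append({"school": school, "exposed": exposed, "score": score})
    df = pd.DataFrame(rows)

    # C(school) adds one dummy per school, so any fixed baseline difference
    # across schools is absorbed rather than attributed to the exposure.
    fe = smf.ols("score ~ exposed + C(school)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["school"]})
    print(fe.params["exposed"], fe.bse["exposed"])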
We look forward to starting a discussion of these issues with your team, and we thank you for the time you have taken to consider them. We realize that this is an extremely time-consuming process for your entire team, given the detailed reading you need to carry out for literally dozens of trials, and we appreciate your willingness to consider these points.
Additional comments on the Cochrane review: (Cochrane review text quoted, with page numbers noted)
The Cochrane authors have the following discussion of the exam score data and school sample:
[p. 67] "Participants Number analysed for primary outcome: Unclear for exam performance and cognitive tests Inclusion criteria: none explicitly stated. Nearly all rural primary schools in Busia district, Kenya, involved in a NGO deworming programme were studied, with a total enrolment of 30,000 pupils aged six to eighteen. Exclusion criteria: girls > 13 years old".
The claim that there were no explicit inclusion criteria stated in the paper for the exam data appears inaccurate. Section 7.2 of Miguel and Kremer () discusses our attempts to test all students, including efforts to administer exams even to those students who had since dropped out of school (see footnote 52).
In terms of the inclusion of schools in the sample, there were a total of 92 primary schools in the trial area of Budalangi and Funyula divisions in January . Seventy-five of these 92 schools were selected to participate in the deworming program, and they form the analysis sample here. The 17 schools excluded from the program (and thus the analysis) include: town schools that were quite different from other local schools in terms of student socioeconomic background; single-sex schools; a few schools located on islands in Lake Victoria (posing severe transportation difficulties); and the few schools that had in the past already received deworming and other health treatments under an earlier small-scale ICS (NGO) program.
The Cochrane authors make the following point about worm infection rates, which relates to potential baseline imbalance across treatment groups:
[p. 68] "Group 1 schools have an overall prevalence of 38% heavy/moderate worm infection in , compared to the initial survey in control schools in , where it was 52%."
This is a misleading comparison. The comparison of Group 1 worm infection in versus Group 2 worm infection in is simply inappropriate, given the well-known variability across seasons and years in worm infection rates (as a function of local weather, precipitation, temperature, etc.). There is abundant health and nutritional data from pupil surveys for Groups 2 and 3 at baseline in , and it indicates that these groups appear very similar to Group 1 at baseline (see Table I of Miguel and Kremer ()), but no parasitological data was collected for Groups 2 and 3 in , nor for Group 3 in , since it was considered unethical to collect detailed worm infection data in a group that was not scheduled to receive deworming treatment in that year. Once again, standard errors for the comparison of outcomes across treatment groups take into account the possibility of random differences at baseline, so statistical significance levels already reflect the possibility of some random baseline variation across schools; this variation alone does not, of course, cause bias.
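To illustrate this last point, here is a small simulation sketch with entirely hypothetical numbers: random school-level baseline differences widen the sampling distribution of the treatment-control difference (and hence the standard errors), but they do not shift its center away from the true effect:

    import numpy as np

    rng = np.random.default_rng(3)
    true_effect = 0.05
    estimates = []
    for _ in range(5_000):
        school_means = 0.70 + rng.normal(0, 0.05, 50)    # random baseline variation across 50 schools
        treated = rng.permutation([1] * 25 + [0] * 25)   # random assignment of schools to arms
        outcomes = school_means + true_effect * treated  # school-level mean outcomes
        estimates.append(outcomes[treated == 1].mean() - outcomes[treated == 0].mean())
    print(np.mean(estimates))   # centered on the true effect: no bias from baseline noise
    print(np.std(estimates))    # baseline variation appears as sampling noise instead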
The Cochrane authors have the following discussion of our health data:
[p. 68] "However, in a personal correspondence the authors state that there is no health data for Group 3 schools for ."
This claim is not entirely accurate, and must be the result of a misunderstanding. There is abundant health and nutritional data from pupil surveys for Group 3 in , but no parasitological data was collected for Group 3 in , since it was considered unethical to collect detailed worm infection data in a group that was not scheduled to receive deworming treatment in that year.
[p. 68] 27/75 schools were involved in other NGO projects which consisted of financial assistance for textbook purchase and classroom construction, and teacher performance incentives. The distribution of these other interventions is not clear, but the authors state that these schools were stratified according to involvement in these other programmes.
[p. 70] "The intervention was a package including deworming drugs for soil transmitted helminths, praziquantel to treat schistosomiasis in schools with > 30% prevalence, and health promotion interventions. In addition 27/75 schools were involved in other NGO projects which consisted of financial assistance for textbook purchase and classroom construction, and teacher performance incentives. The distribution of the latter interventions is not clear. These cointerventions confound the potential effects of deworming drugs to treat STHs. However, the authors kindly provided a reanalysis of their data, with the praziquantel treated schools removed from the analysis. This represents as subgroup analysis of the original quasirandomized comparison".
Given that these other interventions had no measurable impacts on educational outcomes (as reported in several other articles), and that they are balanced across our treatment groups, these prior interventions are not a major concern for the analysis.
Sincerely,
Ted Miguel and Michael Kremer
I agree with the conflict of interest statement below:
I certify that we have no affiliations with or involvement in any organization or entity with a financial interest in the subject matter of our feedback.