Talk:Race and intelligence



Failed "good article" nomination

This article failed its good article nomination. This is how the article, as of August 25, 2006, compares against the six good article criteria:

1. Well written?: High level of writing
2. Factually accurate?: Appears good but contested
3. Broad in coverage?: Very well referenced
4. Neutral point of view?: Topic is inherently controversial, difficult to maintain a neutral POV though there are many level-headed editors trying
5. Article stability? Unstable, multiple revert/edit wars. [Reason for failing GA]
6. Images?: Multiple graphs, good photographs

When these issues are addressed, the article can be resubmitted for consideration. Thanks for your work so far. --Ifnord 14:23, 25 August 2006 (UTC)[reply]

Archives: 1-24 (see the archive index)


FAQ: article name change?

See: Talk:Race and intelligence/Archive 22#The_Huge_Problem_with_this_article:_IQ and Archive_13.


Bogus arguments

IQ test scores are not an absolute measure of intelligence; they tend to ignore many aspects of human cognition and the cognitive process. Things like creativity, wisdom, ability to learn, ability to adapt and practical skills are not gauged by these tests in a meaningful way. IQ tests also fail to measure the same construct among all people to whom the tests are applied: the more culturally distinct the group (e.g. truckers and musicians), the greater the discrepancy. To apply a single test to an entire population of distinct individuals from varying backgrounds is unbelievably biased unless it is used to gauge a particularly relevant skill. Example: race horses are not gauged for their poker skills, just as sociologists are not measured by their ability to paint.

The fact of the matter is that intelligence does vary among humans, but this can be for many reasons: prenatal care, subjective interpretation, interest factors, differing environments, life circumstances, etc. My concern is not with differences among individuals, but with claims that imply that group differences involving subjective and highly biased testing situations can amount to genetic differences in the traits being tested.

How does one compare the intelligence of a gifted painter with that of a mediocre physicist? According to the narrow methods and perspectives used and held by many psychometricians, the mediocre physicist is likely to be perceived as the more intelligent. Why? Because this is what the testing situation demands that they believe.

Psychometric tests do not and cannot measure the number of years spent in practice, nor can they measure interest, motivation, interpretation, diet, home and social life, daily activities, etc.; nor do they try! Despite these obvious and fundamental shortcomings, this model is often presented as valid and unbiased by many practitioners.

Cole, Gay, Glick and Sharp (1971:233) made the following insightful observation: "Cultural differences in cognition reside more in the situations to which particular cognitive processes are applied than in the existence of a process in one cultural group, and its absence in another."

Robert Sternberg and his colleagues asked experts to define "intelligence" according to their beliefs. Each of the roughly two dozen definitions produced in each symposium was different. There were some common threads, such as the importance of adaptation to the environment and the ability to learn, but these constructs were not well specified. According to Sternberg, very few tests measure adaptation to the environment or ability to learn; nor do any tests, except dynamic tests involving learning at the time of the test, measure ability to learn. Traditional tests focus much more on measuring past learning, which can be the result of many factors, including motivation and available opportunities to learn (Sternberg, Grigorenko, and Kidd, American Psychologist, 2005). IQ test items are largely measures of achievement at various levels of competency (Sternberg, 1998, 1999, 2003): items requiring knowledge of the fundamentals of vocabulary, information, comprehension, and arithmetic problem solving (Cattell, 1971; Horn, 1994).

Furthermore, IQ is not a fixed quantity; it can be raised (though it is not as difficult to raise as it is to maintain). This has been demonstrated in numerous studies involving environmental stimulation.

Examples of such studies:

In 1987 Wynand de Wet (now Dr. de Wet) did his practical research for an M.Ed. (Psychology) degree on the Audiblox program at a school for the deaf in South Africa. The subject of the research project concerned the optimization of intelligence actualization by using Audiblox. Twenty-four children with learning problems participated in the study, and were divided into 3 groups.

The children in Group A received Audiblox tuition. The children were tutored simultaneously in a group by means of the Persepto for 27.5 hours between April 27 and August 27, 1987. The first edition of the group application of the Audiblox program was followed. No diagnostic testing was done beforehand.

The children in Group B received remedial education. They were tested beforehand and based on the diagnosis each child received individualized tuition on a one-on-one basis for 27.5 hours between April 27 and August 27, 1987.

The children in Group C were submitted to non-cognitive activities for 27.5 hours during this period.

All 24 children were tested before and after on the Starren Snijders-Oomen Non-verbal Scale (SSON), a non-verbal IQ test that can be used for deaf children. Dr. de Wet reported that he could do nearly all the Audiblox exercises without adaptations, except the auditory exercises. Because he had to use sign-language, the children could not close their eyes. The average scores of the three groups on the SSON test were as follows:

Average IQ scores before intervention, after intervention, and increase:

Group A (Audiblox group): 101.125 before, 112.750 after, increase of 11.625
Group B (Remedial group): 107.125 before, 116.250 after, increase of 9.125
Group C (Non-cognitive group): 104.250 before, 108.875 after, increase of 4.625

Reports received from the teachers indicated that the improvements achieved through remedial education and through Audiblox transferred to the general school performance of the children. The transfer achieved through Audiblox, however, was superior to that of the remedial education, says Dr. de Wet. Finally, because Audiblox can be applied in a group setting, it is much more cost-effective than remedial education, he says.

Reference: De Wet, W., The Optimization of Intelligence Actualization by Using Audiblox (M.Ed. (Psychology) Thesis: University of Pretoria, 1989).

The Glenwood State School

A particularly interesting project on early intellectual stimulation involved twenty-five children in an orphanage. These children were seriously environmentally deprived because the orphanage was crowded and understaffed. Thirteen babies with an average age of nineteen months were transferred to the Glenwood State School for retarded adult women and each baby was put in the personal care of a woman. Skeels, who conducted the experiment, deliberately chose the most deficient of the orphans to be placed in the Glenwood School. Their average IQ was 64, while the average IQ of the twelve who stayed behind in the orphanage was 87.

In the Glenwood State School the children were placed in open, active wards with the older and relatively bright women. Their substitute mothers overwhelmed them with love and cuddling. Toys were available, they were taken on outings and they were talked to a lot. The women were taught how to stimulate the babies intellectually and how to elicit language from them.

After eighteen months, the dramatic findings were that the children who had been placed with substitute mothers, and had therefore received additional stimulation, on average showed an increase of 29 IQ points! A follow-up study was conducted two and a half years later. Eleven of the thirteen children originally transferred to the Glenwood home had been adopted and their average IQ was now 101. The two children who had not been adopted were reinstitutionalized and lost their initial gain. The control group, the twelve children who had not been transferred to Glenwood, had remained in institution wards and now had an average IQ of 66 (an average decrease of 21 points). Although the value of IQ tests is grossly exaggerated today, this astounding difference between these two groups is hard to ignore.

More telling than the increase or decrease in IQ, however, is the difference in the quality of life these two groups enjoyed. When these children reached young adulthood, another follow-up study brought the following to light: the experimental group had become productive, functioning adults, while the control group, for the most part, had been institutionalized as mentally retarded.

Other Examples of IQ Increase

Other examples of IQ increase through early enrichment projects can be found in Israel, where children with a European Jewish heritage have an average IQ of 105 while those with a Middle Eastern Jewish heritage have an average IQ of only 85. Yet when raised on a kibbutz, children from both groups have an average IQ of 115.

In another home-based early enrichment program, conducted in Nassau County, New York, an instructor made only two half-hour visits a week for only seven months over a period of two years. He spent time showing parents participating in the program how best to teach their children at home. The children in the program had initial IQs in the low 90s, but by the time they went to school they averaged IQs of 107 or 108. In addition, they have consistently demonstrated superior ability on school achievement tests.

Further References:
• Clark, B., Growing Up Gifted (3rd ed.) (Columbus: Merrill, 1988).
• Dworetzky, J. P., Introduction to Child Development (St. Paul: West Publishing Company, 1981).
• Skeels, H. M., et al., "A study of environmental stimulation: An orphanage preschool project," University of Iowa Studies in Child Welfare, 1938, vol. 15(4).

Leon J. Kamin (The Bell Curve Wars, 1995, p. 92): "Extensive practice at reading and calculating does affect, very directly, one's IQ score."

Robert Sternberg on the matter of IQ gains (Interview with Skeptic magazine): "I think it's hard to maintain the IQ gains. But if you think environment is important in the development of intelligence, and you put people in a really good program and you raise their IQ, and then take them out of the program and put them back in the poor environment in which they started, chances are you are going to lose a lot of the beneficial effect. If you give someone antibiotics for a disease, cure them, then put them back in the original septic environment, the disease will return. We've seen this when we work with children with parasitic infections. We can give them Albendazol and it will cure their parasitic infection. But if you put them back in the environment in which they acquired the infection, they will just acquire it again."

I personally do not agree with his comparison of IQ with disease or infection, but his point is valid; I am sure the same can be said for a good music program or art school. I think the main problem here is maintenance. Example: if a bodybuilder does not exercise for some time, his muscle mass will decrease. Or, if an artist does not paint for some years, his/her skill will diminish. In other words, "use it or lose it."

There are many other studies that prove IQ to be a non-static phenomenon of little genetic value; one of the most notable and well known is the Flynn effect. In this study of IQ test scores for different populations over the past sixty years, James R. Flynn discovered that IQ scores increased from one generation to the next for all of the countries for which data existed (Flynn, 1994). This interesting phenomenon has been called "the Flynn effect."

"Research shows that IQ gains have been mixed for different countries. In general, countries have seen generational increases between 5 and 25 points. The largest gains appear to occur on tests that measure fluid intelligence (Gf) rather than crystallized intelligence (Gc)."

http://www.indiana.edu/~intell/flynneffect.shtml

This being said, how well do IQ tests predict real-world success? According to Stephen Jay Gould, the only thing an IQ test can accurately predict is how well a person scores on the test. Many others have made similar statements.

Robert Sternberg on the matter of intelligence etc: My first set of interests is in higher mental functions, including intelligence, creativity, and wisdom. - I have proposed a triarchic theory of successful intelligence, and much of the work we do at the PACE Center is in validations of this theory. The theory suggests that successfully intelligent people are those who have the ability to achieve success according to their own definition of success, within their sociocultural context. They do so by identifying and capitalizing on their strengths, and identifying and correcting or compensating for their weaknesses in order to adapt to, shape, and select environments. Such attunement to the environment uses a balance of analytical, creative, and practical skills. The theory views intelligence as a form of developing competencies, and competencies as forms of developing expertise. In other words, intelligence is modifiable rather than fixed.

We use a variety of converging operations to test the triarchic theory--componential (information-processing) analyses, exploratory and confirmatory factor analysis, cultural and cross-cultural studies, instructional studies, and field studies in the workplace. The results of all of these kinds of studies have been encouraging.

Key References:
Sternberg, R. J. (1977). Intelligence, information processing, and analogical reasoning: The componential analysis of human abilities. Hillsdale, NJ: Erlbaum.
Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. New York: Cambridge University Press.
Sternberg, R. J. (1990). Metaphors of mind: Conceptions of the nature of intelligence. New York: Cambridge University Press.
Sternberg, R. J. (1997). Successful intelligence. New York: Plume.
Sternberg, R. J. (1999). The theory of successful intelligence. Review of General Psychology, 3, 292-316.
Sternberg, R. J., Forsythe, G. B., Hedlund, J., Horvath, J., Snook, S., Williams, W. M., Wagner, R. K., & Grigorenko, E. L. (2000). Practical intelligence in everyday life. New York: Cambridge University Press.
Sternberg, R. J., & Grigorenko, E. L. (2000). Teaching for successful intelligence. Arlington Heights, IL: Skylight.

http://www.yale.edu/rjsternberg/

Robert J. Sternberg (b. 8 December 1949) is a psychologist and psychometrician and the Dean of Arts and Sciences at Tufts University. He was formerly IBM Professor of Psychology and Education at Yale University and President of the American Psychological Association. Dr. Sternberg has also been the editor or co-editor of well over 50 psychological journals.

Sternberg is also the author or coauthor of several college-level textbooks in psychology:

• In Search of the Human Mind, now in its second edition (1998) and published by Harcourt Brace College Publishers, is a full-length introduction to psychology suitable for courses in introductory psychology or general psychology. It is based on Sternberg's triarchic theory of intelligence, and approaches psychology from the standpoint both of the evolution of organisms and the evolution of ideas. The textbook emphasizes the importance of the dialectic in how ideas evolve. This text comes with a full set of ancillaries available from the publisher.

• Pathways to Psychology, now in its second edition (2000) and published by Harcourt Brace College Publishers, is an abbreviated introduction to psychology suitable for courses in introductory psychology or general psychology. It is based on Sternberg's triarchic theory of intelligence, and approaches psychology from the standpoint of the multiple pathways that converge on an understanding of psychology—multiple theoretical paradigms, multiple methodologies, multiple styles of learning. This text comes with a full set of ancillaries available from the publisher.

• Cognitive Psychology, now in its second edition (1999) and published by Harcourt Brace College Publishers, is an introduction to cognitive psychology suitable for courses such as cognitive psychology and cognition. It is based on Sternberg's triarchic theory of intelligence, and emphasizes the importance of intelligence as an integrating concept in the study of cognition. This text comes with a brief instructor's manual and with a test bank.

• Introduction to Psychology is now in its first edition (1997) and is published by Harcourt Brace College Publishers in their College Outline Series. This text is intended as a review of psychology, and is suitable as an ancillary for students taking the introductory course, or as a review for students studying for various examinations, such as the Advanced Placement psychology test or the GRE Advanced Test in psychology.

Major Honors Include:

• Early Career and McCandless Awards of American Psychological Association
• Outstanding Book, Research Review, and Sylvia Scribner Awards of American Educational Research Association
• Palmer O. Johnson Award, American Educational Research Association
• Cattell Award of Society for Multivariate Experimental Psychology
• Distinguished Scholar Award of National Association for Gifted Children
• Past-Editor, Psychological Bulletin
• Editor, Contemporary Psychology
• Past-Associate Editor, Child Development, Intelligence
• Past-President, Divisions 1 (General Psychology) and 15 (Educational Psychology) of the American Psychological Association
• Distinguished Lifetime Contribution to Psychology Award, Connecticut Psychological Association
• James McKeen Cattell Award, American Psychological Society
• President-Elect, Division 24 (Theoretical and Philosophical Psychology), American Psychological Association
• President, Division 10 (Psychology and the Arts), American Psychological Association
• Guggenheim Fellowship
• National Science Foundation Graduate Fellowship
• National Merit Scholarship


- Also see work by Harvard University's Howard Gardner.


Sternberg on Psychometric G (a quote from his interview with skeptic magazine): “What I found at that time was that if you use the kinds of tasks that are used in intelligence tests, then you will get the g factor. That statement reflected analyses we did that instead of using individual difference analysis used process analysis. Even using process analysis, we got a general factor. So if you were to ask me, "Do I think that there is general factor in the kinds of tests that psychometricians use?" I would say "Yes." That is a different question from, "If you define intelligence, not just as IQ, but as involving more than what the IQ tests in fact test, is there then a general factor?" then I would say the answer is "No." So the way psychometricians operationalize it, you get a g factor.”

Note: There are three major schools of psychometric interpretation and only one supports the view of g and IQ.
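For readers unfamiliar with what "you will get the g factor" means in practice, here is a minimal, purely illustrative sketch. The single-common-factor data-generating model and the loadings below are assumptions of the example, not figures from any study cited on this page: when a battery of tests is positively intercorrelated, the first principal component of their correlation matrix absorbs a large share of the variance, and that component is what psychometricians label g.

```python
# Sketch of how a "general factor" emerges from positively correlated test scores.
# The loadings and the single-common-factor model are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_tests = 2000, 6

g = rng.normal(size=n_people)                        # latent common factor
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5, 0.6])  # hypothetical loadings
noise = rng.normal(size=(n_people, n_tests))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

corr = np.corrcoef(scores, rowvar=False)             # all-positive correlations
eigvals = np.linalg.eigvalsh(corr)[::-1]             # eigenvalues, largest first
print("share of variance on the first component:",
      round(eigvals[0] / eigvals.sum(), 2))
```

Whether that statistical regularity reflects a single underlying ability, as opposed to an artifact of which tasks are chosen, is exactly the dispute Sternberg describes in the quote above.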


Race and Genetics:

- Osborne and Suddick (1971, as reported in Loehlin, 1975) attempted to use 16 blood-group genes known to have come from European ancestors. Testing two samples, the authors found that the correlation over the 16 genes and IQ scores was not highly positive, as would have been predicted if European genes in Blacks increased IQ scores. In fact, the correlations were -.38 and +.01. Because the results were not significant, the authors concluded that European genes do not raise IQ scores.
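The kind of analysis described above can be illustrated with a small, purely hypothetical sketch: estimate each person's degree of European ancestry from a set of ancestry-informative markers, then correlate that estimate with IQ scores. The data, the per-marker weights, and every number below are invented for illustration and are not taken from Osborne and Suddick's study.

```python
# Minimal sketch of an ancestry-marker vs. IQ correlation analysis.
# All numbers here are made up; they only illustrate the procedure.
import numpy as np

rng = np.random.default_rng(0)

n_people, n_markers = 200, 16
# Hypothetical 0/1 genotype indicators for 16 ancestry-informative markers.
markers = rng.integers(0, 2, size=(n_people, n_markers))

# Hypothetical per-marker weights giving a crude European-ancestry estimate.
ancestry_weight = rng.uniform(0.0, 1.0, size=n_markers)
ancestry_estimate = markers @ ancestry_weight / ancestry_weight.sum()

# Hypothetical IQ scores generated independently of the ancestry estimate.
iq = rng.normal(100, 15, size=n_people)

# Pearson correlation between estimated ancestry and IQ.
r = np.corrcoef(ancestry_estimate, iq)[0, 1]
print(f"correlation between ancestry estimate and IQ: {r:+.2f}")
```

Because the IQ scores here are generated independently of the ancestry estimate, the correlation hovers around zero; that near-zero pattern, rather than a strongly positive value, is what the quoted correlations of -.38 and +.01 are being contrasted with.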

- Zuckerman (1990) demonstrated the dubiousness of results obtained through race premises. He found much more variation within the designated groups than between them, and, like many other species, humans showed considerable geographical variation in morphology (p. 1134). Yee et al. (1993) reach a similar conclusion.

- A study conducted by Tizard and colleagues involving Caribbean children showed that there was no genetic basis for IQ differences between blacks and whites. The IQ of the children at the orphanage was: blacks 108, mixed 106, and whites 103 (Flynn, 1980; also see Richard E. Nisbett, "Race, Genetics and IQ," The Bell Curve Wars, 1995).

- Adjustments for socioeconomic conditions almost completely eliminate differences in IQ scores between black and white children. Co-investigators include Jeanne Brooks-Gunn and Pamela Klebanov of Columbia's Teachers College, and Greg Duncan of the Center for Urban Affairs and Policy Research at Northwestern University.

- According to most geneticists, human populations have never been separated long enough for anything but the most superficial traits to have developed; human physical traits overlap and grade into one another. As well, there is as much or more diversity and genetic difference within any "racial" group as there is between people of different racial groups. Traits like height and body shape offer much more genetic information than anything we use to designate the racial groups here in North America and elsewhere. Also, what is considered black in America could be considered white in Africa; that is, social ideas involving race differ from population to population. (See Cavalli-Sforza, Menozzi, and Piazza, 1994 & 2000; Davis, 1991; Allen & Adams, 1992; Yee, Fairchild, Weizmann and Wyatt, 1993; also see Drayna, Manichaikul, de Lange, Snieder, and Spector, 2001; Holden, 2001.)

- Also, IQ differences in the U.S. are not as drastic as some would have you believe. Many researchers put the difference at 7-10 points (Richard Nisbett, 2005; Vincent, 1991; Thorndike et al., 1986; Leon J. Kamin, The Bell Curve Wars, 1995). As well, this conclusion is only reached after lumping the entire population together as a single body. The truth is that blacks from different regions in the U.S. differ markedly in culture and achievement.

- In more than a dozen studies from the 1960s and 1970s analyzed by Flynn (1991), the mean IQs of Japanese American and Chinese American children were always around 97 or 98; none was over 100. These studies did not include other Asian groups such as the Vietnamese, Cambodians, or Filipinos, who tend to achieve less academically and perform poorly on conventional psychometric tests.

- Stevenson et al. (1985), comparing the intelligence-test performance of children in Japan, Taiwan and the United States, found no substantive differences at all. Given the general problems of cross-cultural comparison, there is no reason to expect precision or stability in such estimates.



Much evidence against Rushton and Lynn to come! Until then, see empirical evidence against Rushton, here:

Reply to Rushton: Review by Douglas Wahlsten, University of Alberta:

http://www.cjsonline.ca/articles/wahlsten.html


I suggest that this article be removed!!

Flynn

I don't know how to do this and this entry is almost certainly not in the correct place. Jim Flynn did NOT "discover" the increase in scores over time. Earlier authors included Thorndike with the Binet (who offers various explanations) and Raven. So the entry in the Flynn box needs to be modified to say that he "publicised and extended the accumulating evidence that there had been a dramatic rise in the scores on some components of general intelligence". Unfortunately, I can't see how to get into that box to modify it.

Also, after the statement that he doubts whether the increase reflects an increase in "real" intelligence, there should be a sentence raising the question of what on earth he means by "real" intelligence and, secondly, asking whether the parallel increases in height, athletic ability, and, most importantly, life expectancy over the same period might also be termed "unreal". Back projection of these trends to the time of the Greeks would yield equally absurd results!
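As a rough worked example of the back-projection point: the rate assumed below, about 3 IQ points per decade, is only a commonly cited ballpark for Flynn-effect gains, not a figure taken from this page, and the whole calculation is illustrative.

```python
# Back-projecting a steady Flynn-effect gain to classical antiquity.
# The 3-points-per-decade rate is an illustrative assumption only.
points_per_decade = 3.0
years_back = 2400  # roughly to classical Greece
implied_drop = points_per_decade * (years_back / 10)
print(f"Implied mean IQ then: {100 - implied_drop:.0f}")  # about -620
```

A mean IQ hundreds of points below zero is obviously meaningless, which is the sense in which naive back projection yields absurd results.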

Again, I cannot insert this because when I open the "edit" file I discover a whole pile of stuff in here which does not show up in the visible text although it is indeed relevant.

Quester67 08:23, 25 August 2006 (UTC)[reply]

belief, data, conclusion

re: H&M calculations... there's no good reason to single out this single published conclusion from all others as being less firm

a belief is the weakest possible description. beliefs can be false or true, justified or not. moreover, "belief" connotes a lack of justification -- belief is usually defined as a conviction of the truth of a proposition without its verification; it is therefore a subjective mental interpretation of perception, one's own contemplation/reasoning, or communication.

a hypothesis is the same thing as a theory, which is the same thing as a conclusion. all conclusions are hypotheses/theories. none are 100% established, nor 0% established (unless they can be known a priori).

data is the rawest form of a measurement. everything else involves some kind of model building and thus is moving across the gray line from data to conclusion.

counterfactual conclusions are not intrinsically less knowable than factual conclusions. indeed, counterfactuals lie at the heart of causal reasoning. in this case, i know of no arguments against the resampling simulation done by H&M to establish the effects of changing the average IQ (ceteris paribus). --Rikurzhen 06:26, 2 August 2006 (UTC)[reply]

I agree with Rik, but maybe a useful formulation would be something like "According to their model, gnats cause foo" instead of the current "They find that gnats cause foo" or "They believe that gnats cause foo". Arbor 06:31, 2 August 2006 (UTC)[reply]
Anything like that is fine. Arbor, is there a name for their resampling technique? I recognize it as being akin to bootstrapping, but not really. --Rikurzhen 06:36, 2 August 2006 (UTC)[reply]
I have to pass on that. I work in a theoretical field and my patchy understanding of statistics comes via probability theory only. The nitty-gritty details of actually doing statistics in real life are beyond my terminological expertise. :-) Arbor 09:20, 2 August 2006 (UTC)[reply]

Did some minor editing - let's make clear that their conclusions are according to their model, and aren't incontrovertible fact. Clearly, they cannot prove direct causality, and are only making assumptions based on observed correlation. There is a large difference between a statistical model and fact, and this needs to be clear to the reader. We know, for example, for a fact, that when you combine baking soda and vinegar you get carbon dioxide. This experiment can be repeated, and verified independently by anyone at STP. Statistical models are not subject to independent, repeatable experimentation to validate causality, especially when dealing with such complex subjects. Much of the confusion in the world about science is due to the blurring of this line in popular media, and it behooves us to do a better job on Wikipedia. --JereKrischel 19:47, 2 August 2006 (UTC)[reply]

JereKrischel, these aren't "minor edits". You are turning a standard presentation of a scientific result into an unreadable mess. There is simply no way you will get away with such biased wording. Almost all science is done this way, and the original formulation is the way it is normally reported, on Wikipedia and in scientific survey articles. The article itself takes no sides in what is true or not. We say "A and B [16] find that gnats cause foo." That's the way we are supposed to write that. If anybody disagreed, we would also be supposed to write "A and B [16] find that gnats cause foo, but C, D and E [18] find that snarks cause foo instead." (Assuming that [18] is a notable publication with a comparable degree of trustworthiness to [16].) Or "A and B [16] find that gnats cause foo, but G and H [342] have questioned their methodology." What we cannot do is to report this subject using selectively biased language, as you are proposing. You would have to rewrite a lot of encyclopaedias if you wanted to do that. Arbor 19:58, 2 August 2006 (UTC)[reply]


I've tried to use compromise wording, while making explicit that they are talking about their simplified model showing effects, but not watering it down as mere "prediction", which seems to be causing some consternation. Unfortunately, I think that if we don't make things sufficiently explicit, the reader is left with the wrong impression. I know it makes for a wordier article, but I'm sure that together we can find a decent compromise between brevity, and accuracy. --JereKrischel 01:02, 3 August 2006 (UTC)[reply]

I've tried to fix this up. Reasoning is in the edit summary. Do you have access to TBC to see what they've written? --Rikurzhen 01:47, 3 August 2006 (UTC)[reply]

JereKrischel, your changes are not NPOV as you're disparaging their results by your description; all else equal (ceteris paribus) is at the root of scientific inquiry, as it is impossible to model everything; there should be no discussion of correlation as that term has a specific meaning in statistics which doesn't apply here; random sampling isn't a "random technique"; etc --Rikurzhen 03:52, 3 August 2006 (UTC)[reply]

Again, the resampling calculation to look at what a population would be like on variables Y after a change in variable x is limited only in that it looks only at the first-order effects. A higher-order effect would be of the kind where (for example) changing average IQ causes a change in the relationship between IQ and marriage. Put another way, the calculation assumes that the world would continue to be structured the same way it is now only there would be more or less people at each level of IQ -- all else equal but changing IQ. --Rikurzhen 18:15, 3 August 2006 (UTC)[reply]
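Herrnstein and Murray's own procedure is not reproduced here; the following is only a minimal sketch of the kind of first-order, ceteris-paribus calculation being described in this thread. The logistic IQ-outcome relationship, the 3-point shift, and all parameters are invented assumptions, not TBC's actual model or data.

```python
# Sketch of a first-order "shift the mean, hold the structure fixed" calculation.
# The logistic relationship and the 3-point shift are illustrative assumptions,
# not Herrnstein and Murray's actual model or data.
import numpy as np

rng = np.random.default_rng(42)

def outcome_probability(iq):
    # Hypothetical fixed IQ-outcome relationship (e.g., probability of some
    # adverse outcome falling as IQ rises). Held constant across scenarios,
    # which is exactly the "all else equal" assumption under discussion.
    return 1.0 / (1.0 + np.exp((iq - 85.0) / 7.5))

def simulate(mean_iq, n=1_000_000):
    iq = rng.normal(mean_iq, 15.0, size=n)   # resampled population
    return outcome_probability(iq).mean()    # expected outcome rate

baseline = simulate(100.0)
shifted_down = simulate(97.0)                # mean lowered by 3 points
shifted_up = simulate(103.0)                 # mean raised by 3 points

print(f"baseline rate:      {baseline:.3%}")
print(f"mean IQ -3 points:  {shifted_down:.3%}")
print(f"mean IQ +3 points:  {shifted_up:.3%}")
```

The point of such a sketch is only to show how a small shift in the mean can produce a disproportionate change in tail-driven outcome rates when the IQ-outcome relationship itself is held fixed; it says nothing about whether that relationship would in fact stay fixed, which is the higher-order question raised above.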

Again, I think there has to be something in between "all else equal" and "ignoring other factors" - it seems like not enough of a caveat of the theoretical and simplistic nature of their model and calculation. Certainly the authors admit as much regarding their model - such hesitations on the part of the authors themselves should be prominent. It is misleading and WP:NPOV#undue weight to present it as if it were an actual fact, in the way that if you take 25 cents away from a dollar you get 75 cents. By not making clear that it is a simulated model, I think we do a grave disservice to the reader. I guess I see it in the same way as climatologists who use models to predict the weather - it is a prediction, not a fact, that it will be 94 degrees next Tuesday in Los Angeles. Is there some compromise language we can find that will make it clear that these aren't facts being presented, but statistical models? The current language seems both POV pushing and misleading. --JereKrischel 22:56, 3 August 2006 (UTC)[reply]
Are there any publications which criticize this analysis (in specific or general)? Otherwise, we're talking about crafting article content on the basis of personal opinion. You should read the relevant section of TBC -- maybe you'll see something I'm missing. I've tried to use their words and phrases, which AFAIK is the appropriate thing to do wrt NPOV -- per Arbor above, "H&M say foo" is the heart of NPOV. --Rikurzhen 06:38, 4 August 2006 (UTC)[reply]
I'm sorry, I don't think I'm being clear. We're not talking about any criticism of their analysis, in specific or general - merely properly reporting what they actually did. What they did was simulate a 3-point drop (and I suppose a 3-point raise) in IQ, and measured the effect their simulation had on certain metrics. They did not actually prove in any way that such a modification to IQ would have the effects their simulation calculations predicted - neither did they do any sort of experiment to validate their simulation (in fact, as you point out, they explicitly stated they ignored other effects). It is POV pushing to present their work as if it represents incontrovertible fact and calculation. Perhaps you would care to quote them directly instead of paraphrasing them in a POV pushing way that misleads the reader about what they actually did? Maybe that would be an appropriate compromise... --JereKrischel 06:46, 4 August 2006 (UTC)[reply]
It's a lengthy passage--too long. I cannot even imagine the "experiment" to test their predictions. (How do you change average IQ of a population experimentally?) They report their results as a valid first-order approximation in the context of a demonstration of the large tail-effects from a small change in mean IQ. ... I think your line of reasoning may be missing something crucial about what's reasonable to expect from a social science investigation, which is why I asked about whether you knew of some particular or general criticism. What we cannot do is water down their published conclusions. --Rikurzhen 07:52, 4 August 2006 (UTC)[reply]
Wait a second, I think JereKrischel's point is that they aren't actually performing many if any experiments, the crux or key point is they are basically manipulating data on a piece of paper (theoretical models etc...)? If there exists citeable criticisms of their allegedly unscientific methodology of course to be neutral we must present this issue differently than they do, right? Isn't neutrality more important than the desire to unquestioningly repeat what they represent as a "conclusion"? Cruxtaposition 10:57, 4 August 2006 (UTC)[reply]
I'm sorry, Cruxtaposition, thank you for your support, but I'm not stating that their methodology was unscientific at all - all I'm concerned about is that their scientific *simulation* be presented as the simulation it was. One could very well repeat their procedure on additional data sets, and validate or challenge their initial conclusions in the context of a simplified, simulated model, and in that sense it is scientific. However, it does not come close to definitively demonstrating that a 3 point move in IQ would have any specific effects, which is what I think the current language leads the reader to believe. I'm not in favor of watering down any published conclusions, but I am in favor of presenting those conclusions accurately. Their conclusions, simply put, were that if you simulated a 3-point drop or rise, ignoring secondary effects, you got certain changes in other metrics. Can we just make sure we state that clearly? --JereKrischel 17:19, 4 August 2006 (UTC)[reply]

Ok JK, you have a valid point about making sure to report scientific/data simulations as experiments(?) rather than as conclusive determinations but my concern is even larger, after recently reading the scientific racism article I think this article completely obfuscates the citable allegation that "race and intelligence" research is racist propaganda fabricated to have the appearance of science. What do you think? Cruxtaposition 17:36, 4 August 2006 (UTC)[reply]

I can only respond to that in one way, Cruxtaposition - assume good faith. Rikurzhen and Nectar, although we may disagree violently, are trying to do their best to make a good article. I think that they honestly believe in their cause, and want it presented in an open and honest manner. I think that they tend to push POV when they summarize, paraphrase, and report on various studies, but I think that it isn't intentional but just a matter of ingrained perspective. I believe that they are open to changes in the presentation, but they have severe concerns about painting the research they cite as completely invalid. I'd like to make it clear that it is questionable, but not so much as to lead the reader to believe that it is completely unfounded. I suppose I'd like to have the article leave a reasonable doubt.
My personal opinion is that much of the basic research both Nectar and Rikurzhen cite, and even much of the basic research cited by some of the Pioneer Fund grantees (who in many cases are just doing meta-analysis of other studies, not original work), is sound. There are biogeographic genetic differences between people, and it is clear that these can have practical effects. That being said, all too often this basic research gets twisted into a form which asserts classical "race" stereotypes and paints with an overly broad brush given the actual data behind the meta-analysis. On the one hand, you have people who get angry about any idea that biogeographic regions could have different characteristics, and on the other hand, you have people who want to simplify the complex nature of biogeographic differences and talk about "Black", "White", "East Asian" as if they are monolithic, immutable groups.
In the end, I think it would be very helpful if Nectar and Rikurzhen tried writing from the other POV (the one which wants things presented with reasonable doubt, instead of incontrovertible fact), but I can understand that it can be difficult. I try my best to not have my edits imply complete invalidation of the research, but I know sometimes I cross the line as well. I'm sure as we go forward, we can work things out appropriately.
Oh, a final P.S. - I got cited for 3RR during my attempt to find compromise language (with several different edits being considered identical because they had two words repeated 3 times, and a close approximation twice), and I felt that was unfair on the part of the accuser - please assume good faith on my part as well. I'm not here to edit war, I'm really trying to find common ground. Thanks everyone! --JereKrischel 18:05, 4 August 2006 (UTC)[reply]
"Assume good faith" is the worst kind of policy if it causes you and/or others to allow a suggestive and scientificly rascist article to remain unchallenged. Judging from the (inexplicable) tone of your response I think you underestimate all of the problems with this article. The priority should be to eliminate misleading dichotomies and suggestive and racism inducing word choices from Wikipedia and not perpetuate Rikurzhen and Nectar's denial, especially since I have not seen any instance of them ever debating in good faith. There is strong evidence that "race and intelligence" is nothing but intentionally misleading propaganda, this should be reported on and their word choices cleaned up or caveatted. The fact that Rikurzhen and Nectar have, perhaps unwittingly, systematically denied, downplayed and mischaracterized all fundamental criticism of "race and intelligence" is an extremely serious concern and massive violation of Wikipedia policy. Given the extreme violation of the principle of neutral presentation occuring in this article your response is vastly inexplicable. If few new readers are reading this article and are likely to be infected by its intentionally misleading presentation and your only concern is to wake Rikurzhen and Nectar up softly then I suppose I defer to your plan, but it seems a vastly insufficient way of conveying the extreme degree to which this article and area of "research" presents the issue misleadingly, in my interpretation. Cruxtaposition 23:18, 4 August 2006 (UTC)[reply]
AGF is not an easy policy, to be sure, and it has its flaws, but like democracy, it is the worst possible system, except for all others that have been tried. As much as I disagree with Rikurzhen and Nectar, they have been conscientious in engaging in dialog, and I have great hope for future compromises. I think the important part is for us to clearly understand how, in good faith, someone could disagree with us. In Nectar/Rikurzhen's case, I believe I understand their legitimate concerns - and the only way anyone is going to be able to find compromise with them is to acknowledge those concerns as legitimate, and find ways to address them. I firmly believe it is possible to make this article more NPOV, accurate, and less misleading than it is right now, and for it to be done in such a way that does not discredit the concerns of Nectar and Rikurzhen, and other editors who may be, in their private lives, proponents of the hereditarian stance.
It is not easy, to be sure, and I rely heavily on the good faith of the hereditarian editors, but in the end, I have to believe we share common ground in our desire to contribute and improve Wikipedia. Hopefully, as hard as it may be, you'll be able to try and AGF and have positive results. --JereKrischel 23:42, 4 August 2006 (UTC)[reply]
Assume good faith or give the benefit of the doubt (but don't stop doubting) is perhaps a good initial policy if you lack evidence to the contrary, which in this case is absolutely not true. In my interpretation the only explanation for Rikurzhen and Nectar's position is they've been infected by (or brainwashed into mindlessly supporting) what "race and intelligence" publications represent as conclusivity and the way those publications have errantly framed the entire issue around "race" (intentionally confusing description with causality). Cajoling proponents of one "stance" cannot possibly be justification for the exclusion, downplaying and mischaracterization of alternative stances and fundamentally disputed points. It is Wikipedia policy that a "stance" should be presented inside a generic article that presents the subject neutrally, which this article is the antithesis of. Cruxtaposition 00:01, 5 August 2006 (UTC)[reply]
All I can suggest is that you give AGF a try. I know that in the end, so long as we're willing to discuss things, we can move forward. --JereKrischel 02:30, 5 August 2006 (UTC)[reply]
AGF has been relentlessly abused by POV pushers and apparent propagandists who rarely if ever debate in good faith, AGF is in tatters along with many of the principles behind most Wikipedia policies. Though, I am always open to discussion, but I must note that I still interpret your position to be inexplicable, it does not compute that you would "disagree strongly" with Nectar and Rikurzhen and yet you allow the misleading presentation of this subject (that they, perhaps unwittingly, support) to remain unchallenged. Cruxtaposition 02:50, 5 August 2006 (UTC)[reply]

Late response... JK's last edits (+simulation) are fine. --Rikurzhen 04:21, 9 August 2006 (UTC)[reply]

User:Zen-master sockpuppet

I've been following the discussion here for over a day and it occurred to me to ask Arbor exactly how JereKrischel's changes are "biased wording"? Everything JereKrischel is pointing out about this "research" is true. The article needs more critical sources and every disputed point and disputed word choice should be changed or caveated. Pristine Clarity 22:48, 2 August 2006 (UTC)[reply]
Yes, everything pointed out is true. The problem is that it's already pointed out in the original text, and pointing the same thing out an additional 15 times in a small text isn't improving readability and distracts the reader from what is actually written. --Zero g 22:59, 2 August 2006 (UTC)[reply]
Your response doesn't make sense to me, every occurrence of non-neutral presentation requires a caveat or re-wording, right? We need to disassociate presentation from conclusivity. In my interpretation this article (and area of "research") is so vastly unscientific and utilizes a minefield of propaganda-esque suggestive language it's staggering. Unfortunately, it is difficult to illustrate just how wrongly this issue has been presented. Pristine Clarity 23:14, 2 August 2006 (UTC)[reply]
No, everything needs to be placed in the proper context which the text in question currently does. If you can't illustrate how wrong (pov) this issue has been presented adding weasel words isn't going to improve the article. While I see room for improvement I'd suggest basing any refutations on expert opinion and keeping repetition to a minimum. --Zero g 01:20, 3 August 2006 (UTC)[reply]
How should someone go about trying to convince you what you consider to be the "proper context" is wrong? "Expert opinion" is not allowed to violate Wikipedia's presentation neutrality policies, right? What you interpret to be "weasel words" are actually necessary caveats, though they don't come anywhere close to conveying the extreme problems with the way this issue is misleadingly structured. Pristine Clarity 01:53, 3 August 2006 (UTC)[reply]

Everything JereKrischel is pointing out about this "research" is true. That's pretty much the definition of a NPOV/NOR violation. --Rikurzhen 00:27, 3 August 2006 (UTC)[reply]

How is that an NPOV/NOR violation? Please attempt to defend what others (including critical sources) interpret to be a vastly non-neutral method of presentation? You seem to be confusing "original research" with determinations of presentation neutrality violations? Pristine Clarity 01:27, 3 August 2006 (UTC)[reply]
a vastly non-neutral method of presentation? --Rikurzhen 01:32, 3 August 2006 (UTC)[reply]
Apparently you disagree with my interpretation but that doesn't make it inaccurate, nor an NPOV violation. I am only trying to get you and everyone to see the (extreme) lack of neutral presentation on this issue. Pristine Clarity 01:38, 3 August 2006 (UTC)[reply]

Scientific presentation?

This article does not utilize anything resembling a fair, neutral, nor scientific presentation of the issue. The current version of this article presents this issue as regurgitated from pro Pioneer Fund sources, Wikipedia's neutrality policies should be a much much much higher standard. Given a highly disputed and controversial subject an article should explicitly note each and every disputed point and explain to the reader exactly where the controversy begins. One of the biggest controversies surrounding this subject (that the current version of the article fails to mention) has to do with language, this article absolutely should not exclusively use pro Pioneer Fund sources' word choices. Pristine Clarity 23:04, 2 August 2006 (UTC)[reply]

Zen Master? --Rikurzhen 00:26, 3 August 2006 (UTC)[reply]
Please attempt to explain the current version of this article's vastly unscientific paradigm of presentation? Pristine Clarity 01:30, 3 August 2006 (UTC)[reply]
Sockpuppet, our job is just to report the issues in the accepted language used in the literature.--Nectar 01:50, 3 August 2006 (UTC)[reply]
Everyone who has ever criticized this article and area of "research" has been trying to tell you there is no such thing as "accepted language" on anything having to do with this issue. You seem to be arguing Wikipedia should just regurgitate what race and intelligence researchers claim? That cannot possibly be neutral. Pristine Clarity 01:57, 3 August 2006 (UTC)[reply]
"You seem to be arguing Wikipedia should just regurgitate what race and intelligence researchers claim" - Indeed, that's exactly what NPOV requires. --Rikurzhen 02:45, 3 August 2006 (UTC)[reply]

Pioneer Fund Research

The Pioneer Fund conspiracy theory of race and intelligence research appears to have originated with a 4-minute ABC News clip from 1994 related to The Bell Curve.[1] Attempts to find documentation about PF being associated with "bias" turned up a single published report which denied that PF caused bias in researchers, but suggested that PF had an influence by who they selected to fund. Are there published accounts to support the claim that the PF connection is not simply a "conspiracy theory" in the pejorative sense, but an actual conspiracy? --Rikurzhen 04:03, 3 August 2006 (UTC)[reply]

Isn't the evidence on Pioneer Fund sufficient? And isn't any label of conspiracy necessarily pejorative? In the end, the practical difference between funding biased researchers, and biasing researchers with funding doesn't seem all that much... --JereKrischel 04:16, 3 August 2006 (UTC)[reply]
No. I'll repeat the NOR in a nutshell to make the point clear. A claim that PF is bad + PF funds R&I doesn't give us R&I research is bad. --Rikurzhen 04:20, 3 August 2006 (UTC)[reply]
Wait a second...the Pioneer Fund article shows published arguments, concepts, data, ideas, statements and theories in their citations. The claim that "PF is bad" + "PF funds R&I" -> "R&I is bad" is not original thought at all - it is well published and understood. What do you think isn't already published? What particular synthesis isn't found in the notes and references of the Pioneer Fund article? --JereKrischel 04:34, 3 August 2006 (UTC)[reply]
What do you think isn't already published? That criticisms of PF can be taken as reflecting badly on the R&I science they support. AFAIK, for example, Tucker argues against this specifically. --Rikurzhen 04:36, 3 August 2006 (UTC)[reply]
Here's a quote from one of the references on the Pioneer Fund page:

He cautions that the Pioneer Fund is "a Fund that was founded by supporters of Hitler's policies against ethnic minorities and handicapped people and that provided money for introducing Nazi propaganda into the United States; it still sponsors research (and projects) that have striking similarities to the work that provided the scientific basis for Nazi measures."[12] Benno Muller-Hill, author of Murderous Science: Crimes against Germany's Ethnic Minorities, echoes Kuhl; Muller-Hill writes that the Death Camps of Hitler's Germany were not the result of a crazed minority of empty-headed bumpkins, but rather "the result of the work of leading scholars of international repute ... Nazi racial policies were the work of trained scholars, not ignorant fanatics" - it was a science gone mad.

http://www.antipasministries.com/html/file0000042.htm

--JereKrischel 04:43, 3 August 2006 (UTC)[reply]

Another specific example of published connections made between Pioneer Fund and the research they support:

The Pioneer Fund has its origins in the eugenics movement of the late 19th century. This branch of science held that mankind could be genetically improved by proper breeding, ideally of white people with other white people. Its founder was Wickliffe Draper, the reclusive and, as it turns out, racist heir to a New England textile fortune. Draper's foundation was established to encourage "racial hygiene" and at points his money helped distribute a 1930s Nazi film on the subject.

William H. Tucker, a professor at Rutgers University's Camden campus, wrote a book on Draper and the Pioneer Fund. What he found was an interconnection between almost every academic with a strong racial theory and Pioneer.

"Everywhere I went where there was a scientist who had a racist sensibility, Pioneer had gotten in touch with him," Tucker said.

If Pioneer could not openly fund a cause, Draper, often using the staff from Pioneer, would funnel some of his own money. He gave $350,000 to help William Shockley, the Nobel laureate who invented the transistor, develop his theories about lower black intelligence. When Earnest Sevier Cox of The White American Society wanted to promote his campaign to repatriate American blacks in Africa, the money came out of Draper's pocket.

When Arthur Jensen of the University of California at Berkeley sought to prove Shockley's theories, Pioneer funded him. When Wesley Critz George of the University of North Carolina at Chapel Hill needed to put out an anti-civil rights pamphlet called "Biology of the Race Problem," Draper quietly underwrote the project, sending his staff to arrange the transfer of money.

"It was Pioneer in all but name," Tucker said.

http://www.post-gazette.com/pg/05030/450021.stm

--JereKrischel 04:52, 3 August 2006 (UTC)[reply]

That first quote is from an organization of apocalyptic Christians[2] who aren't citable in a science article.
The second quote states the Pioneer Fund is immoral and funds race and intelligence research, which is different than arguing criticisms of PF affect researchers' results.--Nectar 05:17, 3 August 2006 (UTC)[reply]
The fact that apocalyptic Christians cited someone doesn't make the source uncitable (they cite Kuhl and Muller-Hill). Similarly, Tucker's quote, "Everywhere I went where there was a scientist who had a racist sensibility, Pioneer had gotten in touch with him," is a direct criticism of PF grantees. Again, I'll assert that funding biased researchers is indistinguishable from biasing researchers with funding. In either case, the argument being made (by others, not me) is that the research done by Pioneer Fund grantees has a predetermined racist agenda - whether or not that is because the Pioneer Fund finds racists and funds them, or finds normal scientists and encourages them to racist conclusions with the carrot of their funding, is an open question, don't you think? --JereKrischel 05:38, 3 August 2006 (UTC)[reply]
The Nazism Kuhl attributes to R&I research doesn't derive from its support from the PF, which is what we're looking for.
William Tucker's book on the fund is AFAIK the criticism with the most academic influence and summarizes the Pioneer issue here:
If the fund has done no more than provide resources to universities for scientific research of high quality, then Pioneer may have been victimized by an intellectually stultifying pressure to conform to political orthodoxy. On the other hand, if the many grants made by Pioneer—not only to a number of well-known scientists but also to a host of obscure academics [see next paragraph] who similarly maintain that blacks are intellectually inferior to whites—mask other, less laudable goals, then the fund may be hiding an oppressive political agenda behind the protection of academic freedom.[3]
. . .
Of course, neither the existence of an odious agenda on Pioneer's part nor the desire of some grant recipients to assist in promoting it justifies attempts to harass researchers, impede their work, or prevent them from obtaining support from the fund. The sort of treatment accorded to Rushton, for example, by both his own department and the administration at the University of Western Ontario has been nothing short of disgraceful. . . Other Pioneer scientists, such as Jensen and Eysenck, have undergone similar harassment for arriving at politically unpopular conclusions.[4]
A final indication of Pioneer's interest solely in science, according to Weyher, was the fact that the fund has supported "only the top experts." It was true that the fund could cite a list of distinguished researchers such as Arthur Jensen, Hans Eysenck, Linda Gottfredson, and others who could point to accomplishments besides their studies of racial differences. But there is an equally long list of other Pioneer grantees, including Robert Kuttner, Donald Swan, Roger Pearson, Ralph Scott, and Frank McGurk—all obscure academics lacking any major scientific achievements and notable primarily for their contributions to a string of racist and neo-Nazi causes. . . As the Satterfield plan envisioned, the purpose of the former group of grantees has been to provide scientific conclusions that can be offered to dignify the policies advocated by the latter.
This seems to be a criticism of race and intelligence research in the sense expressed in the utility of research section (racists' use of the research results). — Preceding unsigned comment added by Nectarflowed (talkcontribs)

[edit conflict]

I'm pretty sure that the claim that some researcher (e.g., Jensen, Rushton, etc.) "is racist" is pretty well documented outside of the context of PF. What's not documented in any scholarly literature (AFAIK) is an argument from PF to a criticism of the science of R&I. (Note that calling someone racist is not the same as saying that they are wrong about science.) --Rikurzhen 05:52, 3 August 2006 (UTC)[reply]
Calling someone a racist is not the same as saying they are wrong about the science...but comparing the research the Pioneer Fund promotes to the discredited "sciences" of phrenology, eugenics, and other Nazi atrocities *is* saying that they are wrong on the science. Again, you may disagree, but I think it is clear that such thoughts (as wrong as they may be, as ad-hominem as they may be) are published, and not OR. --JereKrischel 23:02, 3 August 2006 (UTC)[reply]
Who exactly funds the researchers that try to prove the culture-only hypothesis? I think the article might benefit from a more global view on the money that goes around to the various parties. --Zero g 09:36, 3 August 2006 (UTC)[reply]
I agree - it would be informative if we had the funding trail for all of the research. My guess though, is that it is much like tobacco "research" at the heyday of RJR - you'll have industry funded folk, and then everyone else. This doesn't make the industry folk inherently wrong, of course - skeptics of human causation of global warming are in a tiny minority, but that doesn't discredit their research necessarily. I'd be very interested to see any data on funding, to see if there is an anti-Pioneer Fund out there. --JereKrischel 23:02, 3 August 2006 (UTC)[reply]
It is well known that the source of funding influences the results of research. This has been shown in many fields. Ultramarine 10:47, 3 August 2006 (UTC)[reply]
And frankly, regardless if it has or hasn't influenced specific research, accusations of that nature have been published outside of wikipedia, and are citeable. --JereKrischel 23:02, 3 August 2006 (UTC)[reply]
Well, what precisely has and has not been published is at the root of this issue. We have to get it precisely right, and I think that the PF-->bias notion that keeps recurring on the talk page and in the article has not been published (except perhaps the ABC News story I cited). As I said above: Are there published accounts to support the claim that the PF connection is not simply a "conspiracy theory" in the pejorative sense, but an actual conspiracy? --Rikurzhen 06:41, 4 August 2006 (UTC)[reply]
I'm not sure what you're asking for here, Rikurzhen - any published accounts claiming that the Pioneer Fund is a racist conspiracy are of course in the realm of theory. They can and do present evidence for their theory, but the only way to definitively prove their theory is correct would be direct admissions from the conspirators, right? I think you're going past the requirement for a published citation where someone calls them a conspiracy, and presents whatever evidence they have, into the realm of unreasonable expectations where you demand that their evidence be incontrovertible proof. Were we to hold that same standard for some of the racial science being quoted here, wouldn't we run into the same issue? The r-K selection theory between "races" is published, but should we avoid citing it until there is incontrovertible proof and repeatable experiments that show that it is "actual r-k selection"? --JereKrischel 17:24, 4 August 2006 (UTC)[reply]
I think the PF connection by itself is enough to taint any of their research, and certainly way more than enough for us to report the possibility of taint. I suggest everyone take a look at the scientific racism article, the crux or key point is the fact that publications have been fabricated to have the veneer or appearance of science but upon closer examination are intentionally misleading propaganda with zero scientific value. Cruxtaposition 11:14, 4 August 2006 (UTC)[reply]
Zen-master, if all we have are fringe claims that the APA and intelligence research and genetics communities are all in on a conspiracy, such claims cannot be presented as anything more than fringe without violating NPOV and NOR.--Nectar 20:37, 4 August 2006 (UTC)[reply]
I don't think that's the point. I think the point is that, given the PF's history and the roster of its fundees, there is legitimate suspicion that it selectively promotes one side of the current debate, for reasons possibly related to its historical position. Whether it does this by supporting specifically one type of research results, or by selectively supporting researchers with a specific mindset, people don't agree on. I believe however that a lot of people agree that there is at least the appearance of bias. I think the blurb in the intro and the PF section at this point cover the issue quite nicely. I wouldn't think it necessary to add anything, but I don't think it's warranted to remove anything on that point.--Ramdrake 22:32, 4 August 2006 (UTC)[reply]

In the world of science it is a serious matter to accuse people of being bribed. Wikipedia shouldn't level that gun unless it's being levelled in the literature. Mainstream scientific opinion creates disciplinary constraints that prevent scientists from believing just whatever they want to (Gordon 1997). Editors who don't like that idea may give preference to figures who have no disciplinary or science-professional constraints, such as the non-notable anti-racist activist currently given prominence in the introduction, but scientific standards would require preference be given to those who have mainstream, scientific credibility (Tucker). Not doing so is an example of Wikipedia promoting minority POVs at the expense of mainstream POVs.--Nectar 06:55, 9 August 2006 (UTC)[reply]

Nectar, you're unfortunately still missing the point. As I think nobody here will contest, race and intelligence research has ample social ramifications. Thus, it is normal to also give voice to anti-racist viewpoints (as they are involved in the ramifications of the issue), and not just to the "majority" viewpoint of R&I researchers. We must depict all significant viewpoints in the article to respect NPOV, and I think it's been demonstrated time and again this is a very significant viewpoint.
Also, if you'll re-read, the presumption is not one of bribery per se, but of facilitating and encouraging a specific type of scientifically controversial result, either directly or through funding scientists with specific views. And yes, in some cases, scientists can and do believe whatever they want. Rushton's r/K hypothesis has many, many problems, yet a few scientists still think it's right: paleoclimatological, historical, and behavioral evidence all concur in disproving it.--Ramdrake 12:22, 9 August 2006 (UTC)[reply]
Ramdrake, I am sure we understand. The only interesting question is what WP should do with it. The only thing we can do is to attribute the viewpoint you are sketching to somebody else and report it. I suggested Why people believe weird things, where a good part of a chapter is devoted to this argument. What we can not do is to have our presentation influenced by the conclusion. Finally, and tangentially, I could level the charge of "encouraging a specific type of ... result" at every researcher who works under the AAA umbrella. After all, the AAA has gone public with a view of R&I research that is much more explicit than what the Pioneer Fund has ever formulated. By implication, all research associated with the AAA (especially, all research by AAA members) becomes tainted with the same kind of questionable motivation that ostensibly discredits Pioneer-funded results. Pompous rhetorical question: should this silly observation of mine have any consequences for how we present the viewpoints of cultural anthropologists? (Answer: no. That would violate WP:NPOV.) It's a similar debate to whether we need to discredit Gould's POV by saying that he's a communist. We can mention that somebody else (Bernard Davis, for example) has formulated this reservation, and that's all we can do. Arbor 12:46, 9 August 2006 (UTC)[reply]

(edit conflict)

I agree. This is why I went and fetched yet another cited reference for what the Pioneer Fund does. That way, it is demonstrably a reported opinion rather than an editorial opinion (which wouldn't do at all in Wikipedia, we agree). My point is, we can't start dismissing reported opinions because they're "not part of the mainstream", or "not based on scientific research". This is a subject that touches deeply on both science and society and where there are two significant, opposite viewpoints (sorry for belaboring the point, I know everybody understands). Not sure what you're getting at with the AAA. Their statement, AFAIK, was mostly that race as it is traditionally defined was a social, not a biological construct. This is very much in line with Cavalli-Sforza's interpretation of his own results. I don't see how that can introduce a bias.--Ramdrake 12:59, 9 August 2006 (UTC)[reply]

As has been stated many times before, the Pioneer Fund is not only important for possible bias. It is an important part of the history of the research, of how the research is funded, of the media image, of the policy implications, and so on. Ultramarine 12:55, 9 August 2006 (UTC)[reply]

Hypothesis vs Conclusion?

One problem with the current version of this article and the "research" is that it presents the issue extremely conclusively and suggestively. This area of "research" presents the issue using the antithesis of the scientific method, in my interpretation. Do the current version of this article and these "researchers" present the issue as a "working hypothesis", or are they attempting to portray everything as conclusive? This entire issue seems presented inside a dichotomy and paradigm that prevents readers from significantly considering alternative/environmental causes; it's as if everything is described in terms of "race" to presuppose that as the cause or main cause. This is a massive violation of neutral presentation.

Plus, the foundation of "race and intelligence" research is IQ testing, which is itself fundamentally disputed, so I don't see how any conclusions can even be proposed until those disputes are resolved. How can there possibly be "practical consequences", as the first sentence of the current version of this article claims, if the premise and foundation of this research is fundamentally disputed? I think this subject should be re-written, re-organized and re-titled around the abstract foundation of this subject so we properly convey where the scientific dispute begins; something like IQ test results disparity could work (I am open to alternative suggestions). And it's additionally disputed whether current "IQ tests" actually measure the abstract concept of "intelligence". Just because someone named their test "intelligence quotient" doesn't mean it accurately measures intelligence (if that is even possible). Cruxtaposition 22:07, 5 August 2006 (UTC)[reply]


Zen-master sockpuppet has been blocked.--Nectar 23:05, 5 August 2006 (UTC)[reply]

Nectar, where is the evidence that Cruxtaposition is a sock for Zen-master???--Ramdrake 00:23, 6 August 2006 (UTC)[reply]

I think it's just based on the recent creation of that account, and the immediate interest in this article. I've never done any sock-puppeting (and don't ever intend to), but I would suppose the smart thing to do would be to create a bunch of accounts randomly in time, use them in different areas for a few months, and have them converge "coincidentally" on articles. Otherwise, it's just too easy to see new accounts pop up after old accounts have been blocked. That being said, although Cruxtaposition's POV represents one extreme, there is merit to some of the concerns - we need to be very careful about how we present the research that is available, so as not to overstate their results. Hopefully some of the pro-hereditarian editors can help us move towards NPOV by "writing for the enemy" a bit - both Rikurzhen and Nectar seem very well educated on the subject, and I believe that they both could do a good job in mellowing out the language, and still presenting the research fairly. --JereKrischel 01:08, 6 August 2006 (UTC)[reply]
I've read Zen-master repeat his/her same fringe demands over and over again. S/he was blocked by the arbitration committee because s/he wasn't willing to participate productively or within the rules of the community. I understand how you guys feel. I think the vast majority of WP editors are nice people and NPOV issues are solvable.--Nectar 02:34, 6 August 2006 (UTC)[reply]
I'll say it again, Nectar, I greatly appreciate your willingness to engage in dialogue, and work together despite our vastly disparate opinions. I hope you will forgive me if any of my comments seem overly harsh or critical - my sincere hope is to express my concerns in such a way that they are understandable and clear, and sometimes that desire for clarity may come across as aggressive and opinionated. I also endeavor to clearly understand your core concerns and issues, and I hope I've demonstrated some level of understanding of your POV. --JereKrischel 04:08, 6 August 2006 (UTC)[reply]

On the Racial and Ethnic Makeup of Contributors

I can't help but wonder what the equivalent of this article looks like on the closed Chinese version of the internet (behind that nice Great Red Wall : ). However, I think that it may be a good idea to attach some relevant information regarding the racial and ethnic makeup of contributors. There is a page here that outlines some of the forms of consistent bias that Wikipedia articles have been known to show. I personally (like most human beings with an ounce of common sense) would feel more comfortable reading articles like this if I knew that there was at least one black man analysing this information in order to poke holes in it (having this type of article be the creation of white society only would surely have some effect on its NPOV value).

Interesting thoughts. I suppose one could use the same sort of analysis as some of the pro-hereditarian camp has done, and discover which "race" is generally better at science, engages in less racism, and is more NPOV. Being on the other side (a weak-hereditarian with big problems concerning the arbitrary social "races"), I simply can't buy into the idea that one needs to be of a certain color or ethnicity to create an NPOV article. I believe we are all human, all related, and things like "white society" and "black society" are simply baseless distinctions. Not knowing what your opinion of the article is, requesting that it be vetted by specific "races" actually shows you buy into the pro-hereditarian stance in general, even if you may disagree with specifics.
What would make me a great deal more comfortable is if the partial-genetic influence on intelligence wasn't conflated with social ideas of "race". There is plenty of good science to be done here, and mapping results into racist stereotypes just doesn't seem to help things. --JereKrischel 17:35, 8 August 2006 (UTC) (and, FWIW, I'm human, with ancestry and family from every corner of the globe. If you name an ethnicity, I've pretty much got it.)[reply]
I generally support Chomsky's opinion on that issue:
I rather doubt that the non-white, non-male students, friends, and colleagues with whom I work would be much impressed with the doctrine that their thinking and understanding differ from "white male science" because of their "culture or gender and race." I suspect that "surprise" would not be quite the proper word for their reaction.[5]
But figures in this field who don't consider group differences off-limits have included representation from Whites, Jews, Asians, and African Americans (Thomas Sowell [6]).--Nectar 17:54, 8 August 2006 (UTC)[reply]

On the lack of a "Genetics and Intelligence" wikipedia article

To me, there is nothing more reflective of the very deep-seated and racially inclined motivations behind this whole blog than the complete lack of a "Genetics and Intelligence" article on Wikipedia. Nobody even thought to divorce racial notions from genetics when contributing to Wikipedia. Even within white populations, there are smart people and stupid people - so it would obviously go without saying that there should be an article considering genetics and intelligence that is probably completely independent of race. If possible, totally avoiding any reference to race in such an article would be an indication of the maturity and high level of importance that individuals attribute to intelligence in all societies. --Nukemason 11:48, 8 August 2006 (UTC)[reply]

Inheritance of intelligence. Please help improve it. Arbor 11:22, 8 August 2006 (UTC)[reply]
Thanks for your response to this. However, I have had a brief look at the article that you have referred me to and feel that it does not consider certain questions that perhaps a genetics and intelligence article might (yes, inheritance refers to genetic acquisitions of sorts - but, in the future, not all genetic acquisitions need occur via the mechanisms of inheritance). This is an issue that is covered in the eugenics wiki (briefly, the subsection that concerns eugenics technologies). However, I don't think that certain issues are covered, such as *how much* human intelligence can realistically be enhanced (in particular, a point of interest for me has always been the theoretical limitations on human intelligence that are innately tied up with the characteristics of neurons, etc... - currently a hot topic in bio and nano tech). Not so much as a 'point', but rather a smudge.

--Nukemason 23:59, 13 August 2006 (UTC)[reply]

Citation analysis

Immediately available and verifiable information isn't "original" when it's employed in relation to a published argument. Normally it wouldn't be necessary because the editors of an encyclopedia would have professional knowledge of the field, but in this case it's useful for factual statements to constrain interpretation. Making the article less factually informative isn't an appropriate task here.--Nectar 23:59, 9 August 2006 (UTC)[reply]

In this context NOR is probably best thought of as "no original thought", which is how it's characterized at Wikipedia:Verifiability. (The addition does meet the requirements of verifiability policy.)--Nectar 00:10, 10 August 2006 (UTC)[reply]

Well, we're here to report the facts, not for "constraining interpretation". That sounds an awful lot like telling the reader what to conclude from the facts. However, I'd be content to let it go if there is a consensus to do so.--Ramdrake 00:12, 10 August 2006 (UTC)[reply]
What I mean by using "factual statements to constrain interpretation" is grounding in empirical reality the many and sometimes far-ranging claims that exist in the literature. (We report both claims and facts, but claims are not the same thing as facts.) --Nectar 01:06, 10 August 2006 (UTC)[reply]

Nectar, two things - 1) the assertion you were making in the footnote is unclear. I'm assuming you were trying to say that the journals you specifically mentioned did not report any criticism of the Pioneer Fund, is that true? 2) Even if they didn't report any criticism of the Pioneer Fund, it hardly makes a noteworthy point - 99.9% of Hindu religious texts don't discuss the implications of Jerry Seinfeld on modern mores, but that wouldn't be a worthwhile cite, would it? I exaggerate, of course, but my point is that unless these journals you cited regularly criticized other funding agents, their lack of criticism of the Pioneer Fund isn't really notable, is it? The fact that a movie critic didn't criticize a given movie may be notable, but the fact that they didn't critique a new non-fiction book on the Amazon rainforest wouldn't really mean anything, would it? --JereKrischel 05:32, 10 August 2006 (UTC)[reply]

The two journals mentioned are responsible for a great fraction of all published reports in this field. Hence, they would be the place to look for the scholarly treatment of PF. --Rikurzhen 05:45, 10 August 2006 (UTC)[reply]
[Edit conflict] These two journals are the two that specialize in individual differences psychology/intelligence research and play one of the most prominent roles in determining the course of this area of psychology. American Psychologist, the journal in which the APA report and one of Flynn's more prominent papers were published, and Journal of Personality and Social Psychology, in which some of Sternberg's more prominent papers are published, also have no mention of criticism of the fund. The point is that, while some authors claim the criticism of PF is quite important, the criticism of the fund hasn't even been mentioned in relevant journals. (A topic's representation in the literature is considered a prominent measurement of its importance.)--Nectar 06:05, 10 August 2006 (UTC)[reply]
Have any of these journals had anything critical to say about any funding sources? It sounds as if they are focused (and rightfully so) on the work itself, not the funding sources. If that is true, one wouldn't expect them to criticize the Pioneer Fund at all, and the absence of such criticism shouldn't be seen as notable. Can you cite any criticism these journals have had of any funding source? --JereKrischel 06:26, 10 August 2006 (UTC)[reply]
The point is that, while some authors claim the arguments about the Pioneer Fund are important, in the actual field it hasn't been important. (Something that is considered important to a topic would be mentioned in the literature of the topic.)--Nectar 06:36, 10 August 2006 (UTC)[reply]
Again, without a clear indication that these journals have criticized other funding sources, and have specifically not criticized the Pioneer Fund, it doesn't seem to help make your point - it feels almost as if you're trying to prove a negative. I think it might be fair to make sure criticism of the Pioneer Fund is properly characterized, so as not to lead people to believe that critics are geneticists when they're actually psychologists or vice versa, and I think the same sort of clear indication of sources is important (noting that Rushton is not a biogeneticist, for example). Not to venture too far off track, but this kind of thing is very important when comparing the conclusions of Cavalli-Sforza versus the interpretation of Rushton of C-S's work - one is clearly *not* a geneticist.
Am I correct in understanding your concern? That is, that any criticism sources be properly identified as to whether or not they are activist groups, psychologists, geneticists, professors, students, etc? I think we can make sure we properly identify everyone's background, to give readers a clear indication of what qualifications and specialties they have. --JereKrischel 07:02, 10 August 2006 (UTC)[reply]
Yeah, the point is to draw a line between anti-racist or anti-hereditarian activists and the actual scientific field. All that needs to be noted is that, contrary to claims by these authors, the criticisms of the PF aren't important to the field.--Nectar 07:11, 10 August 2006 (UTC)[reply]
I think making the leap that the criticisms of the Pioneer Fund aren't important to the field is definitely POV pushing. Let the reader decide if they want to make that leap, we shouldn't be making that decision for them. Similarly, although Rushton may only be a psychologist, and not a geneticist, we should let the reader decide if that makes a difference in his credibility.
The real fine line we need to draw, Nectar, on both sides of this, is not to engage in ad hominem attacks on sources (even if they do it to each other). Although now that you mention it, a good section on the differences between what geneticists, sociologists, and psychologists think may be interesting - much of the disagreement we have may simply be because of the various interpretations foisted on what we can both agree is real, scientific genetic research. Anyway, I'll ponder how a section like that might be constructed...in the meantime, I think it should be sufficient to clearly indicate, without asserting any conclusion as to credibility, the background of the people making the various assertions (either in the case of the Pioneer Fund, or in the case of psychologists interpreting the work of geneticists). --JereKrischel 09:09, 10 August 2006 (UTC)[reply]
My wording above was made stronger to make the potential implications more explicit and is different from the wording used in the article. If psychologists on one side of a debate make the same interpretations as leading geneticists on that side of the debate (e.g. Sternberg and Lewontin or Jensen and Risch) the distinction doesn't seem particularly important. In contrast, in the case of the Pioneer Fund and how researchers who accept grants should be treated, there's significant disagreement between the positions of anti-racists and scientists who are strongly on the environmental side e.g. Sternberg, Flynn, and even Tucker.--Nectar 10:04, 10 August 2006 (UTC)[reply]
I think it would be hard to assert that psychologists and geneticists have the exact same interpretations, although significant overlap may be there. Especially with such a sensitive field as race and intelligence, we should make things clear. --JereKrischel 17:36, 10 August 2006 (UTC)[reply]

I don't understand the reason for these claims: 1. that if there isn't criticism of other funding sources in PAID, Intelligence, et al, then there's no reason to mention that they don't contain criticism of PF. 2. that biologists and psychologists are different.

My understanding is that: 1. sometimes it is important to write about what doesn't exist -- there are other instances in the lead block; in this case, the fact that PF criticism isn't a regular part of scholarly discourse is important to note. 2. I just don't get it. --Rikurzhen 17:47, 10 August 2006 (UTC)[reply]

Let me try to explain (I'll use as simple an analogy as I can):
If there are no policemen at the fire station, is it significant to find that there are no policemen from New York at the fire station? No. You possibly could have generalized from the first observation (if it is repeated often enough, that is) that a fire station is not the place you should normally find policemen to start with.
Likewise, if there is no criticism of any source of funding in those journals (let alone of the Pioneer Fund), maybe that's because that's the wrong place to look for criticism of a source of funding, in which case it is not significant that you won't find criticism of the Pioneer Fund, because you won't find any criticism of a funding source in there to start with.--Ramdrake 18:03, 10 August 2006 (UTC)[reply]
Well, I get that idea, but where else but the scholarly journals should you look for such criticism? I did a quick count of the journal references for this article. Here are the top 5 hits:
  • Intelligence = 34
  • Personality and Individual Differences = 20
  • American Psychologist = 16
  • Science = 9
  • Psychology, Public Policy, and Law = 7
So we're looking in the right place so long as the scholarly literature is the right place to look -- any reason to think it is not? Not finding criticism of PF could mean several things, but one explanation seems obvious. Regardless of the explanation, the fact is certainly of note. --Rikurzhen 23:40, 10 August 2006 (UTC)[reply]
You'd be looking in the right place if you found one criticism of any funding agency in one of those journals. Then, it would tell you such criticism is rare, but that this isn't just the wrong forum. The first place to look for criticism of any agency is of course in the media and... oh wait! There is criticism of the Pioneer Fund there. :) I would think the scholarly literature would be the wrong place to look for criticism of funding sources, because a rather large number of scientists know their next year's research grant may come from a different source than this year's (happens all the time - been there, done that), so the last thing they want to do is to speak up against a funding agency, as they really don't want to alienate them unnecessarily. So, in fact, there is a perfectly LOGICAL reason not to find such criticisms in the journals where they publish. As a matter of fact, the last time I heard someone criticize a funding agency, it was through the media (a researcher wanting to investigate the popular appeal of creationist debates was turned down because he "didn't provide appropriate proof of evolution" or some such silliness). Like I said, until you find a criticism of a funding agency in one of those journals, it's a fairly safe bet that these journals are not the correct venue for criticisms of funding agencies.--Ramdrake 23:55, 10 August 2006 (UTC)[reply]
I'm afraid I have to disagree. You're suggesting a remarkable kind of self-censorship. In the environment following the publication of The Bell Curve, there was plenty of room to snipe at PF. Again, if not scholarly journals then where should we be looking? Keep in mind the article material in question is the result of the literature search, and your argument is that we shouldn't mention the lack of mention of PF in these journals. --Rikurzhen 00:18, 11 August 2006 (UTC)[reply]
I'm afraid you're conflating criticisms. Following the Bell Curve, there was some academic criticism (of the data and/or its analysis, in journals) and a lot of social and political criticism (of the motives of the authors, and to some extent of the data and its analysis, in the media mostly). The only thing you have to do to prove me wrong is to find one criticism of any funding agency in any one of those journals. Until that is done, your demonstration (that PF has no criticism in the scientific journals) has no control group (existence of criticism of a funding agency in a scientific journal). What I advanced earlier was a possible explanation for the failure to find any funding agency criticism in scientific journals. The hypothesis as to the reason may be right or wrong, but so far the fact remains: the absence of PF criticism in scientific journals means something only insofar as one can demonstrate that these journals are an actual forum for such criticism.--Ramdrake 01:15, 11 August 2006 (UTC)[reply]
According to their Wikipedia articles, several of the Pioneer Fund grantees are on the editorial board of Intelligence. Ultramarine 01:43, 11 August 2006 (UTC)[reply]
The same seems to be true for Personality and Individual Differences. I think that the real question is why these journals do not require disclosure of funding in such a controversial field. Ultramarine 01:51, 11 August 2006 (UTC)[reply]
Ramdrake, what I'm saying is that I don't need to "prove" anything because it's self-evidently interesting that there's no negative discussion of PF in any of these journals. UL, you could try expanding the search to all of ISI &/or Medline, the result will most likely be the same. --Rikurzhen 02:12, 11 August 2006 (UTC)[reply]
There are several articles critical of the fund published in other journals. Ultramarine 02:17, 11 August 2006 (UTC)[reply]
Many of the articles are critical: [7] Ultramarine 02:21, 11 August 2006 (UTC)[reply]

It seems like Ultramarine has found sufficient evidence that the Pioneer Fund has been criticized in scholarly journals (American Behavioral Scientist, Vol. 39, No. 1, 44-61 (1995) for example), so the assertion that there is no criticism from the scientific community in the field is obviously refuted. That being said, I think we should make sure we clearly identify those people making both claims, and criticisms. If a psychologist is making a claim, it is important to note their background. Similarly with a geneticist. Rather than try to make a point that one type of person feels one way, another group feels another way, why don't we just identify the people making the assertions and critiques, and let the reader judge if that should have any impact on credibility. Certainly, we shouldn't be deciding for the reader what makes one credible. --JereKrischel 05:27, 11 August 2006 (UTC)[reply]

An accounting of what criticism has been published would be great (and requested by me for some time), but AFAIK none of these are journals in the field (text in the article). --Rikurzhen 05:58, 11 August 2006 (UTC)[reply]

link to U.S. google scholar:

Negative reviews:

  1. [BOOK] The Funding of Scientific Racism: Wickliffe Draper and the Pioneer Fund - WH Tucker - 2002
  2. " The American Breed": Nazi eugenics and the origins of the Pioneer Fund. - PA Lombardo - Albany Law Rev, 2002
  3. The Pioneer Fund: Financier of Fascist Research - SJ ROSENTHAL - American Behavioral Scientist, 1995
The negative review (mentioned in my previous post):
American Behavioral Scientist
Seems to be "in the field" --JereKrischel 06:58, 11 August 2006 (UTC)[reply]
That's my #3. There are thousands of "no-name" journals. Has ABS published anything else about R&I? Here's the abstract from this article: Many citations used in The Bell Curve to provide a pseudoscientific veneer for Herrnstein and Murray's academic version of The Turner Diaries for the "cognitive elite" came from advocates of eugenics, whose "research" has been supported by the Pioneer Fund. A Nazi endowment specializing in production of justifications for eugenics since 1937, the Pioneer Fund is embedded in a network of right-wing foundations, think tanks, religious fundamentalists, and global anti-Communist coalitions. This article combines Domhoff's model (1978) of how the ruling class makes public policy, Knapp and Spector's (1991) model of how and why capitalists build racism, and Oliver Cox's (1948) analysis of how and why capitalists build fascism to show that the U.S. ruling class is laying the political, ideological, economic, and paramilitary groundwork for fascism. Liberal reaction to The Bell Curve and the threat of fascism has mainly taken the form of appeasement. History suggests it is time for a different response. --Rikurzhen 07:04, 11 August 2006 (UTC)[reply]
Another ABS publication on R&I: The Bell Curve: Too Smooth to be True --JereKrischel 07:12, 11 August 2006 (UTC)[reply]
Also "in the field":
That should be sufficient, I think, to make the point. Whether something is a "no-name" journal really isn't something we should be trying to judge, don't you think? --JereKrischel 07:07, 11 August 2006 (UTC)[reply]
As per Nectar below, the journal matters a great deal. As does the actual content of the articles. A cursory review finds that these are not the kind of articles to falsify the claim that "the criticism of the fund has not been an issue in the journals in its field". --Rikurzhen 07:52, 11 August 2006 (UTC)[reply]
The question isn't 'have journals in general published criticism of the PF,' it's 'have journals that are significant to the field of intelligence research/differential psychology published criticism.' Another description would be 'journals that play a significant role in the direction of the field.' For example, the two secondary journals I cited above [American Psychologist and Journal of Personality and Social Psychology] were selected from notable publications in the publication lists of notable researchers in the field.--Nectar 07:35, 11 August 2006 (UTC)[reply]
The problem, Nectar, is that "significant to the field of intelligence research" seems to be a subjective POV judgement. Certainly, The American Journal of Psychology, Contemporary Sociology, American Journal of Physical Anthropology, International Journal of Health Services, and Journal of the History of the Behavioral Sciences all are in the publication lists of notable researchers in the field - perhaps not those that you agree with, but notable nonetheless. I think it's a losing proposition to arbitrate who the "notable researchers" are, and which journals in their publication lists are "notable publications". Certainly, as pointed out earlier, some of the journals mentioned have Pioneer Fund grantees on their boards - certainly enough to quash any criticism that may have otherwise been brought up. Instead of trying to draw an OR conclusion (no journals "in the field" and in the publication lists of "notable" researchers have criticized the Pioneer Fund, therefore such criticism is less credible), let's just report the facts as they are, and clearly identify who is criticising the Pioneer Fund, and who is making assertions of a primarily hereditarian stance (psychologists/biologists/geneticists/pundits). If we're going to try and make a point that a certain journal declined to criticize the Pioneer Fund, then I think it's important to note whether the Pioneer Fund has significant influence on that journal. --JereKrischel 18:47, 11 August 2006 (UTC)[reply]

(edit conflict)

Let's please be careful. Citations have been introduced here from science journals that criticize PF. When those were introduced, it was claimed the journals were not sufficiently close to the field of study. When more citations were found from journals which one could assume were sufficiently close to the field of study, they were deemed not to be influential or significant enough in the field. That unfortunately looks very much like a tactic called "moving the goal post". What is now required if we are to restrict the list of acceptable journals would be a list of journals in the field by an authoritative author not related to the Pioneer Fund. Otherwise, we'll be running in circles.--Ramdrake 18:54, 11 August 2006 (UTC)[reply]
The "goal post" has always been journals in the field, as that's the point that was made in the article. The four journals I've referred to have been demonstrated to be significant to the field. If you can't quantify (quantification is the opposite of subjective) any intelligence researchers - either environmental or hereditarian - who have published intelligence research in those journals than they're not intelligence research journals. Note that Snyderman and Rothman surveyed specialists instead of random figures, so it's not an original concept. It's an extraordinarily bold claim that the specialist journals in which intelligence research is primarily published suddenly lose their position because a portion of their editorial boards are composed of highly-cited researchers WP editors don't like. That's fine to note that portion in the article footnote. (Elsevier's website is malfunctioning, but the editorial board for Intelligence can be viewed in the google cache.[8])--Nectar 21:28, 11 August 2006 (UTC)[reply]
Then, please find a proper cite that says "these are the appropriate journals for the field". Also, once that is done, you would need to demonstrate that these journals are willing to publish criticism of funding sources (they may have a policy not to). As for the "extraordinarily bold claim", if journal X has PF fundees on its editorial board, any time an article critical of PF is submitted to that journal, those on the editorial board that have ties with PF would be in a potential conflict of interest situation. I don't see anything bold in this statement.--Ramdrake 22:21, 11 August 2006 (UTC)[reply]
The "citation" for which journals publish in intelligence research is represented by intelligence researchers, such as Sternberg and Flynn, publishing intelligence research in those journals. Sternberg and Flynn and other environmental editors also sit on the board of Intelligence. According to you, it would be a conflict of interest for such journals to publish hereditarian articles or criticism of environmental positions. Environmentalists like Sternberg most certainly pull out all the stops in their articles that criticize hereditarian positions, but they haven't deemed the sinister Pioneer Fund Conspiracy noteworthy enough to even mention in passing in any of their many articles in these journals. Anyway, this isn't about leading hereditarians not being blacklisted from journal editorial boards, as you would desire, as the other two journals that are significant to this field that I cited above do not have grantees on their boards, AFAIK, but still haven't discussed the PF Conspiracy.--Nectar 22:59, 11 August 2006 (UTC)[reply]

What I said is that it would be a potential conflict of interest situation for PF fundees on the board of these journals to publish an article criticizing PF. I didn't say anything about an article with a pro-hereditarian or pro-environmental stance. And BTW, we still do need a proper reference (and I mean a literal reference) as to which journals should be considered "in the field" and which shouldn't. The statement that it is "represented by intelligence researchers, (...) publishing intelligence research in those journals" just isn't good enough. Your definition of "journals in the field" seems to basically boil down to about four titles. I'm not sure you appreciate how restrictive a definition that is, for any field of science.--Ramdrake 23:14, 11 August 2006 (UTC)[reply]

(1) The principle you're proposing is that a journal creates a conflict of interest by having editorial board members who take positions on either side of a debate. It's quite normal for journals to have board members who take positions, and in this case there are board members from both sides of the debate. AFAIK Intelligence and PAID have never been accused of bias.
(2) How could a journal be in intelligence research if it doesn't publish intelligence research or any articles by intelligence researchers? I looked some more. Psychological Review has published note-worthy articles by researchers on both sides, and Science and Psychology, Public Policy, and Law are the fourth and fifth most cited journals in this article, but PF criticism hasn't been mentioned in these journals. That brings the tally to 0 in these 7 journals. If you give an example of note-worthy intelligence research in another journal that you want checked for PF criticism, it can be checked.--Nectar 00:04, 12 August 2006 (UTC)[reply]
Now you're asserting that citation in a Wikipedia article makes something note-worthy enough to be included in a Wikipedia article? I'm sorry, but I don't believe your criterion for what a "note-worthy" journal is makes sense in terms of NPOV. Similarly, a journal, such as Journal of the History of the Behavioral Sciences, may not deal with R&I directly, but it certainly is important as an analysis of the field. I think the principle we should stick to, and implement throughout the article, is clearly indicating the source of criticism or assertion, be it a scholarly journal regarding behavioral sciences, or a psychologist re-interpreting the findings of a geneticist. Trying to come up with criteria for what is and isn't "note-worthy" is definitely OR. --JereKrischel 06:24, 12 August 2006 (UTC)[reply]

Verifiability

Ramdrake, citing a journal's publication list is not original research according to the definition given at that page:

  • "Citing sources and avoiding original research are inextricably linked: the only way to demonstrate that you are not doing original research is to cite reliable sources which provide information that is directly related to the topic of the article, and to adhere to what those sources say. . . Research that consists of collecting and organizing information from existing primary and/or secondary sources is, of course, strongly encouraged. All articles on Wikipedia should be based on information collected from published primary and secondary sources. This is not "original research"; it is "source-based research", and it is fundamental to writing an encyclopedia." WP:NOR

--Nectar 13:36, 12 August 2006 (UTC)[reply]

Actually, here's the way I see it: you surveyed a handful of publications looking for criticism of the Pioneer Fund. When you found no such criticism (and no other criticism of any other funding source for that matter, AFAIK), you concluded: "I believe these journals constitute the bulk of the research field's publication space, and as such not finding PF criticism in them means the research field itself is not critical of the Pioneer Fund in its publications." The result tabulation is your OR, and the belief that these journals constitute the bulk of the research field's publication space is yours (and not objectively demonstrated). For both these reasons, I would say your findings don't have the significance you bestow on them, and don't have a place in Wikipedia unless independently arrived at by a verifiable source.--Ramdrake 14:35, 12 August 2006 (UTC)[reply]
You appear to be saying it's just impossible to determine whether or not a journal publishes intelligence research. The solution is quite simple, per the above section: if the journal publishes significant intelligence research articles, then it publishes intelligence research. The journals selected are objectively demonstrated to have published significant intelligence research, and more examples can be very easily provided. (Science, though, is a special case compared to the other 6 journals.)--Nectar 21:01, 12 August 2006 (UTC)[reply]
OK, maybe there is a misunderstanding here we can clear up and go forward. I'm quite convinced the journals you came up with are relevant and significant in the field. It is your exclusion of all other journals I have a problem with. If you can demonstrate that the journals below, which were also considered somewhat "in the field", really aren't, with a reason why, I'll withdraw this objection.
  1. The American Journal of Psychology
  2. Contemporary Sociology
  3. American Journal of Physical Anthropology
  4. International Journal of Health Services
  5. Journal of the History of the Behavioral Sciences
These do not specialize just in the field of intelligence, granted, but their scope should include that field, and I know several of them are quite significant journals. So, please demonstrate how they're not relevant to the field (except for their lack of specialization).--Ramdrake 22:32, 12 August 2006 (UTC)[reply]
Come on. I can't be more explicit with you. A consensus version is not defined as your POV-warrior version with the other side removed. I had changed the text to refer to "intelligence research journals," so the journals we're looking for have clear criteria. The burden of proof is on you to show irrelevant journals such as the International Journal of Health Services are intelligence research journals. Showing that is quite simple: find any significant intelligence research published in the journal.--Nectar 22:45, 12 August 2006 (UTC)[reply]
[Moved from above] Biochemists can certainly make claims about electrochemistry, but if they don't participate in the research and discussions in electrochemistry then they're not part of the scientific field. The point in our case is that the journals and researchers (on both sides) that determine the course of psychometrics haven't even mentioned this criticism. Re: OR, it wouldn't be practical for Wikipedia to operate under an "all sources are equal and cannot be evaluated" policy.--Nectar 10:30, 12 August 2006 (UTC)[reply]
[Moved from above] The discussion of the 7 journals above was a reference to the articles they published, not this wiki article. For example, the Journal of Personality and Social Psychology is the journal in which Steele and Aronson 1995 (stereotype threat) was published, and the Sackett et al. 2004 response was published in American Psychologist. If you can provide other journals in which significant intelligence research is published, we can look for discussion of the PF.--Nectar 10:40, 12 August 2006 (UTC)[reply]
Define "significant intelligence research". I think it is clear that the journals listed do publish significant intelligence research, and I don't think you can reasonably say, or even begin to "prove" that they don't. Especially the "Journal of the History of the Behavioral Sciences" seems particularly relevant, since as a meta-journal (of the history of the field), they are more likely to criticize funding sources, and note historical trends one way or the other. I disagree with your narrow definition of "intelligence research journals", and think that unless you can show a reasonable cite that states that your definition is valid and accepted, it simply is OR. --JereKrischel 05:53, 13 August 2006 (UTC)[reply]
[Moved from above] Ultramarine provided the Google Scholar results, and I've listed those with negative criticisms of the Pioneer Fund. I think the onus is on you to show some cite that clearly states that these journals are not "journals in which significant intelligence research is published". Your definition of "significant" is ambiguous, and a POV evaluation so far, I think. --JereKrischel 05:53, 13 August 2006 (UTC)[reply]

These are the numbers of articles in these journals that contain "IQ" in the abstract or title, compared to a neutral term, "influence", as a rough gauge of journal issue size and frequency (1994-2005). Journals that have published criticism of the Pioneer Fund are in italics.

"IQ", "Influence", and Ratio

  1. Intelligence 123, 52, 0.4
  2. PAID 102, 242, 2.4
  3. Psychological Assessment 29, 25, 0.9
  4. Journal of Educational Psychology 15, 74, 4.9
  5. American Psychologist 13, 50, 3.8
  6. Journal of Personality and Social Psychology 12, 263, 22
  7. Psychology, Public Policy, and Law 9, 18, 2
  8. Psychological Review 4, 33, 8.2
  9. American Behavioral Scientist 3, 72, 24*
  10. The American Journal of Psychology 2, 29, 14.5**
  11. Journal of the History of the Behavioral Sciences 1, 18, 18

*From Sage Publications; ProQuest gives a different number due to the use of shortened abstracts.

**I only have access up to 2002 for the moment, so this figure is for 1991-2002.

Journals that don't comment on IQ aren't in the field. That's fine; it just means they're in a different discipline. (Their opinion should still get reported in this article.)--Nectar 02:34, 14 August 2006 (UTC) [reply]
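As a purely illustrative aside on the tally above: here is a minimal Python sketch, using toy journal/abstract data rather than the actual 1994-2005 database search, of how such an "IQ"-versus-baseline-term ratio could be computed (the sample entries and the count_term helper are hypothetical, not anyone's actual query).

import re

# Toy data only: a real tally would pull 1994-2005 abstracts/titles from a
# bibliographic database; these entries are placeholders for illustration.
abstracts = {
    "Intelligence": [
        "Heritability of IQ in twin samples",
        "The influence of schooling on IQ gains",
        "General intelligence and reaction time",
    ],
    "American Behavioral Scientist": [
        "The influence of media framing on public opinion",
        "Policy influence of think tanks",
    ],
}

def count_term(texts, term):
    # Number of titles/abstracts containing `term` as a whole word (case-insensitive).
    pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
    return sum(1 for text in texts if pattern.search(text))

for journal, texts in abstracts.items():
    iq = count_term(texts, "IQ")
    baseline = count_term(texts, "influence")
    ratio = baseline / iq if iq else float("inf")
    print(f"{journal}: IQ={iq}, influence={baseline}, ratio={ratio:.1f}")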

May I suggest that instead of comparing the frequency of occurrence of the string "IQ" with that of a random word like "influence", it be compared with a known common word in abstracts, such as "results" (which should be in nearly all abstracts)? Comparing a target word with a known common word sounds to me closer to a proper control.--Ramdrake 12:24, 14 August 2006 (UTC)[reply]
I'm sorry, but not finding the acronym "IQ" in the proper proportions in an abstract hardly means these journals (or other journals) are not discussing intelligence (in fact, since there were no ratios of "zero" for those critical, it seems to prove quite clearly the point that journals in the field have criticized the Pioneer Fund). I strongly disagree with your criterion (having "IQ" in the abstract or title) as a method of determining what journals are notable in the field of intelligence research. Certainly your criterion is your own invention, and OR. Or do you have a reasonable cite that states, "One may determine the notability of a journal in the field of intelligence research by measuring the number of times the term "IQ" is present in the abstract or title of articles contained in such journal compared to the term "influence""? --JereKrischel 05:49, 14 August 2006 (UTC)[reply]
You don't seem to understand much about these topics or science in general. Anyway, here are the number of articles on "intelligence" in the top two of each category of the above journals.
  1. Intelligence 283
  2. PAID 278
  3. The American Journal of Psychology 12
  4. American Behavioral Scientist 7
Re:"specialist intelligence research journals identified by our editors as notable." When we have quantifiable and verifiable measures, you don't need to rely on a WP editor's opinion that some journals are intelligence research journals and some are not. Journals that publish a minor article on intelligence once a year are not "intelligence research journals." --Nectar 07:12, 14 August 2006 (UTC)[reply]
Nectar, may I remind you of WP:NPA? "You don't seem to understand much about these topics or science in general" sounds like an ad hominem if I ever saw one.--Ramdrake 11:27, 14 August 2006 (UTC)[reply]
You can interpret it as an "attack" if you want, but the point is you guys are making the Wikipedia process unworkable by consistently not understanding either the topic or the arguments on the talk page but insisting on edit-warring anyway. If you don't have experience in these areas, being POV-warriors isn't a workable or civil approach. --Nectar 12:01, 14 August 2006 (UTC)[reply]
I think we understand both the topic and your arguments; we just happen not to agree with several of the latter. And BTW, I also have some experience in writing science papers. And as far as POV-warring is concerned, if you consider our edits POV-warring, yours are just as much. He who sows the wind reaps the whirlwind. Now, please let's get back to discussing the article, not what we know or don't know about science.--Ramdrake 12:24, 14 August 2006 (UTC)[reply]
You have determined these metrics, Nectar, out of your own personal opinions. Whether or not a journal publishes minor or major articles at any given frequency is *not* an according-to-Hoyle definition of "intelligence research journals". --JereKrischel 07:40, 14 August 2006 (UTC)[reply]
That is wrong. Frequency arguments like the one that Nectar uses are the standard way of evaluating the scope and quality of scientific publications. That being said, WP is not the place to make such a pronouncement. We can use it merely to decide how much attention we should give to this argument. And currently we are giving far too much to it, based on several metrics. In this summary-style article, the PF funding issue deserves a paragraph under Accusations of bias, at most. (And a sentence, at least. But it certainly belongs there; I maintain that the chapter in Why people believe weird things is the best argument for its inclusion.) More extended coverage can be given in the appropriate subarticles. Arbor 09:16, 14 August 2006 (UTC)
Looking at the intro, two sentences and 9 lines are devoted to the criticism of the Pioneer Fund, while at least 7 sentences and 29 lines are spent defending it (including footnotes), so it may be necessary to rebalance the two sides. It was already discussed that some criticism of the Pioneer Fund was needed in the intro, and given its weight in intelligence research (especially on the pro-hereditarian side) a small section was also deemed appropriate.--Ramdrake 11:19, 14 August 2006 (UTC)[reply]
Well, to me, appropriate for the introduction would be a half-sentence, at most. Something like "Criticism of R&I research includes accusations of bias based on assumptions about the political ideals of the researchers or the funding agencies." Add "such as the Pioneer Fund" if it makes you happy. The current introduction is too long and detailed anyway and is bound to be cut to pieces next time we submit to Peer Review. There is no way a point-and-counterpoint debate can survive in the first three paragraphs of a summary-style article, and the editors trying to fight that war are wasting their time. Write a single good paragraph under Accusations of bias, and write a proper section in the appropriate subarticle. Arbor 11:35, 14 August 2006 (UTC)[reply]
As per my comment directly below, I concur with Arbor on this point. I would second his suggestion to replace the lead text block with a sentence. Move the debate to the relevant section of the article. --Rikurzhen 23:11, 14 August 2006 (UTC)[reply]


Evaluating (for appropriateness) and summarizing sources is a "power" (so to say) that WP grants editors. Saying what journals do and do not say is within that power. We have good reason (after reviewing the literature) to believe that PF has not been an issue in the journals where intelligence research is commonly published. I have at times asked for a similar summary accounting of where and by whom PF has been criticized. Such a summary might make an appropriate addition. However, I'll renew my objection that the lead text is dedicating too much space to describing PF (pro and con, but of course neutrality is essential). The amount of space dedicated to PF in the intro is disproportionate to its prominence in the article and importance in the field. --Rikurzhen 07:25, 14 August 2006 (UTC)[reply]

Summarizing sources is fine, but arbitrarily determining what is a "notable" journal and what isn't, according to a completely made-up metric of "IQ" versus "influence" in abstracts and titles, is OR and POV pushing. --JereKrischel 07:40, 14 August 2006 (UTC)[reply]
JereKrischel, if you have nothing to say please stop wasting our time. Notable is a standard term used in science to refer to influence within a topic. Anyway, note that the term under discussion is "intelligence research." There can't be intelligence research without research on intelligence. A biochemistry journal publishes on biochemistry. An intelligence research journal publishes on the research of intelligence in more than 1% of its articles. How frequently intelligence research appears in a journal can be measured by the frequency of articles that discuss intelligence (because intelligence is a necessary part of intelligence research).--Nectar 23:03, 14 August 2006 (UTC)[reply]
Notable is not a standard term, it is a subjective one, Nectar. "Intelligence Research" certainly includes the journals mentioned with criticisms, albeit not at the frequency which you would like to arbitrate. I think as pointed out earlier, you are moving the bar. Your original point was that people "in the field" did not have such criticisms of the Pioneer Fund. This has been refuted by the examples given. Now you want to marginalize the scientific opinions of people published in scientific journals by asserting that such journals are not sufficiently dedicated to "intelligence research".
Frankly, the point that "intelligence journals that publish research on intelligence at a frequency of more than 1% have not criticized the Pioneer Fund" doesn't really give any useful information to the reader. I've removed the offending section, and hope we can move on from here. --JereKrischel 23:56, 14 August 2006 (UTC)[reply]
So the non-offensive wording you prefer is "Journals that devote more than 1% of their content to intelligence research have not criticized the fund"?--Nectar 08:25, 15 August 2006 (UTC)[reply]
Or.. what are you talking about? In response to your unreasonable claims, the text in the article was "specialist intelligence research journals." Journals that publish 1% of their content in a discipline aren't specialists in that discipline.--Nectar 08:39, 15 August 2006 (UTC)[reply]
BTW, standard English is to refer to quotations from authors in the present tense. [9] --Nectar 08:33, 15 August 2006 (UTC)[reply]
I'm happy with your agreement on the non-offensive wording. I'll make the change. --JereKrischel 15:25, 15 August 2006 (UTC)[reply]
Maybe you didn't see this. A journal publishing reviews of books doesn't constitute taking official positions on an issue unless they state they're doing so.--Nectar 09:40, 16 August 2006 (UTC)[reply]
These weren't all reviews of books and some journals did take a position, so the original sentence is appropriate.--Ramdrake 12:23, 16 August 2006 (UTC)[reply]
'tis not. PF-based criticism is noteworthy, and formulated primarily outside the scientific community. Our description should reflect that. Adding "scientific journals" to the list is misleading because it implies that the specialist scientific community voices this criticism. (I don't understand why you want to peddle an extrascientific opinion in the first place. Moreover, why you insist on giving it the veneer of scientific respectability is beyond me. In any case, you are promoting a highly skewed presentation of an amateur viewpoint. Don't.) Arbor 13:21, 16 August 2006 (UTC)[reply]
Please read above the many criticisms found in various scientific journals (although not deemed specialist journals) about the Pioneer Fund. Part of the scientific community does criticize the PF and this whole area of research, in addition to the extrascientific criticism. Trying to marginalize or totally obscure this opposition as "amateur" or "extrascientific" is simply POV.--Ramdrake 13:33, 16 August 2006 (UTC)[reply]
If you want to write "... and in scientific journals outside the field of expertise" or something like that, I would no longer call it misleading. (Honestly, I don't see how such a qualification makes the statement less denigrating about PF-based criticism.) In any case, I am strongly in favour of moving the entire debate into the relevant subsection, which is why my edit simply removed the attribution to "scientific journals" instead of adding another qualification. I won't make this article even more unreadable by participating in the point-counterpoint-qualificationOfCounterpoint-minuteDetailOfQualification exercise that is currently going on. Arbor 13:39, 16 August 2006 (UTC)[reply]
I agree with you, the debate is getting specious. Since this is a scientific debate with huge social implications, I am also starting to wonder what is the point of drawing such a fine line in the sand between the societal and scientific sources of criticism. It is well-known that scientists in general are not versed in social criticism, especially not in science journals, and on the other hand one does not need to be a specialist in the field to level legitimate criticism in this area of research. To me, the distinction is getting more and more artificial.--Ramdrake 14:01, 16 August 2006 (UTC)[reply]
Ramdrake, please provide a reference that makes the false statement that a journal publishing either an article or a book review means the journal takes an official position. Thanks for not introducing scientifically illiterate statements into the article.--Nectar 14:03, 16 August 2006 (UTC)[reply]
You said it yourself: A journal publishing reviews of books doesn't constitute taking official positions on an issue unless they state they're doing so. Criticism of the Fund was found in some science journals (not directly in the field, granted, but not limited to book reviews). That criticism was just referenced, that's all. As a rule, journals (except newspapers on some occasions) do not take "official" positions on anything, but may contain criticism which is usually attributed to the authors. By the same token, since journals do NOT take official positions except in rare cases, the absence of criticism is likewise meaningless. Thank you for clarifying that.--Ramdrake 14:15, 16 August 2006 (UTC)[reply]
It's not "by the same token" because nobody claimed an absence of criticism in the specialist literature represented official positions on the parts of journals. Wikipedia's job is to represent the literature accurately, which includes the presence or lack of presence of PF criticism in the literature.--Nectar 14:35, 16 August 2006 (UTC)[reply]
So, for us to report that a journal does not contain criticism of the PF, nothing special is required, but for us to report that a journal has published criticism of the PF, an official endorsement of the criticism by the journal is a requirement? That's a double standard. You know as well as I do that there is presence of criticism of the Pioneer Fund in journals that are peripheral to this specific field of research. But first deeming these journals not significant in the field of intelligence research and then requiring that the journal take official position against the Pioneer Fund is not the right way to squelch the criticism, not to mention a blatant example of moving the goalpost. Please stop doing that.--Ramdrake 14:49, 16 August 2006 (UTC)[reply]

Should we make clear that the absence of criticism in specialist literature does not represent the official position of the specialist journals in question? "Criticism in journals with over 1% of their published articles in the field of intelligence research has not been found, but neither have those journals taken any official stance absolving the Pioneer Fund from any potential wrongdoing"? Would that satisfy you, Nectar? --JereKrischel 00:51, 17 August 2006 (UTC)[reply]

I think the point is to summarize the literature, not advocate positions. If a criticism doesn't appear in journals in the discipline, that would commonly be taken to mean it's not a significant issue in the discipline. Adding "including critiques published in scientific journals" seems a little silly, as scientists are already assumed to publish in scientific journals.--Admissions 06:06, 17 August 2006 (UTC)[reply]
Scientists supporting a topic in journals is normal; it would be notable if scientists had only supported the topic outside of journals.--Nectar 08:20, 17 August 2006 (UTC)[reply]

The notable point here is that scientists very rarely criticize a funding agency (any funding agency), whether inside a journal or in the popular media. That criticism of the PF not only exists in popular media but has found its way into some journals is notable, even if the journals in question are only peripheral to the field of research, and even if the journals did not take an official stance criticizing or endorsing the Pioneer Fund. Now, if one wants to make the point that the few journals considered notable to the specialized field of research do not contain any such criticism of the Fund, that's fine too. But the existence of such criticism in any journal is in and of itself notable, regardless of all the caveats one wants to append to it.--Ramdrake 12:04, 17 August 2006 (UTC)[reply]

I agree with Nectar that we should summarize the literature, and not editorialize it. Arguing in an editorial tone in a reference, leading the reader to believe that Ulrich Neisser meant a certain conclusion other than what he actually said, is really inappropriate. I further agree with Ramdrake that if we are going to state that specialist journals with greater than 1% intelligence research did not publish any criticism, we should also identify the criticism that occurred in other scientific journals, not so concentrated directly on the field as per the 1% criteria. Our other possible alternative is to remove the assertion of the negative (seeing as the lack of criticism by specific journals that publish > 1% intelligence research really isn't all that important of a point, and quite possibly OR, since nobody has published a list of > 1% intelligence research journals or done that research yet), and remove the specification that scientists who have criticized the Pioneer Fund were published in scientific journals. In either case, it is a logical fallacy to appeal to authority to bolster a position. --JereKrischel 16:23, 17 August 2006 (UTC)[reply]
I wouldn't mind removing both specifications, as long as we keep the same standard for both issues. It would also make for a more legible text.--Ramdrake 18:12, 17 August 2006 (UTC)[reply]
WP editors aren't so helpless and blind as you would imagine. The citation for the 1% is the journals' article lists. This may seem exotic to you, but a discussion at Wikipedia_talk:No_original_research#Citation_indexes confirms the reasonable conclusion. This section is going to need to be both NPOV and scientifically literate, whether you POV warriors like it or not. If we need a third party to comment in order to do that, that's something we can do.--Nectar 18:32, 17 August 2006 (UTC)[reply]
Nectar, the suggestion here was to remove all references to criticism (or lack of criticism) of the PF from the article. If you want to take this up and call in the WP:Mediation Cabal, I don't mind: let's do it. And this calling us "POV warriors" is very much the pot calling the kettle black.--Ramdrake 18:44, 17 August 2006 (UTC)[reply]
Nectar, the citation for the 1% is original research, judging from article lists, by a novel, non-standard criterion, what is and isn't "intelligence research", and then compiling, based upon a limited sample, a list of journals. Why not 2%? Why not 3%? Why not define an article on "intelligence research" as one that actually performs a direct study (rather than a study of studies)? Or perhaps define an article on "intelligence research" as one which uses reaction time tests in addition to IQ as a proxy for general intelligence? Unless you can find a reasonable cite that states, "By generally accepted definition, research journals are considered "in the field" if they produce more than 1% of their articles directly on the field based on the title and abstract contents", you're doing OR. --JereKrischel 00:33, 18 August 2006 (UTC)[reply]
(1)This article is on intelligence. Intelligence research is on intelligence. The word "intelligence" plays a central role in the last two sentences and is sufficient to gauge articles on intelligence research. OK?
(2)Can you agree there is a categorical distinction between general journals that publish 1% of their articles on intelligence, and specialist journals that publish a majority of their articles on intelligence?--Nectar 05:04, 18 August 2006 (UTC)[reply]
1) Judging an article's content simply by finding the word "intelligence" in it isn't sufficient to tell whether that article deals with intelligence. There are of course many single word synonyms, as well as phrases which could mean the same thing. 2) There very well may be a categorical distinction between journals that publish 1% of their literature and 51% or more of their literature in a given category, but can't the same be said about journals that publish 2% of their literature and journals that publish 45% or more of their literature? How far does that gap have to be before it is sufficient, or insufficient? Determining those numbers, and the criteria of words in a title or abstract which indicate an article is "intelligence" related is OR, don't you agree? --JereKrischel 05:25, 18 August 2006 (UTC)[reply]
(1)Intelligence is the most commonly used term in the literature. Articles give a summary of the article in the abstract, so searching abstracts is only searching articles that deal with intelligence in a significant way.
(2)If the only articles we have discussing a topic are around 1%, the hypothetical question of 'what if we had a journal that was around 5%' is not important. What we know is that journals that have a tendency to publish articles on intelligence haven't discussed this issue.--Nectar 05:41, 18 August 2006 (UTC)[reply]
1) Do you have a cite for that? Or would it take further original research to make that claim? 2) Perhaps the only articles we've examined so far critical of the pioneer fund happen to be in journals which publish only 1% of their articles with "intelligence" in the abstract or title (speaking of which, why not count page length of the actual articles, instead of just the number of articles?), but it is only a hypothetical statement to assert that there are none others at higher percentages, right? What we do know is that scientific journals of good repute have published regarding this issue, and there is no expectation that more highly focused journals which do not analyze the impact of funding sources on bias in research would publish any articles on the matter. Frankly, what we should find is a list of journals that analyze and publish articles on critiques of funding sources (let's say, Journal of the History of the Behavioral Sciences), and use them as a measure of how important and relevant the issue has been to the scientific community. Can you assert that no journals which publish at least 1% or more of their articles on funding source critiques have ever criticized the Pioneer Fund? It seems to me that you're trying to make a point that really doesn't have a substantial effect on the validity of the criticism of the Pioneer Fund that has been published - just because a select few journals, by whatever criteria you wish to identify them, haven't examined the issue doesn't make it any less relevant, important, or valid, don't you agree? --JereKrischel 06:11, 18 August 2006 (UTC)[reply]
These questions don't seem relevant from an academic point of view. The journals discussed are the results of reviews of the literature. Can you give a brief summary of your argument for the RfC section below?--Nectar 07:14, 18 August 2006 (UTC)[reply]

Request for Comment: Journals in the field

[Both sides of the debate were asked to give brief summaries of their arguments. Listed 21:56, 20 August 2006 (UTC)]


Criticism of the Pioneer Fund (PF) has been limited to some general journals, and hasn't been raised in the specialist journals that deal with intelligence research regularly. The issue is whether the categorical distinction can be made (in relation to another published statement) that the criticism "has not been an issue in the journals in the field." Alternatives have been "journals in intelligence research."

The set of journals which have published the PF criticism have published [between less than 1% and 2.5%] of their articles dealing with intelligence: The American Journal of Psychology, American Behavioral Scientist, Journal of the History of the Behavioral Sciences. In contrast, the second set of journals publish a majority of their articles dealing with intelligence, or have published significant articles in the field: Intelligence, Personality and Individual Differences, Psychological Assessment, Journal of Educational Psychology, American Psychologist, Journal of Personality and Social Psychology. The numbers of articles with "intelligence" in the abstract or title in the first two journals of each of these sets, for example, are 13 and 7, and 283 and 278 (from 1994-2005).--Nectar 07:14, 18 August 2006 (UTC)[reply]

The absence of criticism of the Pioneer Fund in an arbitrary set of journals dictated by an editor is not noteworthy. This arbitrary set of journals may indeed focus on the topic at hand, but that same focus makes them poor candidates for finding any criticism of any funding source. A more noteworthy observation would be to find a lack of criticism of the Pioneer Fund in journals dedicated to publishing critiques of funding sources. This observation has not been made.
The implication being presented, as I understand it, is the following - 1) specialist journals identified on the topic of "intelligence" are the best authority for information regarding the field; 2) these specialist journals identified have not been observed to criticize the Pioneer Fund; 3) Therefore, criticism of the Pioneer Fund is less authoritative for being observed only in other scientific journals not identified as "specialist". This comes across as a POV push intended to mitigate or discredit criticism of the Pioneer Fund based on an arbitrary criterion. I contend that the implication being presented is incorrect on its very basis - non-specialist science journals are no less authoritative or credible than specialist science journals when it comes to the criticism of funding sources and their potential bias on research results.
One might just as well identify only specialist tobacco research journals, and claim that because they don't contain criticism of tobacco companies, such criticisms are somehow less credible. It has been noted by other editors that some of the specialist journals identified have close ties to the Pioneer Fund.
The current suggestion on the table is to remove any language that would try to discredit criticism of the Pioneer Fund by appealing to the authority of "specialist" journals, as well as remove any language that would try to bolster criticism of the Pioneer Fund by appealing to the authority of "science journals". It seems more appropriate to make clear which person is making the critique or defense (psychologist, geneticist, political pundit), rather than attaching undue authority or lack thereof due to the particular publication such commentary was printed in. --JereKrischel 08:22, 18 August 2006 (UTC)[reply]
I'll respond briefly. The identified journals were the results of a review of the literature, which is the only way to summarize the literature. The point is not to discredit an argument, but to summarize the literature accurately so as to not imply statements are more widely supported than they are (by researchers on either side in the field).--Nectar 08:55, 18 August 2006 (UTC)[reply]
Two points to add to the RfC:
  1. Criticism of any funding agency is usually rare in any journal, and some journals may exclude it out of policy. One possible reason for this is that a researcher openly critical of a funding agency is likely to alienate that organization as a potential future funding source for his research. Of the journals deemed "specialist" in the field, none were found to have criticism of the Pioneer Fund, but neither did they contain any criticism of any other funding agency, which raises the question of whether these journals would publish any criticism of any funding agency.
  2. Journals deemed "specialist" in the field were validated using a citation analysis technique comparing the relative frequency of two words in their abstracts: the target word "IQ" and the control word "influence". There was no cross-check using another pair of words (such as "intelligence" and a known common word like "results") which could have yielded different results. Moreover, only a few select journals were analyzed this way, which leaves open the possibility that some untested journals might also qualify. In addition, no firm reference as to what the proper word ratio would be for inclusion or exclusion of a journal has been supplied. Lastly, the issue was raised at Wikipedia_talk:No_original_research#Citation_indexes as to whether this was an acceptable practice for Wikipedia and not in breach of the WP:NOR rule, and there was clearly no consensus in the feedback, thus making the No Original Research objection a valid concern.--Ramdrake 13:24, 18 August 2006 (UTC)[reply]
I'll respond briefly. (1)It is certain that anti-hereditarian researchers such as Sternberg would include the criticism in their very strongly worded responses to hereditarian research if they thought it would benefit their criticism. (They've stated they don't support these kinds of criticisms.) Intelligence, which critics Sternberg and Flynn sit on the board of, did publish discussion of the issue in an editorial, but only in the form of criticism of media presentation of the fund. (2)This is a reference to a previous discussion. The description of journals in this section refers to the number of articles they've published that have "intelligence" in the abstract or title. The proposal at WP:NOR to specify in the policy that citation indexes are permissible was successful.[10]--Nectar 23:14, 18 August 2006 (UTC)[reply]
Comment on the response by Nectar:
  1. Response to my point 1 (that criticism would be mentioned if it existed) is simply an assumption by another editor, and not substantiated by any cited source. To the contrary, a search on Google Scholar for the words "Pioneer Fund" and "critic" will retrieve about 200 cites, and while there is some redundancy among them, it will turn up a good number of criticisms of the Pioneer Fund in the scientific press, although whether these are in journals that can be considered specialist journals in the field or not is a current matter of dispute.
  2. "The proposal at WP:NOR to specify in the policy citation indexes as permittable was successful." This is just plain incorrect if I read the section in question:
-One editor made the comment this wasn't the right place to bring up this discussion.
-Another (anonymous) strongly hinted this was OR.
-A third one agreed with Nectar, but hinted the majority of editors probably wouldn't agree.
-A fourth editor (involved in the current issue of this page) plainly disagreed with Nectar.
I don't see how that can be construed as a "successful" proposal. Also, for the record, as can be seen by referring above, the word search across abstracts was NOT for intelligence but for IQ. There is a small but meaningful distinction here.--Ramdrake 00:53, 19 August 2006 (UTC)[reply]
(1) The first two of the non-criticising journals published positive reviews of Lynn's history and defense of the fund, so the issue is clearly considered within their scope. (2) I believe reviewing the proposal at WP:NOR shows it was successful. It's been stated in this and the previous section that the figures now under discussion are for the term intelligence. (The disparity is larger for IQ).--Nectar 06:01, 19 August 2006 (UTC)[reply]

[To the reviewer: please ask if anything is unclear.]

Comment on the RfC as officially listed

Nectar, I don't think the question you listed in the public RfC accurately represents the debate we've had so far. However, the question as spelled out at the beginning of the RfC section material is the right one. I'd appreciate it if they both read the same (i.e. as currently reflected above here). Thanks!--Ramdrake 22:09, 20 August 2006 (UTC)[reply]

The categorical distinction itself was removed from the article after the disputed terminology was removed, which seems to mean the categorical distinction was disputed. Can the categorical distinction be put back in the article?--Nectar 22:44, 20 August 2006 (UTC)[reply]
Then, the way I see it, if commentators find that there is a categorical difference between the two, we need to report on the presence or absence of criticism in both categories separately. If commentators find no categorical difference between the two, then we must report that "some scientific journals are critical of the PF" without qualification. Is that what you want? If it is, I can certainly live with it.--Ramdrake 23:09, 20 August 2006 (UTC)[reply]
The categorical distinction in question was regarding a specific topic - that is to say, is there a categorical distinction between criticism that occurs only in general journals and not in specialist journals. Nobody was debating whether or not there is a distinction that could be made between general and specific journals, but what the nature of that distinction was. I've asserted that the nature of that distinction does not include making criticisms any less important or notable for not being published in a specialist journal. --JereKrischel 23:17, 20 August 2006 (UTC)[reply]
So you claim there is a categorical distinction between general journals and specialist journals, but that there's no categorical distinction between general opinion and specialist opinion? OK. Lulu provided a reference in his Gardner quote that there is such a distinction and that distinction is important.--Nectar 01:35, 22 August 2006 (UTC)[reply]
Whether or not there is a categorical difference between specialty and non-specialty science journals is one thing, but whether one may use this distinction, if it exists, to discount opinion found in non-specialty science journals is the real question. Please don't confuse the issue. And I'd like to see how you can construe the quote from Gardner to vindicate your point that "there is a difference and that the difference is important"?--Ramdrake 01:54, 22 August 2006 (UTC)[reply]
(1) I'm sure you're aware non-specialist opinion was never "discounted" or stated in the article to be "less important." However, stating the facts about the literature is allowed, and you regularly argue we should state the facts and let readers make up their minds. Censoring such facts because an editor feels they imply a favored argument is "less important" is not an option.
(2) Martin: "As a consequence, he finds himself excluded from the journals and societies, and almost universally ignored by competent workers in the field." We can be certain that specialist opinion and non-specialist opinion are not treated the same in academia, and simply noting specialist opinion is certainly permissible.--Nectar 02:26, 22 August 2006 (UTC)[reply]
(1)If non-specialist opinion wasn't discounted, why was the reference that the Pioneer Fund had been criticized in science journals removed about 5 times? So, who was censoring whom?
(2)The only thing I see there is the mention of "in the field". Those words can have several interpretations: the "field" could be R&I research, psychometrics, psychology, etc. It contains no definite level of specialization. This certainly does not advocate a distinction between "specialist" and "non-specialist" journals.--Ramdrake 13:47, 22 August 2006 (UTC)[reply]

have you read the PF criticism papers?

if not, read them. most appear to be book reviews, historical narratives, etc. in fact, there appear to be only 2 or 3 (including Tucker) straightforward critical pieces aimed primarily at PF. what is still not clear to me is that these criticisms have anything but a tangential relationship to this article. if that relationship cannot be made more explicit (without violating NOR) then this debate may be moot. --Rikurzhen 08:50, 18 August 2006 (UTC)[reply]

As has been stated many times before, the Pioneer Fund is not only important for possible bias. It is an important part of the history of the research, for media image, policy implications, and so on. Ultramarine 14:05, 18 August 2006 (UTC)[reply]
And also, without a clear indication of how frequent or how rare such criticism of a funding source is in science in general, "only" 2 or 3 may indeed be very significant.--Ramdrake 14:18, 18 August 2006 (UTC)[reply]
In addition, this search [11] shows numerous critical articles, certainly not "2 or 3". Ultramarine 14:22, 18 August 2006 (UTC)[reply]
The reason I asked "have you read the PF criticism papers?" is that I skimmed them and found little to support a connection with this article. At various times, I've asked for an argument to be outlined as to how these sources relate to this article, if only for our consideration on the talk page. The point of my comment is not to say that 2 or 3 isn't enough (one reliable, important and relevant source is good enough) but to say that their importance appears tenuous. A reply that consists of short quotes with precise references in a logical framework is what I'm looking for. --Rikurzhen 20:08, 18 August 2006 (UTC)[reply]

Considering that the critiques expose potential bias and challenge the validity of Pioneer Fund funded research, I think it has a direct relation to this article - R&I research, as funded by the Pioneer Fund, is criticized as possibly being biased and inaccurate. How much more of a direct relation can you get, than a direct criticism of any findings made because of bias possibly introduced by your funding source? Would you consider criticism of tobacco industry funded research only "tangential" to the research they conducted? What would be a direct relation in your POV? --JereKrischel 02:20, 22 August 2006 (UTC)[reply]

the critiques expose potential bias and challenge the validity of Pioneer Fund funded research -- where does this come from? who would you cite to support this claim? --Rikurzhen 02:34, 22 August 2006 (UTC)[reply]

How about American Behavioral Scientist, 1995? Google scholar lists several others as well. --JereKrischel 03:35, 22 August 2006 (UTC)[reply]
Summary from that article of the author's view: This article documents the central role played by the Pioneer Fund in the propagation of academic racist ideology. It shows that the Pioneer Fund is embedded in a network of fascist-oriented foundations, think tanks, publishers, global anti-Communist political coalitions, religious fundamentalists, and paramilitary organizations. The Bell Curve thus comes out of a complex fascist movement whose pedigree is clearly linked to World War II era fascism. This fascist movement is closely tied to and part of capitalist-controlled American political institutions. Fascism therefore is best understood not as a spontaneous "populist" working-class or middle-class movement, but as a politically orchestrated and well-funded instrument of the capitalist ruling class. I'm not sure that the anti-anti-Communist POV of anti-PF/TBC constitutes even a significant minority under WP:NPOV. --Rikurzhen 08:28, 22 August 2006 (UTC)[reply]
As per above, I'm aware that there are articles about PF in the lit. What I don't see in them is a connection to "bias" or a challenge to the "validity" of the science which PF has supported on the basis of it being PF supported. (It's a rather precise argument. The mere juxtaposition of criticism of R&I as bad with the criticism of PF as bad would not be support.) In fact, I would find it strange that anyone would attempt to make such an argument given that it would essentially suggest fraud on the part of PF-supported researchers. Per above, what's needed is the outline of the argument with supporting quotes and references. If this can't be provided, I would take this as evidence that there's no support for the existence of critiques [that] expose potential bias and challenge the validity of Pioneer Fund funded research. --Rikurzhen 04:50, 22 August 2006 (UTC)[reply]
I'm sorry, but I think you've read the abstract and completely missed the point. Asserting that the Pioneer Fund propagates academic racist ideology is a direct challenge to the validity of the research it funds, and a clear assertion of bias (whether causal or coincidental). "The Bell Curve" is particularly vilified, with the assertion that it comes out of...fascism. Note that in the context here, "fascism" is not compatible with "valid" or "unbiased". I understand that a dry reading, considering "fascism" as simply another form of government without any other negative context, can make it seem like nothing is particularly being said, but I think that the argument is fairly clear from the quote...although I'm more than happy to outline it more deliberately if you wish. --JereKrischel 08:51, 22 August 2006 (UTC)[reply]
A political and moral condemnation is not equivalent to a scientific one. Evil <> wrong. That's why we have separate subsections to discuss each. I believe the R&I/PF "is evil" POV is sufficiently covered. --Rikurzhen 08:55, 22 August 2006 (UTC)[reply]
I think that's exactly your misunderstanding - let me see if I can be more clear stepwise: Valid and unbiased research == good. Invalid and biased research == bad. Fascism/fascist == bad. Pioneer Fund == fascism. -> Pioneer Fund research == fascist. Therefore, Pioneer Fund research == invalid and biased. The abstract you quoted does not say, "Pioneer Fund is evil but funds unbiased and valid research." The political and moral condemnation is directly challenging the validity of their scientific results and inherent bias - even though it is primarily an ad hominem attack, that is the charge that is being made. They are not criticizing Pioneer Fund grantees for being evil in regards to things like abusing their children, or stealing from babies - they are criticizing Pioneer Fund grantees for being evil in regards to the invalidity and bias in their research and conclusions. --JereKrischel 17:44, 22 August 2006 (UTC)[reply]
The argument you present is not valid. [Stove = hot; Sun = hot --> Stove = Sun is not valid] They are arguing that PF research is the product of Fascism and that it's bad, but bad <> wrong. They are criticizing them for what they regard as the political aims of the organization. A political criticism is not identical to a factual criticism. Again, evil <> wrong. A similar line of reasoning wrt Hitler and Darwinism has recently made news. That Hitler used Darwinism to justify evil doesn't mean that Darwinism is wrong. Factual claims are not made true or false by their political or moral implications. Read the "Utility of research and racism" section, where we've covered this topic in detail. --Rikurzhen 21:10, 22 August 2006 (UTC)[reply]
You're making a false analogy - the fact that Hitler used Darwinism doesn't mean that Darwinism is wrong...but, the fact that Hitler concluded that other races were inferior, and supported research that would "validate" his POV, certainly is an attack on research conclusions regarding inferiority. Nobody is saying that because the Pioneer Fund supports researchers that use genetics, that genetics are wrong - they're saying that because the Pioneer Fund supports researchers who use genetics to come to "fascist" conclusions, the conclusions are invalid and biased. In this case, it is clear that by "fascist" they mean "invalid and biased". Regardless of whether their published argument can be shown to be a logical fallacy (ad hominem), that is the argument they are making. We're just stating their argument, not judging its correctness. --JereKrischel 21:47, 22 August 2006 (UTC)[reply]
But I don't think that the formulation you've given is actually their argument. If it was, they should have made it clearer. Merely making a political/moral criticism, which is what they do, is not sufficient for a scientific criticism -- you've done nothing to show how it would be (other than to make arbitrary distinctions between evolution and genetics on one hand and R&I on the other, begging the question). --Rikurzhen 21:56, 22 August 2006 (UTC)[reply]
You're again conflating their method of criticism with what they are criticizing. You are correct that they are not criticizing their conclusions based on scientific means (that is to say, they aren't illustrating how they have miscalculated, or misunderstood data - though others make those arguments), but they *are* criticizing their conclusions, and they *are* asserting that their conclusions are incorrect, and they *are* asserting that their conclusions are biased. I know it may be difficult to understand the concept of criticizing one's research and results by criticizing the funding source (and nothing else), but just try and imagine it in parallel to critics of tobacco industry scientists - the research conclusions are being challenged, even if on an ad hominem basis. I think if you can grasp the concept that it is a political/moral criticism of research results (rather than a political/moral criticism of how they treat their children), you'll understand clearly that regardless of the method they use to challenge the validity and bias of their research, they are in fact challenging it. --JereKrischel 06:20, 24 August 2006 (UTC)[reply]

JK, again, I strongly suspect you are misreading. The best solution is (per above) to read them carefully, write summaries of the most notable ones that make use of a few inline quotations to show what argument they actually make. From what I've read, they say that they are wrong and they say that they are evil, but they don't actually say that they are wrong because of bias. (For example, the word "bias" doesn't appear in the A.B.S. article linked above. It does say that TBC is a fascist declaration of war providing "anti-working-class ideological cover for the 'Contract on America' and for the systematic dismantling of the welfare state" -- referring to the "Contract with America" and the Welfare reform bill that's recently been in the news because of its anniversary.) --Rikurzhen 06:45, 24 August 2006 (UTC)[reply]

Rikurzhen, let me ask you: what must a paper critical of R&I research (whether it be about the PF, the Bell Curve or anything else) contain for you to consider that it contains criticism of R&I research or science? I've heard a lot about why you consider that the paper under discussion is not scientific criticism (as opposed to social/ethical criticism), but not about what it must contain to qualify as scientific criticism. I think that would help turn up the right papers.--Ramdrake 16:26, 24 August 2006 (UTC)[reply]
You misunderstand, but it's partly my fault. My intended emphasis is on what these papers don't say -- they don't seem to say that PF funding causes bias, or a variety of other proposed formulations around "bias" -- not on what they do. Coincidentally, what I've read from them seems to be political/moral condemnation, largely directed at the political implications sections of TBC. We can only use them to cite support for the arguments they do make. Maybe someone can point out the text I'm missing where they comment on bias. --Rikurzhen 18:00, 24 August 2006 (UTC)[reply]
Again, I've read several of those criticisms extensively, and can point to where the accusations of bad science (bad in the sense of scientifically misconstrued, not morally bad) are, and it's actually a bit larger and different from your definition of "bias" (which is incidentally narrower than mine, but that's somewhat beside the point). However, I would absolutely need from you a definition of what you would accept as an argument that the research, or its results, or its conclusions (or any combination of these, as the case may be) are scientifically wrong. Please note that I'm redirecting the concept of "bias" (on which we have different definitions) to the more general concept of "bad science" (on which I'm hoping we can more readily agree).--Ramdrake 18:17, 24 August 2006 (UTC)[reply]
Sorry, just re-read your comment there, and maybe you have a point: what most of those articles argue is that PF-funded research contains a lot of bad/biased science, whether it be the research itself, the results, their interpretation or any combination thereof. The affirmation that PF thus biases research is actually an inference, but a totally warranted one under the circumstances. --Ramdrake 18:33, 24 August 2006 (UTC)[reply]
Ramdrake, then perhaps you want to point out some relevant excerpts and explain exactly what you think they are saying. Thus far, I've only seen arguments of the form R&I is bad and thus is wrong. Re: inference -- Ultramarine once argued that a similar inference could be made on the basis of Tucker's work. We have since resolved that it cannot.
Rikurzhen, I still need you to define what you would accept as an argument that the research (or its results or it conclusions) is scientifically wrong. Please humor me: it's the third time I'm asking this question.--Ramdrake 12:28, 25 August 2006 (UTC)[reply]
Actually, since the Pioneer Fund does not do the science directly, as it funds researchers, we must turn to criticism of Pioneer grantees to see solid accusations of what I call bad science.
This is a criticism of the poor science and misrepresentation of data of a prominent Pioneer Fund grantee, Richard Lynn. The criticism is by Leon Kamin. [12]

Lynn's 1991 paper describes a 1989 publication by Ken Owen as "the best single study of the Negroid intelligence." The study compared white, Indian and black pupils on the Junior Aptitude Tests; no coloured pupils were included. The mean "Negroid" IQ in that study, according to Lynn, was 69. But Owen did not in fact assign IQs to any of the groups he tested; he merely reported test-score differences between groups, expressed in terms of standard deviation units. The IQ figure was concocted by Lynn out of those data. There is, as Owen made clear, no reason to suppose that low scores of blacks had much to do with genetics: "the knowledge of English of the majority of black testees was so poor that certain [of the] tests...proved to be virtually unusable." Further, the tests assumed that Zulu pupils were familiar with electrical appliances, microscopes and "Western type of ladies' accessories."

In 1992 Owen reported on a sample of coloured students that had been added to the groups he had tested earlier. The footnote in "The Bell Curve" seems to credit this report as proving that South African coloured students have an IQ "similar to that of American blacks," that is, about 85 (the actual reference does not appear in the book's bibliography). That statement does not correctly characterize Owen's work. The test used by Owen in 1992 was the "nonverbal" Raven's Progressive Matrices, which is thought to be less culturally biased than other IQ tests. He was able to compare the performance of coloured students with that of the whites, blacks and Indians in his 1989 study because the earlier set of pupils had taken the Progressive Matrices in addition to the Junior Aptitude Tests. The black pupils, recall, had poor knowledge of English, but Owen felt that the instructions for the Matrices "are so easy that they can be explained with gestures." Owen's 1992 paper again does not assign IQs to the pupils. Rather he gives the mean number of correct responses on the Progressive Matrices (out of a possible 60) for each group: 45 for whites, 42 for Indians, 37 for coloureds and 28 for blacks. The test's developer, John Raven, repeatedly insisted that results on the Progressive Matrices tests cannot be converted into IQs. Matrices scores, unlike IQs, are not symmetrical around their mean (no "bell curve" here). There is thus no meaningful way to convert an average of raw Matrices scores into an IQ, and no comparison with American black IQs is possible.

The remaining studies cited by Lynn, and accepted as valid by Herrnstein and Murray, tell us little about African intelligence but do tell us something about Lynn's scholarship. One of the 11 entries in Lynn's table of the intelligence of "pure Negroids" indicates that 1,011 Zambians who were given the Progressive Matrices had a lamentably low average IQ of 75. The source for this quantitative claim is given as "Pons 1974; Crawford-Nutt 1976." A. L. Pons did test 1,011 Zambian copper miners, whose average number of correct responses was 34. Pons reported on this work orally; his data were summarized in tabular form in a paper by D. H. Crawford-Nutt. Lynn took the Pons data from Crawford-Nutt's paper and converted the number of correct responses into a bogus average "IQ" of 75. Lynn chose to ignore the substance of Crawford-Nutt's paper, which reported that 228 black high school students in Soweto scored an average of 45 correct responses on the Matrices--HIGHER than the mean of 44 achieved by the same-age white sample on whom the test's norms had been established and well above the mean of Owen's coloured pupils. Seven of the 11 studies selected by Lynn for inclusion in his "Negroid" table reported only average Matrices scores, not IQs; the other studies used tests clearly dependent on cultural content. Lynn had earlier, in a 1978 paper, summarized six studies of African pupils, most using the Matrices. The arbitrary IQs concocted by Lynn for those studies ranged between 75 and 88, with a median of 84. Five of those six studies were omitted from Lynn's 1991 summary, by which time African IQ had, in his judgment, plummeted to 69. Lynn's distortions and misrepresentations of the data constitute a truly venomous racism, combined with scandalous disregard for scientific objectivity. Lynn is widely known among academics to be an associate editor of the racist journal "Mankind Quarterly" and a major recipient of financial support from the nativist, eugenically oriented Pioneer Fund. It is a matter of shame and disgrace that two eminent social scientists, fully aware of the sensitivity of the issues they address, take Lynn as their scientific tutor and uncritically accept his surveys of research.

That's just one. I can find many more. So, to recap, there are accusations of a moral nature against the Pioneer Fund and the research it funds, but there are also scientific objections of bad science, which you will find associated principally with the researchers whose work has been funded by the Pioneer Fund, rather than with criticism of the Pioneer Fund by name. Put them together, and you can link the Pioneer Fund with accusations of funding bad science.--Ramdrake 17:43, 25 August 2006 (UTC)[reply]

Journals criticizing funding

Re: "these journals don't contain any criticism of any funding agency": Journals do discuss relevant issues. For example, the PF was discussed in Intelligence, but only in Weyher's editorial criticizing media presentation of the fund. Funding biasing researchers would certainly be relevant.--Nectar 01:12, 13 August 2006 (UTC)[reply]

Many other journals have criticized the fund. Regarding Intelligence, it has several Pioneer Fund grantees on its editorial board.Ultramarine 14:17, 18 August 2006 (UTC)[reply]

too much space in intro is spent on PF

there's no mention of PF in the best reviews:

  • the APA statement
  • the WSJ statement
  • the 2005 PPPL articles

There's Tucker, Lombardo, and a variety of reviews of The Bell Curve which mention PF. There's the Gottfredson incident, and the SPLC classification as a hate group. Anything else? Which of these relate directly to this article? I'm afraid that too much is currently being made in the intro out of very little published substance. --Rikurzhen 08:49, 12 August 2006 (UTC)[reply]

(edit conflict)

Well, it started with just the mention that the funding from the Pioneer Fund somehow biased the field or some of the scientific results in the field. Then, a tangible quote was requested as to how and why it could bias the results, and that was added. Then, a whole lot of explanations were added saying the Fund wasn't so bad, that fundees had to defend themselves from media opinion, etc. So yes, it kind of spiraled out. I think what's important to mention is that the fund has been criticized for a number of reasons. The rest is only counterarguments trying to say the fund isn't that criticized (in a very restricted sample of journals), that most of those criticizing it aren't specialists in the field, etc. Do we really need all that wording to try to compensate for stating a fact (that the Fund has been criticized)?--Ramdrake 14:24, 12 August 2006 (UTC)[reply]
just the mention that the funding from the Pioneer Fund somehow biased the field or some of the scientific results in the field -- But the quote doesn't seem to say that at all. To cause "bias" is to cause "bias" in interpretation. To fund a line of research is not "bias". However, the main question I raised is whether the PF criticism deserves the prominent treatment it receives given that we're scraping far-flung individual sources to piece together a criticism -- we're not simply getting it from a review article. --Rikurzhen 21:08, 12 August 2006 (UTC)[reply]
For the critical side of the edit, I used a total of two citations. I wouldn't call that "scraping together far-flung individual sources". What may look more like the expression you used is the amalgamation of sources used to try to demonstrate that the Pioneer Fund criticism is restricted to some circles, and/or does not exist in the journals of the field, and that overall its influence can be considered a "weak plus". And I'm using the word bias in the largest sense possible, so feel free to substitute another more appropriate word if you feel that it differs from your definition of "bias".--Ramdrake 22:26, 12 August 2006 (UTC)[reply]

Lovely quote

I read a lovely quote from Martin Gardner yesterday:

[Some cranks] are brilliant and well-educated, often with an excellent understanding of the branch of science in which they are speculating. Their books can be highly deceptive imitations of the genuine article — well-written and impressively learned.... [C]ranks work in almost total isolation from their colleagues. Not isolation in the geographical sense, but in the sense of having no fruitful contacts with fellow researchers.... The modern pseudo-scientist... stands entirely outside the closely integrated channels through which new ideas are introduced and evaluated. He works in isolation. He does not send his findings to the recognized journals, or if he does, they are rejected for reasons which in the vast majority of cases are excellent. In most cases the crank is not well enough informed to write a paper with even a surface resemblance to a significant study. As a consequence, he finds himself excluded from the journals and societies, and almost universally ignored by competent workers in the field..... The eccentric is forced, therefore, to tread a lonely way. He speaks before organizations he himself has founded, contributes to journals he himself may edit, and — until recently — publishes books only when he or his followers can raise sufficient funds to have them printed privately.

FWIW, I encountered it in this review of Wolfram's New Kind of Science. Seems like a pretty good description of the PF gang to me. LotLE×talk 14:55, 20 August 2006 (UTC)[reply]

This seems like a denunciation of all intelligence researchers, represented as a criticism of the Pioneer Fund. Less than 1/5 of the members of the editorial board of Intelligence have received grants from the PF (5 out of 26, including the journal's editors, who have not received grants).[13] Staunch environmentalists like Sternberg and Flynn are also on the board. Less than 1/10 of the editorial board of Personality and Individual Differences have received grants (3 out of 40).[14] The Pioneer Fund has seized the imagination of Wikipedia editors like nothing else. To put to rest any claims that the statements in Gardner's quote apply here, these researchers' highly cited articles have been published in too many journals to list, but include APA journals like Journal of Consulting Psychology, Journal of Counseling Psychology, Journal of Consulting and Clinical Psychology, Psychological Bulletin and American Psychologist (ask for citations). Gardner's quote ("almost universally ignored by competent workers in the field") states what Wikipedia editors have neglected to acknowledge: that the opinion of researchers in a field (e.g. Sternberg) is in standard academic practice given preference over the opinion of researchers who don't have experience in the field. (What this does not mean is that opinion outside of a field is necessarily given little or no importance.) --Nectar 20:21, 20 August 2006 (UTC)[reply]
Nectar, you asked for a comment, you've got a comment... What else can I say? To me, Lulu's comment seemed more like a specific denunciation of the Pioneer Fund than something flung at the intelligence research community in general. At least, that's how I see it.--Ramdrake 21:55, 20 August 2006 (UTC)[reply]
Well, yeah. I have no idea what the point is supposed to be about the board of Intelligence; certainly there are researchers interested in intelligence as a concept who do not have the whole racialist agenda and cliquish self-reference of the PF folks. When someone like Lynn simply cannot be published outside of white supremacist vanity presses for his latest book, it's probably pretty telling that he's a crank.
In truth, if this were really an encyclopedic article rather than an advocacy piece, the first sentence would be something along the lines of "Race and intelligence is a pseudo-scientific movement to advance racialist thinking, and to justify social policies of racial discrimination". But I know it's hopeless to dream of this article ever resembling something an NPOV encyclopedia would contain. LotLE×talk 02:42, 21 August 2006 (UTC)[reply]

Okay dokay... the quote is from a review of Wolfram's New Kind of Science -- about mathematics, not R&I. I strongly recommend this totally OT thread stop here and move swiftly to the archives. --Rikurzhen 02:54, 21 August 2006 (UTC)[reply]

FWIW, Gardner's comment was not itself directed at Wolfram. The reviewer, Cosma Shalizi, merely felt it happened to fit Wolfram's work, just as I feel the description fits PF's work (and therefore most of what is in this article, which is mostly just advocacy of the PF grantees' agenda). I haven't looked up Gardner's original context... he may have had someone specific in mind, but he obviously wrote it in a way to be more generally applicable. LotLE×talk 14:06, 21 August 2006 (UTC)[reply]
Considering this is the first comment generated from Nectarflowed's RfC, I would strongly suggest the whole thing be kept here at least until the end of the RfC. Sounds only logical to me.--Ramdrake 04:07, 21 August 2006 (UTC)[reply]
If this is all we can expect from a RfC, then why bother? Lulu's comment is entirely useless, offering no insight to the question of the RfC. It cites his personal (and fringe) view of the subject, making suggestions that are obviously unactionable. --Rikurzhen 05:21, 21 August 2006 (UTC)[reply]
Oh, I think it offers an insight on the question of the RfC, just a very different one than was expected.--Ramdrake 12:20, 21 August 2006 (UTC)[reply]
Whether or not my comments "offer insight" (I had no idea there was an RfC... if so, why is the discussion here rather than there?), it would be extremely unseemly and insulting to selectively delete my comments but not others. Obviously, I would expect my comments to be archived at the same time as other contemporaneous ones, but not on one editor's judgement of the lack of worth of their content. There are quite a few pro-PF comments in this thread, and generally in this discussion page, that I think fail to "offer insight"... but I'm certainly not going to selectively delete all those comments I unilaterally judge to lack worth. LotLE×talk 13:56, 21 August 2006 (UTC)[reply]
I think an RfC is always a good thing, if only because it offers us a chance to see a subject from a different perspective. Whatever that perspective is, even if that perspective is diametrically opposed to ours, I wouldn't want to dismiss it out of hand as "fringe" or "without insight". Lulu's comment reiterates what I felt as a first impression when I first came to the article, that in some respects, this looks more like glorified pseudoscience than real, debatable and improvable science. It reminds me of "creation science", which starts from a preordained conclusion, and looks for "evidence" that fits and/or supports the conclusion.--Ramdrake 14:28, 21 August 2006 (UTC)[reply]

Until someone can demonstrate what I'm supposed to take away from the exchange that has to do with writing the article, rather than trashing its subject and its editors, I'm finished with this thread. If anyone feels like being productive, there's an unanswered thread here that's on topic. --Rikurzhen 14:33, 21 August 2006 (UTC)[reply]

Maybe and simply, that people don't have to agree as to what an article ideally should look like to agree to work together on improving it little by little?--Ramdrake 14:47, 21 August 2006 (UTC)[reply]
The (only?) way to improve this article is to read the literature, stick to summarizing what's been published in reliable sources, and maintain adherence to WP:NPOV in the strongest possible way. As per the thread I highlighted, I don't believe the literature is being properly consulted on the PF topic. I suspect that I understand the series of events which led to this situation, and the only recourse I see is to read the literature and stick to summarizing what it actually says. --Rikurzhen 17:47, 21 August 2006 (UTC)[reply]
I guess the problem is that the determination of what sources are considered "reliable" is itself a matter of POV. The assertion that criticism of the Pioneer Fund is inherently unreliable is just as POV as asserting that any research that they fund is inherently unreliable. Let's just clearly state that the criticisms have been made, by reliable sources which are not definitive but merely representative, and make sure we clearly state that Pioneer Fund-funded research likewise comes from generally reliable sources which are not definitive, but merely representative of a certain POV. So long as we don't assert that anything is definitive (especially in the context of such a contentious subject), we abide by NPOV. The problem I see is when people want to arbitrarily define their POV as both reliable and definitive, rather than merely reliable and representative of a given opinion. --JereKrischel 19:11, 21 August 2006 (UTC)[reply]
I meant reliable in terms of WP:RS to exclude partisan web sites, etc. Professional publications are certainly reliable. Notable of course is another matter. What matters most is that the actual content of these papers has been examined carefully. They shouldn't be glossed over and then used to support a claim that they might actually not. --Rikurzhen 19:25, 21 August 2006 (UTC)[reply]
Professional publications are not "certainly" reliable - I think in fact, the argument being had is over which professional publications count as "reliable". Although perhaps we should agree on terminology first. "Reliable", if it is to include both Pioneer Fund researchers, certainly should include their detractors. "Notable", I would argue clearly includes Pioneer Fund critics. And insofar as carefully examining papers, I think you run into several issues here - part of the argument against folk like Rushton, for example, is that they have taken others' works (Cavalli-Sforza), and glossed over them and used them to support claims they actually don't. One might argue that since Rushton did the glossing, and not a WP editor, his gloss is allowed...but then the same would be true of anti-hereditarian folk who glossed over things as well. I am strongly supportive of the idea of making note of the glosses made by pro-hereditarian folk, and making clear their contradiction with their original sources - I think much of the concern over this article is regarding "glossy" support of pro-hereditarian positions. --JereKrischel 21:02, 21 August 2006 (UTC)[reply]
I don't believe this thread has made progress towards communication. Consider the problem as I've described it in the thread I linked. --Rikurzhen 22:22, 21 August 2006 (UTC)[reply]

moved from accusations of bias section

This has included accusations that funding from the Pioneer Fund (which according to the Southern Poverty Law Center "has funded most American and British race scientists, including a large number cited in The Bell Curve"[15]) supports only research that "tends to come out with results that further the division between races... by justifying the superiority of one race and the inferiority of another."[1] The Pioneer Fund has been strongly criticized by anti-racist groups and some scientists and journalists.[2] Also, prominent critic Ulric Neisser states that the fund's contribution has overall been "a weak plus".[3] On the other side, it is asserted that misguided political correctness has led to large-scale denial of recent developments in the human sciences.[4]


based on the discussion above, it appears that this text should not be part of the "accusations of bias" section. i've moved it here to preserve it. i believe most of the data is contained in the subsequent "pioneer fund" section, without the attempt to link PF to bias. --Rikurzhen 20:09, 25 August 2006 (UTC)[reply]

I strongly disagree. Based on the discussion so far, it seems that this text is highly relevant to the accusations of bias section. Particularly the recent cite of Lynn's poor science and link to the Pioneer Fund shown by Ramdrake. Reverted back to inclusion. --JereKrischel 21:08, 25 August 2006 (UTC)[reply]
Then we're back to square 1. I see nothing in Kamin's text that Lynn is wrong b/c he is a PF grantee. It is a WP:NOR violation to build such an argument. --Rikurzhen 21:12, 25 August 2006 (UTC)[reply]

policy implications

quote #1 is about abolishing welfare. quotes #2 and #3 are predictions about foreseen negative outcomes. some more description about what exactly is being criticized about what H&M said might help tighten this up. remember to put page numbers on quotes. --Rikurzhen 02:37, 11 August 2006 (UTC)[reply]

Here's the summary of the chapter from which quotes #2 and #3 come:

we speculate about the impact of cognitive stratification on American life and government. ... Unchecked, these trends will lead the U.S. towards something resembling a caste society, with the underclass mired even more firmly at the bottom and the cognitive elite ever more firmly anchored at the top, restructuring the rules of society so that it becomes harder and harder for them to lose. Among the other casualties of this process would be American civil society as we have known it.

Not so sure this chapter is relevant to this article. Perhaps the affirmative action chapters would be more relevant. --Rikurzhen 02:59, 11 August 2006 (UTC)[reply]

UL, your latest change has it backwards. The fear is that infantilizing low IQ people will then lead to limitations being placed on their liberty. Still not sure this is on target for this article. p.s. They're not recommending "reservations", they're warning against them. --Rikurzhen 03:12, 11 August 2006 (UTC)[reply]

They are warning that this will happen if their policies are not implemented.Ultramarine 03:23, 11 August 2006 (UTC)[reply]

you wrote: they fear that as hostility toward the welfare-dependent increases, a "custodial state" will be created. On my reading that should be a custodial state will lead to hostility toward a welfare-dependent population, or something like that. Still not sure if this is on target w/ race and IQ. --Rikurzhen 03:45, 11 August 2006 (UTC)[reply]


yeah, that's cool, but just b/c this is discussed in TBC doesn't make it about R&I. it is about IQ, which makes it about race and IQ, but not in the specific. Is there a more direct link? --Rikurzhen

Archiving

Err... Archive, anyone?--Ramdrake 22:23, 11 August 2006 (UTC)[reply]

archive at will. if we want something, we'll fish it out. --Rikurzhen 23:45, 11 August 2006 (UTC)[reply]
Archive 24 all done. Jokestress 06:18, 12 August 2006 (UTC)[reply]

Quantifying validity

There seems to be a move afoot to suggest to the casual reader that some scholarship is more valuable or relevant in this debate. This brings me back to something I have been saying since last year about Gottfredson's Mainstream Science on Intelligence collective statement vs. the APA's Intelligence: Knowns and Unknowns consensus statement. The former is one-tenth as influential as the latter if one goes for these citation quantifications, and the oft-cited (here, anyway) Snyderman & Rothman survey is only slightly more notable than the Gottfredson piece, relative to the APA. So if we are going to hierarchize everything, we should point out the relative lack of influence of Gottfredson et al. and S&R compared to the APA piece. Thoughts? Jokestress 06:18, 12 August 2006 (UTC)[reply]

I'm not sure I follow, which probably means this would have an WP:NOR problem. One could argue that S&R surveys 500+ scholars, WSJ surveys 50+ and APA surveys ~10. --Rikurzhen 08:22, 12 August 2006 (UTC)[reply]
I think this article has taken into account the hierarchy in influence between these collective statements implied by this citation analysis, and by the fact that one of the statements largely represents the official opinion of the APA; I don't see any citations that give it undue prominence. Citation analysis is a valuable quantifying tool in gauging influence, but of course isn't the only consideration, per Rikurzhen's point. Also, the APA statement probably has increased weight in the sense that the journal it was published in has a very large readership, but on the other hand MSoI probably has increased weight from being published in one of the foremost specialist journals in its area. (This discussion was partially started by the discussion at Talk:Institute_for_the_Study_of_Academic_Racism.)--Nectar 12:05, 12 August 2006 (UTC)[reply]
Another interesting statistic is that (according to LG), nobody has ever said that MSoI does not represent the mainstream consensus. On the other hand, there are several reactions to the APA report that criticised it for being biased. So (if we want to play this game) the score is 0-several. But I think it would be much more useful to identify the issues where APA and MSoI agree (e.g., measurability of intelligence, observable gap between test scores between populations), and then present these points in this article in the same way we write about the shape of the Earth. Arbor 18:37, 12 August 2006 (UTC)[reply]
AFAIK, they only "disagree" on one point -- the cause of group differences -- and here each is conspicuously indirect (employing careful spin). MSoI reports that Most experts believe that environment is important in pushing the bell curves apart, but that genetics could be involved too. Of course this allows signers to agree that most experts agree w/o agreeing themselves. APA says It is sometimes suggested that the Black/White differential in psychometric intelligence is partly due to genetic differences (Jensen, 1972). There is not much direct evidence on this point, but what little there is fails to support the genetic hypothesis. As Murray points out, the term "direct" before "evidence" here makes the claim so specific as to have no bite. So, materially they don't actually disagree, but they both spin the causation question differently. --Rikurzhen 19:08, 12 August 2006 (UTC)[reply]

Policy implications - 17 August 2006

Policy implications is now messy. There are several problems, large and small:

  1. what's wrong with "argue"? i find it commonly used in the scientific literature.
  2. the quotes from TBC take up a lot of room, but don't seem to have any specific implications for "race", only "intelligence"
    1. besides In Our Hands, which calls for a direct cash-transfer program, has Murray debated Welfare since the 1996 reform?
    2. affirmative action seems to be the most direct thing to discuss

--Rikurzhen 18:40, 17 August 2006 (UTC)[reply]

The Bell Curve in general is about races. We should certainly point out that any policy affecting those with low IQ will affect members of all races.Ultramarine 18:44, 17 August 2006 (UTC)[reply]
But maybe we should move the Bell Curve material to a footnote, it is quite long? Ultramarine 18:52, 17 August 2006 (UTC)[reply]
That would be fine. So long as we move beyond the current state: footnoting, summarizing, etc. --Rikurzhen 19:03, 17 August 2006 (UTC)[reply]

size and detail are better. --Rikurzhen 19:27, 17 August 2006 (UTC)[reply]

do the IQ curves completely overlap each other?

The IQ curves completely overlap each other. "Substantial" would indicate that *some* part of the IQ curve of some races lies outside of the IQ curve of others, which is false.

actually, we don't know if they "completely overlap each other". the formulation often used is to talk about IQ "levels" (which implies there are small, finite number of levels). thus, in the U.S. individuals of every race can be found at all IQ levels. it's quite possible that there are some small groups which might be called "races" which have no individuals with 200+ IQ scores. because of birth defects, there are certainly people at the lowest levels from each group, which was a safe formulation. there are many public policies that target low IQ (few/none that target high IQ), so this is probably the right way to formulate it. --Rikurzhen 20:19, 18 August 2006 (UTC)[reply]

First of all, the IQ curves are Gaussians. By definition, Gaussians extend from minus infinity to plus infinity. However, because of probability considerations, most specialists consider them to extend from 0 to 200 or from 4 to 196. Second, the graph we are using, and our entire discussion for that matter, only shows the "four largest" racial groups (Hispanic isn't really a race, but the point is irrelevant here). Due to their extremely large membership, it is basically assured that all four racial groups do have members at all levels of the curve. And lastly, even if there did exist some minor racial group which wasn't large enough to have representatives at all levels (and that's a hypothetical), there could at most be one point of non-overlap somewhere. Saying there is "substantial" overlap means there exist definite regions of non-overlap. These regions are hypothetical and not one has been demonstrated up to now. Thus, the word "complete" is adequate until we have found at least one region of non-overlap of one racial curve versus another. You're right that we don't know for a fact that they completely overlap each other. But theoretically they do overlap completely, and experimentally we haven't found a single counter-example. So, "complete" is the better term until we know more.--Ramdrake 21:32, 18 August 2006 (UTC)[reply]
There are probably no Khoisan or Australian natives with IQs of 200. Moreover, the highest IQ person in the world puts their race at an IQ that is unique. These are the things that concern me. Better to talk about all "levels" rather than overlaps. --Rikurzhen 21:38, 18 August 2006 (UTC)[reply]
I don't have an objection about using the formulation "all levels", and actually it may be more appropriate than trying to qualify the overlap. I've modified the sentence accordingly. Hope it's more to your liking.--Ramdrake 21:49, 18 August 2006 (UTC)[reply]
That looks better. "Complete overlap" would mean the curves cover the identical space, one placed precisely on top of the other. That would be true for any groups that have identical average IQs and IQ distributions.--Admissions 21:54, 18 August 2006 (UTC)[reply]
Actually, that's not quite what "complete overlap" means, but that's alright. "Complete overlap" would mean the two curves have exactly the same range in abscissa. It doesn't say anything about the comparison of ordinate values for the same abscissa value. But your comment, at the very least, is a good indication of what the wording might be interpreted as.
Duh-on me! If one were to say the "curves completely overlap", then your interpretation is right. My interpretation would correspond to the wording that the "curves' ranges completely overlap". My boo... so glad Rikurzhen prodded me to change it.--Ramdrake 22:04, 18 August 2006 (UTC)[reply]
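To put a rough number on what "substantial overlap" would mean in the idealized model being discussed: assuming, purely for illustration, two normal curves with a common SD of 15 and means 15 points apart (neither figure is taken from the graph itself), the shared area under the two curves, the so-called overlapping coefficient, comes out at about 62%. A minimal sketch of that calculation:

  import math

  def phi(x):
      """Standard normal cumulative distribution function."""
      return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

  gap, sd = 15.0, 15.0  # illustrative gap between means and common SD
  # For two equal-SD normals, the shared area is 2 * Phi(-gap / (2 * sd)).
  overlap = 2.0 * phi(-gap / (2.0 * sd))
  print(f"overlapping area: {overlap:.3f}")  # about 0.617

This is a property of the toy model only, not of any particular data set; with unequal SDs or non-normal tails the number changes, but the qualitative picture (a large shared area between non-identical curves) is the same.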
Out of curiosity, are there any sources for this overlap? As Rikurzhen pointed out there may not be any Australian natives with IQs in the 180+ range. African Americans have white admixture which complicates the issue. While at it, are the means correctly shown in the bellcurve picture? I believe they differ for the various races. --Zero g 12:10, 19 August 2006 (UTC)[reply]
By definition, the IQ curves are population distribution curves, so any population should have representatives at all levels of the curve. And it is clear that if you parcel out the population in small enough groups (racial or otherwise), gaps may appear in the curves for the smaller groups. But the construct of the curve is such that the basic assumption is that there are no gaps, thus you are very unlikely to find a source to affirm the absence of gap (or complete range overlap). Conversely, so far, I haven't seen a single report of a population where any gap in the IQ curve was measured. And as far as anybody has been able to ascertain, the means are shown correctly on the current graph, and may even be very slightly overstated (it depends on one's POV).--Ramdrake 14:31, 19 August 2006 (UTC)[reply]
Using that reasoning there'd be 200+ IQ dogs. Assuming some intelligence genes are unique to specific populations, just as the genes for black skin color are unique to Africans, there logically is a boundary for each race if you exclude members of mixed race. Next, it would be rather easy to determine the existence of 180+ IQ native Africans and Australians, and if such data is available it would be nice to include it to make the point. As far as I can tell the means are all the same in the graph. --Zero g 16:39, 19 August 2006 (UTC)[reply]

there are several cites for the "levels" formulation. the bell curve "levels" there are: <75, 75-90, 90-110, 110-125, >125. --Rikurzhen 18:11, 19 August 2006 (UTC)[reply]

As stated earlier on this thread, the highest level of intelligence attainable by humans is (according to my readings) either 196+ or 200+ (depending on where you want to set the limit of what's measurable in IQs). This is for the entire human population, and has not been shown to differ for any subpopulation so far. Your assumption that "some intelligence genes are unique to specific populations" has not been demonstrated yet, and is just that. Also, BTW, the polygene responsible for dark skin is not unique to Africans (Aboriginal Australians and a few other Pacific peoples also have dark skin). And no, it wouldn't be easy to determine 180+ IQs in these populations, no more than it is anywhere else in the world: first of all, such a high IQ is exceedingly rare (I'll let someone else calculate the odds) and second, the vast majority of IQ tests just can't test that high (or at least become unreliable in this high IQ range). Lastly, just out of curiosity, how do you read the bell curve graph to get the impression the means are all the same?--Ramdrake 18:29, 19 August 2006 (UTC)[reply]
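On the odds question left open above: under the textbook normal model with mean 100 and SD 15 (which, as noted further down in this thread, understates the real tails), a score of 180 or above corresponds to a z of about 5.3, or roughly one person in twenty million. A quick sketch of the arithmetic:

  import math

  def upper_tail(score, mean=100.0, sd=15.0):
      """P(IQ >= score) under an idealized normal model (illustration only)."""
      z = (score - mean) / sd
      return 0.5 * math.erfc(z / math.sqrt(2.0))

  p = upper_tail(180.0)
  print(f"P(IQ >= 180) = {p:.1e}, about 1 in {round(1.0 / p):,}")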
Since many IQ tests use multiple choice, it's quite possible for a retard to score 200, which contributes to a bell curve shape. Possibly this deserves mentioning. Regarding skin color, you're correct, but it's a sharply contrasted genetic difference among races. Purely theoretically, it could apply to intelligence genes as well. Unless there's valid research proving different it might be wise to not make such bold statements. Regarding the curve, I was using the wrong term; I'm referring to the standard deviation, which according to the article is 14.7 for whites and 13.0 for blacks. I don't believe this is shown correctly in the graph. If so it should be adjusted, because it implies there are more whites with very low IQs than currently shown. --Zero g 20:47, 19 August 2006 (UTC)[reply]
Showing a variable SD is aesthetically very unappealing. It also matters very little except at the extremes, where a normal fails to describe the true distribution anyway. --Rikurzhen 21:35, 19 August 2006 (UTC)[reply]
So the article is showing an incorrect graph because it's more aesthetically appealing? I find that odd to say the least. I strongly suggest using the unappealing but correct graph. --Zero g 22:13, 19 August 2006 (UTC)[reply]
The graph is modeled on a publication by Gottfredson. The further complication is that we do not have reliable SDs for groups other than Blacks and Whites. Keeping SD=15 is the best solution for that graph. The latter graph (IQ-4races-rotate-highres.png) uses more precise SDs. --Rikurzhen 22:41, 19 August 2006 (UTC)[reply]

ramdrake, although that's how IQ works in theory, in practice it does not work that way. the gaussian assumption operates only at the level of setting the scoring criteria based on a standardization sample. after that, anyone is free to score as high as the test might actually go. at least in the U.S., the population tends to have many more people w/ high/low IQ than would occur if the distribution were gaussian (normal). --Rikurzhen 18:33, 19 August 2006 (UTC)[reply]

And you, Rikurzhen, are absolutely right that the Gaussian curve is just an approximation; the high/low end statistical weights are higher than a Gaussian would predict, not only in the States but pretty much everywhere IQ tests are widely used. However, each test is usually made to target a specific range (depending on which test one looks at), so not all may be appropriate for measuring the high range. But I digress; the fact that the high and low ends have larger weight distribution than a Gaussian would predict is what makes me believe it is very likely that the ranges of the IQ curves overlap completely.--Ramdrake 21:33, 19 August 2006 (UTC)[reply]

This SD graph is only really showing two things: mean values and a visual representation that IQs vary amongst the races (though it is inaccurate about by how much). To be honest, I don't really understand the purpose of showing a normal curve. These equal standard deviations suggest that Asians are unequivocally the superior race in terms of intelligence, along with their deserved highest mean IQ. However, though Asians do have the highest mean IQ, the standard deviation amongst Caucasians is greater, meaning the graph should show a higher proportion of Caucasians in the ~140 range, albeit also a higher proportion in the 50s. Therefore Asians are not clearly more intelligent in all respects, since Caucasians produce more geniuses. If you don't agree with my facts, imagine any other scenario, or that guy's issue with blacks being misrepresented. So, reminding you of the name of the article, this graph seems to be more of a misrepresentation of the facts than a simple table showing the mean data would be. It's more harmful than it is useful. I'd like to see it deleted or changed. 207.216.213.121 06:14, 22 August 2006 (UTC)RoosterCogburn[reply]

the difference between an SD of 13 (circles) and an SD of 15 (lines) isn't big enough at the tails to bother with the effect it has at the median. --Rikurzhen 06:35, 22 August 2006 (UTC)[reply]
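To put an illustrative number on the size of the effect being debated (a back-of-the-envelope sketch only, using an assumed mean of 85 and the 125 cutoff from the "levels" scheme mentioned earlier, neither taken from the graph's actual parameters): under a normal model, moving the SD from 13 to 15 changes the expected share above 125 from roughly 0.1% to roughly 0.4%, a few tenths of a percentage point in absolute terms even though the relative change is large.

  import math

  def tail_share(cutoff, mean, sd):
      """Share of a normal(mean, sd) population scoring above the cutoff."""
      z = (cutoff - mean) / sd
      return 0.5 * math.erfc(z / math.sqrt(2.0))

  for sd in (13.0, 15.0):  # the two SDs being compared; mean 85 is illustrative
      print(f"SD {sd}: share above 125 = {tail_share(125.0, 85.0, sd):.4f}")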

Thoughts on RFC

"Is there a categorical distinction between general journals and specialist journals?"

On the face of it, no there is no distinction. To classify The American Journal of Psychology, American Behavioral Scientist, and Journal of the History of the Behavioral Sciences as "generalist journals" makes no sense when compared to journals such as Science and Nature (less extreme examples make sense also). I can imagine being able to identify the more generalist and more specialist of any pairwise comparison of two journals, but I can't imagine any classification rule existing for classifying journals as either "generalist" or "specialist".

That being said, the real question is whether:

  1. "Criticism of the Pioneer Fund (PF) has been limited to some general journals, and hasn't been raised in the specialist journals that deal with intelligence research regularly" is a fact, and
  2. (if this is true) what does that say about the status of the PF, and PF funded scientists, within the community of scientists.

Part of the method for supporting this statement is described by one editor thusly: Journals deemed "specialist" in the field were validated using a citation analysis technique comparing the relative frequency of two words in their abstracts: target word "IQ" and control word "influence". There was no cross-check using another pair of words (such as "intelligence" and a known common word like "results"). Using "IQ" vs. "g" vs. "intelligence" deserves defending, and an explanation of this whole endeavour probably ought to have a methods section, a results section, an introduction and a discussion; in short, I think it qualifies as Original Research.
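For concreteness, here is a minimal sketch of the kind of relative-word-frequency check described above, including the cross-check with a second word pair suggested in that comment. The journal names and abstracts below are placeholders, and nothing here reproduces the actual analysis, whose data and thresholds were not published.

  # Hypothetical sketch of a word-frequency check for classifying journals (placeholder data).
  abstracts_by_journal = {
      "Journal A": ["... IQ test results and heritability ...", "... IQ, g, and the influence of schooling ..."],
      "Journal B": ["... the influence of early schooling ...", "... survey results on intelligence ..."],
  }

  def relative_frequency(abstracts, target, control):
      """Ratio of abstracts containing the target word to those containing the control word."""
      count = lambda word: sum(word.lower() in a.lower() for a in abstracts)
      hits, baseline = count(target), count(control)
      return hits / baseline if baseline else float("inf")

  for journal, abstracts in abstracts_by_journal.items():
      primary = relative_frequency(abstracts, "IQ", "influence")             # the pair reportedly used
      cross_check = relative_frequency(abstracts, "intelligence", "results")  # a second, control pair
      print(journal, round(primary, 2), round(cross_check, 2))

Running both pairs side by side is exactly the kind of sanity check the comment asks for: if the two ratios rank the journals differently, the classification is sensitive to the choice of words.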

As for what this statement would mean about the status of the PF within "science", I disagree with the unstated assumption (?) that "specialist" journals are more authoritative about their research areas than are "generalist" journals. Journals which focus on a narrow research topic may be described as minor journals. The most prestigious journals are also the most generalist. This is no accident. A paper on a specialized topic, if sufficiently important and well-done, will appear in a more "generalist" journal than a less generally relevant, or less conclusive or elegant, paper on the same topic.

The second version of the question is:

"Can editors decide which journals are "specialist" enough and use such decisions to assert that opinions (such as criticism) published in journals other than these is somehow less important or notable - even if the topic in question is a "meta" topic that is not ever directly addressed by any specialist journals."

Many of the same comments apply. I just don't see how to resolve editorial debates about how to address the role of PF funding in this article. But neither do I think that resolving this issue of what's a "generalist" or "specialist" journal holds the key to progress on the issue. Hope this helps. Pete.Hurd 05:31, 22 August 2006 (UTC)[reply]


Bogus arguments

IQ tests scores are not an absolute measure of intelligence; they tend to ignore many aspects of human cognition and the cognitive process. Things like creatively, wisdom, ability to learn, ability to adapt and practical skills are not gauged by these tests in a meaningful way. IQ tests also fail to measure the same construct among all people to whom the tests are applied, the more culturally distinct the group (I.E. Truckers, and Musicians) the greater the discrepancy. To apply a single test to an entire population of distinct individuals from varying backgrounds is unbelievably biased unless used to gauge a particularly relevant skill. Example: Race horses are not gauged for their poker skills. - Just as Sociologists are not measured by their ability to paint.

The fact of the matter is intelligence does vary among humans, but this can be for many reasons: prenatal care, subjective interpretation, interest factors, differing environments, life circumstances etc. My concern is not with differences among individuals, but with claims that imply that group differences involving subjective and highly bias testing situations can amount to genetic differences in the traits being tested.

How does one compare the intelligence of a gifted painter with that of a mediocre Physicist? According to the narrow methods and perspectives used and held by many Psychometricians, the Mediocre Physicist is likely to be perceived the more intelligent. Why, because this is what the testing situation demands that they believe/think.

Psychometric tests do not and can not measure the number of years spent in practice, nor can they measure interest, motivation, interpretation, diet, home & social life, daily activities etc.; nor do they try! Despite these obvious and fundamental short comings this model is often presented as valid and unbiased by many practitioners.

Cole, Gay, Glick and Sharp (1971:233) made the following insightful observation: “ Cultural differences in cognition reside more in the situations to which particular cognitive processes are applied than in the existence of a process in one cultural group, and its absence in another.

Robert Sternberg and his colleagues asked experts to define "intelligence" according to their beliefs. Each of the roughly two dozen definitions produced in each symposium was different. There were some common threads, such as the importance of adaptation to the environment and the ability to learn, but these constructs were not well specified. According to Sternberg, very few tests measure adaptation to environment and ability to learn; nor do any tests except dynamic tests involving learning at the time of the test measure ability to learn. Traditional tests focus much more on measuring past learning, which can be the result of many factors, including motivation and available opportunities to learn (Sternberg, Grigorenko, and Kidd, American Psychologist, 2005). IQ test items are largely measures of achievement at various levels of competency (Sternberg, 1998, 1999, 2003): items requiring knowledge of the fundamentals of vocabulary, information, comprehension, and arithmetic problem solving (Cattell, 1971; Horn, 1994).

Furthermore, IQ is not a fixed quantity; it can be raised (it is not as difficult to raise as it is to maintain the gain). This has been demonstrated numerous times in studies involving environmental stimulation.

Examples of such studies:

In 1987 Wynand de Wet (now Dr. de Wet) did his practical research for an M.Ed. (Psychology) degree on the Audiblox program at a school for the deaf in South Africa. The subject of the research project concerned the optimization of intelligence actualization by using Audiblox. Twenty-four children with learning problems participated in the study and were divided into three groups.

The children in Group A received Audiblox tuition. The children were tutored simultaneously in a group by means of the Persepto for 27.5 hours between April 27 and August 27, 1987. The first edition of the group application of the Audiblox program was followed. No diagnostic testing was done beforehand.

The children in Group B received remedial education. They were tested beforehand and based on the diagnosis each child received individualized tuition on a one-on-one basis for 27.5 hours between April 27 and August 27, 1987.

The children in Group C were submitted to non-cognitive activities for 27.5 hours during this period.

All 24 children were tested before and after on the Starren Snijders-Oomen Non-verbal Scale (SSON), a non-verbal IQ test that can be used for deaf children. Dr. de Wet reported that he could do nearly all the Audiblox exercises without adaptations, except the auditory exercises. Because he had to use sign-language, the children could not close their eyes. The average scores of the three groups on the SSON test were as follows:

Average IQs before intervention, after intervention, and increase:

  Group A (Audiblox group):    before 101.125   after 112.750   increase 11.625
  Group B (Remedial group):    before 107.125   after 116.250   increase  9.125
  Group C (Non-cognitive):     before 104.250   after 108.875   increase  4.625
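(The increase column is simply the after-minus-before difference for each group; a trivial check of the arithmetic, using the figures exactly as quoted above:)

  # Check of the reported gains: increase = after - before (figures as quoted above).
  groups = {
      "A (Audiblox)":      (101.125, 112.750),
      "B (Remedial)":      (107.125, 116.250),
      "C (Non-cognitive)": (104.250, 108.875),
  }
  for name, (before, after) in groups.items():
      print(f"Group {name}: increase = {after - before:.3f}")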

Reports received from the teachers indicated that the improvements achieved through remedial education and through Audiblox transferred to the general school performance of the children. The transfer achieved through Audiblox, however, was superior to that of the remedial education, says Dr. de Wet. Finally, because Audiblox can be applied in a group setting, it is much more cost-effective than remedial education, he says.

Reference: De Wet, W., The Optimization of Intelligence Actualization by Using Audiblox (M.Ed. (Psychology) Thesis: University of Pretoria, 1989).

The Glenwood State School

A particularly interesting project on early intellectual stimulation involved twenty-five children in an orphanage. These children were seriously environmentally deprived because the orphanage was crowded and understaffed. Thirteen babies with an average age of nineteen months were transferred to the Glenwood State School for retarded adult women and each baby was put in the personal care of a woman. Skeels, who conducted the experiment, deliberately chose the most deficient of the orphans to be placed in the Glenwood School. Their average IQ was 64, while the average IQ of the twelve who stayed behind in the orphanage was 87.

In the Glenwood State School the children were placed in open, active wards with the older and relatively bright women. Their substitute mothers overwhelmed them with love and cuddling. Toys were available, they were taken on outings and they were talked to a lot. The women were taught how to stimulate the babies intellectually and how to elicit language from them.

After eighteen months, the dramatic findings were that the children who had been placed with substitute mothers, and had therefore received additional stimulation, on average showed an increase of 29 IQ points! A follow-up study was conducted two and a half years later. Eleven of the thirteen children originally transferred to the Glenwood home had been adopted and their average IQ was now 101. The two children who had not been adopted were reinstitutionalized and lost their initial gain. The control group, the twelve children who had not been transferred to Glenwood, had remained in institution wards and now had an average IQ of 66 (an average decrease of 21 points). Although the value of IQ tests is grossly exaggerated today, this astounding difference between these two groups is hard to ignore.

More telling than the increase or decrease in IQ, however, is the difference in the quality of life these two groups enjoyed. When these children reached young adulthood, another follow-up study brought the following to light: the experimental group had become productive, functioning adults, while the control group, for the most part, had been institutionalized as mentally retarded.

Other Examples of IQ Increase

Other examples of IQ increase through early enrichment projects can be found in Israel, where children with a European Jewish heritage have an average IQ of 105 while those with a Middle Eastern Jewish heritage have an average IQ of only 85. Yet when raised on a kibbutz, children from both groups have an average IQ of 115.

In another home-based early enrichment program, conducted in Nassau County, New York, an instructor made only two half-hour visits a week for only seven months over a period of two years. He spent time showing parents participating in the program how best to teach their children at home. The children in the program had initial IQs in the low 90s, but by the time they went to school they averaged IQs of 107 or 108. In addition, they have consistently demonstrated superior ability on school achievement tests.

Further References: • Clark, B., Growing Up Gifted (3rd ed.), (Columbus: Merrill, 1988). • Dworetzky, J. P., Introduction to Child Development (St. Paul: West Publishing Company, 1981). • Skeels, H. M., et al., “A study of environmental stimulation: An orphanage preschool project,” University of Iowa Studies in Child Welfare, 1938, vol. 15(4).

Leon J. Kamin (Bell Curve Wars, 1995, p. 92): "Extensive practice at reading and calculating does affect, very directly, one's IQ score."

Robert Sternberg on the matter of IQ gains (Interview with Skeptic magazine): "I think it's hard to maintain the IQ gains. But if you think environment is important in the development of intelligence, and you put people in a really good program and you raise their IQ, and then take them out of the program and put them back in the poor environment in which they started, chances are you are going to lose a lot of the beneficial effect. If you give someone antibiotics for a disease, cure them, then put them back in the original septic environment, the disease will return. We've seen this when we work with children with parasitic infections. We can give them Albendazol and it will cure their parasitic infection. But if you put them back in the environment in which they acquired the infection, they will just acquire it again."

I personally do not agree with his comparison of IQ with disease or infection, but his point is valid; I am sure the same can be said for a good music program or art school. I think the main problem here is maintenance. Example: If a body builder does not exercise for some time his muscle mass will decrease. Or, if an artist does not paint for some years his/her skill will diminish. In other words, "use it or lose it."

There are many other studies that prove IQ to be a non-static phenomenon of little genetic value; one of the most notable and well-known being the Flynn effect: studying IQ test scores for different populations over the past sixty years, James R. Flynn discovered that IQ scores increased from one generation to the next in every country for which data existed (Flynn, 1994). This interesting phenomenon has been called "the Flynn Effect."

"Research shows that IQ gains have been mixed for different countries. In general, countries have seen generational increases between 5 and 25 points. The largest gains appear to occur on tests that measure fluid intelligence (Gf) rather than crystallized intelligence (Gc)."

http://www.indiana.edu/~intell/flynneffect.shtml

This being said, how well do IQ tests predict real-world success? According to Stephen J. Gould, the only thing an IQ test can accurately predict is how well a person scores on the test. Many others have made similar statements.

Robert Sternberg on the matter of intelligence etc: My first set of interests is in higher mental functions, including intelligence, creativity, and wisdom. - I have proposed a triarchic theory of successful intelligence, and much of the work we do at the PACE Center is in validations of this theory. The theory suggests that successfully intelligent people are those who have the ability to achieve success according to their own definition of success, within their sociocultural context. They do so by identifying and capitalizing on their strengths, and identifying and correcting or compensating for their weaknesses in order to adapt to, shape, and select environments. Such attunement to the environment uses a balance of analytical, creative, and practical skills. The theory views intelligence as a form of developing competencies, and competencies as forms of developing expertise. In other words, intelligence is modifiable rather than fixed.

We use a variety of converging operations to test the triarchic theory--componential (information-processing) analyses, exploratory and confirmatory factor analysis, cultural and cross-cultural studies, instructional studies, and field studies in the workplace. The results of all of these kinds of studies have been encouraging.

Key References: Sternberg, R. J. (1977). Intelligence, information processing, and analogical reasoning: The componential analysis of human abilities. Hillsdale, NJ: Erlbaum. Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. New York: Cambridge University Press. Sternberg, R. J. (1990). Metaphors of mind: Conceptions of the nature of intelligence. New York: Cambridge University Press. Sternberg, R. J. (1997). Successful intelligence. New York: Plume. Sternberg, R. J. (1999). The theory of successful intelligence. Review of General Psychology, 3, 292-316. Sternberg, R. J., Forsythe, G. B., Hedlund, J., Horvath, J., Snook, S., Williams, W. M., Wagner, R. K., & Grigorenko, E. L. (2000). Practical intelligence in everyday life. New York: Cambridge University Press. Sternberg, R. J., & Grigorenko, E. L. (2000). Teaching for successful intelligence. Arlington Heights, IL: Skyligh

http://www.yale.edu/rjsternberg/

Robert J. Sternberg (b. 8 December 1949) is a psychologist and psychometrician and the Dean of Arts and Sciences at Tufts University. He was formerly IBM Professor of Psychology and Education at Yale University and the President of the American Psychological Association. Dr. Sternberg has also been the editor or co-editor of well over 50 psychological Journals.

Sternberg is also the author or coauthor of several college-level textbooks in psychology:

• In Search of the Human Mind, now in its second edition (1998) and published by Harcourt Brace College Publishers, is a full-length introduction to psychology suitable for courses in introductory psychology or general psychology. It is based on Sternberg’s triarchic theory of intelligence, and approaches psychology from the standpoint both of the evolution of organisms and the evolution of ideas. The textbook emphasizes the importance of the dialectic in how ideas evolve. This text comes with a full set of ancillaries available from the publisher. •

• Pathways to Psychology, now in its second edition (2000) and published by Harcourt Brace College Publishers, is an abbreviated introduction to psychology suitable for courses in introductory psychology or general psychology. It is based on Sternberg’s triarchic theory of intelligence, and approaches psychology from the standpoint of the multiple pathways that converge on an understanding of psychology—multiple theoretical paradigms, multiple methodologies, multiple styles of learning. This text comes with a full set of ancillaries available from the publisher. •

• Cognitive Psychology, now in its second edition (1999) and published by Harcourt Brace College Publishers, is an introduction to cognitive psychology suitable for courses such as cognitive psychology and cognition. It is based on Sternberg’s triarchic theory of intelligence, and emphasizes the importance of intelligence as an integrating concept in the study of cognition. This text comes with a brief instructor’s manual and with a test bank.

• Introduction to Psychology is now in its first edition (1997) and is published by Harcourt Brace College Publishers in their College Outline Series. This text is intended as a review of psychology, and is suitable as an ancillary for students taking the introductory course, or as a review for students studying for various examinations, such as the Advanced Placement psychology test or the GRE Advanced Test in psychology.

Major Honors Include:

• Early Career and McCandless Awards of American Psychological Association • Outstanding Book, Research Review, and Sylvia Scribner Awards of American Educational Research Association • Palmer O. Johnson Award, American Educational Research Association • Cattell Award of Society for Multivariate Experimental Psychology • Distinguished Scholar Award of National Association for Gifted Children • Past-Editor, Psychological Bulletin • Editor, Contemporary Psychology • Past-Associate Editor, Child Development, Intelligence • Past-President, Divisions 1 (General Psychology) and 15 (Educational Psychology) of the American Psychological Association • Distinguished Lifetime Contribution to Psychology Award, Connecticut Psychological Association • James McKeen Cattell Award, American Psychological Society • President-Elect, Division 24 (Theoretical and Philosophical Psychology), American Psychological Association • President, Division 10 (Psychology and the Arts), American Psychological Association • Guggenheim Fellowship • National Science Foundation Graduate Fellowship • National Merit Scholarship


- Also see work by Harvard University's Howard Gardner.


Sternberg on Psychometric G (a quote from his interview with skeptic magazine): “What I found at that time was that if you use the kinds of tasks that are used in intelligence tests, then you will get the g factor. That statement reflected analyses we did that instead of using individual difference analysis used process analysis. Even using process analysis, we got a general factor. So if you were to ask me, "Do I think that there is general factor in the kinds of tests that psychometricians use?" I would say "Yes." That is a different question from, "If you define intelligence, not just as IQ, but as involving more than what the IQ tests in fact test, is there then a general factor?" then I would say the answer is "No." So the way psychometricians operationalize it, you get a g factor.”

Note: There are three major schools of psychometric interpretation and only one supports the view of g and IQ.


Race and Genetics:

- Osborne and Suddick (1971, as reported in Loehlin, 1975) attempted to use 16 blood-group genes known to have come from European ancestors. Testing two samples, the authors found that the correlation between the 16 genes and IQ scores was not highly positive, as would have been predicted if European genes in Blacks increased IQ scores. In fact, the correlations were -.38 and +.01; far from supporting that prediction, if anything they pointed in the opposite direction.

- Zuckerman (1990) demonstrated the dubiousness of results obtained through race premises. He found much more variation within the designated groups than between them, and, like many other species, humans showed considerable geographical variation in morphology (p. 1134). Yee et al. (1993) reach a similar conclusion.

- A study conducted by Tizard and colleagues involving Caribbean children showed that there was no genetic basis for IQ differences between blacks and whites. The IQs of the children at the orphanage were: blacks 108, mixed 106, and whites 103 (Flynn, 1980; also see Richard E. Nisbett, Race, Genetics and IQ, in The Bell Curve Wars, 1995).

- Adjustments for socioeconomic conditions almost completely eliminate differences in IQ scores between black and white children. Co-investigators include Jeanne Brooks-Gunn and Pamela Klebanov of Columbia's Teachers College, and Greg Duncan of the Center for Urban Affairs and Policy Research at Northwestern University.

- According to most geneticists, human populations have never been separated long enough for anything but the most superficial traits to have developed; human physical traits overlap and grade into one another. As well, there is as much or more diversity and genetic difference within any "racial" group as there is between people of different racial groups. Traits like height and body shape offer much more genetic information than anything we use to designate the racial groups here in North America and elsewhere. Also, what is considered black in America could be considered white in Africa; that is, social ideas involving race differ from population to population. (See Cavalli-Sforza, Menozzi, Piazza, 1994 & 2000; Davis, 1991; Allen & Adams, 1992; Yee, Fairchild, Weizmann and Wyatt, 1993; also see Drayna, Manichaikul, de Lange, Snieder, and Spector, 2001; Holden, 2001)

- Also, IQ differences in the U.S. are not as drastic as some would have you believe. Many researchers put the difference at 7-10 points (Richard Nisbett, 2005; Vincent, 1991; Thorndike et al., 1986; Leon J. Kamin, The Bell Curve Wars, 1995). As well, this conclusion is only reached after lumping the entire population together as a single body. The truth is that blacks from different regions in the U.S. differ markedly in culture and achievement.

- In more than a dozen studies from the 1960s and 1970s analyzed by Flynn (1991), the mean IQs of Japanese- and Chinese-American children were always around 97 or 98; none was over 100. These studies did not include other Asian groups such as the Vietnamese, Cambodians, or Filipinos, who tend to achieve less academically and perform poorly on conventional psychometric tests.

-Stevenson et al (1985), comparing the intelligence-test performance of children in Japan, Taiwan and the United States, found no substantive differences at all. Given the general problems of cross-cultural comparison, there is no reason to expect precision or stability in such estimates.


Much evidence against Rushton and Lynn to come! Until then, see empirical evidence against Rushton, here:

Reply to Rushton: Review by Douglas Wahlsten, University of Alberta:

http://www.cjsonline.ca/articles/wahlsten.html

  1. ^ Ron Kaufman, The Scientist, Vol. 6, No. 14, July 6, 1992
  2. ^ See below. The leading critics of the fund include the SPLC, IQ critic William H. Tucker, and historian Barry Mehler and his Institute for the Study of Academic Racism.
  3. ^ Neisser, who was the chairman of the APA's 1995 taskforce on intelligence research, states that race and intelligence research "turns [his] stomach," in a review of Lynn's The Science of Human Diversity: A History of the Pioneer Fund (2004). He also states, "Lynn's claim is exaggerated but not entirely without merit: 'Over those 60 years, the research funded by Pioneer has helped change the face of social science.' . . . Lynn reminds us that Pioneer has sometimes sponsored useful research - research that otherwise might not have been done at all. By that reckoning, I would give it a weak plus."
  4. ^ See for example Morton Hunt's The New Know-Nothings: The Political Foes of the Scientific Study of Human Nature (1999; pp. 63-104), which argues that recent years "have witnessed a dramatic upsurge in efforts to impose limits on the freedom of social scientists to explore controversial research questions, particularly questions that could yield answers distasteful to those with certain sociopolitical or ideological agendas". Robert A. Gordon, criticized for accepting grants from the Pioneer Fund, replied to media criticisms of grant-recipients: "Politically correct disinformation about science appears to spread like wildfire among literary intellectuals and other nonspecialists, who have few disciplinary constraints on what they say about science and about particular scientists and on what they allow themselves to believe." (Gordon 1997, p.35)