- When I am relatively happy with a section, it will be "locked" with a slight green background. All other sections are works in progress and may not represent all, or even most, of what I intend to say on those matters.
Over the years, I've learned several interesting things about Wikipedia. Let me share them with you:
Why edit warriors can win
- Please comment on this section here.
There is a problem I've noticed recently (somewhere around 2008) with how WP:3RR and edit warring are dealt with.
Consider the following case: users A and B disagree about content. Discussion (and even some dispute resolution procedures) has been tried (on this article or others), and the users still cannot agree. An edit war ensues. User A makes 3 reverts, user B makes 4 reverts. Will user B report user A to 3RR? Of course not. Will user A report user B to 3RR? Perhaps, but there are admins who will judge them both guilty of edit warring and block them both. Thus user A, who did not break the 3RR, risks receiving the same penalty as user B, who did. User A cannot win an edit war against user B (because he is not willing to break the 3RR), and the article stays at user B's version.
Reverting less, by itself, will not do much good. Reporting user B after 3 reverts will not achieve anything, of course (since 3 reverts are allowed...), and even with two reverts, user A may be accused of edit warring.
Unless our user A wants to go for a dual permblock (in other words, to kamikaze himself against one of the many edit warriors - the user B's - he has to deal with), or to slowly see his reputation ruined over time, his only option is to give up on the article(s) and let the revert warriors win, at least temporarily (thus lowering the quality of Wikipedia, whose articles have been compromised by the edit warrior(s)).
Of course, users A and B don't operate in a vacuum. One would expect the community to get involved and stop the edit warrior. But that is not always the case. When the targeted articles are low-key ones, with few or no editors watchlisting them (so nobody but users A and B is active on them), the bazaar principle that many eyes weed out the bugs (including disruptive edit warriors) fails to kick in. Sure - there are dispute resolution procedures, which allow user A to ask for those extra eyes. They are, however, lengthy and require user A to devote time to explaining why a 3RR violator should be stopped (one would think this would be obvious), time during which user A will not create content, police other articles, or do other constructive things. Not to mention that in most dispute resolutions, neutral editors (RfC and noticeboard commentators, mediators, etc.) rarely involve themselves in the edit war itself; they may agree with user A on talk - but what good is that if user B keeps disagreeing with everyone and reverting? Sure, editor A could try escalating the dispute resolution to ArbCom (allowing the edit warrior to compromise x articles for months before an ArbCom ruling, and assuming the edit warrior is active enough to warrant ArbCom attention); he can ask more editors for input and involvement in the article (leading to accusations of forum shopping/canvassing, and often failing to attract attention to anything but himself anyway, particularly when organized teams of edit warriors (WP:TAGTEAM) step in); or he can engage in edit warring and hope that 3RR will be interpreted correctly. Or, of course, he can just give up.
This gets even worse if user A is an established editor who does not want his block record tarnished with blocks for edit warring. Even more so if user A is active enough to deal with several user B's on various articles (so he may have to handle several dispute resolutions at once); multiply this by a long history on Wikipedia and you have a user A who tries to ensure quality in many articles, defending them from periodic edit warriors, but who can easily be portrayed, in bad faith, as a long-term edit warrior.
In other words, there is an increasingly dangerous interpretation of EDITWAR ("it takes two to tango, so both are equally guilty") that replaces the clear interpretation of 3RR ("one reverts 4 times and he is guilty"). In effect, this penalizes users who try to stop edit warriors and empowers hardcore edit warriors - who either will not get reported and will succeed in pushing their version, or who will at least take down the user who tried to stop them (getting them blocked, or at least tarnishing their reputation). The incentive increasingly becomes "let the disruptive edit warrior, user B, do whatever he wants; he is not worth my time and stress".
Of course, edit warring is bad; there is no disagreeing with that. But it is inevitable that it will occur, due to edit warriors (particularly the "true believers" I discuss in the following section). It does not take two edit warriors for an edit war to occur - this is a common misconception. It takes one edit warrior, and a good editor who is not willing to let the edit warrior disrupt the article. If nobody breaks the 3RR, but both keep at 3 reverts, that is when articles should be protected and dispute resolution engaged; there is no other choice. But if one editor breaks the 3RR and the other doesn't, the solution - and the difference - is simple.
Solution: enforce the simple 3RR, not the vaguely interpreted EDITWAR. 3RR was created so that one would not have to go through lengthy procedures in order to stop obvious edit warriors. 3RR draws a simple and clear distinction between a user still inside wiki editing policies and one outside them. Invoking EDITWAR against both sides, when only one side violated 3RR, creates confusion, making 3RR increasingly useless (as fewer edit warriors will get reported and thus edit wars will continue), unpredictably unfair (random in outcome, as various admins pay varying attention to EDITWAR), and, worst of all, against the spirit of our project, allowing edit warriors to win content disputes and have their version stabilized on Wikipedia. EDITWAR should be used to penalize both sides only if neither has violated 3RR and both failed to seek dispute resolution and kept edit warring. If both sides violated 3RR, block both.
On the most dangerous of mindsets
- Please comment on this section here.
If an editor thinks he is truly neutral and has no POV, he is not only violating WP:NPOV (which clearly states that all editors have a POV: "All editors and all sources have biases"), but he is likely to refuse to ever compromise over content ("because he is not biased; on the contrary, he is completely neutral, right and represents the truth"). One cannot reason with such a user (one can try, but one will always fail). Let's call such users "true believers" for the purpose of further discussion. There are also editors who likely realize they have some POV, but believe it is the "correct" one. Known as "POV pushers", they differ little from "true believers", and for now let's lump them together, as their actions and consequences are much the same (besides, few "POV pushers" will admit they have a POV, so in effect they claim to be "true believers" anyway).
"True believers" will commonly edit war, or rather, force those who disagree with them to edit war: since "true believers" cannot be made to change their mind on talk, the other side will have little recourse but to deal with them in article mainspace, and of course the "true believers" will defend their changes in article space (since they find it hard to let their "truth" be erased). The more impulsive of them commonly have significant 3RR block histories, the others - just long histories of edit warring.
"True believers" have a lot of bad faith: since they are obviously "right", they believe their opponents are "wrong". They will often discuss and criticize other editors, in more or less elaborate fashion, creating wiki battlegrounds. They will not shrink and will often start dispute resolution procedures, since they believe they are "right"; they will be shocked when the community fails to see their "truth" - which will often result in claims that others (mediators, arbcom, etc.) are biased and part of the "evil cabal".
Often, good, reasonable editors will give up or leave because they find dealing with "true believers" too stressful: "true believers" can't be convinced that they are not "100% right", they will edit war in defense of their claims, and they will accuse their opponents of various wrongdoings. Dealing with them is just a pain in the three letters.
Do note that "true believers" may edit in non-controversial content areas (either ones unrelated to their "true belief" or ones where it doesn't matter), contribute good content in them and be civil to editors they meet there. They are not trolls, damaging all they touch: their content disruption is much more selective and much more difficult to spot by a neutral editors, unfamiliar with the content issue (albeit the neutrals will likely pick up on "true believers" habitual edit warring and incivility).
In my experience, "true believers" are most common in articles related to national histories and modern politics. Presumably religion is similarly plagued, but it's not a set of topics I edit.
Solution:
- ) Realizing what POVs one has is crucial. Admitting them is difficult but also crucial, as is a willingness to negotiate around them and compromise when necessary. Remember: NPOV requires all significant POVs to be represented. That includes ones you may disagree with, albeit due weight is important. Realize your POV, don't be afraid to admit to it publicly, try to understand the other side and try to reach a compromise. And remember: compromise is sometimes known as the situation where everyone is equally unhappy :) Learn to live with it.
- ) "True believers", once identified, should be banned. If one is not willing to compromise, hundred or so wikis with declared POV (like Conservapedia) are that'a'way.
On radicalization of users
- Please comment on this section here.
Most of us come to Wikipedia as good-faithed, but naive, editors. Over time, we realize the depth of wikipolitics and the ulterior motives of some editors, and we grow more cynical and assume less good faith. It's a sad story, but one that simply parallels real life: growing out of childhood and teenage idealism, and moving into the adult world of realpolitik.
How badly you'll be hurt by the wikiworld depends, just as in real life, on where you come from and where you live (edit). You'll be exposed to more radical views if you live in the Israeli-Palestinian disputed areas than if you live on a peaceful farm in Canada. If you edit rarely visited, uncontroversial Wikipedia articles (about your local town, or uncontroversial, obscure science), you'll have a more positive experience than if you deal with articles about abortion, global warming or the Holocaust.
The more one runs into highly POVed users ("true believers" are the worst), who tend to cluster in popular and/or controversial articles, the more likely one is to slowly radicalize against their POV (the "or" is important, as some controversial articles are very unpopular - little-known facts of Polish-Lithuanian history, usually kept alive by extreme nationalists of one side or another, for example). Even if you are the most kind-hearted peacemaker, after living for a few years in a conflict area you will come to despise the radicals on both sides, who cause you stress and who will target you simply because, in their mindset, if you are not with them, you are against them. And if you prefer one POV over another (which is completely legitimate and expected), you may slowly find yourself drifting more and more into extremism.
This includes:
- not editing/creating certain articles ("why help them?", "they can create it themselves");
- editing/creating certain articles ("how do you like this?"),
- assuming more good faith about your side than about the others, leading to
- defending problematic editors of one's side (also referred to as "grooming pet trolls" - with unspoken rationale "he may be disruptive, but we need him to combat the even more disruptive editors on the other side"),
- supporting problematic editors in content disputes / discussions / eventually, even harassment of others ("because they have the right POV")
- grouping "enemy" editors into "tag teams" ("users A and B share similar POV and often work together"), and assuming that they have ulterior motives and at the very least are working against your side (WP:CABAL)
- and associating an entire group of editors with a given "tag team" ("users A and B belong to nationality M, so all users of nationality M are as disruptive as A and B").
Some of the above are acceptable, others are borderline, others are bad. Sometimes you may be right (there "may" be a cabal to get you - example), but more often you are not.
Over time, this leads to more and more WP:BADFAITH on all sides. Good-faithed newcomers will either leave the arena of conflict, finding it too impolite or stressful, or will become more and more cynical, bad-faithed and radical, to an increasing extent supporting one side or at the very least advocating the use of the "ban hammer" with less and less thought ("let's pull the entire neighbourhood down, it's impossible to save the ghetto"). With time, you'll find more and more examples to support bad faith (finding even one "true believer" a year may give you a decent sample of "evil" after a few years...). The disruptive users will become more disruptive, while good, open-minded editors will be increasingly likely to choose sides or withdraw from the given content topics.
Solution: assume as much good faith as you can, stay moderate, and even support restrictions/bans on disruptive editors (including "true believers") who support your side.
On the evils of anonymity
- Please comment on this section here.
Anonymity protects your true identity. There are many good reasons for it. But it also allows others (including most "true believers", or pure and simple trolls) to hide under a no-name account while launching uncivil attacks against others - including non-anonymous users. Non-anonymous users are thus more likely to leave this project, as they don't want their real-life reputations ruined. Yet non-anonymous users are inherently better for Wikipedia than anonymous ones: first, they have the moral courage to associate their real-life persona with their views; second, they are less likely to risk being uncivil or dishonest (since their real-life reputation is at stake); and third, they bring identifiable qualities (proof of expertise in various subjects) to discussions.
Anonymity has its advantages (for example, for users editing from oppressive regimes, where their participation in this project may be illegal), but it confers no benefit to the project beyond that. Most anonymous editors simply lack the moral courage required to link their real persona to their POVs (again, I can think of good reasons for it - e.g. if one edits articles on porn or other taboo subjects, even with the best intentions, it may not be something one wants associated with one's name). So certainly, I have no problem with anonymity. But in most cases it's not helpful - while non-anonymity is.
Anonymity makes it easier to engage in dubious editorial behavior - from edit warring to personal attacks. Sure, people do get attached to their anonymous personas, and some have considerable respect on Wikipedia, but in the end, an anonymous editor with a bad reputation can always "restart", even after a block. A non-anonymous one cannot.
Yet being non-anonymous is not promoted in the Wikipedia community. This is simply illogical. I am not arguing for the banning of anonymous accounts; they should be allowed. However, being non-anonymous should be promoted, and non-anonymous users should be rewarded for their special dedication to this project.
Wikipedia officially wants to attract academics and become more reputable thanks to their participation. Three examples from academics I know illustrate their dissatisfaction with how they are treated on the site. Two of them revealed their true names; two of them left and one is considering leaving the project. Why?
- Editor A, as his first edits, added some external links (some were quite relevant, some were indeed too detailed). He got accused of spamming by an anonymous admin; he got offended and left, saying that he has better things to do with his time than to contribute to a project and get such strong words in return (I know he was planning a major rewrite of several key science articles - he never did it).
- Editor B, considered by many editors a good content creator and civil discussant, got accused of "academic dishonesty" on an article's talk page by an anonymous user known for rash and uncivil remarks, and left soon afterwards, saying that he cannot participate in a site and risk such slander becoming associated with his real name.
- Editor C, contributor of hundreds of high-quality articles, got accused of "copyvio" in the middle of drafting a new article (the original source was already referenced and the paragraph in question had been partially rewritten from the start). He was highly offended by the accusation of plagiarism, and stated that it does not make him want to contribute more if he can face such slanderous accusations.
Solution: there should be an officially recognized level of usership for non-anonymous users. There should be a way to certify you are who you say you are (for example, by making a $5 donation to the project with a credit card bearing your name, or by demonstrating (via a website, blog, etc.) that you are who you claim to be; Template:User committed identity may also offer some solutions). Non-anonymous editors should be very strictly protected from slander and flaming (akin to WP:BLP), and there should be a protection level for articles that would allow only non-anonymous editors to edit them (thus shutting anonymous "true believers" out of them).
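For illustration only: a committed identity boils down to publishing a cryptographic hash of a private string that ties the account to a real person; the secret is revealed only if the account is ever compromised. A minimal sketch (assuming Python and SHA-512; the string and names below are hypothetical, not from the template's documentation):

```python
import hashlib

# Hash a private string linking the account to a real-world identity.
# Only the hash is published on the user page; the string itself stays secret.
secret = "Example Q. Editor <example.editor@example.org>, en.wiki user Example"  # hypothetical
digest = hashlib.sha512(secret.encode("utf-8")).hexdigest()

print(digest)  # publish this value; reveal the secret later to prove you control both
```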
Why good users leave the project, or why civility is the key policy
- Please comment on this section here.
I've seen too many good editors - including real-life academics, for example - driven away from Wikipedia by anonymous "true believers" and by the worsening atmosphere due to radicalization. In most cases, the same process occurs: good editors get involved in pointless, stressful discussions with "true believers" and become the target of their uncivil personal attacks (baseless accusations of "academic dishonesty", "nationalism", "antisemitism", you name it). They may also get involved in some edit warring (since "true believers" like to edit war). That leads to stress ("why am I contributing to this project, if all I get as a thank you is flame and trolling?"). Good editors will then leave, not willing to spend time creating quality content in exchange for flames and, in the worst cases, slander against their real-life persona.
Solution: enforce WP:CIVIL, promote non-anonymity and ban "true believers".
We are here to build an encyclopedia. This is a principle many increasingly forget.
We are not here to create another giant discussion forum. Discussion is fine up to the point it disrupts building an encyclopedia.
Editors who build an encyclopedia should be encouraged. Editors who chase away encyclopedia builders should be discouraged.
Editors are not equal. Of course, even the greatest content contributors should not be given carte blanche with regard to personal attacks and such: they may drive away more people who would have created more content than they themselves do. However, experienced users and prolific contributors should be given the benefit of the doubt when they say they know more than new and less active users.
Editors who are more likely to create content than to damage the encyclopedia (with edit warring, uncivil remarks, etc.) should not be blocked.
Solution: WP:IAR, WP:NOTBUREAUCRACY
On adminship
- Please comment on this section here.
Ideally, admins should be paragons of virtue, examples to follow, liked and trusted by the whole community.
However, we don't live in an ideal world. Admins are normal people and occasionally make mistakes. Further, like anybody in the spotlight (famous people in the real world... or Jimbo here), they cannot please everybody. The more active an admin, the more likely it is he will become somewhat controversial and will have enemies. Heck, if an admin bans trolls, the trolls will dislike him :) Admins often police Wikipedia, enforcing policies and dealing with troublesome users: consider how police officers in real life can be unpopular. If an admin edits content, those with the opposite POV, particularly "true believers", will dislike him. In some areas, plagued by tag teams and such, admins can become targets of harassment - when they try to enforce the policies, the editors who disrespect them will try to paint them in the worst possible light. Tough life, but adminship should not be seen as a popularity contest.
The above not only makes it likely that an average admin will not be liked by everybody, but it also influences who becomes an admin. In theory, as long as an admin respects NPOV and other content policies, his personal POV (e.g. whether he is pro-life or pro-choice, or pro-Russian or pro-American, and so on) should be completely irrelevant to whether he should or shouldn't become an admin. Yet I have seen many RfAs where a good editor who had shown some controversial POV (not breaking any rules, just clearly identifying with one or more POVs) was attacked by his POV opponents ("true believers"), who flocked to his RfA with oppose votes (WP:TAGTEAM?). That made many neutral editors express doubt along the lines of "if enough people oppose him, there must be something to it", and choose not to support him. That, in effect, torpedoes such nominations.
On the opposite side, it is also common (if much less problematic) for an admin candidate to get a high percentage of supports from editors who share his POV. I say it's less problematic because I have not seen ineligible candidates elected simply because editors with a sympathetic content POV overwhelmed neutral editors, but supporting somebody just "because they have a similar POV" is not a good argument for adminship. However, this is most apparent in cases where an admin candidature has been targeted by one group (those who don't like the candidate's POV), and then the other group(s) come to the rescue. Usually this indicates a high degree of radicalization among the editors involved.
Perhaps the problem can be best illustrated by words of three editors I know:
- the first editor is a Polish Wikipedian whose editing pattern on en wiki differs from his editing pattern on pl wiki; also, from discussions with him on talk, I realized that he was not editing certain articles he was interested in. When I asked him why, he told me: "If I edit those articles or discuss them, I will become controversial. A controversial editor cannot win RfA. I have to be an uncontroversial nobody with some positive edits in noncontroversial articles to win RfA. Once this happens, I'll be able to edit what I want and show my real POV."
- the second editor commented along similar lines, saying that if one wants to be an admin, he should avoid any controversial articles and avoid expressing a POV. He also suggested that previously controversial editors, if they want to become admins, have no choice but to vanish, edit uncontroversially under a new account for a few months, and then apply at RfA, which would otherwise be stacked by their opponents.
- a third editor told me: "you would never pass an RfA today; you passed it when you were a nobody - today you are somebody, and somebodies don't pass RfAs".
The system - of electing new admins, and of criticizing existing ones due to their POV - is obviously broken: it can be gamed, and it is unfair to editors who express an unpopular POV before becoming admins (and who thus have a much smaller chance of becoming admins than if they had hidden their true POV and "cheated" the community).
Solution:
- ) If an admin abuses admin powers, he should be desysoped.
- ) If an admin gravely abuses community trust, he should be desysoped. All admins should be open to recall.
- ) Admins, just like everybody else, are entitled to their POV(s) and cannot be expected to abandon them and become NPOVed angels. Being an admin should not be equated with having no POV. An admin's content edits should be ignored when considering his conduct as an admin. You should not care whether a police officer is pro-life or pro-choice; you should care whether he is using his powers correctly (is he shooting at random people?) or is otherwise unfit (breaks common norms of civility, decency, etc.).
- ) Admin candidates should not be penalized for being unpopular or for having a particular POV. Existing admins should not be accused of having lost the trust of the community if those claims center not on any misuse of admin powers, but on their content POV and content edits.
- ) If an admin has some POV enemies ("true believers", "tag teams"), they should not be allowed to create an illusion that the community in general has lost trust in him.
Mud sticks, or on activity of editors
- Please comment on this section here.
A novice was once curious about the nature of the Edit Count. He approached the Zen master and asked, "Zen master, what is the nature of the Edit Count?"
"The Edit Count is as a road," replied the Zen master. "You must travel the road to reach your destination, and some may travel longer roads than others. But do not judge the person at your door by the length of the road he has travelled to reach you."
And the novice was Enlightened.
I love this quote (source), but there is more to the issue than the quote covers.
The type, quality and amount of an editor's activity are of paramount importance. To a varying extent, this can be quantified (it's called social science, and the social sciences use statistics; see Wikipedia:Wikipedia in academic studies, a page I created and maintain, for hundreds of relevant studies, which even quantify such ideals as "trust" on Wikipedia (e.g. Dondio and Barret, 2006)). There are tools (WP:COUNT) that allow an analysis of editing patterns. Though at first glance they may seem to merely feed Wikipedia:Editcountitis, in fact they can provide much insight into editing patterns (some can break down edits by type, time and variously defined quality).
First, type of activity. Editors who create content and/or do wikignomish tasks (including building our policies and such) should be valued over editors who treat Wikipedia as a discussion forum, or even worse, a place to flame and create battlegrounds as on Usenet. The editcounter tools allow one, to a varying degree, to discern percentages of edits per namespace (and chronological trends); obviously, a user with 90% of his edits in mainspace is different from, and likely more valuable than (as a likely content creator), a user with 90% of his edits in mainspace talk/user talk (and who is thus likely a flame warrior).
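As a rough illustration (not any particular tool's code), the namespace breakdown such tools produce can be sketched as follows, assuming the editor's contributions have already been fetched as a list of namespace names:

```python
from collections import Counter

# Hypothetical sample: one namespace name per edit by the editor under review.
edits = ["Main", "Main", "Talk", "User talk", "Main", "Wikipedia", "Main", "Main"]

counts = Counter(edits)
total = sum(counts.values())
for namespace, count in counts.most_common():
    print(f"{namespace}: {count}/{total} ({count / total:.0%})")
# A profile dominated by "Main" suggests a content creator; one dominated by
# talk namespaces suggests an editor spending most of his time in disputes.
```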
Second, quality of activity. We have various peer-based ways of recognizing the quality of content (the ability to write Featured or Good articles, receiving barnstars from a wide group of neutral editors (not tag team buddies!), and so on). There are also some more esoteric ways to analyze quality (see Wikipedia:Wikipedia in academic studies for more). Bottom line: the better the quality of a user's edits, the more valuable he is.
Third, amount of activity. Again, we should be careful of editcountitis, but the more active the user, on average, the more valuable he is to the project.
There is a very important difference between judging an editor based on the total number of things he has done, and judging him by comparing him to an average editor. Consider the following examples:
- who is the better content creator? Editor A, who has created 1 Featured article per month for the past 6 months, or editor B, who has created 10 Featured articles over 4 years? What about editor C, who has created one DYK weekly for the past year, compared to editor D, who has been creating 3 DYKs weekly for the past half year? Sure, the answer here is "they are all great content creators". But what about:
- who is the revert warrior? Editor A, with one 3RR violation - who joined Wikipedia this week? Editor B, with six 3RR violations, editing for 6 months? Or editor C, with two 3RR violations, editing Wikipedia for 4 years? Consider my first essay above, about the inefficiencies of ANI/3RR: if an admin just looks at the recent history of an article and sees two editors (B and C) edit warring, is he right to call them both edit warriors? I think not.
- who is uncivil? Editor A, with 2,000 edits per month and one confirmed uncivil comment per month, editing Wikipedia for 5 years and thus with over 50 uncivil comments on his record? Editor B, who joined this project a month ago and has made 1k edits, 5 of them uncivil? Editor C, who has been editing for a year, with about 20 edits per month, but half of them uncivil (over a hundred total)? Without knowing the average level of incivility on this project, can we say that editor A is uncivil? Is one uncivil comment per month, among 2,000 edits, enough to say that? What if it is way below the level of incivility of the average editor?
- who has lost the trust of the community? Let's say that an average editor makes 10 edits per day and is criticized once every 10 days (thus once for every 100 edits). An editor who is 10 times as active (100 edits per day) and is criticized half as often per edit (once for every 200 edits) will still rack up one criticism every 2 days. If one states during a dispute resolution "editor A is disruptive and has lost the trust of the community; he is criticized 5 times as often as an average editor, every 2 days instead of every 10", one does an injustice to editor A, who is actually half as disruptive per edit as the average editor - he is simply 10 times as active (see the sketch after this list). The only "fault" editor A has is that he is 10 times as active as an average editor - should he be ordered to limit his activity? Or should we say that "if you are ten times as active, you should be ten times as civil as an average editor"? Ridiculous, isn't it? Yet I have seen active editors, civil above the average, criticized in that very way.
- ignoring time patterns can be perilous. Time patterns show radicalization: if an editor has been editing for 5 years, created much content in his first 3 years, but in recent years has mostly edited on talk pages, it likely shows he has radicalized and needs to be "reformed back".
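To make the per-edit arithmetic concrete, here is a minimal sketch of the "lost trust" example above (hypothetical numbers, comparing per-edit rather than per-day rates):

```python
def per_edit_rate(criticisms_per_day: float, edits_per_day: float) -> float:
    """Criticisms per edit - the figure that matters, rather than the raw total."""
    return criticisms_per_day / edits_per_day

# Average editor: 10 edits per day, criticized once every 10 days.
average = per_edit_rate(criticisms_per_day=1 / 10, edits_per_day=10)   # 0.010 = 1 per 100 edits
# Active editor A: 100 edits per day, criticized once every 2 days.
active = per_edit_rate(criticisms_per_day=1 / 2, edits_per_day=100)    # 0.005 = 1 per 200 edits

print(f"average editor: {average:.3f} criticisms per edit")
print(f"editor A:       {active:.3f} criticisms per edit")
# Editor A collects criticism five times as often per day, yet per edit he is
# criticized only half as often - judging by raw totals inverts the conclusion.
```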
Editcountitis teaches us a very important lesson: total numbers are much less meaningful than averages, particularly when combined with time patterns. Assuming one's habits don't change over time (which is not always true - editors can radicalize, or de-radicalize - but let's leave this aside for now), total numbers can penalize active and long-term editors: consider a dispute resolution in which an editor with 100 days of editing history, 10 edits per day, and 1 uncivil edit per day (thus 1,000 total edits, 100 of them uncivil comments) claims that he is no more uncivil than an editor with 1,000 days of editing history, 100 edits per day, and 1 uncivil edit per 10 days (thus 100 uncivil comments as well - but spread among 100,000 total edits!). Both editors can present the same number of diffs, but it is clear one of them is much more of an uncivil flame warrior than the other. That said, this is where time-pattern analysis should be used: if our active editor has radicalized recently - if 75 of his uncivil comments occurred in the past 100 days of his activity - the situation is different than if his incivility pattern has been stable.
Another factor to consider is that "mud sticks". On Wikipedia, things are publicly archived and can be easily brought back with links and/or diffs. The more active one is, and the longer one has edited, the more problematic edits one will accumulate. Unlike in real life, where history fades and is forgotten and/or becomes more and more difficult to trace back, on Wikipedia past edits can be discovered and brought back much more easily. Further, the more active one is, the more feathers one is likely to ruffle (as I've noted in the discussion of adminship). If an editor A (or a tag team) has been criticizing editor B for 2 years, editor B's reputation will be much more damaged than if he had been doing this for a month. Being able to put things in perspective via time and activity patterns is crucial; otherwise, if one simply considers the total numbers, the simple conclusion is: the more active an editor, the worse he is (if we look at disruptive edits; of course, analogously, if we look at good edits, the reverse is true).
If we can calculate, or at least estimate, averages per edit and per unit of time, this tells us much more than total numbers. Adjusting this for time patterns can help detect radicalization; such editors should be cautioned and mentored (this should, however, be a good-faithed, friendly approach - not one punishing and alienating them, at least not until they show no sign of improvement).
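As a rough sketch of what such time-pattern analysis could look like (hypothetical threshold; the numbers reuse the 75-of-100 example above), one could compare an editor's recent per-edit incivility rate against his lifetime baseline:

```python
def rate(uncivil: int, total: int) -> float:
    """Uncivil comments per edit."""
    return uncivil / total if total else 0.0

def looks_radicalized(recent_uncivil: int, recent_total: int,
                      lifetime_uncivil: int, lifetime_total: int,
                      factor: float = 2.0) -> bool:
    """Flag an editor whose recent rate is at least `factor` times his lifetime rate."""
    return rate(recent_uncivil, recent_total) >= factor * rate(lifetime_uncivil, lifetime_total)

# 75 of 100 lifetime uncivil comments fell in the last 100 days (10,000 of 100,000 edits):
print(looks_radicalized(75, 10_000, 100, 100_000))  # True - a case for friendly mentoring, not a ban
```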
With the above tools, editors can be ranked. Ranking, of course, is not perfect and is not unilinear (editors should have several ranks), but to a certain extent it allows an important, objective comparison of editors. Such a comparison is vastly preferable to anecdotal comparisons based on cherry-picked examples (often made with the goal of slandering an editor).
Solution:
- ) Development of tools for the analysis of user editing patterns should be encouraged. Editors can be ranked.
- ) Flame and edit warriors, true believers, and so on can be detected and should be warned, mentored and, if unreformable, banished from the community.
- ) Radicalization can be detected and should be counteracted as above.
- ) There are small lies, big lies and statistics. When making decisions, statistics are important, but care should be taken to ensure they are not biased. People arguing their case will usually cite statistics that support it, but which may be highly misrepresentative (e.g. if an average editor makes 1 uncivil comment per 1,000 edits, an editor who makes one uncivil comment per 10,000 edits but has been editing for years and has been extremely active can appear somewhat uncivil - this is, however, an unfair judgment, in fact penalizing an editor for above-average activity).
On cabals, canvassing and cooperation
- Please comment on this section here.
Evil cabals are rare. However, the project is built around collaborative software and cooperation with other editors. 99.9% of the time editors cooperate, they do so to improve this project. Sometimes they will get accused of doing it to damage the project. This often occurs due to radicalization, as editors on some subjects divide themselves into camps and increasingly assume bad faith about the other side. This may lead to a self-fulfilling prophecy and the formation of defensive cabal(s): when a group of editors is accused of a conspiracy often enough, they may form one simply to defend themselves more efficiently.
Model of mass radicalization and conflict generation
- Please comment on this section here.
1. In every content area, a small percentage of editors display the signs of being "true believers" (uncompromising POV pushers). True believers misunderstand or ignore WP:NPOV; they act as though their POV were NPOV, refuse to recognize they have a POV (they often claim they represent NPOV), refuse to compromise on content with editors of other POVs, and treat all who disagree with them as enemies. They frequently edit war and refuse to back down on talk.
2. The Wikipedia model in general, and content-related dispute resolution procedures in particular, do work; thus "true believers" in most cases find themselves in the "losing position" - the neutral/mainstream community ensures their POV is given only due weight.
3. This may, however, take much longer in highly specialized content areas, where fewer neutral editors will notice disputes. There, the relatively few editors know each other much better, and radicalization (the process by which a normal editor moves closer and closer to being a "true believer" - at the very least, assuming good faith for "their side" and bad faith "for the others") is more common, leading to the rise of tag teams or at the very least the formation of content-based sides/camps/alliances/etc. Thus battlegrounds are more likely to arise in such content areas (but the model is probably true for all content areas).
4. Because "true believers" are likely to lose content disputes, they turn to harassment, personal attacks, and similar. Whether they do it on purpose of due to frustration of losing one content battle after another, the end result is a high count bad faith accusations and battlegrounds in articles they frequent and where they clash with others. This is were ArbCom can help, by identifying and restricting/banning most disruptive "true believers". Radicalized users can also be identified, and helped with some advice/mentoring.
5. Editors who find battlegrounds uncivil leave the project. Only the harassers and the victims with "thickest skin" remain.
Technical note: if both parties ("true believers" and "victims") claim they are right, how does one easily identify who is who? There are two ways: 1) Look at who is supported by neutral editors (moderators, etc.). Two caveats: users involved in the content area may be biased due to radicalization, and users "just passing by" may be confused by "sticking mud". 2) Look at content creation: users who can write peer-reviewed and recognized content (FA/Reviewed A/GA) probably know more about NPOV than those who don't.
Solution: Mentor/restrict/ban true believers, mentor radicalizing users, ensure that civil editors are not chased away.