Wikipedia:Requests for comment/English Wikipedia readership survey 2013

It is hereby proposed that we conduct a survey of our readers, with a dismissible notification on every English Wikipedia article (sitenotice). This could be a one-off, or could be a regular survey (e.g. annual).

The core idea is to get input on major issues concerning the future of Wikipedia from readers, to supplement the usual model of discussions dominated by highly active Wikipedians. A model for this approach is the m:Research:Wikipedia Readership Survey 2011; this was run by the Wikimedia Foundation and focussed on technical issues. The proposal is to use a similar approach for issues of concern to English Wikipedia, where the community thinks specific questions can usefully be asked of readers. A survey would not exclude active users, but would seek to identify them (e.g. by self-declaration of a rough editcount and rough account age) so that statistical analysis can be done to identify any systematic differences between readers and active editors.

This page is primarily about the principle and mechanics of a survey. Specific questions or topics should be discussed on a dedicated subpage.

Should we conduct a survey?

No, not if it is conducted by invitation on mainspace pages. It would be ripe for extreme non-response bias. The questions so far suggested touch on Wikipedia policy, and without bias control it invites gaming. Surveys need to be driven by a need for information. I see no information need driving this, and it looks pointless. Wikipedia should not waste readers' time. --SmokeyJoe (talk) 11:51, 5 May 2013 (UTC)[reply]

See #Non response bias below. --Anthonyhcole (talk · contribs · email) 22:15, 5 May 2013 (UTC)[reply]

I don't see the harm in doing a survey as long as the limitations of the survey are kept in mind when making it and when analyzing the responses.-- Brainy J (previously Atlantima) ~~ (talk) 18:48, 23 May 2013 (UTC)[reply]

No. I am opposed to any mechanism that requires any sort of response to opt out. I have a very hard time imagining an issue so fundamental to Wikipedia that the Wikipedia project -- one based on the idea that anyone can opt in and contribute -- should try to coerce (even in a minimal way) comment from those who choose to retain their status as non-contributors. BSVulturis (talk) 17:06, 5 June 2013 (UTC)[reply]

Yes, I essentially agree with Brainy J (talk · contribs), above. — Cirt (talk) 17:51, 10 June 2013 (UTC)[reply]

Publicising the survey

  • A notification on every article? That would get supremely annoying after about 15 seconds. ‑Scottywong| communicate _ 17:39, 1 May 2013 (UTC)[reply]
    • It would be a MediaWiki:Sitenotice / MediaWiki:Anonnotice and dismissable (in the sense that dismissing once would dismiss it on all articles). Should be OK if done right. Rd232 talk 17:54, 1 May 2013 (UTC)[reply]
      • Dismissable for all users whose browsers are configured to accept cookies, right? Anyone whose browser doesn't like cookies will be forced to dismiss it on every article? ‑Scottywong| communicate _ 18:24, 1 May 2013 (UTC)[reply]
        • Well, if that's an insurmountable technical issue (in theory, the dismissal could be recorded for the account, eg in a usertalk subpage), we can have it just on the Main Page. Rd232 talk 18:42, 1 May 2013 (UTC)[reply]
          • The main page averages about 10 million hits per day. If 5% of users going to the main page have cookies disabled, then we're inconveniencing half a million people per day. The dismissal could be recorded for each account even if cookies are disabled, but that only works for users that actually have an account (and besides, cookies are required for you to stay logged in as you go from page to page). Are these surveys only intended to be displayed to logged-in users, or to everyone? Speaking personally, I hate surveys on web sites. If we're considering bothering readers with surveys, such surveys should be a last resort, and should only be used for the most absolutely important Wikipedia issues (in my opinion); issues that cannot otherwise be decided by editors using the usual methods of discussions and RfC's, issues that cannot otherwise be decided without input from non-editors, and issues for which gathering survey input from non-editors has a very strong justification. I currently cannot think of any issues that rise to that level of importance. ‑Scottywong| prattle _ 19:13, 1 May 2013 (UTC)[reply]
              • What are you on about? Every IP has a talk page, and can have a subpage of the talkpage to record a dismissal. But this is technical detail: postulate the condition that users can effectively dismiss the notice, and leave the rest for implementation. Rd232 talk 20:08, 1 May 2013 (UTC)[reply]
              • As to content: issues that cannot otherwise be decided by editors is a nonsensical criterion: everything can be decided by editors (even if it's by lack of consensus meaning the status quo wins), and ultimately will be even if the reader survey becomes a thing. The question is whether some of the time the decisions of the community of editors will be informed by knowledge of what readers (people who don't contribute to discussions) actually want. Rd232 talk 20:08, 1 May 2013 (UTC)[reply]
                • You're suggesting that we track anonymous users' dismissals by IP. How would we deal with people who access Wikipedia from shared connections? For instance, there are 3000 people on a college campus, and all of their internet connections go through the same public IP address. To Wikipedia, they would appear as the same single user, even though it would be quite unlikely that all 3000 people would have the same preferences regarding surveys. Or, what about people who access Wikipedia from dynamic IP addresses that change continuously? Their preferences would not be saved. I don't mean to dwell on the technical details, but I am dwelling on it because I can't postulate a situation where all users can effectively dismiss the notice and not have it return against their wishes. I'm not sure that such a situation exists. And if that's the case, then we can conclude that any survey is, by definition, going to aggravate some percentage of our readers, and the usage of surveys needs to take that fact into account.
                • I agree with you about the content of the survey, but I would still stress that any survey needs to have a very strong justification for bothering our readers. There should always be, in my opinion, a strongly demonstrated need for input from the readers, not just a vague "hmm, I wonder what our readers would say about this". Nor should the survey be used as a means to overturn something that already has consensus on Wikipedia (i.e. using surveys as a form of forum shopping). It's just my opinion that it should be used exceedingly sparingly. ‑Scottywong| express _ 20:58, 1 May 2013 (UTC)[reply]
                    • Re shared IP dismissal: yes, I thought of that, that once one person has dismissed the message, it'll be saved in that IP's subpage, and no-one else on that IP will see the message. That's not ideal, but not easy to fix; it might just be something we have to live with. But it can certainly be handled in a way that ensures no-one is ever aggravated by non-dismissability.
                    • Re content - yes, absolutely it should be used sparingly. My gut right now suggests that it might work well as an annual thing, where let's say every 1st January there is a reader survey, if in the preceding months potentially suitable issues have been raised and questions have been drafted, and if it's agreed by sometime in December that the result is worth putting to readers. Rd232 talk 23:00, 1 May 2013 (UTC)[reply]
  • Scottywong: I agree it should be used sparingly, and I agree with Rd232 that annual seems about right. Regarding the ability of shared IP users to see/dismiss the sitenotice (or, for that matter, vote), we may have to live with some readers being excluded if there is no easy solution. We probably need to exclude dynamic IPs too, if that's possible (I'm an IT moron), to reduce the possibility of gaming the results. So this might have to be a survey not of all readers, but of one reader per static IP ... which would still tell us something about the general readership. I would oppose limiting the survey to readers of the main page; that would vastly and unnecessarily reduce the sample size, and so the strength of the results and the franchise, for no apparent gain. It could be that the sample accessing en.WP via the main page is somehow qualitatively different from that accessing articles via a search engine, too. --Anthonyhcole (talk · contribs · email) 02:50, 2 May 2013 (UTC)[reply]

Overriding the community?

Currently the bulk of decisions are made by the editing community, with occasional override from the Foundation and/or the devs. If we start surveying the readership then theoretically power passes to them, but more realistically, as with countries that use referenda, power accrues to those who set the questions. Surveying and referenda are not consensus-based decision-making methods; instead they tend to focus outcomes on a narrow set of predefined answers. So if we implement a readership survey it is crucial that the setting of questions is done consensually by the community.

One of the commonest ways to rig discussions via referenda is to divorce projects from their costs and implications. "rebuild school x" and "raise tax Y by 2% in order to rebuild school x" are likely to get very different results, especially if you ask taxpayers. In the case of Wikipedia the most important resource is the time of our volunteer editors, and the risk of a survey is that it would be used to get readers to give their views on matters which require substantial extra time from volunteers. Not only do most readers not edit, but I suspect most have no idea as to the size and health of our editing community.

Many of the most difficult discussions here have been resolved by compromises that were designed to keep both sides of the debate still editing on the pedia. But surveys, and their slippery-slope sister referenda, are rarely designed to hammer out compromises. Whether it comes to our diverse citation styles, our wp:ERA compromise on CE/BCE v AD/BC, our plethora of weights and measures and even currencies, or the way we encompass multiple written variants of English, shifting from decisions designed to keep as many editors as possible to ones based on readership preferences could lead to a radically different pedia.

Now it could be argued that we could conduct a readership survey and treat it purely as a consultative exercise. But in practice that would be difficult to do with a public and published survey. If the press knew that, for example, 52% of readers preferred American English spellings, or that the majority didn't want us to delete so many articles that we think are "not notable", then it would be difficult to go against that. A commercial organisation can consult its customers, find out their views on, say, pricing, and then make the best decision for its shareholders, but it doesn't have to publicise the survey and disclose how it has used that data.

That said, I'm not against doing a readership survey where the questions are set and agreed by the community, and the options put to readers are all ones that we could live with. For example, I can't see us standardising on one variant of English without losing a large proportion of editors, but if the readers strongly preferred it we could make British v American English a user preference in the same way that the different versions of Chinese are.

Similarly, we could find out our readers' views on the contentious issue of whether to have an image filter. But in my view we should only ask the question if we are willing to act on the answer. (Disclosure: I'm the principal author of the only image filter proposal where the workload falls on the filterers rather than the existing community.)

My preference for this sort of survey is the same as for political ones. Only consult about options that you are willing and able to deliver. ϢereSpielChequers 06:45, 2 May 2013 (UTC)[reply]

Fair points. Yes, questions would need to be hammered out through a good discussion process. I'm not ruling out that this will prove impossible in practice, or that the community would not approve the resulting questions to go live - so no survey would actually happen. As to the issue of what weight the survey has - well, if the results are significantly different from what editors think should happen, then it ought to be possible to justify that disagreement on some rational basis. At any rate, I think it can only contribute to a healthy debate to know more about what readers think. Rd232 talk 12:32, 2 May 2013 (UTC)[reply]

Non response bias

I don't think this can be reasonably called a "Readership survey" with the more-than-likely massive non-response to response ratio. I know many readers, using Wikipedia frequently, who will not respond. --SmokeyJoe (talk) 11:55, 2 May 2013 (UTC)[reply]

Well, non-response bias is worth thinking about, yes. First, we can collect some basic demographic data and compare it to population statistics (this would give us some understanding of differences between responders and potential readers, at least). Second, the alternative to a survey like this is not some perfect census; it's just the status quo, relying solely on editors and not even attempting to get input from a wider audience. Rd232 talk 12:37, 2 May 2013 (UTC)[reply]
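(A minimal sketch of the responder-vs-population comparison suggested here: once responders report basic demographics, they can be re-weighted so their mix matches known population shares - a standard post-stratification correction for non-response bias. The age bands and population shares below are illustrative placeholders, not real data.)

```python
# Post-stratification sketch: weight each demographic group by
# (population share) / (sample share), so under-represented groups
# count for more and over-represented groups count for less.
from collections import Counter

def poststratify_weights(responder_groups, population_shares):
    """Return a weight per demographic group: population share / sample share."""
    counts = Counter(responder_groups)
    n = len(responder_groups)
    return {g: population_shares[g] / (counts[g] / n) for g in counts}

# Hypothetical sample that over-represents younger readers:
responders = ["18-29"] * 60 + ["30-49"] * 30 + ["50+"] * 10
# Assumed (made-up) census shares for the same bands:
population = {"18-29": 0.25, "30-49": 0.40, "50+": 0.35}

weights = poststratify_weights(responders, population)
# Younger responders are down-weighted, older ones up-weighted.
```

This only corrects for the demographics actually measured; it cannot fix bias along unmeasured dimensions, which is why the offline cross-check discussed below this section would still matter.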
Can you be more specific about what the real question or aim is? Broad surveys are usually bad surveys. Targeted surveys are easier to do properly. --SmokeyJoe (talk) 13:21, 2 May 2013 (UTC)[reply]
The "real aim" is simply to create a mechanism for getting wider input into discussions about shaping the direction of Wikipedia. Solely relying on current editors is not the optimal way to run all such discussions. Rd232 talk 13:58, 2 May 2013 (UTC)[reply]
I think "readers" can be readily divided into types. Browsers, fact finders, finding introductory material on a specific subject, reading for pleasure, seeking to copy/paraphrase some description on an already known subject. --SmokeyJoe (talk) 13:26, 2 May 2013 (UTC)[reply]
Well yes, some "why did you come here today" sort of question might well be helpful to understand responders better. Rd232 talk 13:58, 2 May 2013 (UTC)[reply]

With Wikipedia usage being so common (everybody I know has used Wikipedia), I think a representative survey of readers should be done by approaching random people in the real world. More at Sampling (statistics). --SmokeyJoe (talk) 02:22, 3 May 2013 (UTC)[reply]

Ensuring that such a survey didn't introduce its own bias would require using the services of a professional outside body. And it would have to be limited to one or a few countries, rather than all the countries where Wikipedia is used. And it would still be heart-stoppingly expensive. I don't think Wikipedia would (or should) do this even if the WMF accidentally won El Gordo. Rd232 talk 10:22, 3 May 2013 (UTC)[reply]
Nonsense. Surveying is not rocket science. The worst way to run a survey is to spam millions, annoying most of them, and then use the few responses. That's what this sounds like. Busy people don't respond to non-targeted surveys. Enwiki editors come from many countries, and from diverse backgrounds. With targeted groups, we could access sufficient numbers of each. The questions are: what sort of people are we interested in, and what do we want to know from them? --SmokeyJoe (talk) 12:32, 3 May 2013 (UTC)[reply]
So you've gone from arguing that we should do a hugely expensive offline survey, to arguing that we should pretend en.wp editors are representative of readers as long as we account for demographic differences... say what? And as part of that argument you've gratuitously asserted that advertising a reader survey on a website which the survey is about is "spam". I would also question whether an appropriate notice would cause more than very, very mild irritation. Most people are happy to be given the opportunity to voice their opinion about things they care about, and so even the very many who wouldn't respond might actually be happy to see an invitation. Rd232 talk 13:09, 3 May 2013 (UTC)[reply]
I never meant that we should do an expensive survey. I'm concerned that self-selecting respondents might be very non-representative. Asking respondents who they are and why they were reading today is good. Do we know the demographics of our readers? I'm guessing not. --SmokeyJoe (talk) 13:44, 3 May 2013 (UTC)[reply]
Regarding response bias, it is inevitable in all surveys, but it doesn't make surveys valueless. One thing we can do to determine the extent of bias in this method is to run a concurrent survey using a different method, such as a general population telephone survey. If the phone survey is conducted in Australia, UK and USA, we can compare the results with identical sex and age cohorts from those countries in the online survey. The sample limiters and factors affecting response in phone surveys are different from those affecting an online survey, so if the answers are similar from the different methods, you can have more confidence in the representativeness of each. Not perfect, of course. Nothing is. --Anthonyhcole (talk · contribs · email) 23:16, 5 May 2013 (UTC)[reply]
Regarding the cost of offline surveys: I've just spoken with Bruce Packard of Roy Morgan (an Australian pollster). For three questions in their "omnibus" (multi-client, shared cost) phone survey of 630 people 14 years old and over, matching the Australian sex/age demographic, they charge $4590, and $935 per extra question. Presumably, this is negotiable, and presumably much cheaper per question if we commission our own large survey. I would hope we can achieve what we want for less than $15,000 in each of those three countries. We can consult the Foundation on this, and learn from their experience with Resolve Market Research who conducted the 2011 reader survey. The total sample size for that study was 4000 with a sample of 250 in each country.
I'm not suggesting we do the offline survey every time we do an online survey but I do think it would be prudent to do one at the outset, and again from time to time. --Anthonyhcole (talk · contribs · email) 23:47, 5 May 2013 (UTC)[reply]
Bias may or may not be a problem. I think it depends on whether responders have a conflict of interest between providing honest helpful information, and pushing a barrow.
(A) Questions where bias is less of a concern include: Were you able to find the information you were looking for? Which navigation aids did you find helpful (a) the search box; (b) in-sentence blue-linking to other pages; (c) the "See Also" section of links at the bottom of the page; (d) Category pages; (e) Link summary boxes (templates); (f) External search engine such as google.
(B) Questions where bias is concerning would include: "Should Wikipedia have a "safe search" option, so readers can selectively filter violent, sexual, or other potentially offensive images?" This is a question on which some existing Wikipedians have strong opinions, and it is unrealistic to ignore the possibility of these editors stacking the survey responses.
(C) Questions of borderline bias concern might include: "Did Wikipedia have sufficient information to meet your need?" (this may touch on inclusionism/notability/advocacy/fringe issues).
(D) An important question that is ignored by an online Wikipedia invitation to respond is: "What things prevent you from being able to access Wikipedia?"


(A) These issues should be regularly, even continuously surveyed, online. In fact, I think this question should be invited by link on every unsuccessful search result.
(B) A high profile contentious issue like safe-searching may require near-professional care in surveying.
(C) would only need reasonable care, such as including questions about who the responder is and why they are seeking the information - questions that, when analysed, reveal faked submissions.
(D) Requires an offsite survey, or maybe it should be called "investigation", and certainly does not require professional services.
My point is that possible biased responses need to be considered while proposing the question. --SmokeyJoe (talk) 01:14, 6 May 2013 (UTC)[reply]
See meta:Research:Wikipedia Readership Survey 2011/Results for what's currently known about reader demographics. Generally speaking, they are less male, less young, and less white than our editors. This survey covered people in the 16 countries that represent 70% of all Wikipedia page views. I don't know whether it's possible to disaggregate the data to get en.wp results only. WhatamIdoing (talk) 01:40, 6 May 2013 (UTC)[reply]
I participated in that survey and read the results. It's a stretch to call the "responder demographics" equivalent to "reader demographics". I am personally aware of two significant groups of readers that seem under-represented among the responders: primary school children, and 40-70 year old professionals. It seems to me that the "responder demographics" better reflect the frequency of idle access by readers. I'm concerned that simple surveys may refocus efforts to serve the most frequent users, biased to idle users, to the detriment of our prime objective. --SmokeyJoe (talk) 01:54, 6 May 2013 (UTC)[reply]
Fair enough. I said above that in judging responder demographics we should compare them with potential readers. Rd232 talk 11:54, 6 May 2013 (UTC)[reply]

What's the question?

  • This RfC starts by saying that "It is proposed...". This sounds like it's a done deal but who has proposed this and where is the draft proposal? If it isn't a done deal and this is just kite-flying, then please say so. At the moment, it is hard to comment on something which is so vague. Warden (talk) 14:00, 2 May 2013 (UTC)[reply]
    • Added "hereby" to "it is proposed". Clearer? Rd232 talk 14:10, 2 May 2013 (UTC)[reply]
      • I agree that this proposal seems like a solution in search of a problem. It seems like this proposal is about enabling the ability to create surveys, and then once we've agreed on that, we go out and start looking for things to survey about. Are there any current discussions on Wikipedia where there is a consensus that having input from non-editors would be a useful/necessary data point? Until that situation comes up, I don't see the point in discussing this proposal. WMF apparently already has the technology to run a survey (since they've done it once in the past), so there is really nothing to discuss until there is actually a situation that requires a survey. I don't see the point in having an annual survey where people try to come up with things to survey about. I think it would be much better to wait until the need for a survey arises organically, and then deal with it at that point. ‑Scottywong| chatter _ 19:02, 2 May 2013 (UTC)[reply]
        • Oof. Don't you see the chicken-and-egg problem? Editors never even think of asking readers' input, and it's not realistic to expect them to start considering the possibility in a specific situation until there's some semblance of a mechanism for actually getting input. Establishing the principle and a discussion framework is pretty low-cost (cost in time and energy, Wnt), and even the higher-cost implementation phase is not too bad - especially if we can borrow WMF's tech for the survey. Furthermore, there's an inevitable Catch-22 in what you're asking: failing to bring specific topics now brings accusations of "solution in search of a problem" (I cannot adequately convey how much I hate that phrase BTW); but I can only imagine that bringing specific topics and questions at this point would result in at least some responses of "aha, you want solution X to problem Y and reckon a survey might help! No, that's just a clever form of WP:CANVASsing! Debate it on the merits!" or something along those lines. Rd232 talk 20:08, 2 May 2013 (UTC)[reply]
          • Fair enough, I can agree with that. I just hope that that's what this proposal is used for, as opposed to a situation where everyone says "oh, we have surveys now, let's find fun questions to ask our readers!" ‑Scottywong| confess _ 21:01, 2 May 2013 (UTC)[reply]
            • Well, yes, the drafting process needs to be sensibly managed, and especially in the final stages appropriately focussed on keeping the survey short and the questions structured to produce information likely to be useful. I would also stress again that the drafting/discussion process could be a valuable exercise in itself in encouraging editors to think about readers' needs, and perhaps more holistically about the future of Wikipedia than "this would be nice, that's broken, wish more people would do that...", which is what we mostly have now. Rd232 talk 21:37, 2 May 2013 (UTC)[reply]

Survey frequency - annual?

One of the issues is that once such a survey has been done, it may be tempting to do it again - and since there are substantial setup costs to making it work once, this does make sense. So it may be best to consider the issue of frequency now. I would suggest that such a survey be no more than annual; I can't see the survey being needed for issues where a swift resolution is needed ("quick, let's have a survey!") - it should really be for long-term issues, priority-setting, etc. And drafting sensible questions, and the community approving the results of that drafting process, will take time anyway.

In fact it might actually work well as an explicitly annual thing, where let's say every 1st January there is a reader survey, if

  1. in the preceding months potentially suitable issues have been raised and questions have been drafted, and
  2. if it's agreed by sometime in December that the result is worth putting to readers.

One of the advantages of a regular reader survey is that it might work as a new editor engagement tool as well. In exposing some of the issues and choices faced, there's clearly an opportunity to invite readers to become editors in order to engage with specific issues. Rd232 talk 14:10, 2 May 2013 (UTC)[reply]

I like the idea of doing something like this regularly. There are many reasons to make surveys easier to run, and likewise reasons to test such tools on en:wp where there are a lot of statisticians and scriptwriters in the audience to manipulate and chart any results. – SJ + 17:54, 1 June 2013 (UTC)[reply]

[some comments about the above misinterpreted a phrase about "Costs of time and energy"] Wnt (talk) 20:18, 2 May 2013 (UTC)[reply]

  • I agree it should be annual, I don't agree with 1st Jan. For an awful lot of people that is the day after New Year and something of a non-standard day, it also means that the first one is a long way off. My suggestion would be for something like the first Wednesday in October. That's close enough to be viable for this year without being overly distant. As far as I know it is outside most religious festivals and pretty close to the Northern hemisphere Autumn equinox. So a pretty close approximation to a boring standard day. ϢereSpielChequers 23:44, 2 May 2013 (UTC)[reply]
    • Fair enough that 1 Jan may not be the best (also because of Christmas in the latter preparation stages). It's just the most obvious date. Rd232 talk 10:32, 3 May 2013 (UTC)[reply]
  • If it's annual, it should not be during fundraising season, since having multiple competing banners up at the same time is likely to result in worse response rates for both. There might be an advantage to a 13-month cycle (or any number not easily divisible by 12), or to spamming only a small fraction of users on a rolling cycle, because you could mitigate some of the seasonal aspects (e.g., all Germans in Spain for August, or all American schools closed, etc.) WhatamIdoing (talk) 01:26, 6 May 2013 (UTC)[reply]
    • Agreed that a conflict with fundraising season is probably best avoided. Rd232 talk 22:48, 6 May 2013 (UTC)[reply]
      • Aside: the definition of 'fundraising season' is changing. If we have a banner campaign season, many of the campaigns may not be requests for funds, as it seems we can get the funds we need effectively through better year-round requests and more efficient use of our current donor lists. – SJ + 17:54, 1 June 2013 (UTC)[reply]

Inclusion criteria

There is a limit to the number of questions we can ask. What are relevant inclusion criteria? Create a subpage (Wikipedia:Requests for comment/English Wikipedia readership survey 2013/A criterion) for discussion of each proposed criterion and add a link here.

Sampling not census, twice a year, readers only

I'll suggest that we strongly focus on getting a sample rather than trying to get "everybody's opinion". We can't get a census of everybody, and it just causes problems trying to do it. Rather, we should recognize that we'll be taking a sample and try to figure out the best way to do this. The largest sample that I can see being useful is about 4,000, and that shouldn't be too difficult. The margin of error would be about 2%, which is more precise than needed to decide any real questions. I'll suggest that readers' opinions, given their presumed lack of knowledge of many details of interesting questions, shouldn't enter any decision-making process here unless there is more than a 10% difference between readers favoring one option over another. But I think it would be a real call to action if readers were split 90%-10% on a question that editors were split 50-50 on. I'll also suggest that answers to any specific question be taken twice before being considered as a cause for action. The reason is not statistical, but more related to current events and news. Opinions may change over time because of an election, terrorist bombing, or other news event, but we'd like to make sure the opinions are fairly stable. Thus twice-a-year surveys (say April and October) would give us results that we can use in a reasonable amount of time.
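(A quick check on the "about 2%" figure for a sample of 4,000, using the standard worst-case margin-of-error formula with p = 0.5: at 95% confidence it works out closer to ±1.5%, so "about 2%" is conservative - roughly the 99%-confidence figure.)

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample of size n.

    z = 1.96 corresponds to 95% confidence, z = 2.576 to 99%.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe_95 = margin_of_error(4000)            # about 0.0155, i.e. +/-1.5%
moe_99 = margin_of_error(4000, z=2.576)   # about 0.0204, i.e. +/-2%
```

This assumes simple random sampling; a self-selected online sample would have additional error from non-response bias that this formula does not capture.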

I'll also suggest stratified sampling. Readership by day-of-the-week and time-of-day is likely related to religious and geographical groups, so we can make the effort to even out these effects by sampling say in 4 time-periods each day for 7 days. Readership proportions (page views) by d-o-w and t-o-d should be readily available.
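(The proportional allocation across such day-of-week/time-of-day strata is straightforward to compute; the page-view counts below are made up purely for illustration.)

```python
def proportional_allocation(total_n, stratum_pageviews):
    """Split a target sample size across strata in proportion to page views.

    Rounding per stratum means the allocations may not sum exactly to
    total_n in general; any remainder would need to be assigned manually.
    """
    total_views = sum(stratum_pageviews.values())
    return {s: round(total_n * v / total_views)
            for s, v in stratum_pageviews.items()}

# Hypothetical page views for four time-of-day strata (UTC):
views = {"00-06": 1_000_000, "06-12": 3_000_000,
         "12-18": 4_000_000, "18-24": 2_000_000}
alloc = proportional_allocation(4000, views)
```

With these illustrative numbers, the busiest stratum (12-18) would get 1,600 of the 4,000 invitations and the quietest (00-06) would get 400.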

I'll also suggest just sampling non-logged in folks ("readers"). These IPs don't have much of a say in what happens on Wikipedia, but I propose that they are as important as the editors. What the editors do, doesn't mean a thing without readers, and what readers do doesn't mean a thing without editors. Surveying readers only would just make up for a clear bias in our decision making processes. Editor-only surveys could be conducted separately if needed.

Clearly the number of questions would need to be limited sharply, say to a maximum of 15:

  • 5 for identification (country, age, gender, etc.)
  • 5 for reading experience (did you find... etc.)
  • and a max of 5 for questions related to policy (would you prefer more photos, more video, safe searching, etc.)

The last set of questions would have to be very carefully selected of course - most likely by our usual consensus process, but focusing on issues where we don't have a consensus among editors - to make the results useful. Even a 60-40 split in the readership would change the dynamic of many RfCs.

Smallbones(smalltalk) 01:58, 2 June 2013 (UTC)[reply]

Suggest a question

For ease of processing, questions must be suitable for multiple choice ("yes/no/undecided" or similar) answers.
Create a subpage (Wikipedia:Requests for comment/English Wikipedia readership survey 2013/Your question) for discussion of your question and add a link here.

What is the point?

I'm going to oppose this idea on the grounds that it won't achieve much. Wikipedia functions perfectly fine with consensus derived from editors. I really do not see how reader input helps us in any great way. I really wouldn't want to see matters put to readership polls instead of RfCs, allowing community consensus to be overridden by drive-by voters. RetroLord 15:42, 2 July 2013 (UTC)[reply]
