On PdF’s Consumer Guide: Aristotle Complains; We Respond

We recently received an email from the general counsel of Aristotle, one of the software-as-a-service companies featured in our online guide, complaining about our efforts. The letter arrived just as we sent out an email to our subscribers asking them to help update the survey data in the guide. We are reprinting his letter below, followed by our response. Feel free to join the conversation in the comments thread.

To: The Editors, Publishers and Founders of Personal Democracy Forum
Re: Personal Democracy Forum’s Software-as-a-Service “User” Survey
Date: February 27, 2007
Aristotle applauds Personal Democracy Forum’s advocacy of the software-as-a-service model for political and grassroots organizations of the future. We commend PDF on its educational efforts in this area. Aristotle also appreciates the privilege of having PDF recently publish an article written by our technology director, Peter Kelly, on the benefits of open architecture for political software. It is therefore with great reluctance that we write this letter, yet we feel that there is no alternative.
In December 2006, Personal Democracy Forum released a report that compared political software vendors based on purported “user” ratings. We are deeply troubled by the manner in which PDF’s software-as-a-service “survey” has been represented to the public by both PDF and Complete Campaigns, one of the vendors covered by the survey. PDF knows how important such surveys can be in the marketplace. PDF even tells potential respondents that its survey “will help your peers choose the services that will help them best”. Given the foreseeable commercial importance that potential customers may place in such a survey, PDF’s journalistic accuracy is paramount.
Right after the release of the survey, Complete Campaigns sent out messages comparing its favorable PDF “User Survey” results to the other top-tier political software firms in the market. Complete Campaigns, by coincidence, had far more survey responses than any other of its competitors.
The survey appears to have been conducted by sending emails to those who have registered at PDF. Survey respondents were asked to state the name of the organizations, companies or websites with which they are affiliated. They were not required to identify themselves in any other way. Those who received the email were encouraged to tell others about this survey. I understand that Personal Democracy Forum website visitors who happened to view the personaldemocracy.org page during the month of December also were allowed to respond to the web-form poll on the fly.
Apparently, none of the respondents were screened, nor was their past use of any vendor verified. The PDF web poll also cannot verify whether individuals responded more than once via multiple user names or canned response loading. PDF does not know who provided positive information for some companies, or negative information for others.
In other words, PDF really has no idea who actually responded. It has no idea how many respondents voted more than once. And more significantly, PDF has no idea whatsoever whether the respondents actually are “users” of the software they purport to “review” in this “user survey”.
The problem is compounded by PDF’s summary of the results, with continuous references to a survey of software “users”. There is no basis other than blind faith for this entirely unfounded description of the respondents. There is no rational basis for the suggestion that those responding to the survey [or even most of those responding] actually used the software they claimed to review. Instead, it would be accurate to describe the respondents simply as anonymous or unverified respondents, who may have voted multiple times, and who claim to have used the software they are rating.
Having repeatedly characterized the respondents as “users”, PDF has a duty to obtain reasonable assurance that its statements are accurate. The truth is that the survey does not have the minimal scientific validity or foundation to have even been published with PDF’s name attached to it.
PDF has failed to publicize properly either the severe limitations of the “survey” or the complete lack of reasonable safeguards over the process. Moreover, objective evidence points to likely manipulation of the results. If the deck was unfairly stacked, it would render the “survey” nothing more than disinformation in which PDF has wittingly or unwittingly become complicit.
In the absence of appropriate controls for the “survey”, and the subsequent and foreseeable use of the survey by Complete Campaigns in its advertising, PDF has a duty to investigate whether the process was manipulated. This is necessary for the sake of your organization’s credibility and integrity. It also flows from your duty of accuracy to your readers, and perhaps most important, out of fairness for those companies surveyed. If your investigation reveals evidence that the process was manipulated, PDF is obligated to make disclosure of the investigation results in as prominent a manner as the original results were published. Even if your investigation is inconclusive, you must, at the very least, acknowledge the absence of the sort of reasonable controls normally used in bona fide surveys of those who use products and services.
As the survey appears to be based entirely on “trusting” your registrants to be honest, one should look at several factors to determine whether such “trust” in your registrants is well-founded.
Question 1: Why was the survey sample so heavily skewed in favor of one firm?
The first question that must be addressed is why one firm, Complete Campaigns, was so heavily overrepresented in the survey.
Complete Campaigns had three times as many self-selected “user survey” respondents as Aristotle or any other competing software company. Out of all the responses, more than 52% were from self-identified Complete Campaigns customers. As Complete Campaigns accounts for a smaller percentage of the market, this saturation of respondents is, at a minimum, highly suspicious.
How many registrants giving high marks to Complete Campaigns were new, or signed up just before submitting the survey? Other questions naturally follow from this inquiry.
Question 2: Who actually participated in the “survey”?
Have you verified whether the names provided by the respondents are real names?
Question 3: How many submissions did each respondent make?
Do you know whether individuals responded more than once via multiple user names or canned response loading? I myself received two copies of the survey form at two separate email addresses. I did not respond, but it appears that I could have submitted two responses.
Question 4: Do you have any way of knowing whether those claiming to be users of the software reviewed had actually used it?
If your survey is intended, as your materials state, to be a buyer’s guide, then the answer to this question is critical.
Finally, there is simply not enough transparency to provide any reasonable assurance that the “survey” outcome is credible and not the result of manipulation. The survey itself reflects reality in the same sense that an election where ballot-box stuffing is allowed reflects reality. In either case, the numbers just don’t add up.
Now it appears from today’s email that PDF is preparing an “update” to the December “survey”. PDF’s message refers to bringing “the collective knowledge of software-as-a-service consumers to the fore”. We do not know whether this means that those previous respondents who may or may not be users, and who may have previously voted multiple times, can now vote yet again and again. But surely this “update” appears poised to exacerbate the fatal flaws and problems that taint the December “poll”.
PDF also claims in its new update request that it follows its methodology in order to lead to “the broadest and most accurate responses”. Reasonably accepted methods for reliable or accurate surveys could lead to “the broadest and most accurate responses”. But certainly not PDF’s unregulated, unverified methodology. Under no foreseeable circumstances can the methodology currently used lead to “the most accurate responses”. PDF’s claim is inconsistent with journalistic standards and with PDF’s duty of accuracy to its readers and the companies reviewed.
Any statement by PDF suggesting or implying that its survey has an acceptable level of safeguards or scientific validity within a reasonable [or even discernible] margin of error is false. The results from December 2006 must be investigated, and, depending on the results of the investigation, the survey should be corrected, or fully disclaimed if necessary. If manipulation is evident, then this should be disclosed also, as one point of the survey is, presumably, to enlighten those in the marketplace about the types of companies that are offering the services surveyed.
A brief disclaimer that past results may not reflect future performance is completely inadequate to address the problem of having placed PDF’s credibility behind a “user survey” that may not actually be a survey of users. Much more needs to be done, as outlined above, and we trust that you will see this through. But surely, at a minimum, the same procedures, claims, misstatements, and assertions that were used to tout the bona fides of the December 2006 report must not now be repeated in an “update”.
If you wish to discuss this, please feel free to call me or send me an email.
Respectfully yours,
J. Blair Richardson
General Counsel

Here’s our response:

Dear Mr. Richardson:

We are sorry that our modest user survey is causing such angst. But we take issue with your overarching complaint, that we have portrayed the survey as somehow being a scientific sampling of consumers of each company’s software product. If you were to publish a list of consumers of Aristotle’s services, we would be happy to independently survey them as to their satisfaction with your products. But we never claim anywhere that that’s what this survey is.

Instead, we clearly state on the home page for the company reviews, in bold face, that:

These ratings came from subscribers to PDF’s e-newsletter and anyone they could have sent our survey to. We did not specifically target these companies’ customers.

In addition, in the notes explaining the survey, we clearly state that:

User ratings of each of the companies are based on an online survey of Personal Democracy Forum’s registered members and subscribers conducted in December 2006. Respondents were asked first if they had used a company’s services, and then to rate them from one (lowest) to five (highest) on three scales: quality of software products, quality of customer service, and fairness of fees. User ratings are inherently subjective and should not be taken as conclusive or predictive of future service.

We put the last sentence in bold just to be very clear.

Given these limitations, is it still worth asking members of PdF to answer a user survey about software-as-a-service companies? We think so. The PdF community is relatively well-informed about software services, and the opinions of its members, even if they are not verified consumers of specific companies’ products, are still of interest. They should be of interest to you, too; word-of-mouth from informed participants in the political-technology niche is clearly important.

Your letter raises several specific concerns about how we conduct the survey. We do not allow ballot stuffing in our survey, and if someone votes more than once from the same email or IP address, we will disregard their votes. We do monitor the voting in real time to see if such activity is happening, or if someone is entering the same response over and over to artificially enhance a company’s ratings.
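For readers curious what this kind of screening involves in practice, here is a minimal sketch of deduplication by email and IP address. This is an illustration only, not PDF’s actual system; the field names (`email`, `ip`) are assumptions.

```python
from collections import Counter

def screen_duplicates(responses):
    """Drop every response whose email or IP address appears more than once.

    When the same email or IP submits multiple times, all of that
    submitter's responses are disregarded, not just the extras.
    `responses` is a list of dicts with "email" and "ip" keys
    (hypothetical field names for this sketch).
    """
    email_counts = Counter(r["email"] for r in responses)
    ip_counts = Counter(r["ip"] for r in responses)
    return [
        r for r in responses
        if email_counts[r["email"]] == 1 and ip_counts[r["ip"]] == 1
    ]
```

Dropping all of a duplicated submitter’s votes, rather than keeping the first, matches the stricter reading of “we will disregard their votes” above.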

To increase participation in the updated survey, which is currently open, we sent emails in advance to all the companies listed. We also added a request to participants to include the company or organization or website that they are from, and we will not be taking votes from people who refuse to fill out that field.

We really can’t comment on your complaints about Complete Campaigns; it seems to us that your issues are with them. Yes, Complete Campaigns got a large number of responses in the December 2006 survey, but they were not uniform, and they included many varied comments. However, they did not get three times as many as any other company surveyed, as you claim in your letter.

A final overarching comment: This is the web. If users of Aristotle’s software-as-a-service tools think our ratings of the company are incorrect or unfair, there is nothing stopping them from chiming in, either here or elsewhere. Similarly, if they (or anybody else reading this) think that PdF’s Software-as-a-Service survey and report are unfair, they can also chime in. At the very bottom of the home page for our report we write:

JOIN THE CONVERSATION
This is a living document. We will continue to provide updates to company profiles and fresh consumer survey data. It’s also up to you, our readers, to provide your own comments and add to these profiles. Have you used one of the vendors we profile? Add your comments to the mix.

We’re all ears.

Sincerely,

Andrew Rasiej and Micah Sifry
Publisher and Editor
Personal Democracy Forum




From the TechPresident archive