In 1986, I presented my thesis research to the Midwest Association of Public Opinion Research (MAPOR). I had done a national online survey well before the widespread use of the Internet. While I felt laughed out of the room for even attempting the method, I caught the notice of Ph.D. selection committees. It was probably the reason Michigan State took a chance on me.
Last week, I attended the national conference of AAPOR, the American Association for Public Opinion Research. The Picard Center project, which fielded a survey of around 100,000 respondents, was a middling-to-small project by their standards.
However, I again saw an association in search of its path forward. Latecomers like myself packed the back of the room as the association, still hurting from the last election, performed its post-mortem. They were looking for a solution, or at least an excuse. I will talk more about that later, but the finest minds in AAPOR did not provide much comfort. For those of you who have not already made up your minds, find the report here.
Beyond the election, AAPOR had grown dramatically over the last several years, and with that growth came people with unfamiliar methods and population frames. The sessions that were not devoted to the election were weighing these new entrants. How do you maintain standards in an era bent on non-scientifically drawn samples? (And to answer the obvious objection: those samples were just as wrong in the last election.)
In an era of bigger-than-ever participation, how do you ensure representativeness? For that matter, how do we define what we are seeking to represent? Which methods show promise, and which should be discouraged? Most important, what evidence is there that we are correct in our assumptions? Like many industries, surveyors are facing attacks from inside and out. At the same time, there is no shortage of demand for their work. Despite what many think, AAPOR overflows with honest people trying to perform a difficult job without bias. The choices made in this research specialty are fascinating.