Incentivizing and evaluating internet-wide network measurements
The Internet’s size is a primary challenge to researchers attempting to capture its properties. Inferences are therefore often based on available measurements, which may be biased by the measurement process itself. We seek to understand how sampling methodology affects two network measurement projects. We examine the potential of Mechanical Turk (MTurk) to guide the selection of samples by country and reward. As a proof of concept, we design an IPv6 adoption experiment disguised as a human intelligence task. Spending $75, we obtain an IPv6 adoption estimate that differs from public estimates by less than 3 percent. Building on this initial success and an analysis of price sensitivity, we attempt a crowd-sourced approach to obtaining representative measurements of Internet source address validation. However, this second experiment violated MTurk’s terms of service. We therefore perform a per-country sampling analysis of nine years of existing source validation data from the Spoofer project. We conclude that conventional sampling methods do not properly characterize the data, primarily because the underlying population changed over the collection period.