|Do the events occur simultaneously? Please rate the Temporal Alignment between the audio event and the visual event.|
|How natural (i.e., human-sounding) is this recording? Please rate based on the naturalness of the audio quality (noise, timbre, etc.).|
|How natural is this recording compared to the reference? Please rate based on the naturalness of the audio quality (noise, timbre, etc.).|
|How similar are these recordings to the reference audio?|
|How similar is this recording to the reference audio?|
This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards, color-coding the data so it's easy to digest at a glance.
|Color|Pay Range (Hourly)|Explanation|
|---|---|---|
|RED|< $7.25 / hr|Hourly averages below US Federal minimum wage|
|ORANGE|$7.25 - $10.00 / hr|Hourly averages between Federal & highest statewide (CA) minimum wages|
|GREEN|> $10.00 / hr|Hourly averages above all US minimum wage standards|
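The calculation above can be sketched in a few lines. This is a minimal illustration, not the site's actual implementation; the function names and the assumption that completion time arrives in seconds are hypothetical, while the thresholds come straight from the table:

```python
# Hypothetical sketch of the hourly-rate color coding described above.
# Band thresholds are taken from the table: US Federal minimum wage
# ($7.25/hr) and the highest statewide (CA) minimum wage ($10.00/hr).

FEDERAL_MIN = 7.25        # USD/hr
HIGHEST_STATE_MIN = 10.00 # USD/hr

def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
    """Average hourly rate implied by a reward and a completion time."""
    return reward_usd / (completion_seconds / 3600.0)

def pay_color(rate: float) -> str:
    """Map an hourly rate onto the RED / ORANGE / GREEN bands."""
    if rate < FEDERAL_MIN:
        return "RED"
    if rate <= HIGHEST_STATE_MIN:
        return "ORANGE"
    return "GREEN"
```

For example, a $0.50 reward on a HIT that takes 10 minutes works out to $3.00/hr, which falls in the RED band.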
Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.
|Rating|Score|
|---|---|
|Underpaid|1 / 5|
|Low|2 / 5|
|Fair|3 / 5|
|Good|4 / 5|
|Generous|5 / 5|
Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.
|Rating|Score|
|---|---|
|Unacceptable|1 / 5|
|Poor|2 / 5|
|Acceptable|3 / 5|
|Good|4 / 5|
|Excellent|5 / 5|
This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days. We've tried to base our ratings around those data points.
|Rating|Score|Approval Time|
|---|---|---|
|Very Slow|1 / 5|Over 2 weeks|
|Slow|2 / 5|~1 - 2 weeks|
|Average|3 / 5|~3 - 7 days|
|Fast|4 / 5|~1 - 3 days|
|Very Fast|5 / 5|~24 hours or less|
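The buckets above amount to a simple threshold lookup. As a rough sketch (the function name and the choice of hours as the input unit are assumptions, and the "~" ranges in the table are approximate, so the exact boundary handling here is one reasonable reading):

```python
# Hypothetical sketch of bucketing an observed approval time into
# the 1-5 approval-speed scale described in the table above.

def approval_rating(hours: float) -> int:
    """Return the 1-5 approval-speed rating for an approval time in hours."""
    if hours <= 24:          # ~24 hours or less
        return 5             # Very Fast
    if hours <= 3 * 24:      # ~1 - 3 days
        return 4             # Fast
    if hours <= 7 * 24:      # ~3 - 7 days
        return 3             # Average
    if hours <= 14 * 24:     # ~1 - 2 weeks
        return 2             # Slow
    return 1                 # Very Slow: over 2 weeks
```

So a HIT approved at MTurk's 3-day default auto-approval would land in the Fast (4 / 5) bucket, while one sitting the full 30-day maximum would be Very Slow (1 / 5).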