Using data shared by some of the most experienced workers on MTurk, users can gain insight into HITs that pay well and are safe to work on. Our users maintain some of the highest standards on the platform, with most holding an approval rate above 99.5%, meaning the HITs that appear here will pay out when completed properly.
Taking off the pay rating. He raised the pay to 5 cents within a couple hours of me posting this, so I guess it's just one of those where pay is going to keep fluctuating based on how fast they go.
EDIT: This is a CON of TV as much as of the requester, but I want it to be visible, so I'm putting it here:
- Another task that was reviewed, which paid 25-30 cents before, has been posting at 2 cents as well. There really needs to be a better way for TV to automatically adjust reviews or hourly wages when the same tasks drop pay, or at least a warning system (e.g., a pop-up that points out when a requester's tasks with the same titles have been posting at different wages).
- Requesters varying their pay (seemingly more than usual lately) is getting really old, and it makes me hesitate to post reviews for new requesters or for times and pay that are above average. Plus, if a requester has a lot of reviews and then turns to crap (or improves, though that's much rarer), the hourly never really adjusts. In short, I'm finding that I need to manually inspect way more requesters now because the TV hourly wages are getting skewed left and right, and I'm tired of getting burned and having my workflow screwed up because I didn't manually look at reviews for a requester that dropped pay by $4 or something like that.
tl;dr We, as a community, or the people who run TV, need to figure out how to make this review system more reliable.
PS - This isn't a dig at you guys running the show here. We've emailed before and talked about some of these problems recently, but some days it just annoys me more than others. I'm really just trying to get other reviewers to think along the same lines, recognize this shit, and try to help change it.
TurkerView is designed to bridge the gap between workers & requesters through data & communication.
This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then slot that number into a simple range based on US minimum wage standards and color-code it so the data is easy to digest at a glance.
| Color | Pay Range (Hourly) | Explanation |
|---|---|---|
| RED | < $7.25 / hr | Hourly averages below US Federal minimum wage |
| ORANGE | $7.25 - $10.00 / hr | Hourly averages between Federal & highest statewide (CA) minimum wages |
| GREEN | > $10.00 / hr | Hourly averages above all US minimum wage standards |
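A minimal sketch of that calculation and banding, using the thresholds from the table above (the function names here are illustrative, not TurkerViewJS internals):

```typescript
// Hypothetical sketch of the hourly-rate banding described above.
// Thresholds mirror the table: < $7.25 red, $7.25-$10.00 orange, > $10.00 green.
type PayColor = "RED" | "ORANGE" | "GREEN";

function hourlyRate(rewardUsd: number, completionSeconds: number): number {
  // Average hourly rate: the reward scaled up to a full hour of work.
  return rewardUsd * (3600 / completionSeconds);
}

function payColor(hourly: number): PayColor {
  if (hourly < 7.25) return "RED";     // below US Federal minimum wage
  if (hourly <= 10.0) return "ORANGE"; // between Federal & highest statewide (CA) minimums
  return "GREEN";                      // above all US minimum wage standards
}

// Example: a $0.50 HIT that takes 3 minutes averages $10/hr -> ORANGE.
console.log(payColor(hourlyRate(0.5, 180)));
```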
Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.
| Rating | Score |
|---|---|
| Underpaid | 1 / 5 |
| Low | 2 / 5 |
| Fair | 3 / 5 |
| Good | 4 / 5 |
| Generous | 5 / 5 |
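As a minimal sketch, the five-point scale maps naturally onto a small lookup; the labels come from the table above, but the data structure itself is just one illustrative way to model it, not TurkerView's actual schema:

```typescript
// Hypothetical model of the 5-point Pay Sentiment scale from the table above.
const PAY_SENTIMENT = {
  1: "Underpaid",
  2: "Low",
  3: "Fair",
  4: "Good",
  5: "Generous",
} as const;

type PaySentimentScore = keyof typeof PAY_SENTIMENT; // 1 | 2 | 3 | 4 | 5

// Workers can rate above or below the raw hourly data: an $8/hr puppy-rating
// HIT might earn a 4, while a grueling $10/hr Inquisit HIT might earn a 2.
function describeSentiment(score: PaySentimentScore): string {
  return `${PAY_SENTIMENT[score]} (${score} / 5)`;
}

console.log(describeSentiment(4)); // "Good (4 / 5)"
```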
Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.
| Rating | Score |
|---|---|
| Unacceptable | 1 / 5 |
| Poor | 2 / 5 |
| Acceptable | 3 / 5 |
| Good | 4 / 5 |
| Excellent | 5 / 5 |
This rating is strictly for approval times. Let's face it: no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know roughly how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've based our ratings around those data points.
| Rating | Score | Approval Time |
|---|---|---|
| Very Slow | 1 / 5 | Over 2 weeks |
| Slow | 2 / 5 | ~1 - 2 weeks |
| Average | 3 / 5 | ~3 - 7 days |
| Fast | 4 / 5 | ~1 - 3 days |
| Very Fast | 5 / 5 | ~24 hours or less |
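A minimal sketch of how those cutoffs could map onto the five-point scale; the thresholds follow the table above, while the function itself is illustrative:

```typescript
// Hypothetical mapping from observed approval time (in hours) to the
// 5-point scale in the table above; cutoffs follow the listed ranges.
function approvalRating(hours: number): number {
  if (hours <= 24) return 5;      // Very Fast: ~24 hours or less
  if (hours <= 3 * 24) return 4;  // Fast: ~1 - 3 days
  if (hours <= 7 * 24) return 3;  // Average: ~3 - 7 days
  if (hours <= 14 * 24) return 2; // Slow: ~1 - 2 weeks
  return 1;                       // Very Slow: over 2 weeks
}

// MTurk's default auto-approval of 3 days (72 hours) lands at the
// Fast/Average boundary; the 30-day maximum falls deep in Very Slow.
console.log(approvalRating(72)); // 4
```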
TurkerViewJS is the engine behind TurkerView. An efficient collection process combined with a user-friendly interface encourages more frequent worker input & allows for the refinement of aggregate data in real time.
Our API also gives users access to real-time data about HITs and requesters. Users can work with confidence knowing that our platform has vetted thousands of requesters who treat workers fairly.
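As a hedged illustration of what a client call might look like, here is a sketch of a requester lookup; the endpoint URL, response shape, and auth scheme below are assumptions for the example, not the documented TurkerView API:

```typescript
// Sketch of a client call to a requester-lookup endpoint. The URL, route,
// and response shape are hypothetical placeholders, not TurkerView's API.
interface RequesterSummary {
  requesterId: string;
  hourlyAverage: number;
  reviewCount: number;
}

async function fetchRequester(requesterId: string, apiKey: string): Promise<RequesterSummary> {
  const res = await fetch(
    `https://example.com/api/requesters/${encodeURIComponent(requesterId)}`, // hypothetical endpoint
    { headers: { Authorization: `Bearer ${apiKey}` } } // auth scheme assumed
  );
  if (!res.ok) throw new Error(`Lookup failed: ${res.status}`);
  return (await res.json()) as RequesterSummary;
}
```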

- Unique Requesters have been reviewed by users on TurkerView
- Individual Reviews are available to TurkerView users
- Awesome Users (and counting) are part of the TurkerView community