Using data shared by some of the most experienced workers on MTurk, users can gain insight into HITs that pay well and are safe to work on. Our users maintain some of the highest standards on the platform, with most holding an approval rate above 99.5%, so HITs that appear here should pay out when completed properly.
First, this is going to be long, and what I write under pros/cons may not strictly be pros or cons; it's probably best read straight through, pros to cons, then advice if I've added any. This is as much a review of this HIT as it is of other batches, requesters, and workers, and of the state of mturk right now.
These HITs used to be ones I considered "bread and butter". I did thousands of them in the past without a single rejection, but on the last batch I worked I got a few, and it seems like other workers got even more (the "other review site" shows even more complaints). I had pretty much sworn these off after that last batch in April, but decided to do 100 tonight and see how it turns out. We'll see.
I consider creative writing/rewriting tasks to be one of my strengths, so I focus on tasks like this a lot. These tasks build on each other. They're very similar to those from some other requesters, like Alexandria, where one batch leads into another and so on. I'm mostly writing this for experienced mturkers who know the kind of tasks I'm talking about, but for those of you who aren't familiar with them: in one batch you might write a question, in another you might write an answer, and together they simulate an AI question/answer system, a human conversation, storytelling, and so on.
TurkerView is designed to bridge the gap between workers & requesters through data & communication.
This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then apply that number to a simple range based on US minimum wage standards to color-code the data so it's easy to digest at a glance (a rough code sketch follows the table below).
Color | Pay Range (Hourly) | Explanation
---|---|---
RED | < $7.25 / hr | Hourly averages below the US federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the federal and highest statewide (CA) minimum wages
GREEN | > $10.00 / hr | Hourly averages above all US minimum wage standards
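To make the bucketing concrete, here is a minimal sketch in TypeScript. The thresholds come straight from the table above; the type and function names (`PayColor`, `hourlyRate`, `payColor`) are placeholders for illustration, not TurkerViewJS internals.

```typescript
// Minimal sketch of the hourly-rate color coding described above.
// Thresholds match the table; all names here are illustrative placeholders.
type PayColor = "RED" | "ORANGE" | "GREEN";

const FEDERAL_MIN_WAGE = 7.25;        // US federal minimum wage ($/hr)
const HIGHEST_STATE_MIN_WAGE = 10.00; // highest statewide (CA) minimum wage used as the upper bound

// Scale a per-HIT reward up to an hourly figure from the observed completion time.
function hourlyRate(rewardUsd: number, completionSeconds: number): number {
  return rewardUsd / (completionSeconds / 3600);
}

function payColor(rate: number): PayColor {
  if (rate < FEDERAL_MIN_WAGE) return "RED";
  if (rate <= HIGHEST_STATE_MIN_WAGE) return "ORANGE";
  return "GREEN";
}

// Example: a $0.50 HIT averaging 150 seconds works out to $12.00/hr, which codes GREEN.
const rate = hourlyRate(0.5, 150);
console.log(rate.toFixed(2), payColor(rate)); // "12.00 GREEN"
```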
Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump the rating up a bit for something so adorable. Ten hours locked in Inquisit? Even at $10/hr, many workers would appreciate a heads-up on such a task. The Pay Sentiment rating connects workers beyond the hard data.
Rating | Scale
---|---
Underpaid | 1 / 5
Low | 2 / 5
Fair | 3 / 5
Good | 4 / 5
Generous | 5 / 5
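As a minimal sketch only, the 1-to-5 scale above could be modeled like this; the enum and interface names are assumptions for illustration, not TurkerView's actual data model.

```typescript
// Illustrative model of the 1-5 Pay Sentiment scale above; names are placeholders,
// not TurkerView's actual data model.
enum PaySentiment {
  Underpaid = 1,
  Low = 2,
  Fair = 3,
  Good = 4,
  Generous = 5,
}

// A review pairs the hard hourly data with the worker's subjective read on the pay.
interface PayReview {
  hourlyRate: number;      // calculated $/hr for the task
  sentiment: PaySentiment; // the worker's 1-5 opinion of the overall pay
}

// Example: $8/hr rating puppy pictures might still feel like "Good" pay to the worker.
const puppyHit: PayReview = { hourlyRate: 8.0, sentiment: PaySentiment.Good };
console.log(`${puppyHit.hourlyRate}/hr rated ${PaySentiment[puppyHit.sentiment]}`); // "8/hr rated Good"
```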
Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion are all valuable aspects of the interaction between Requesters & Workers, and it's worth keeping track of them. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.
Rating | Scale
---|---
Unacceptable | 1 / 5
Poor | 2 / 5
Acceptable | 3 / 5
Good | 4 / 5
Excellent | 5 / 5
This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval window for most MTurk tasks is 3 days; the maximum is 30 days. We've tried to base our ratings around those data points (a rough mapping in code follows the table below).
Rating | Scale | Approval Time
---|---|---
Very Slow | 1 / 5 | Over 2 weeks
Slow | 2 / 5 | ~1 - 2 weeks
Average | 3 / 5 | ~3 - 7 days
Fast | 4 / 5 | ~1 - 3 days
Very Fast | 5 / 5 | ~24 hours or less
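As a rough sketch, the table above boils down to a simple bucketing of pending days; the function below is illustrative only and assumes the boundary values shown in the table.

```typescript
// Illustrative mapping of pending time (in days) to the 1-5 approval rating above.
// The boundaries follow the table; the function itself is a placeholder sketch.
type ApprovalRating = 1 | 2 | 3 | 4 | 5;

function approvalRating(daysPending: number): ApprovalRating {
  if (daysPending <= 1) return 5;  // Very Fast: ~24 hours or less
  if (daysPending <= 3) return 4;  // Fast: ~1 - 3 days
  if (daysPending <= 7) return 3;  // Average: ~3 - 7 days
  if (daysPending <= 14) return 2; // Slow: ~1 - 2 weeks
  return 1;                        // Very Slow: over 2 weeks
}

// Example: the 3-day default auto-approval window still counts as Fast here,
// while a requester who lets HITs ride to the 30-day maximum rates Very Slow.
console.log(approvalRating(3), approvalRating(30)); // 4 1
```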
TurkerViewJS is the engine behind TurkerView. An efficient collection process combined with a user-friendly interface encourages more frequent worker input & allows for the refinement of aggregate data in real time.
Our API also gives users access to real-time data about HITs and requesters. Users can feel confident knowing that our platform has vetted thousands of requesters who treat workers fairly.
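For illustration only, a client call might look something like the sketch below. The endpoint URL, route, and response fields are hypothetical placeholders, not TurkerView's documented API; consult the real API documentation for actual routes and authentication.

```typescript
// Hypothetical client sketch: the URL, route, and response fields below are
// assumptions for illustration, NOT TurkerView's documented API.
interface RequesterSummary {
  requesterId: string;   // assumed field names
  hourlyAverage: number;
  reviewCount: number;
}

async function fetchRequesterSummary(requesterId: string): Promise<RequesterSummary> {
  // Placeholder endpoint; real routes, parameters, and auth belong to the actual API docs.
  const response = await fetch(`https://example.invalid/api/requesters/${requesterId}`);
  if (!response.ok) throw new Error(`Request failed with status ${response.status}`);
  return (await response.json()) as RequesterSummary;
}
```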
Unique Requesters reviewed by users on TurkerView
Individual Reviews available to TurkerView users
Awesome Users (and counting) in the TurkerView community