Using data shared by some of the most experienced workers on MTurk, users can gain insights into HITs that pay well and are safe to work on. Our users maintain some of the highest standards on the platform, with most holding over a 99.5% approval rate, so HITs that appear here will pay out when completed properly.
Actual hourly: $42.79/hr over 758 HITs (yes this is real)
TL;DR: big brainless bountiful batch
This batch pays well for the dumbest reason: the requester throws their entire list of object names at each image and checks which ones apply, so roughly 99% of the listed objects are not in the image. Most of the time, you can comfortably select 'No' for every object.
There is only one image per HIT, and most are typical everyday scenes. They're simple most of the time, and you can almost always figure out the setting of the image, which makes identifying the objects even easier.
They provide keybinds for the HIT, so you can rifle through the objects very quickly. With multiple tabs and the provided keybinds, you can make this batch extremely lucrative.
The requester has consistently posted work around 10AM and 9PM CST every day. When they post, they drop both big and small batches over several hours at a time, so work should be available for everyone.
All of the above is why the batch pays as well as it does, and it's also a good batch for padding your HIT numbers.
The requester uses a somewhat closed qual system: you're given a 'tutorial' HIT, and the HIT after that is what they use to screen for good workers. If you don't submit adequate work, you'll be locked out of the requester's HITs until they post a new HIT tagged [NEW BATCH]. This requester generally knows what they're doing and how to get the best data out of MTurk.
TurkerView is designed to bridge the gap between workers & requesters through data & communication.
This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards and color-code it so the data is easy to digest (a small sketch of this calculation follows the table below).
Color | Pay Range (Hourly) | Explanation |
---|---|---|
RED | < $7.25 / hr | Hourly averages below US Federal minimum wage |
ORANGE | $7.25 - $10.00 / hr | Hourly averages between Federal & highest statewide (CA) minimum wages. |
GREEN | > $10.00 / hr | Hourly averages above all US minimum wage standards |
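For illustration, here is a minimal TypeScript sketch of that calculation, assuming a reward in USD and a completion time in seconds; the function names are ours, not TurkerView's actual implementation, and the thresholds are taken directly from the table above.

```typescript
type PayBand = "RED" | "ORANGE" | "GREEN";

// Average hourly rate from a reward (USD) and a completion time (seconds).
function hourlyRate(rewardUsd: number, completionSeconds: number): number {
  return rewardUsd / (completionSeconds / 3600);
}

// Map an hourly rate onto the minimum-wage-based color bands from the table above.
function payBand(hourly: number): PayBand {
  if (hourly < 7.25) return "RED";     // below US Federal minimum wage
  if (hourly <= 10.0) return "ORANGE"; // between Federal and highest statewide (CA) minimums
  return "GREEN";                      // above all US minimum wage standards
}

// Example: a $0.50 HIT completed in 45 seconds averages $40.00/hr, which lands in GREEN.
const rate = hourlyRate(0.5, 45);
console.log(rate.toFixed(2), payBand(rate)); // "40.00 GREEN"
```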
Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.
Rating | Scale
---|---
Underpaid | 1 / 5
Low | 2 / 5
Fair | 3 / 5
Good | 4 / 5
Generous | 5 / 5
Communication is an underrated aspect of MTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable parts of the interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.
Rating | Scale
---|---
Unacceptable | 1 / 5
Poor | 2 / 5
Acceptable | 3 / 5
Good | 4 / 5
Excellent | 5 / 5
This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days; we've tried to base our ratings around those data points (a small sketch follows the table below).
Rating | Approval Time
---|---
Very Slow 1 / 5 | Over 2 weeks
Slow 2 / 5 | ~1 - 2 weeks
Average 3 / 5 | ~3 - 7 days
Fast 4 / 5 | ~1 - 3 days
Very Fast 5 / 5 | ~24 hours or less
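As a rough sketch of how those brackets could be applied in code, the snippet below buckets an observed approval delay; the thresholds mirror the table above, while the function name and units are our own assumptions rather than TurkerView's internals.

```typescript
type ApprovalRating = 1 | 2 | 3 | 4 | 5;

// Bucket an observed approval delay (hours spent pending) into the ratings above.
function approvalRating(hoursPending: number): ApprovalRating {
  const days = hoursPending / 24;
  if (days <= 1) return 5;  // Very Fast: ~24 hours or less
  if (days <= 3) return 4;  // Fast: ~1 - 3 days
  if (days <= 7) return 3;  // Average: ~3 - 7 days
  if (days <= 14) return 2; // Slow: ~1 - 2 weeks
  return 1;                 // Very Slow: over 2 weeks
}

// Example: a HIT that sits for the default 3-day auto-approval window rates as Fast.
console.log(approvalRating(72)); // 4
```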
TurkerViewJS is the engine behind TurkerView. An efficient collection process combined with a user-friendly interface encourages more frequent worker input & allows for the refinement of aggregate data in real time.
Our API also gives users access to real-time data about HITs and requesters. Users can feel confident knowing that our platform has vetted thousands of requesters who treat workers fairly.
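As a hedged sketch of what consuming that data might look like, the snippet below fetches a requester summary; the endpoint path, response shape, and auth header here are assumptions for illustration only, not the documented API contract.

```typescript
interface RequesterSummary {
  requesterId: string;
  hourlyAverage?: number; // aggregate hourly rate, if reported
  reviewCount?: number;   // number of worker reviews on file
}

// Hypothetical endpoint and header names -- check the real API docs before relying on these.
async function fetchRequester(requesterId: string, apiKey: string): Promise<RequesterSummary> {
  const res = await fetch(`https://view.turkerview.com/v1/requesters/${requesterId}`, {
    headers: { "X-API-Key": apiKey },
  });
  if (!res.ok) throw new Error(`TurkerView request failed: ${res.status}`);
  return (await res.json()) as RequesterSummary;
}
```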
- Unique Requesters have been reviewed by users on TurkerView
- Individual Reviews are available to TurkerView users
- Awesome Users (and counting) are part of the TurkerView community