TurkerView


MLDataLabeler

Dartmouth College
3,806 total reviews

MLDataLabeler Ratings

  • Workers feel this requester pays fairly
  • Poor Communication
  • Approves Slowly
  • No Rejections
  • No Blocks

MLDataLabeler Wage History


[Wage history chart]

Top Worker Reviews


JensenIsHere (Fast Reader)
Reviews: 736 | Points: 588 | Ratings: 40
HIT: Image classification task - $0.01
Pay: Unrated | Communication: Unrated | Approval: Approved
Wage: $18.00 / hour | Completion time: 00:00:02

Pros

Penny HITs that pay. You have to classify a short string of text by some criteria. For example, you may have to look for spaces before and after the text, or decide if it is punctuation or not. It's really easy. And thanks to their formatting, it only takes a few keystrokes and half a second to do each HIT. As long as you have a fast computer and a solid internet connection, you should have no problem making these pay.

Great for new Turkers because it builds up numbers while still paying well, unlike all of those terrible receipt transcription HITs. I got over 1000 HITs in under two hours, no problem.

Cons

You could very realistically hit the 3800 HIT/day cap while doing these...and only have $38 to show for it, too. Not good if you plan on Turking all day!
Jul 22, 2019 | 10 workers found this helpful.

legitimate11 (Average Pace)
Reviews: 51 | Points: 194 | Ratings: 37
HIT: Bounding boxes: Draw a single bounding box around the one-digit or two-digit number displayed on the left side of the front bump - $0.10
Pay: Unrated | Communication: Unacceptable | Approval: Approved
Wage: $45.00 / hour | Completion time: 00:00:08

Pros

Cons

The instructions on this HIT clearly stated not to put bounding boxes around any numbers that are not on the front bumper of the vehicle. Unfortunately a large majority of the pictures that came up did not even have the front bumper in view and so I had to click "nothing to label" for about 85% of the ones I submitted.
After doing about 300 of them I was granted the permanent "Workerblockedduetopoorquality" qualification, meaning I won't be able to work on MLDataLabeler HITs ever again.
This was extremely unfair as I was following the instructions as they were written.

From what I have read from other workers, you can never get this block qualification reversed as the requester never even responds to E-mails.

I E-mailed them and will update if they respond, but I'm definitely not counting on it. If there are no updates, they never answered.

Advice to Requester

This batch of HITs should have been looked at before posting over 1000 of them. There were HUNDREDS of photos that did not fit the description of what was to be labeled; that was a mistake on YOUR end, not mine. It's extremely unfair to permanently block workers from doing HITs when I was following the instructions AS WRITTEN.
You should also respond to people's e-mails when they e-mail you. It's common courtesy.
May 24, 2022 | 4 workers found this helpful.

hvnnvr (Average Pace)
Reviews: 645 | Points: 1,556 | Ratings: 224
HIT: What do you see: a Musical Instrument a Fruit a Cheetah a Tiger a Snowman? - $0.01
Pay: Good | Communication: Unrated | Approval: Pending
Wage: $12.00 / hour | Completion time: 00:00:03

Pros

The usual MLDataLabeler HIT consisting of image classification by identifying the item within the image.

An interesting combination of items to identify from within the image (Musical Instrument / Fruit / Cheetah / Tiger / Snowman)

Bonus content for an inappropriate snowman idea if it would ever snow here enough to build a snowman.

Cons

Nearly all MLDataLabeler HITs have a five day approval time frame.

The requester posts the same HITs under different Group IDs, and under multiple Requester IDs: (A1LWCCSJWAGBW / A3OO8KUVWDSZPV / AGDU89ZM13M8C)
Jan 14, 2019 | 5 workers found this helpful.


MLDataLabeler
Requester ID: A1LWCCSJWAGBW

Recently Reviewed HITs


  • 3D print failure detection
  • Annotate Clothing Apparel Labels
  • Annotation task to label Grocery Attributes in text
  • Answer a question about a scene.
  • Answer Validation

Ratings Legend


Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then color-code that number against a simple range based on US minimum wage standards, so the data is easy to digest at a glance.

Color    Pay Range (Hourly)     Explanation
RED      < $7.25 / hr           Hourly averages below the US federal minimum wage
ORANGE   $7.25 - $10.00 / hr    Hourly averages between the federal and highest statewide (CA) minimum wages
GREEN    > $10.00 / hr          Hourly averages above all US minimum wage standards
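As a rough sketch (not TurkerView's actual code), the calculation and color-coding described above could look like this; the function names are illustrative, and the thresholds are the $7.25 federal and $10.00 highest-statewide figures from the table:

```python
# Hypothetical sketch of the wage aggregate color-coding described above.
FEDERAL_MIN = 7.25        # US federal minimum wage, $/hr
HIGHEST_STATE_MIN = 10.00  # highest statewide (CA) minimum cited in the table

def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
    """Average hourly rate: reward scaled up to one hour of work."""
    return reward_usd * 3600 / completion_seconds

def wage_color(rate: float) -> str:
    """Map an hourly rate onto the RED/ORANGE/GREEN bands in the table."""
    if rate < FEDERAL_MIN:
        return "RED"
    if rate <= HIGHEST_STATE_MIN:
        return "ORANGE"
    return "GREEN"

# e.g. a $0.01 HIT finished in 2 seconds averages $18.00/hr, i.e. GREEN
```

This matches the review data above: the $0.01 image classification HIT completed in 2 seconds shows as $18.00/hour.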

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters and Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants approval-time ratings mixed up with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This gives a more straightforward picture of how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days and the maximum is 30 days; we've based our ratings around those data points.

Icon        Rating   Approval Time
Very Slow   1 / 5    Over 2 weeks
Slow        2 / 5    ~1 - 2 weeks
Average     3 / 5    ~3 - 7 days
Fast        4 / 5    ~1 - 3 days
Very Fast   5 / 5    ~24 hours or less
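A hedged sketch of how the bands above might map a pending time onto the 1-5 scale; the cutoffs follow the approximate ("~") boundaries in the table, and the function name is illustrative rather than anything TurkerView exposes:

```python
# Illustrative mapping of approval time (in days) to the rating bands above.
def approval_rating(days: float) -> tuple[int, str]:
    """Return (score, label) for a HIT that sat pending `days` days."""
    if days <= 1:
        return 5, "Very Fast"   # ~24 hours or less
    if days <= 3:
        return 4, "Fast"        # ~1 - 3 days
    if days <= 7:
        return 3, "Average"     # ~3 - 7 days
    if days <= 14:
        return 2, "Slow"        # ~1 - 2 weeks
    return 1, "Very Slow"       # over 2 weeks

# e.g. a HIT approved after 5 days falls in the "Average" band
```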


© 2025 TurkerView