TurkerView

Picture It MIT

No Institutional Affiliation
  • Overview
  • Reviews: 50
  • HITs: 15

Picture It MIT Ratings


  • Workers feel this requester pays fairly
  • Good Communication
  • Approves Quickly
  • Rejections Reported
  • No Blocks

Picture It MIT Wage History


Heads up! We'll never hide reviews unless they violate our Terms of Service.

Top Worker Reviews


ChrisTurk (Fast Reader)
Reviews: 2,099 | Points: 6,465 | Ratings: 2,069
Label Images to Help Researchers Make Robots Smarter - $0.50

Underpaid | Unrated | Approved
$4.18 / hour | 00:07:11 completion time

Pros

Cons

Required to write 3 objects in 30 images, many images contain 1 (at best) blurry object, and apparently there are "quality control" images with "known answers" but what they were I have no clue because all 30 of my images were trash.

I mean that literally, a ton of my images had nothing more distinguishable than actual garbage lol.

If you enter a word their system doesn't recognize, it yells at you with an annoying JavaScript dialog box until you decipher their mysterious dictionary, which doesn't include words like "dvd" or "spiderman" even if they are the main object in the image.
May 23, 2019 | 5 workers found this helpful.

Morgainne (Proficient Worker)
Reviews: 12,070 | Points: 11,186 | Ratings: 700
Watch short videos and tell us what object you see to help researchers understand the brain - $5.25

Fair | Unrated | Approved
$15.00 / hour | 00:21:00 completion time

Pros

Long timer.

Cons

The HIT page states we have to evaluate 50 images, but it's more like 125. I am concerned about a rejection because this was very difficult. This felt almost impossible to get right and may also cause eyestrain. I should have waited to do this, because now I need a long break before I attempt other HITs. My eyes are basically shot from doing this.
Nov 5, 2021 | 7 workers found this helpful.

DareAngel3 (Fast Reader)
Reviews: 12,539 | Points: 11,042 | Ratings: 1,409
Watch short videos and tell us what object you see to help researchers understand the brain - $2.50

Fair | Unrated | Pending
$10.61 / hour | 00:14:08 completion time

Pros

Not actually videos at all, but still images shown for a limited time. Progress counter on the side.

Cons

50 trials, some longer than others. The answers change location, which makes it tedious to scan.
Jul 7, 2021 | 1 worker found this helpful.


Picture It MIT


Requester ID: A1JUYZKK4D6IHO

Recently Reviewed HITs


  • Capture Images Using an Android Phone to Help Researchers Make Robots Smarter
  • How quickly can you recognize an object in an image?
  • Image Flashing Follow up
  • Label Images to Help Researchers Make Robots Smarter
  • Tell us which image caption matches best.

Ratings Legend

  • Wage Aggregates
  • Reward Sentiment
  • Communication Scores
  • Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards and color-code the data so it's easy to digest at a glance.

Color | Pay Range (Hourly) | Explanation
RED | < $7.25 / hr | Hourly averages below the US federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the federal and highest statewide (CA) minimum wages
GREEN | > $10.00 / hr | Hourly averages above all US minimum wage standards
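As a rough illustration of the calculation described above, here is a minimal Python sketch (not TurkerView's actual code; the names `hourly_rate` and `wage_color` are invented for this example) that computes an average hourly rate from a reward and a completion time and maps it onto the color bands in the table:

```python
from datetime import timedelta

def hourly_rate(reward_usd: float, completion_time: timedelta) -> float:
    """Average hourly rate: reward divided by completion time in hours."""
    hours = completion_time.total_seconds() / 3600
    return reward_usd / hours

def wage_color(rate_per_hour: float) -> str:
    """Map an hourly rate onto the color bands from the table above."""
    if rate_per_hour < 7.25:       # below US federal minimum wage
        return "RED"
    elif rate_per_hour <= 10.00:   # between federal and highest statewide (CA) minimum wage
        return "ORANGE"
    return "GREEN"                 # above all US minimum wage standards

# Example: the $0.50 labeling HIT reviewed above took 00:07:11.
rate = hourly_rate(0.50, timedelta(minutes=7, seconds=11))
print(f"${rate:.2f}/hr -> {wage_color(rate)}")  # prints roughly $4.18/hr -> RED
```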

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion are all valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of them. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days. We've tried to base our ratings around those data points.

Icon | Rating | Approval Time
Very Slow | 1 / 5 | Over 2 weeks
Slow | 2 / 5 | ~1 - 2 weeks
Average | 3 / 5 | ~3 - 7 days
Fast | 4 / 5 | ~1 - 3 days
Very Fast | 5 / 5 | ~24 hours or less
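To make those bands concrete, here is a small Python sketch (again not TurkerView's code; `approval_rating` is a hypothetical helper) that maps how long a HIT sat pending onto the 1-5 ratings in the table above:

```python
from datetime import timedelta

def approval_rating(pending_time: timedelta) -> tuple[int, str]:
    """Map how long a HIT sat pending onto the 1-5 approval-time bands above."""
    days = pending_time.total_seconds() / 86400
    if days <= 1:
        return 5, "Very Fast"   # ~24 hours or less
    if days <= 3:
        return 4, "Fast"        # ~1 - 3 days
    if days <= 7:
        return 3, "Average"     # ~3 - 7 days
    if days <= 14:
        return 2, "Slow"        # ~1 - 2 weeks
    return 1, "Very Slow"       # over 2 weeks

print(approval_rating(timedelta(days=2)))   # (4, 'Fast')
print(approval_rating(timedelta(days=30)))  # (1, 'Very Slow'), MTurk's maximum auto-approval window
```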
