TurkerView

Coord

No Institutional Affiliation
  • Reviews: 45
  • HITs: 1

Coord Ratings


  • Workers feel this requester pays fairly
  • Good Communication
  • Approves Quickly
  • No Rejections
  • No Blocks

Coord Wage History


(Wage history chart not reproduced here.)

Top Worker Reviews


Lizj87 (Average Pace) — Reviews: 740 | Points: 1,550 | Ratings: 167
Parking Sign Transcription - $0.08

Pay Sentiment: Low | Communication: Unrated | Status: Approved
$2.97/hour | 00:01:37 completion time

Pros

I like these. I have a soft spot for them because this was one of the first Quals I got when I really started to understand mTurk.

Previous batches of these were well paid: the same 8 cents per HIT, but the cities had fewer restrictions and the signs listed the same hours throughout. You could pretty much copy and paste, just tweaking details. Previously I averaged about 45 seconds per HIT; this batch is 2 minutes.

The HITs generally approve quickly and I've never had a rejection, even though on a couple I've hit submit and then realized something was wrong.

Cons

This latest batch (Philadelphia) works out wildly underpaid because the city just loves their parking regulations! At least 4 sections to each sign, different times during the week and on weekends, and different times all over the city. This batch is hard!

I also really hate the interface for the pictures. I've started opening the image in a separate window and working split screen - that's much faster.

Advice to Requester

Adjust the fee when cities have outrageous numbers of restrictions per sign.
Jun 5, 2019 | 4 workers found this helpful.

MechaTurk (Average Pace) — Reviews: 7,981 | Points: 7,797 | Ratings: 418
Parking Sign Transcription - $0.08

Pay Sentiment: Fair | Communication: Excellent | Status: Approved
$9.93/hour | 00:00:29 completion time

Pros

Made about $10 per hour cherry picking 122 of these HITs.
Edit: The auto-approvals stopped and I had 90ish left, which went unapproved for going on 24 hours. I reached out to see if there was an issue, and they responded right away.

Cons

EXERCISE EXTREME CAUTION; THEY WILL REJECT. The first batch I participated in earned me one rejection, which I was able to have overturned since it was due to an error on their end. This last round I received two rejections, but I was completely at fault. Two out of 122 isn't bad, I guess, but it's difficult to maintain 100% accuracy given how fast one must speed through these to make any kind of decent hourly rate.
Apr 24, 2019

dgrochester55 (Average Pace) — Reviews: 668 | Points: 2,322 | Ratings: 508
Parking Sign Transcription - $0.08

Pay Sentiment: Unrated | Communication: Unacceptable | Status: Approved
$—/hour | 00:00:45 completion time

Pros

Approval within minutes

Cons

Pay is low overall; a few can be done at around $12 an hour, but most are $2 to $6, and some have as many as five signs.

More importantly, I had one rejection in my 30 HITs for something where I thought I followed a rule in the instructions (write on signs only). No communication after two emails.

Very surprised, since this is a closed qualification.

Advice to Requester

1) Pay more in general; at least add bonuses for HITs with multiple signs, or people will cherry-pick.

2) You are missing the point of a closed qualification. If the pay is low, you are rejection-prone, and you do not communicate, what makes you any better than the batches people avoid every day?
Jul 18, 2019 | 1 worker found this helpful.


Coord
Requester ID: A16DZ68UZ1WWE3

Recently Reviewed HITs


Parking Sign Transcription

Ratings Legend

  • Wage Aggregates
  • Reward Sentiment
  • Communication Scores
  • Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then place that number into a simple range based on US minimum-wage standards to color-code the data, making the numbers easy to digest.

Color    Pay Range (Hourly)    Explanation
RED      < $7.25/hr            Hourly averages below the US federal minimum wage
ORANGE   $7.25 - $10.00/hr     Hourly averages between the federal and the highest statewide (CA) minimum wage
GREEN    > $10.00/hr           Hourly averages above all US minimum-wage standards
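As a rough sketch, the calculation and color bucketing above could be encoded like this (the function names are illustrative, not TurkerView's actual implementation; the thresholds come from the table):

```python
def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
    """Average hourly rate from a HIT's reward and completion time."""
    return reward_usd * 3600.0 / completion_seconds

def wage_color(rate: float) -> str:
    """Color-code an hourly rate using the minimum-wage ranges above."""
    if rate < 7.25:        # below US federal minimum wage
        return "RED"
    elif rate <= 10.00:    # between federal and highest statewide (CA) minimums
        return "ORANGE"
    return "GREEN"         # above all US minimum-wage standards

# e.g. the $0.08 parking-sign HIT at a 29-second completion time:
rate = hourly_rate(0.08, 29)   # ~= $9.93/hr, matching the review above
print(wage_color(rate))        # ORANGE
```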

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Underpaid (1/5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2/5)
  • Below US minimum wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair (3/5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4/5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5/5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions, well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters and Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Unacceptable (1/5)
  • No response at all
  • Rude response without a resolution
Poor (2/5)
  • Responsive, but unhelpful
  • Required IRB or other extra intervention
Acceptable (3/5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4/5)
  • Prompt response
  • Positive resolution
Excellent (5/5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants approval-time ratings mixed in with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This gives a more straightforward sense of how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days; we've tried to base our ratings around those data points.

Rating       Score    Approval Time
Very Slow    1/5      Over 2 weeks
Slow         2/5      ~1 - 2 weeks
Average      3/5      ~3 - 7 days
Fast         4/5      ~1 - 3 days
Very Fast    5/5      ~24 hours or less
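The bucketing above is a simple threshold lookup. A minimal Python sketch, assuming hours as the input unit (the function name is our own, not part of TurkerView):

```python
def approval_rating(pending_hours: float) -> str:
    """Map how long a HIT sat pending (in hours) to the rating buckets above."""
    if pending_hours <= 24:
        return "Very Fast"    # ~24 hours or less
    elif pending_hours <= 3 * 24:
        return "Fast"         # ~1 - 3 days
    elif pending_hours <= 7 * 24:
        return "Average"      # ~3 - 7 days
    elif pending_hours <= 14 * 24:
        return "Slow"         # ~1 - 2 weeks
    return "Very Slow"        # over 2 weeks

# The 3-day MTurk default auto-approval lands in the "Fast" bucket:
print(approval_rating(72))    # Fast
```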


© 2025 TurkerView