TurkerView

Bryan Hatton

No Institutional Affiliation

Bryan Hatton Ratings

  • Workers feel this requester pays poorly
  • Unrated
  • Unrated
  • No Rejections
  • No Blocks

Bryan Hatton Wage History

[Wage history chart: not rendered]

Top Worker Reviews


AfterDarkMark (Average Pace)
Reviews: 1,253
Points: 4,979
Ratings: 499
Categorize teachers according to what grade level they teach - $0.10

  • Reward Sentiment: Low
  • Communication: Unrated
  • Approval: Pending
  • Wage: $6.00 / hour
  • Completion time: 00:01:00

Pros

EDIT: changing this to "alright filler work". If done correctly, a lot of these people can't be found, and on top of that, determining the exact grade level can be tricky. There's a lot of searching through pages for these, and sometimes people no longer work at certain schools or have changed roles. They average about a minute apiece if done correctly. If done incorrectly, just choosing based on the listed school, you can do them quickly, but I'd say more than half the time when the school is a high school, middle school, or elementary school, the person actually isn't a teacher but an administrator, IT specialist, etc. This looked like a great batch at first, but it's just alright, or kind of weak, if you aren't just clicking based on the school listed (which you shouldn't do, because it'll end up screwing us all long-term, even if it works sometimes).

Cons

Based on how quickly the batch is disappearing, it seems like a lot of workers must be cheating and just seeing a high school or middle school listed and choosing that option. It really sucks, because stuff like this can ruin it for all of us, especially in a batch like this where a lot of the people aren't teachers but IT specialists, administrators, principals, etc. So I'm kind of concerned about rejections at this point if people are just flying through like I said, choosing the grade level based on the school they see without looking it up.

Advice to Requester

Gave up on your batch because it was clear a lot of workers were just selecting the grade level of the school listed; there's really no way for the batch to be disappearing this fast otherwise. It's a shame, because it means good workers' data likely won't match that of the people who are "cheating" to work faster, and since those workers are doing more of the tasks, even if a few HITs were posted for each person, it's more likely that two out of three will come from workers doing them incorrectly. So the people actually taking the time to verify that the person is a teacher in the first place are the ones more likely to be rejected.

Hopefully you block workers who were doing them too quickly, because your data on this batch is likely useless, which is really unfortunate. Of the ones I did, a large percentage weren't even teachers, and workers just selecting high school when they saw a high school (or elementary when they saw elementary) would not have caught those, which were a large share of the ones I did.

Good luck in the future. Hopefully you included some HITs where you knew "other" should have been selected even though a high school, elementary school, etc. was listed; if not, that would be another way to catch bad workers in the future.
Jan 1, 2019

Calexit (Average Pace)
Reviews: 2,419
Points: 3,732
Ratings: 365
Categorize teachers according to what grade level they teach - $0.10

  • Reward Sentiment: Unrated
  • Communication: Unrated
  • Approval: Pending
  • Wage: $8.18 / hour
  • Completion time: 00:00:44

Pros

New batch private test

Cons

Returned more than I completed because Google couldn't find them.
Dec 24, 2018


Bryan Hatton
Requester ID: A2JENPWL8DW97V

Recently Reviewed HITs


Categorize teachers according to what grade level they teach

Ratings Legend

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then place that number on a simple scale based on US minimum wage standards and color-code it so the numbers are easy to digest at a glance (a sketch of the calculation follows the table below).

Color    Pay Range (Hourly)    Explanation
RED      < $7.25 / hr          Hourly averages below US Federal minimum wage
ORANGE   $7.25 - $10.00 / hr   Hourly averages between the Federal and highest statewide (CA) minimum wages
GREEN    > $10.00 / hr         Hourly averages above all US minimum wage standards
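
To make the arithmetic concrete, here is a minimal Python sketch of the calculation described above. The function names and the exact boundary handling are our own illustration (the table leaves boundary behavior unspecified); this is not TurkerView's actual code.

    # Illustrative sketch only; not TurkerView's implementation.

    def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
        """Average hourly rate: reward divided by completion time in hours."""
        return reward_usd / (completion_seconds / 3600.0)

    def wage_color(rate: float) -> str:
        """Bucket an hourly rate using the minimum-wage thresholds above."""
        if rate < 7.25:
            return "RED"     # below US Federal minimum wage
        if rate <= 10.00:    # boundary treatment is our assumption
            return "ORANGE"  # between Federal and highest statewide (CA) minimums
        return "GREEN"       # above all US minimum wage standards

    # The figures in the reviews above check out:
    print(wage_color(hourly_rate(0.10, 60)))  # $0.10 in 1:00 -> $6.00/hr -> RED
    print(wage_color(hourly_rate(0.10, 44)))  # $0.10 in 0:44 -> ~$8.18/hr -> ORANGE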

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump the rating up a bit for something so adorable. Ten hours locked in Inquisit? Even at $10/hr, many workers would appreciate a heads-up on such a task. The Reward Sentiment rating helps connect workers beyond the hard data.

Underpaid (1 / 5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2 / 5)
  • Below US minimum wage ($7.25/hr)
  • No redeeming qualities to make up for the pay
Fair (3 / 5)
  • Minimum wage for the task (consider self-employment taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4 / 5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5 / 5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions and a well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable parts of the interaction between Requesters & Workers, and it's worth keeping track of them. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Unacceptable (1 / 5)
  • No response at all
  • Rude response without a resolution
Poor (2 / 5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3 / 5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4 / 5)
  • Prompt response
  • Positive resolution
Excellent (5 / 5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants approval-time ratings mixed up with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This gives a more straightforward sense of how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days; we've based our ratings around those data points (a small sketch follows the table below).

Rating               Approval Time
Very Slow (1 / 5)    Over 2 weeks
Slow (2 / 5)         ~1 - 2 weeks
Average (3 / 5)      ~3 - 7 days
Fast (4 / 5)         ~1 - 3 days
Very Fast (5 / 5)    ~24 hours or less
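
In the same spirit, a hedged Python sketch mapping an observed approval time onto this 1-5 scale; the cutoffs follow the table above, but the helper name and exact boundary choices are assumptions, not TurkerView's code.

    # Hypothetical mapping; cutoffs follow the table above.

    def approval_rating(approval_hours: float) -> int:
        """Map an observed approval time (in hours) to the 1-5 rating."""
        days = approval_hours / 24.0
        if days <= 1:
            return 5  # Very Fast: ~24 hours or less
        if days <= 3:
            return 4  # Fast: ~1 - 3 days
        if days <= 7:
            return 3  # Average: ~3 - 7 days
        if days <= 14:
            return 2  # Slow: ~1 - 2 weeks
        return 1      # Very Slow: over 2 weeks

    print(approval_rating(18))   # 5: same-day approval
    print(approval_rating(240))  # 2: approved after ten days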
