TurkerView

RobertEmergent

No Institutional Affiliation

RobertEmergent Ratings

  • Workers feel this requester pays fairly
  • Unrated
  • Unrated
  • No Rejections
  • No Blocks

RobertEmergent Wage History


Heads up! We'll never hide reviews unless they violate our Terms of Service.

Top Worker Reviews


ChrisTurk Fast Reader
Reviews: 2,099
Points: 6,465
Ratings: 2,069
Label tweets in less than a minute! - $0.03

  • Good
  • Unrated
  • Pending
  • $18.00 / hour
  • 00:00:06 completion time

Pros

7 day AA
Pay is good if you are proficient with whipping up a quick script or are fine with tab/space.
Almost all HITs can be completed in 4-7s.

Cons

Maybe 1-2% of HITs have a tweet that needs to be labeled, which can take a few extra seconds but it was super rare for me.

The layout is pretty painful to work through since a lot of miscellaneous space/data is added that takes focus away from the text that needs to be judged.

The HIT instructions claim to rely on majority rules; the HITs require English proficiency but have no location qualifications attached to ensure your peers actually understand the Tweets being labeled. Given the suspect nature of most Tweet contents anyway (broken/slang English) & the lopsided nature of the dataset it might not matter, but it also opens the door to rejections for completing work properly if you catch a fringe case and others are blowing through them. I shot the Requester a message and haven't heard back yet, but this con is largely negated if they do reply.

Quoted from instructions:
"Note that we are having multiple workers to work on the same tweet. If you randomly label tweets which leads to your labeling being different from the majority, your submission may be rejected."

Advice to Requester

Add location qualifications to slim the worker pool to native English-speaking countries (US, CA, UK, NZ, AU is what I see most often) if you're going to reject workers for shoddy work. It lowers the variance on majority rules, IMO.
Remove the redundant questioning surrounding the tweet, or enhance the tweet text itself to bring focus/attention to the data being judged.
Remove the # identifier before the tweets; it's distracting and not useful information for the worker. Hidden inputs, or simply wrapping them in a hidden block, would be fine.
Oct 23, 2017 | 1 worker found this helpful.
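The location-qualification advice above can be sketched against the MTurk API via boto3. This is a hypothetical illustration, not the requester's actual setup; the qualification type ID shown is MTurk's system-defined Worker_Locale qualification.

```python
# Hypothetical sketch: restrict a HIT to native English-speaking countries
# using MTurk's built-in Worker_Locale qualification.

# System-defined Worker_Locale qualification type ID on MTurk:
LOCALE_QUAL_ID = "00000000000000000071"

def locale_requirement(countries):
    """Build a QualificationRequirement limiting workers to `countries`."""
    return {
        "QualificationTypeId": LOCALE_QUAL_ID,
        "Comparator": "In",
        "LocaleValues": [{"Country": c} for c in countries],
    }

# 'GB' is the ISO country code MTurk uses for the UK.
req = locale_requirement(["US", "CA", "GB", "NZ", "AU"])

# Passed to create_hit (requires AWS credentials, so not run here):
# import boto3
# mturk = boto3.client("mturk", region_name="us-east-1")
# mturk.create_hit(..., QualificationRequirements=[req])
```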

slothbear Fast Reader
Reviews: 1,878
Points: 1,249
Ratings: 244
Share a group game involving physical movements - $0.05

  • Unrated
  • Unrated
  • Pending
  • $5.63 / hour
  • 00:00:32 completion time
Feb 7, 2018

PastMorning Average Pace
Reviews: 162
Points: 52
Ratings: 4
Share a group game involving physical movements - $0.05

  • Underpaid
  • Unrated
  • Pending
  • $3.21 / hour
  • 00:00:56 completion time
Feb 7, 2018


RobertEmergent

Requester ID: A2HQZU8B7QLJ57

Recently Reviewed HITs

  • Label tweets in less than a minute!
  • Share a group game involving physical movements

Ratings Legend

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then color-code that number against a simple range based on US minimum wage standards, so the data is easy to digest at a glance.

Color  | Pay Range (Hourly)   | Explanation
RED    | < $7.25 / hr         | Hourly averages below US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr  | Hourly averages between Federal & highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr        | Hourly averages above all US minimum wage standards
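The calculation and color-coding described above can be sketched in a few lines. This is a hypothetical reconstruction of the logic, not TurkerView's actual code; the $0.03 / 6-second figures come from the tweet-labeling review on this page.

```python
# Hypothetical sketch of the wage-aggregate calculation: reward divided by
# completion time, scaled to an hour, then color-coded by US minimum wage.
def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
    """Average hourly rate for a task from its reward and completion time."""
    return reward_usd / completion_seconds * 3600

def wage_color(rate: float) -> str:
    """Color-code an hourly rate against US minimum-wage thresholds."""
    if rate < 7.25:        # below US Federal minimum wage
        return "RED"
    if rate <= 10.00:      # between Federal and highest statewide (CA) minimums
        return "ORANGE"
    return "GREEN"         # above all US minimum wage standards

# The $0.03 tweet-labeling HIT at a 6-second completion time:
rate = hourly_rate(0.03, 6)   # 18.0
print(wage_color(rate))       # GREEN
```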

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Reward Sentiment rating helps connect workers beyond the hard data.

Underpaid (1 / 5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2 / 5)
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair (3 / 5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4 / 5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5 / 5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions, well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion are all valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of them. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Unacceptable (1 / 5)
  • No response at all
  • Rude response without a resolution
Poor (2 / 5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3 / 5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4 / 5)
  • Prompt response
  • Positive resolution
Excellent (5 / 5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know roughly how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've based our ratings around those data points.

Rating             | Approval Time
Very Slow (1 / 5)  | Over 2 weeks
Slow (2 / 5)       | ~1 - 2 weeks
Average (3 / 5)    | ~3 - 7 days
Fast (4 / 5)       | ~1 - 3 days
Very Fast (5 / 5)  | ~24 hours or less
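The thresholds above can be sketched as a simple mapping. This is a hypothetical helper for illustration, not TurkerView's actual code, and it treats the approximate ranges as hard cutoffs.

```python
# Hypothetical mapping from a requester's typical approval time (in hours)
# to the 1-5 approval rating in the table above.
def approval_rating(hours: float) -> tuple[int, str]:
    """Return (score, label) for an average approval time."""
    if hours <= 24:            # ~24 hours or less
        return 5, "Very Fast"
    if hours <= 3 * 24:        # ~1 - 3 days
        return 4, "Fast"
    if hours <= 7 * 24:        # ~3 - 7 days
        return 3, "Average"
    if hours <= 14 * 24:       # ~1 - 2 weeks
        return 2, "Slow"
    return 1, "Very Slow"      # over 2 weeks

print(approval_rating(12))     # (5, 'Very Fast')
```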

