TurkerView
  • Requesters
  • Institutions
  • Scripts
  • Queuebicle
  • API
  • Qualifeye
  • Forum
  • Search
  • Login


CIIR Mechanical Turk

No Institutional Affiliation
  • Overview
  • Reviews: 16
  • HITs: 59

CIIR Mechanical Turk Ratings


  • Reward sentiment: Workers feel this requester pays poorly
  • Communication: Unrated
  • Approval speed: Approves Quickly
  • Rejections: None reported
  • Blocks: None reported

CIIR Mechanical Turk Wage History



Top Worker Reviews


ChrisTurk Fast Reader
Reviews: 2,099
Points: 6,465
Ratings: 2,069
Choosing the quality score for an answer in response to a question. - $0.07

Low

Unrated

Approved

$5.60 / hour

00:00:45 / completion time

Pros

It's a University-affiliated project, so it's most likely safe.

Incredibly detailed quals (location, approved minimum, approval %, and a bad-worker disqual) suggest the requester knows what they're doing on the platform, and definitely give a good indication these are "safe" HITs to do, since they intend to disqualify workers who do a poor job instead of rejecting/blocking them. That's a huge plus.

The task is formatted nicely enough, though it could use some improvements. Simply zooming out fixes a lot of the issues w/ the task being too big for the screen.

Instructions are daunting, but incredibly detailed. Once internalized, they don't really require going back to reference. The tasks are mostly making judgements on English Q&A sessions, so there is plenty of room for nuance and your own decision making, IMO. Again, the requester seems ready for that and has set up the task so they can simply disqualify anyone who doesn't align with their expected answer ranges, so there's no reason not to give it a shot.

Cons

The hourly is, frankly, crap. With a script these would be in the min-wage area, but they require high levels of English proficiency, and then you have to parse out what is often insane internet-speak for some of the questions.

Even then, the workload is very high, and it's annoying that the requester has shoved 4 tasks into one HIT. Honestly, if they cut it down to 3, these would probably be a fair filler batch, though I realistically think it should be 2 at this price.

Honestly, the requester is most likely paying poorly because a portion of the budget will be tossed into a fire pit to screen out bad workers. This should hopefully move to a closed qualification if it's an ongoing project, and the reward could be fixed by eliminating that waste.

Advice to Requester

Switch to a closed qual after analyzing your first batch of results, then reduce the task load per HIT so the hourly wage is fairer. You'll save money by not needing to waste part of the budget approving bad work, and workers will receive a fair reward for their effort.
Jan 14, 2019 | 5 workers found this helpful.

AfterDarkMark Average Pace
Reviews: 1,253
Points: 4,976
Ratings: 494
Choosing the quality score for an answer in response to a question. - $0.07

Underpaid

Unrated

Pending

$2.80 / hour

00:01:30 / completion time

Pros

None really. Interesting I guess. If they paid better they would be worth it.

Cons

Reading the full instructions takes forever. I kind of wanted to test the water though, since it's late and I have seen these a couple of times.

First, there is a coding issue or something that makes some of these undoable right now, or at least unsubmittable with the right answers, so beware: sometimes selecting certain bubbles will not let you go on to the next task (there are four tasks per HIT). It seems to be kind of random; as I was testing this defect, sometimes the blocked bubble corresponded to the right answer or was one off from the right answer, and sometimes it was the opposite bubbles.

That being said, these are really underpaid. 1:30 per HIT is an estimate of an average time if you were cruising. You might be able to go a little faster by skimming, but even at a minute or a bit more, they are underpaid. If there weren't four tasks per HIT, these would be great. It kind of depends on the length of the responses you get, too, as some are brief and others are lengthy paragraphs or more.

I did a few just to see if they get approved. But even when I did one in a minute, there seemed to be another later that took closer to 2 minutes, so IDK. If approvals are almost guaranteed, then I guess I could skim and do them in 45 seconds to a minute on average, but I really don't like taking that chance on a new batch.

These might be alright filler HITs, but honestly pay like this is just really shitty, and I hope workers stay away as much as possible until this requester comes up on wages. It does happen. As I try to say here and there, wages do move sometimes if requesters aren't getting what they need quickly enough, so we actually do have some power.

Advice to Requester

Pay fairly. Fix the bubble issue.
Jan 14, 2019 | 1 worker found this helpful.

angel Proficient Worker
Reviews: 288
Points: 977
Ratings: 157
Choosing the quality score for an answer in response to a question. - $0.07

Unrated

Unrated

Pending

$3.36 / hour

00:01:15 / completion time

Pros

easy

Cons

Underpaid.
Even if you speed-read, these are underpaid; if you read slowly, it is going to be worse.
If these were $0.15 - $0.20 a HIT, they would at least be in the $7 an hour range and okay filler.
Jan 28, 2019


CIIR Mechanical Turk


Requester ID: AEIS1XBEG8X4L

Recently Reviewed HITs


  • Answer questions about Tweets related to COVID-19
  • Choosing the quality score for an answer in response to a question.
  • COVID-19 claims on Twitter
  • Determine the similarity between answers corresponding to a question.
  • Determine the similarity between two answers corresponding to a question.

Ratings Legend


Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards, color-coding the data so the numbers are easy to digest.

Color | Pay Range (Hourly) | Explanation
RED | < $7.25 / hr | Hourly averages below US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the Federal and highest statewide (CA) minimum wages
GREEN | > $10.00 / hr | Hourly averages above all US minimum wage standards
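The calculation above can be sketched in a few lines of Python. This is my own reconstruction of the described logic, not TurkerView's actual code; the thresholds come from the color table, and the example numbers ($0.07 reward, 45-second completion) come from the first review on this page.

```python
def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
    """Average hourly rate implied by one HIT's reward and completion time."""
    return reward_usd / completion_seconds * 3600


def wage_color(rate: float) -> str:
    """Color-code an hourly rate against the US minimum wage bands above."""
    if rate < 7.25:
        return "RED"      # below US Federal minimum wage
    elif rate <= 10.00:
        return "ORANGE"   # between Federal and highest statewide (CA) minimums
    else:
        return "GREEN"    # above all US minimum wage standards


# Example from the reviews above: $0.07 reward, 45-second completion time.
rate = hourly_rate(0.07, 45)
print(round(rate, 2), wage_color(rate))  # 5.6 RED
```

Note that the second review's numbers check out the same way: $0.07 over 1:30 works out to $2.80/hr, also RED.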

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even at $10/hr, many workers would appreciate the heads up on such a task. The Reward Sentiment rating helps connect workers beyond the hard data.

Rating | Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters and Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Rating | Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've tried to base our ratings around those data points.

Rating | Approval Time
Very Slow (1 / 5) | Over 2 weeks
Slow (2 / 5) | ~1 - 2 weeks
Average (3 / 5) | ~3 - 7 days
Fast (4 / 5) | ~1 - 3 days
Very Fast (5 / 5) | ~24 hours or less
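The buckets above can be sketched as a simple lookup. This is a hypothetical illustration of the rating scale, not site code; the boundaries are taken directly from the table, with each approval delay expressed in hours.

```python
def approval_rating(hours: float) -> tuple[int, str]:
    """Map an observed approval delay (in hours) to a 1-5 rating and label."""
    if hours <= 24:
        return 5, "Very Fast"
    elif hours <= 3 * 24:
        return 4, "Fast"
    elif hours <= 7 * 24:
        return 3, "Average"
    elif hours <= 14 * 24:
        return 2, "Slow"
    else:
        return 1, "Very Slow"


# The default MTurk auto-approval window is 3 days (72 hours):
print(approval_rating(72))  # (4, 'Fast')
```

A requester who always lets the 30-day maximum auto-approval elapse (720 hours) would land in the "Very Slow" bucket.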


© 2025 TurkerView | Privacy | Terms | Blog | Contact