TurkerView


Research

No Institutional Affiliation
  • Overview
  • Reviews 94
  • HITs 272

Research Ratings


  • Workers feel this requester pays fairly
  • Unrated
  • Approves Quickly
  • Rejections Reported
  • No Blocks

Research Wage History



Top Worker Reviews


Yoyo (Average Pace) | Reviews: 68 | Points: 1,899 | Ratings: 186
Search Explanations Human Study - $1.00

Unrated | Unrated | Rejected (All) | $9.97 / hour | 00:06:01 completion time | HIT Rejected

Pros

Cons

Rejected for missing a hidden attention check, which feels unnecessary for HITs like this that appear to be qualifications for larger HITs. Especially since you have to pass a little quiz before you can even continue on to the actual work, which makes attention checks all the more questionable. (If you can pass the quiz, why is it necessary to make sure you still understand it later?)

Basically, in this HIT, you read a sentence, then read two additional sentences that are rewritten versions of it, and judge whether these rewritten sentences will still pull up the same article if plugged into a search engine based on 5 separate criteria, such as semantics and syntactics.

When answering these questions, each criterion is highlighted in blue; clicking it brings you back to the top of the page, where a paragraph of explanation and some examples show how the question works.

So, for example, if you're judging a sentence on "Fluency", you can click the word and it will bring you to an explanation. If you're judging it on "Clarity", you can click it and it will explain that, too. This is very necessary: by definition, if something is fluent it should be clear, so you'll want to understand the minutiae of how you're supposed to judge the sentences on each of these criteria.

My issue was that I spent so much time jumping back and forth between each question and the aspect I was judging it on (remember, all the explanations for how to judge the questions are at the very top of the page, and the questions are at the bottom, requiring you to scroll down a ways) that I missed the hidden check. It is set up like all the other questions (complete with a button to jump to the top and view the explanation for how to judge it) but at the very end says "(Choose 4)".

So technically my fault, but I feel this HIT could be better designed to avoid this. Especially since, while you're only judging 2 sentences for the whole HIT, you'll be doing a lot of back and forth trying to understand the criteria they want in order to give quality work. I actually would not have been rejected if I had answered every question other than that one randomly, which makes it all the more silly!

Advice to Requester

If you already have a quiz in order to judge if someone understands the HIT, no need to insert attention checks. Especially on a HIT that seems designed to judge whether people qualify for further HITs. That's just overkill.

Also, if you're expecting quality work, I think paying Workers more than $1 is fair, especially given how much reading is involved and how much time it might take them to understand the complexities of and differences between the types of judgements they're making. I completed it in a bit over 6 minutes and still missed an attention check, so for someone taking it even slower, in order to be super, SUPER careful, the work might not be worth it anymore.
Oct 11, 2022 | 4 workers found this helpful.

Marla (Relaxed Pace) | Reviews: 75 | Points: 176 | Ratings: 11
Draw bounding boxes around logos - $0.02

Fair | Unrated | Approved | $3.79 / hour | 00:00:19 completion time

Pros

First time working with this requester.
Because of the high approval rate, I took the plunge and did some over a few days.
On my first day I did 20+; they have now all been approved.

Pay could be better, but they are super quick to do and I enjoy them.

Cons

May 7, 2022

shakeita Jones prince (Careful Reader) | Reviews: 31 | Points: 128 | Ratings: 36
Generating Timelines from Crisis Domain Tweets- multiple hit - $3.00

Unrated | Unrated | Approved | $5.09 / hour | 00:35:22 completion time

Pros

Very interesting HIT; good pay, as a lot of attention is needed to complete it, and it comes with an interesting bonus for high-quality work.

Cons

The HIT can be hectic, as the first review can be lengthy, taking a whole lot of time to complete.
Jun 15, 2022


Research
Requester ID: A3EXQVVREKQGG8

Recently Reviewed HITs


  • [Qualification] Crisis Event Importance
  • Answer a questionnaire about news articles of events to human-right defenders
  • Can you categorize tweets?
  • Can you identify all the companies, people, and products in this text?
  • Check the boxes that apply to an image of a group of people.

Ratings Legend

  • Wage Aggregates
  • Reward Sentiment
  • Communication Scores
  • Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards to color-code the data, making it easy to digest.

Color    Pay Range (Hourly)    Explanation
RED      < $7.25 / hr          Hourly averages below the US Federal minimum wage
ORANGE   $7.25 - $10.00 / hr   Hourly averages between the Federal and highest statewide (CA) minimum wages
GREEN    > $10.00 / hr         Hourly averages above all US minimum wage standards
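The calculation above can be sketched in a few lines. This is a hypothetical helper for illustration, not TurkerView's actual code; the function names and thresholds are taken from the table above:

```python
def hourly_rate(reward_usd, completion_seconds):
    """Average hourly rate from a HIT's reward and completion time."""
    return reward_usd / (completion_seconds / 3600)

def wage_color(rate):
    """Color-code an hourly rate against US minimum wage thresholds."""
    if rate < 7.25:       # below the US Federal minimum wage
        return "RED"
    if rate <= 10.00:     # between Federal and highest statewide (CA) minimums
        return "ORANGE"
    return "GREEN"        # above all US minimum wage standards

# e.g. the $1.00 HIT completed in 00:06:01 reviewed above:
rate = hourly_rate(1.00, 6 * 60 + 1)
print(round(rate, 2), wage_color(rate))  # 9.97 ORANGE
```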

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters and Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've tried to base our ratings around those data points.

Icon Rating Approval Time
Very Slow 1 / 5 Over 2 weeks
Slow 2 / 5 ~1 - 2 Weeks
Average 3 / 5 ~3 - 7 Days
Fast 4 / 5 ~1 - 3 Days
Very Fast 5 / 5 ~24 hours or less


© 2025 TurkerView