TurkerView

Crowd4U

No Institutional Affiliation
Reviews: 113 | HITs: 273

Crowd4U Ratings

  • Workers feel this requester pays fairly
  • Poor Communication
  • Approves Quickly
  • Rejections Reported
  • No Blocks

(Ratings chart not reproduced.)

Crowd4U Wage History

(Wage history chart not reproduced.)

Heads up! We'll never hide reviews unless they violate our Terms of Service.

Top Worker Reviews


sarin (Unreliably Fast)
Reviews: 10,091 | Points: 10,085 | Ratings: 1,706
Label Task_#103 - $0.04

Pay sentiment: Low | Communication: Unrated | Status: Approved
$9.60 / hour | 00:00:15 completion time

Pros

  • Label 9 words on whether or not they are proper nouns
  • More HITs posted this time around
  • Most HITs have broken data, showing '0', 'Miss_data', or 'No_data', which makes the HIT go faster
  • Still instantly approves

Cons

  • Same pay for an increased amount of work (originally 3 words to label, now 9) plus an attention check/comprehension task at the end of EVERY SINGLE HIT, which wastes time. There is literally no good reason not to increase the pay if you're going to add more nouns to label per HIT, especially if half the HITs are going to be broken anyway.
  • The comprehension task at the end wants you to translate a sequence of images into numbers, and it's unnecessary add-on work.

Advice to Requester

Either increase the pay for the 9 nouns, or keep the same pay and reduce the count back to 3. If you want to avoid bad submissions, use a closed qualification system or block the workers who make them. Adding the comprehension task at the end of every HIT is unnecessary and is a major turn-off to any good workers who would've been willing to complete your batches otherwise. The batches you posted in the past were good, but now they're just garbage.
Jan 19, 2019 | 3 workers found this helpful.

HardWorkingTurker (Fast Reader)
Reviews: 523 | Points: 2,128 | Ratings: 122
Extract Task_#77 - $0.04

Pay sentiment: Underpaid | Communication: Unrated | Status: Pending
$4.24 / hour | 00:00:34 completion time

Pros

There are a few of these Check and Label tasks with different numbers. This one required only that I find 3 nouns within a paragraph and type them out in three separate boxes. Simple enough.

Cons

The requester needs to factor in the time it takes to read the instructions on a first attempt, perhaps by paying for that time as a one-time bonus linked to a worker's ID. $0.04 is too low for the time it takes to complete this the first time; I can't speak for subsequent submissions, as I only attempted it once for this review. All the reviews I saw were from workers who had already read the instructions, whereas this task required reading not only the instructions but the paragraph as well to extract the nouns.
Jan 12, 2019

HardWorkingTurker (Fast Reader)
Reviews: 523 | Points: 2,128 | Ratings: 122
Bird classification task - $0.15

Pay sentiment: Low | Communication: Unrated | Status: Approved
$5.63 / hour | 00:01:36 completion time

Pros

Pleasant images of birds, which made the HIT enjoyable. You simply have to match one bird image to one of four others, 16 times; I might have been able to do it faster had a flaw in the layout not prevented me from progressing at the beginning.

Cons

There was no clear way to progress between bird images: the bottom portion of the clickable frame is hidden from view unless you tab your way down to expose it. Figuring this out wasted time, and the survey needs to be fixed to correct it.
Low pay for the task.
Nov 30, 2018


Crowd4U

Requester ID: A11X147JBFT25T

Recently Reviewed HITs


  • (~ 1 Minute) Simple Questions 20231221B
  • (~ 1 Minute) Simple Questions 20240319 (HIT Approval Rate >= 98%; approved HITs >= 500)
  • (~1minutes) Identify Chinese dishes images
  • (~3 minutes) Answer a survey about your favorite Chinese food
  • (~3minutes) Identify Chinese dishes images

Ratings Legend

  • Wage Aggregates
  • Reward Sentiment
  • Communication Scores
  • Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards, color-coding the data so it's easy to digest.

Color     Pay Range (Hourly)     Explanation
RED       < $7.25 / hr           Hourly averages below the US Federal minimum wage
ORANGE    $7.25 - $10.00 / hr    Hourly averages between the Federal and highest statewide (CA) minimum wages
GREEN     > $10.00 / hr          Hourly averages above all US minimum wage standards
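
As a minimal sketch of that calculation, assuming the reward is in dollars and the completion time in seconds (the function names and exact logic here are illustrative, not TurkerView's published code):

    # Illustrative sketch of the wage color-coding described above;
    # not TurkerView's actual implementation.
    def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
        """Average hourly rate implied by a single HIT."""
        return reward_usd / completion_seconds * 3600

    def wage_color(rate: float) -> str:
        """Bucket an hourly rate against the minimum wage ranges above."""
        if rate < 7.25:        # below the US Federal minimum wage
            return "RED"
        if rate <= 10.00:      # between the Federal and highest statewide (CA) minimums
            return "ORANGE"
        return "GREEN"         # above all US minimum wage standards

    # Example: the $0.04 label task reviewed above, completed in 15 seconds.
    rate = hourly_rate(0.04, 15)                    # 9.60
    print(f"${rate:.2f}/hr -> {wage_color(rate)}")  # $9.60/hr -> ORANGE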

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. Ten hours locked in Inquisit? Even at $10/hr, many workers would appreciate the heads-up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon (Rating) - Suggested Guidelines

Underpaid (1 / 5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2 / 5)
  • Below US minimum wage ($7.25/hr)
  • No redeeming qualities to make up for the pay
Fair (3 / 5)
  • Minimum wage for the task (consider self-employment taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4 / 5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5 / 5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions and a well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Icon (Rating) - Suggested Guidelines

Unacceptable (1 / 5)
  • No response at all
  • Rude response without a resolution
Poor (2 / 5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3 / 5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4 / 5)
  • Prompt response
  • Positive resolution
Excellent (5 / 5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval window for most MTurk tasks is 3 days, and the maximum is 30 days; we've based our ratings around those data points.

Icon         Rating    Approval Time
Very Slow    1 / 5     Over 2 weeks
Slow         2 / 5     ~1 - 2 weeks
Average      3 / 5     ~3 - 7 days
Fast         4 / 5     ~1 - 3 days
Very Fast    5 / 5     ~24 hours or less
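
As a minimal sketch of how those bands could be applied, assuming an approval time measured in hours (the function name and exact cutoffs are illustrative; TurkerView's actual implementation isn't published):

    # Illustrative mapping of an observed approval time onto the 1-5 bands above;
    # not TurkerView's actual code.
    def approval_rating(hours: float) -> tuple[int, str]:
        """Map an approval time in hours to a (rating, label) pair."""
        if hours <= 24:
            return 5, "Very Fast"   # ~24 hours or less
        if hours <= 3 * 24:
            return 4, "Fast"        # ~1 - 3 days
        if hours <= 7 * 24:
            return 3, "Average"     # ~3 - 7 days
        if hours <= 14 * 24:
            return 2, "Slow"        # ~1 - 2 weeks
        return 1, "Very Slow"       # over 2 weeks

    # MTurk's default auto-approval window is 3 days; the maximum is 30 days.
    print(approval_rating(3 * 24))   # (4, 'Fast')
    print(approval_rating(30 * 24))  # (1, 'Very Slow')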
