TurkerView


CoCoSci Lab at Princeton

Princeton University
  • Reviews: 2,427
  • HITs: 1,499

CoCoSci Lab at Princeton Ratings


Workers feel this requester pays well

Good Communication

Approves Quickly

No Rejections

No Blocks

CoCoSci Lab at Princeton Wage History



Top Worker Reviews


Yoyo (Average Pace) · Reviews: 68 · Points: 1,899 · Ratings: 186
Rate and tag game! v.9 (bonus up to $1.3-3.7) (dlgr-2225c1d3-cd7e-ba8e) - $0.10

Low · Unrated · Approved · $0.36 / hour · 00:16:27 completion time

Pros

Possible to get an okay bonus if you rush through quickly.

Cons

In this HIT you are given images, and you rate tags people have added to them while also adding your own. For example, a picture of cows at a farm might have tags like "barn", "cows", "grass", "field", and so on. If you're quick and add good tags you can get a decent bonus. However, the time it takes to actually do a good job makes this not worth it.

If you do a good job of adding new tags to most of the images in addition to rating them, the time it takes to finish is far higher than the 8 minutes they say it will take. Worse yet, when you add a new tag that has never been used before, you're only given a 1-cent bonus, and even that is conditional on a certain amount of time passing between when you tag it and when the system decides to award you that one cent.

Even worse, the experiment can end early because other Workers can arbitrarily flag your tags and fail you out, making it actively detrimental to try to do a good job. There is no risk in flagging tags, but if one of your tags is flagged, your work ends, along with any chance at a higher bonus. Sure, you get to keep what you've earned, but some of the bonuses you earn along the way (such as for the new tags I mentioned) are only paid after a certain amount of time has passed or other users have upvoted them. That means you can lose out on a bonus you should have been paid because another Worker thought it would be funny to stick it to you!

For example, when I was doing this, I tagged a room with a bed, closets, doorway, and a mirror as a "bedroom". I also tagged "mirror" because there was a mirror on the wall. Another Worker named "Jimmy" flagged my tags soon after, and I was forced out of the experiment. I wasn't even able to leave feedback at the end to explain to the Requester how a system where other Workers can dole out punishments without risk is, to say the least, flawed!

Advice to Requester

The system is entirely broken. Doing good work makes it not worth the pay, while rushing through gives you the highest possible payment for time spent. Worse, the system in place makes it so that unfair Workers can hurt other Workers without any recourse. There is no punishment in place for false flagging, but any Worker who is flagged by anyone for any reason is immediately booted from continuing the work.

This means someone could game the system by flagging every tag they see, in order to knock out the people currently working. This would leave all the future images completely tag-free, so only the flagger could add tags and earn a bonus no one else could match. Worse still, they could add completely irrelevant tags, since if they knocked enough people out of the HIT, there would be no one left to judge their own tags.

Honestly, the system needs to be completely redone. There's no incentive to do good work and every incentive to play unfairly. The bonus system also discourages doing good work, since getting slightly higher pay requires so much time and effort that it isn't worth doing. Better to just rush through than give a quality performance.
Jul 10, 2022

j0sh83 (Average Pace) · Reviews: 5,419 · Points: 12,340 · Ratings: 1,213
Point on Image: The Memory Game New (Bonus up to $1.50!) [nrp240v1] (dlgr-ba47586d) - $0.10 +1.40 bonus Confirmed!

Underpaid · Unrated · Approved · $6.22 / hour · 00:14:28 completion time

Pros

Edit: bonus paid same day within minutes of completion.

Cons

Garbage. Long and repetitive. 105 trials. Potential for $1.50 bonus, but this doesn't make up for abysmal pay rate for the time required to do this HIT. Bonus is dependent on accuracy. YMMV on bonus.

At first this goes painfully slow, but then after a few trials they crank up the speed. You are still looking at an average of 10 seconds per trial with page loading and transitions. Pretty sure they also start shrinking the photos as you progress. Just avoid.

Advice to Requester

Time is money. Pay a higher base payment in addition to the bonus. The bonus alone doesn't even make this worth it.
Jul 18, 2021 | 3 workers found this helpful.

Laser (Careful Reader) · Reviews: 174 · Points: 291 · Ratings: 24
Simple image similarity task (approximately $1.86 bonus!) (dlgr-d1309feb-a5c9-8772) - $0.10

Good · Unrated · Approved · $0.54 / hour · 00:11:09 completion time

Pros

Typical CoCoSci. Relatively easy comparing two pictures for similarity.

Greetings from Amazon Mechanical Turk,

You've received a bonus from CoCoSci Lab at Princeton for work related to 37VHPF5VYDR8C61KJD9LRWBLU8Z8CY.
The value of your bonus is: $1.88 USD

The Requester included this note:
Thank for participating! Here is your bonus.

Thanks for being a Worker on Mechanical Turk!

Cons

You watch the counter to see how many more you have to do to reach 85. It seems longer than it is.
Jun 29, 2022


CoCoSci Lab at Princeton
Requester ID: A1S8PU3QK4OKG3
Top Collaborating Institutions

University of California, Berkeley

Recently Reviewed HITs


(Fixed) Experiment: Read some stories and answer questions about them (<8 mins, bonus up to $1.50, no mobile please)
(New) Psychology Experiment: Judge Blue and Green Dots ($1.25 average pay inc. bonus, < 8 mins, no mobile please)
(New) Psychology Experiment: Read stories and answer questions about them (<8 mins, bonus up to $1.50, no mobile please)
2-Player Economic Game (Completion bonus $0.50 PLUS performance bonus up to $1.50; please start game right away)
2-Player Economic Game Beta (Completion bonus $0.50 PLUS performance bonus up to $1.50; please start game right away)

Ratings Legend


Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards, color-coding the data so it's easy to digest at a glance.

Color  | Pay Range (Hourly)  | Explanation
RED    | < $7.25 / hr        | Hourly averages below the US federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the federal and highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr       | Hourly averages above all US minimum wage standards
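The calculation above can be sketched in a few lines; this is only an illustration of the arithmetic and thresholds described here, not TurkerView's actual code, and the function names are my own:

```python
def hourly_rate(reward_usd, completion_seconds):
    """Average hourly rate from a task's reward and completion time."""
    return reward_usd / completion_seconds * 3600

def wage_color(rate):
    """Map an hourly rate onto the minimum-wage color bands above."""
    if rate < 7.25:        # below US federal minimum wage
        return "RED"
    elif rate <= 10.00:    # between federal and highest statewide (CA) minimums
        return "ORANGE"
    return "GREEN"         # above all US minimum wage standards

# Example from the first review above: $0.10 reward, 00:16:27 completion time
rate = hourly_rate(0.10, 16 * 60 + 27)
print(round(rate, 2), wage_color(rate))  # prints: 0.36 RED
```

Note that bonuses are only included when a worker reports them, which is why the same HIT can show very different hourly figures across reviews.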

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of them. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know roughly how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days. We've tried to base our ratings around those data points.

Icon Rating Approval Time
Very Slow 1 / 5 Over 2 weeks
Slow 2 / 5 ~1 - 2 Weeks
Average 3 / 5 ~3 - 7 Days
Fast 4 / 5 ~1 - 3 Days
Very Fast 5 / 5 ~24 hours or less


© 2025 TurkerView · Privacy · Terms · Blog · Contact