TurkerView


Stanford GSB Behavioral Lab

Service Account
  • Overview
  • Reviews 10,053
  • HITs 1,146

Stanford GSB Behavioral Lab Ratings


Workers feel this requester pays well

Good Communication

Approves Quickly

No Rejections

No Blocks

Stanford GSB Behavioral Lab Wage History


Heads up! We'll never hide reviews unless they violate our Terms of Service.

Top Worker Reviews


TheNerdyAnarchist Average Pace
Reviews: 1,184
Points: 1,123
Ratings: 127
Short study about a company(~ 6 minutes) - $0.72

Generous

Poor

Approved

$16.00 / hour

00:02:42 / completion time

Pros

Cons

Got a rejection due to "Your work did not meet our standards".

I'm 100% sure I didn't screw this up. I distinctly remember the subject matter and I know I gave my actual opinions on the topic, so I'm not sure what happened here.

Sent a message to the requester asking for clarification and will update this as the situation progresses.

**UPDATE** - I just reviewed the HIT and verified that there was not an AC (attention check) in the HIT - which would have been the only way I could possibly have messed it up. Given that I answered all the questions honestly and gave them actual consideration, I'm marking this as an unfair rejection. I've also sent an email to Stanford's IRB requesting that this be taken under review, as I'm inclined to believe the requester simply "didn't like" my opinion - which is not a valid reason to reject a HIT.

**UPDATE 2** - The requester never responded; however, their IRB did. The requester told the IRB manager that I did not submit a completion code, which is false - I have screenshots to prove it. He had them reverse the rejection.
Apr 12, 2019 | 23 workers found this helpful.

PeachyRider Proficient Worker
Reviews: 1,464
Points: 1,222
Ratings: 347
Lecture Series Assessment(~ 8 minutes) - $0.80

Generous

Unrated

Approved

$19.59 / hour

00:02:27 / completion time

Pros

You're sitting on a balcony late at night. The wind is cool across your face. Stresses in your life aren't stressing you out too much in this moment of time. The moon's light cascading onto the river below. You have had surveys that have stressed you out before, but this one was not one of them. This one was different. This one was as much a breeze as the wind is now as it's gently cascading the hair across your forehead. Easy for a change.

Cons

1 min 52s video you have to watch
Painless otherwise
Feb 13, 2019 | 15 workers found this helpful.

TinaBanina Fast Reader
Reviews: 3,171
Points: 3,307
Ratings: 300
Questionnaire - $0.15 + $1.75 bonus Confirmed!

Good

Unrated

Approved

$15.00 / hour

00:07:36 / completion time

Pros

- A $1.75 bonus for completing an additional task. Now that it's over, I don't remember at all what the $0.15 part was about. I'm thinking really hard right now and I think it was basically a "Hey, do you want to do this?" kind of thing.

Cons

You're basically rating 100 ideas by entering numbers for 4 categories, and it's all done on one page. One mis-click and whoosh, you'd have to start over. Pay is kind of meh, but I guess it could be done a bit quicker for a better hourly.
Feb 1, 2019 | 2 workers found this helpful.


Requester ID: A3OSXTUM1QEXNY
Top Collaborating Institutions

  • Stanford University
  • Columbia University in the City of New York
  • University of Notre Dame
  • University of Toronto
  • Northeastern University
  • New York University
  • Dartmouth College
  • University of Virginia
  • University of Florida
  • Southern Methodist University

Recently Reviewed HITs


"On the Spot" Rewards - Part 2(~ 6 minutes)
"On the Spot" Rewards(~ 2 minutes)
"On the Spot" Rewards(~ 4 minutes)
10-day diary study (DAY 10)(~ 5 minutes)
10-day diary study (DAY 2)(~ 5 minutes)

Ratings Legend


Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then apply that number to a simple range based on US minimum-wage standards to color-code the data, making the numbers easy to digest at a glance.

Color | Pay Range (Hourly) | Explanation
RED | < $7.25 / hr | Hourly averages below the US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the Federal & highest statewide (CA) minimum wages
GREEN | > $10.00 / hr | Hourly averages above all US minimum wage standards
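As a rough sketch of the calculation described above (the function names are illustrative, not TurkerView's actual code; reward is assumed to be in dollars and completion time in seconds):

```python
def hourly_rate(reward_usd, completion_seconds):
    """Average hourly rate: reward divided by completion time in hours."""
    return reward_usd / (completion_seconds / 3600)

def wage_color(rate):
    """Color-code an hourly rate against the minimum-wage ranges above."""
    if rate < 7.25:
        return "RED"     # below US Federal minimum wage
    if rate <= 10.00:
        return "ORANGE"  # between Federal & highest statewide minimums
    return "GREEN"       # above all US minimum wage standards

# Example: a $0.72 reward completed in 2m 42s (162 seconds)
rate = hourly_rate(0.72, 162)  # 16.0
color = wage_color(rate)       # "GREEN"
```

This matches the first review above: $0.72 over a 00:02:42 completion time works out to $16.00 / hour, well into the green range.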

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know roughly how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days; we've tried to base our ratings around those data points.

Icon Rating Approval Time
Very Slow 1 / 5 Over 2 weeks
Slow 2 / 5 ~1 - 2 Weeks
Average 3 / 5 ~3 - 7 Days
Fast 4 / 5 ~1 - 3 Days
Very Fast 5 / 5 ~24 hours or less
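Expressed as a sketch (the thresholds are one reading of the ranges above; the exact boundary handling is an assumption, since the published ranges are approximate):

```python
def approval_rating(hours_pending):
    """Map an observed approval delay (in hours) to a 1-5 rating,
    following the approval-time table above (boundaries approximate)."""
    if hours_pending <= 24:        # ~24 hours or less
        return 5  # Very Fast
    if hours_pending <= 3 * 24:    # ~1 - 3 days
        return 4  # Fast
    if hours_pending <= 7 * 24:    # ~3 - 7 days
        return 3  # Average
    if hours_pending <= 14 * 24:   # ~1 - 2 weeks
        return 2  # Slow
    return 1                       # Very Slow: over 2 weeks

# Example: a HIT approved after 2 days (48 hours) rates "Fast"
rating = approval_rating(48)  # 4
```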


© 2025 TurkerView