TurkerView


STS Lab

University of Pennsylvania
  • Reviews: 718
  • HITs: 140

STS Lab Ratings


  • Workers feel this requester pays well
  • Good Communication
  • Approves Quickly
  • No Rejections
  • No Blocks

STS Lab Wage History



Top Worker Reviews


gurlondrums Average Pace
Reviews: 1,900
Points: 2,687
Ratings: 582
Remember images and their locations - $10.00

Generous

Excellent

Approved

$14.98 / hour

00:40:04 / completion time

Pros

- Not a survey.
- Kinda fun and interesting.
- Upon opening the HIT and starting it, I promptly (but accidentally) submitted it without doing anything but a test trial. I emailed the requester immediately. They were prompt and courteous, told me not to worry, said that stuff like this happens sometimes, then issued me a make-up HIT and said they would approve both upon completion.
- Paid out quickly within 8 hours from time of submission, as they promised. Not sure of exact approval time as I went to bed before they approved it. This requester is good people.

Cons

4 trials consisting of 160 items each. It can get a little boring and tedious. Make sure you take the breaks they allot you.
Jan 10, 2019 | 7 workers found this helpful.

basketcasey Average Pace
Reviews: 2,721
Points: 3,055
Ratings: 569
Learn About Shape Collisions - $4.50

Low

Unrated

Pending

$8.43 / hour

00:32:01 / completion time

Pros

You can earn a $2.50 bonus for accuracy, though this isn't really explained and is not confirmed at the end of the experiment.

Cons

Ugh, I'd avoid this like the plague if I were you. You have to watch three 5-minute clips of shape collisions. After each video, you have to answer 18 questions in which you pick which clip is more likely to occur. I experienced quite a bit of lag during these sections, which made this drag on even more. You think you're done after the 3rd video and its accompanying questions - you aren't. Now you get to watch more clips of novel shape collisions - only 6 per block this time, instead of a 5-minute video, which I guess is nice. Then you answer 5 questions about these new shapes. I believe there were 3 of these shorter blocks as well. Then it mercifully ends with a couple quick questions about the experiment.
Dec 29, 2018 | 6 workers found this helpful.

jrw254 Fast Reader
Reviews: 3,024
Points: 1,997
Ratings: 152
Name animals and objects: 10 minutes or less - $1.50

Generous

Excellent

Pending

$14.79 / hour

00:06:05 / completion time

Pros

So, I originally had the auto submit issue mentioned in the first page of the HIT. So do this in Incognito mode on Chrome if you think you may have issues. However, after it auto submitted they approved and gave me the option to do a make up HIT if I contacted them.

I contacted them and they quickly set up a make up HIT. Worked in Incognito mode just fine. It's an issue with the enter key auto submitting.

I did the make-up HIT, but I just went ahead and put my info on this HIT instead, for posterity and communication reasons.

Oh, P.S.: There is an opportunity for more work if you do well on this. Says a $5 HIT is coming, but we shall see.

Cons

Sep 4, 2018 | 1 worker found this helpful.


Requester ID: A3UBQTCR2B8RJN

Recently Reviewed HITs


AF-FaceValue-12
AF-FaceValue-4
AF-FaceValue-6
Animal species
Animals in different habitats

Ratings Legend

Wage Aggregates

Reward Sentiment

Communication Scores

Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available), calculate the average hourly rate for the task, and then color-code that number against a simple range based on US minimum wage standards so the data is easy to digest at a glance.

Color  | Pay Range (Hourly) | Explanation
RED    | < $7.25 / hr       | Hourly averages below the US federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the federal and highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr      | Hourly averages above all US minimum wage standards
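The calculation above can be sketched in a few lines of Python. This is an illustration of the described method, not TurkerView's actual code; the function names are invented, and the thresholds ($7.25 federal, $10.00 highest statewide minimum) are taken from the legend.

```python
def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
    """Average hourly rate: reward divided by completion time in hours."""
    return reward_usd / (completion_seconds / 3600)

def wage_color(rate: float) -> str:
    """Map an hourly rate onto the legend's color bands."""
    if rate < 7.25:
        return "RED"      # below US federal minimum wage
    elif rate <= 10.00:
        return "ORANGE"   # between federal and highest statewide (CA) minimums
    else:
        return "GREEN"    # above all US minimum wage standards

# Example from the first review above: $10.00 reward, 40m04s completion time.
rate = hourly_rate(10.00, 40 * 60 + 4)
print(f"${rate:.2f}/hr -> {wage_color(rate)}")  # $14.98/hr -> GREEN
```

This reproduces the $14.98 / hour figure shown on that review, so a rate derived this way lands in the GREEN band.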

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters and Workers, and it's worth keeping track of them. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to gauge how long your HIT might sit pending before paying out. The default auto-approval window for most MTurk tasks is 3 days; the maximum is 30 days. We've based our ratings around those data points.

Icon      | Rating | Approval Time
Very Slow | 1 / 5  | Over 2 weeks
Slow      | 2 / 5  | ~1 - 2 weeks
Average   | 3 / 5  | ~3 - 7 days
Fast      | 4 / 5  | ~1 - 3 days
Very Fast | 5 / 5  | ~24 hours or less


© 2025 TurkerView