TurkerView

SCS Lab

University of Toronto
  • Reviews: 359
  • HITs: 120

SCS Lab Ratings


  • Workers feel this requester pays fairly
  • Unrated
  • Approves Quickly
  • No Rejections
  • No Blocks

SCS Lab Wage History


(Wage history chart not reproduced here.)

Top Worker Reviews


SuzyQ New Reviewer
Reviews: 7
Points: 34
Ratings: 3
Whose items are these? The Self-Other ownership judgment task (~25 minutes) - $2.50

Underpaid

Unrated

Approved

$5.12 / hour

00:29:19 / completion time

Pros

It looks like good pay, until you actually do the task.

Cons

This task is not only Inquisit, but it takes FULL focus, and is INCREDIBLY exhausting. It will absolutely take 25-30 minutes. Given the time and focus that it takes, it ought to pay about three times as much.

Advice to Requester

Please consider raising the pay on this. My eyes were actually aching after watching so closely for so long.

Also, think about giving people examples of the pens/pencils/markers - the pictures were so small, and flashed by so fast, that it was hard to tell WHAT it was. One of them, I swear, looked like a blunt, not a pen or marker or anything else.

Also, I really don't understand what your initial instructions had to do with the rest of the HIT. Who 'owned' whatever item had absolutely nothing to do with whether you answered 'yes' or 'no' to what it was; identification has nothing to do with possession. You didn't even ask who owned what at the end - it was like ownership didn't matter at all - and you never used a friend's name, so why bother to ask for one? All in all, I found it a strange task that was vastly underpaid for the stress and time that went into doing it.
Feb 7, 2020 | 3 workers found this helpful.

Troy Average Pace
Reviews: 9,095
Points: 9,888
Ratings: 1,150
Rate cultural stereotypes (~25 minutes) - $2.50

Fair

Unrated

Approved

$9.96 / hour

00:15:04 / completion time

Pros

Cons

Mind-numbing. 134 actions you need to judge, with up to 4 integers entered per action. When you get to the end, the page errors out and you only get a URL in the middle of the page. Take that URL, edit it so it starts with http and ends with Dynamickey, and toss it into a new window. That should pop up the page you normally get when a code doesn't get displayed. Now go over to the HIT window you were on, take that URL, and toss it into the code-verifier box on the new page. Bam. You now have a code.
Apr 9, 2020 | 3 workers found this helpful.
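
The workaround above targets one specific broken end-of-survey page, so any code version of it is purely hypothetical. As a rough sketch of the URL edit the reviewer describes (the "http" and "Dynamickey" markers come from the review itself and are not verified against the real page), it might look something like this in Python:

def extract_code_url(fragment: str) -> str:
    # Hypothetical helper based on the review above: trim a partially
    # displayed URL so it starts at "http" and ends right after "Dynamickey".
    start = fragment.find("http")
    end = fragment.find("Dynamickey")
    if start == -1 or end == -1:
        raise ValueError("fragment does not contain the expected markers")
    return fragment[start:end + len("Dynamickey")]

# Made-up example fragment; open the result in a new tab, then paste the
# original HIT window's URL into the code-verifier box that page shows.
print(extract_code_url("Error ... https://research.example.edu/task?Dynamickey ... (truncated)"))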

bmt Proficient Worker
Reviews: 14,466
Points: 12,517
Ratings: 1,347
Simple categorization task (~30 minutes) - $3.00

Fair

Unrated

Approved

$12.40 / hour

00:14:31 / completion time

Pros

Description: C/M keyboard sorting task. Either 4 or 5 rounds, not including 2 shorter practice rounds. Short post-task survey/demographics afterwards. There's a feedback form.

Cons

- Inquisit.
- Can pause between rounds but not during the rounds themselves
Nov 10, 2020


SCS Lab


Requester ID: A219S40LKKD0L0

Recently Reviewed HITs


  • Answer a short survey about things people do
  • Answer questions and make decisions! Part 1 of TWO-part study—bonus payment opportunity! (~10 minutes)
  • Answer questions and make decisions! Part 1 of TWO-part study—bonus payment opportunity! (~15 minutes)
  • Answer questions and make decisions! Part 2 of TWO-part study—bonus payment opportunity! (~10 minutes)
  • Answer questions and make decisions! Part 2 of TWO-part study—bonus payment opportunity! (~15 minutes)

Ratings Legend


Wage Aggregate Tracking

This one is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then color-code that hourly rate against a simple range based on US minimum wage standards, so the numbers are easy to digest at a glance (a short sketch of the calculation follows the table below).

Color    Pay Range (Hourly)   Explanation
RED      < $7.25/hr           Hourly averages below the US federal minimum wage
ORANGE   $7.25 - $10.00/hr    Hourly averages between the federal and the highest statewide (CA) minimum wage
GREEN    > $10.00/hr          Hourly averages above all US minimum wage standards
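
As a minimal sketch of that calculation, assuming only the reward, the completion time, and the thresholds in the table above (this is an illustration, not TurkerView's actual code):

def hourly_rate(reward_usd, completion_seconds):
    # Average hourly wage implied by one completed HIT.
    return reward_usd / (completion_seconds / 3600)

def wage_color(rate):
    # Color-code the hourly rate against the US minimum-wage bands above.
    if rate < 7.25:
        return "RED"      # below the federal minimum wage
    if rate <= 10.00:
        return "ORANGE"   # between the federal and the highest statewide (CA) minimum
    return "GREEN"        # above all US minimum wage standards

# Example: the $2.50 ownership-judgment HIT reviewed above, completed in 29:19.
rate = hourly_rate(2.50, 29 * 60 + 19)
print(f"${rate:.2f}/hr -> {wage_color(rate)}")   # prints "$5.12/hr -> RED"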

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump the rating up a bit for something so adorable. Ten hours locked in Inquisit? Even at $10/hr, many workers would appreciate a heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Suggested guidelines for each rating:

Underpaid (1/5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2/5)
  • Below US minimum wage ($7.25/hr)
  • No redeeming qualities to make up for the pay
Fair (3/5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4/5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5/5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions, well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable parts of the interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Suggested guidelines for each rating:

Unacceptable (1/5)
  • No response at all
  • Rude response without a resolution
Poor (2/5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3/5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4/5)
  • Prompt response
  • Positive resolution
Excellent (5/5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This gives you a more straightforward sense of how long your HIT might sit pending before it pays out. The default auto-approval window for most MTurk tasks is 3 days and the maximum is 30 days, and we've based our ratings around those data points (a short sketch of the bucketing follows the table below).

Rating            Approval Time
Very Slow (1/5)   Over 2 weeks
Slow (2/5)        ~1 - 2 weeks
Average (3/5)     ~3 - 7 days
Fast (4/5)        ~1 - 3 days
Very Fast (5/5)   ~24 hours or less
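
As a minimal sketch of how an observed approval delay could be bucketed into these ratings, using only the cut-offs from the table above (illustrative, not TurkerView's implementation):

def approval_rating(hours_to_approve):
    # Map an approval delay (in hours) onto the 1-5 scale above.
    if hours_to_approve <= 24:
        return 5, "Very Fast"
    if hours_to_approve <= 3 * 24:
        return 4, "Fast"
    if hours_to_approve <= 7 * 24:
        return 3, "Average"
    if hours_to_approve <= 14 * 24:
        return 2, "Slow"
    return 1, "Very Slow"

# Examples: a same-day approval, the 3-day auto-approval default, and a 3-week wait.
print(approval_rating(6))        # (5, 'Very Fast')
print(approval_rating(72))       # (4, 'Fast')
print(approval_rating(21 * 24))  # (1, 'Very Slow')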
