TurkerView

Lupyan Lab

University of Wisconsin-Madison
  • Reviews: 596
  • HITs: 457

Lupyan Lab Ratings


  • Workers feel this requester pays fairly
  • Okay Communication
  • Approves Quickly
  • Rejections Reported
  • No Blocks

Lupyan Lab Wage History



Top Worker Reviews


Yoyo (Average Pace)
Reviews: 68
Points: 1,899
Ratings: 186
Do these words sound like what they mean? (~7 minutes) - $1.00

Pay: Fair
Communication: Unacceptable
Status: Rejected (Unfair)
$8.82 / hour
00:06:48 / completion time
  • HIT Rejected

Pros

Cons

Takes a long time to complete if you're attempting to put any thought into it. You are required to rate 72 words on a scale of 1-7 for how much they sound like what they mean.

The Requester is also unfair in rejecting work. They claimed I failed too many attention checks and rejected me, but I don't believe they actually looked at my work.

There are 4 attention check questions in total during the study. The first is "Please choose the middle option", so I selected 4, as it was right in the middle, as you can see:
[1] [2] [3] [4] [5] [6] [7]

The next attention check was "Please choose the right-most option", so I chose 7.

Next was "Squares have four sides", so I selected 7, which was written as "[7] Strongly agree".

The last attention check read "The word red has two letters". This one is a BS question, to be honest. If you disagree you're right, because the word red has 3 letters. However, having 3 letters also means it has at least 2, so if you agree you're still technically right.

In any case, I selected "[1] Strongly disagree" as my answer.

However, my work was rejected for failing "multiple" attention checks. How? My only guess is that for the question about red having two letters you were ACTUALLY supposed to select "3", to indicate it has 3 letters, and for the question about squares having four sides you were supposed to select "4", to indicate it has four sides; even though 4 is the neutral option, given that 1 and 7 are labeled "Strongly disagree" and "Strongly agree" respectively, so answering that way would suggest you're unsure whether squares have four sides...

In any case, it seems this Requester is at best inattentive when checking work and too eager to reject Workers. At worst, they have no idea what they're doing and write attention checks that have to be answered in odd ways regular people wouldn't think of, or that can be answered correctly in multiple ways, despite them having one "correct" answer in mind.

Edit: Now a week later, I still have yet to hear any response from them. I'm updating my review accordingly.

Advice to Requester

Really THINK about your attention checks before you put them in a study. Ask yourself "Is it possible to correctly answer this in multiple ways?" and "Is the way this is written unclear?".

Also be more attentive when checking work. As it is, it seems you're probably rushing through and not doing a good job of checking, which is very unfair to Workers. For some people, MTurk is their primary source of income, and to waste their time, take back the pay they were owed AND damage their ability to work in the future (by slapping them with a rejection) just because you were not careful is extremely, extremely unfair and callous.
Nov 9, 2022 | 4 workers found this helpful.

bmt (Proficient Worker)
Reviews: 14,466
Points: 12,517
Ratings: 1,347
Picture sorting study (~15 minutes) - $1.50 + $0.50 bonus (Confirmed!)

Pay: Good
Communication: Unrated
Status: Approved
$15.52 / hour
00:07:44 / completion time

Pros

Description: Look at a bunch of figures and decide which categories they belong in. Part 1 is a training round with answers. Part 2 has no answers and an additional bubble scale. There is a feedback form at the end.
Possibility for a 50 cent bonus if you score high enough on Part 1. Bonus is disclosed at the end of the study.
Hourly is $11.64/hr without bonus

Cons

- Those faceless CGI marshmallow people freak me out.
- No progress bar.
Sep 18, 2020

Hedgmog (Careful Reader)
Reviews: 10,159
Points: 22,045
Ratings: 1,426
Word similarity (~20 minutes) - $2.00

Pay: Underpaid
Communication: Unrated
Status: Approved
$6.98 / hour
00:17:11 / completion time

Pros

- Read the instructions and complete the practice questions.
- Drag each word you are given towards the most similar word.
- There are several pages of these to work through, with 4 sliders on each page.
- There is a short timer on each question, so move quickly.
- The progress bar is pretty accurate.

Cons

Attention Checks included (pretty hard to miss these)
Jan 3, 2022 | 1 worker found this helpful.


Requester ID: A34Y0INWSLX4AQ
Top Collaborating Institutions

University of Miami

Recently Reviewed HITs


A follow-up from a previous surveys on individual differences in language and imagery (~40 minutes)
A follow-up from a previous surveys on individual differences in language and perception (~40 minutes)
Categories and language study (~10 minutes)
Categories and language study (~15 minutes)
Categories and language study (~20 minutes)

Ratings Legend


Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then place that number in a simple range based on US minimum wage standards to color-code the data, so the numbers are easy to digest at a glance.

Color  | Pay Range (Hourly)   | Explanation
RED    | < $7.25 / hr         | Hourly averages below US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr  | Hourly averages between the Federal & highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr        | Hourly averages above all US minimum wage standards
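
As a rough sketch of that calculation (the function names, and the assumption that rewards are in dollars and completion times in seconds, are ours; TurkerView's exact implementation isn't published here), the hourly rate and color band could be computed like this:

```python
def hourly_rate(reward_usd: float, completion_seconds: int) -> float:
    """Scale a task's reward up to a full hour of work."""
    return reward_usd * 3600 / completion_seconds

def wage_color(hourly: float) -> str:
    """Color-code an hourly rate against the US minimum wage bands above."""
    if hourly < 7.25:       # below US Federal minimum wage
        return "RED"
    if hourly <= 10.00:     # between Federal & highest statewide (CA) minimum wages
        return "ORANGE"
    return "GREEN"          # above all US minimum wage standards

# Example using the first review above: $1.00 reward, 00:06:48 completion time.
rate = hourly_rate(1.00, 6 * 60 + 48)
print(f"${rate:.2f}/hr -> {wage_color(rate)}")   # $8.82/hr -> ORANGE
```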

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Rating | Suggested Guidelines
Underpaid (1/5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2/5)
  • Below US minimum wage ($7.25/hr)
  • No redeeming qualities to make up for the pay
Fair (3/5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4/5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5/5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions, well-designed HIT

Communication Ratings

Communication is an underrated aspect of MTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable parts of the interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Rating | Suggested Guidelines
Unacceptable (1/5)
  • No response at all
  • Rude response without a resolution
Poor (2/5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3/5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4/5)
  • Prompt response
  • Positive resolution
Excellent (5/5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This gives a straightforward sense of how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days; we've based our ratings around those data points.

Rating | Approval Time
Very Slow (1/5) | Over 2 weeks
Slow (2/5)      | ~1 - 2 weeks
Average (3/5)   | ~3 - 7 days
Fast (4/5)      | ~1 - 3 days
Very Fast (5/5) | ~24 hours or less
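
A minimal sketch of how those bands might map an observed approval delay onto the 1-5 scale (the day cutoffs are read off the table above; where exactly the site draws each boundary is our assumption):

```python
from datetime import timedelta

def approval_rating(delay: timedelta) -> tuple[int, str]:
    """Bucket an approval delay into the 1-5 bands from the table above."""
    days = delay.total_seconds() / 86400
    if days <= 1:
        return 5, "Very Fast"   # ~24 hours or less
    if days <= 3:
        return 4, "Fast"        # ~1 - 3 days
    if days <= 7:
        return 3, "Average"     # ~3 - 7 days
    if days <= 14:
        return 2, "Slow"        # ~1 - 2 weeks
    return 1, "Very Slow"       # over 2 weeks

print(approval_rating(timedelta(days=2)))   # (4, 'Fast')
```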
