TurkerView

tedlab

Massachusetts Institute of Technology
  • Reviews: 856
  • HITs: 150

tedlab Ratings


  • Workers feel this requester pays generously
  • Good Communication
  • Approves Quickly
  • Rejections Reported
  • No Blocks

tedlab Wage History


[Wage history chart not reproduced]
Heads up! We'll never hide reviews unless they violate our Terms of Service.

Top Worker Reviews


eyeofodin (Average Pace) · Reviews: 255 · Points: 454 · Ratings: 174
Diverse cognitive tasks battery June 2021 - $24.00

Unrated · Unrated · Approved · $8.06 / hour · 02:58:43 completion time

Pros

Accurate progress bars
Large pay for a single HIT

Cons

Where to begin?
There are 27 different HITs in one. Each HIT requires its own code, which must be obtained before continuing on to the next.
There are a variety of different styles of HITs, including recording, math, timed sections, etc.
Each HIT's instructions are read to you in their entirety, and you are NOT allowed to skip them. Each section also requires multiple practice rounds. Feedback about why an answer is right or wrong is read to you as well, and when you reach 25%, 50%, or 75% of a section, it reads out that you have progressed that far.
The HITs often use the spacebar, but at the end of a section, if you hit space, it will close the popup box with your code, and you will have to repeat that section to get the code again.
The time estimate for each section is a serious underestimate: if it states 5 minutes, it will almost certainly be 10.
No promise of any bonus.
Aug 14, 2021 | 17 workers found this helpful.

L Lemon (Average Pace) · Reviews: 12,998 · Points: 9,751 · Ratings: 1,353
Diverse cognitive tasks battery June 2021 - $24.00

Unrated · Unrated · Approved · $11.31 / hour · 02:07:20 completion time

Pros

• I think they've fixed the "spacebar erases code" problem; they list the code on the final page - no popups, and the spacebar doesn't erase the code.
• 48-hour timer. I spaced this out over quite a few hours.
• Although it's tedious to be read the instructions, they are replicating exactly how the tests are given to brain-damaged patients. We are the control group.
• The tests are quite simple and most of them only last a few minutes, so although the entire HIT is somewhat of a slog, it at least moves along.
• Great for a slow Saturday.
• The two other long reviews of the HIT were super helpful.

Cons

• A very long HIT - approx. 27 different tests.
Aug 21, 2021 | 7 workers found this helpful.

Azazael (Fast Reader) · Reviews: 7,193 · Points: 3,995 · Ratings: 528
Diverse cognitive tasks battery June 2021 - $24.00

Unrated · Unrated · Approved · $13.09 / hour · 01:49:59 completion time

Pros

It's $24.00.

Cons

...the hard way.
There is just so much here. It's not overwhelming, but I would say at least 15% of the time could be shaved off if the HIT wasn't READING YOU EVERY SINGLE INSTRUCTION. It gets old fast to have a sentence of instructions read to you that takes 10x longer than it would for you to read it yourself, or to have colors and shapes explained... thanks, I think I learned those about 40 years back. As someone else said, press the spacebar ONLY if you see it say to; I got tripped up once hitting space too fast and had to redo one section (thankfully it was a slow one). It's not horrible, but it's not that great either; still, for a slowish weekend it's cash.
Aug 15, 2021 | 9 workers found this helpful.

Requester ID: A3CV1MVF006J21

Recently Reviewed HITs


  • 68 sets of sentences with comprehension questions. CODENAME: BLOOMING BARNACLES
  • Answer a survey about your internal thought processes (CODENAME: LYING LEMUR)
  • Answer questions about an interaction.
  • Answer questions about interactions
  • Answer questions about sentences 2.

Ratings Legend

The legend covers four rating categories: Wage Aggregates, Reward Sentiment, Communication Scores, and Approval Tracking.

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then color-code that number against a simple range based on US minimum wage standards, so the data is easy to digest at a glance (see the sketch after the table below).

Color  | Pay Range (Hourly)  | Explanation
RED    | < $7.25 / hr        | Hourly averages below the US federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the federal & highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr       | Hourly averages above all US minimum wage standards
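
As a rough illustration, here is a minimal Python sketch of that calculation. The function names and exact boundary handling are assumptions for illustration, not TurkerView's actual implementation; the color bands simply mirror the table above.

```python
# Hypothetical sketch of the wage-aggregate calculation described above.
# The thresholds mirror the RED/ORANGE/GREEN table; names are illustrative.

def hourly_rate(reward_usd: float, completion_time: str) -> float:
    """Average hourly rate from a reward and an HH:MM:SS completion time."""
    h, m, s = (int(part) for part in completion_time.split(":"))
    hours = h + m / 60 + s / 3600
    return reward_usd / hours

def color_code(rate: float) -> str:
    """Map an hourly rate onto the color bands from the table above."""
    if rate < 7.25:       # below US federal minimum wage
        return "RED"
    if rate <= 10.00:     # between federal and highest statewide (CA) minimums
        return "ORANGE"
    return "GREEN"        # above all US minimum wage standards

# Example from the reviews above: a $24.00 HIT completed in 02:58:43.
rate = hourly_rate(24.00, "02:58:43")
print(f"${rate:.2f} / hour -> {color_code(rate)}")  # $8.06 / hour -> ORANGE
```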

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Underpaid (1 / 5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions

Low (2 / 5)
  • Below US minimum wage ($7.25/hr)
  • No redeeming qualities to make up for the pay

Fair (3 / 5)
  • Minimum wage for the task (consider self-employment taxes!)
  • The work experience offers nothing to tip the scales in a positive or negative direction

Good (4 / 5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages

Generous (5 / 5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions and a well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of the interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Unacceptable (1 / 5)
  • No response at all
  • Rude response without a resolution

Poor (2 / 5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention

Acceptable (3 / 5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction

Good (4 / 5)
  • Prompt response
  • Positive resolution

Excellent (5 / 5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know roughly how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days; we've tried to base our ratings around those data points (see the sketch after the table below).

Rating    | Score | Approval Time
Very Slow | 1 / 5 | Over 2 weeks
Slow      | 2 / 5 | ~1 - 2 weeks
Average   | 3 / 5 | ~3 - 7 days
Fast      | 4 / 5 | ~1 - 3 days
Very Fast | 5 / 5 | ~24 hours or less
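
For illustration, here is a minimal Python sketch of how those bands might map a pending duration onto the 1-5 scale. The function and its exact cutoffs are assumptions derived from the table above, not TurkerView's actual code; since the table's ranges overlap at their edges, the boundaries below simply pick one side.

```python
# Hypothetical mapping from days-pending to the approval-time bands above.
# Cutoff choices at the overlapping edges of the table are assumptions.

def approval_rating(days_pending: float) -> tuple[int, str]:
    """Map a pending-approval duration in days onto the 1-5 rating scale."""
    if days_pending <= 1:
        return 5, "Very Fast"   # ~24 hours or less
    if days_pending <= 3:
        return 4, "Fast"        # ~1 - 3 days
    if days_pending <= 7:
        return 3, "Average"     # ~3 - 7 days
    if days_pending <= 14:
        return 2, "Slow"        # ~1 - 2 weeks
    return 1, "Very Slow"       # over 2 weeks

# MTurk's default auto-approval window of 3 days lands at the top of "Fast";
# the 30-day maximum falls in "Very Slow".
print(approval_rating(3))    # (4, 'Fast')
print(approval_rating(30))   # (1, 'Very Slow')
```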
