TurkerView


JHU-APL BeHAVE Lab

No Institutional Affiliation
  • Reviews: 67
  • HITs: 16

JHU-APL BeHAVE Lab Ratings


  • Reward sentiment: Workers feel this requester pays generously
  • Communication: Unrated
  • Approval speed: Approves Quickly
  • Rejections: None reported
  • Blocks: None reported

JHU-APL BeHAVE Lab Wage History



Top Worker Reviews


NBadger Fast Reader
Reviews: 2,823
Points: 3,157
Ratings: 769
Decision making and email security - $10.00

Reward: Generous | Communication: Unrated | Status: Pending

$21.65 / hour | 00:27:43 completion time

Pros

- Three parts: pre-survey, task, and post-survey, all done through Qualtrics. The progress bar for the surveys jumps around a bit, but in a positive way, and is decent at letting you know roughly how much you have left.
- The task has a time limit of 30min, though it likely won't take you that long, and the timer ensures no one will spend too much time on it.
- The 60min estimate is extremely generous, but even if you did take that long it'd still be above minimum wage. Because of the timer limitations, it'd be almost impossible to drop your wage below that.
- 4 hour timer so there's no rushing to get it done.
- The formatting used for the task is super easy to use and well laid out.
- Mentions a possible bonus of $0-$8 for correct answers.

Cons

Oct 19, 2017 | 2 workers found this helpful.

scoot412 Fast Reader
Reviews: 1,210
Points: 5,460
Ratings: 737
Predict performance of a baseball image classifier - $15.00

Reward: Unrated | Communication: Unrated | Status: Approved

$31.97 / hour | 00:28:09 completion time

Pros

Instructions are very clear: 7 blocks, 20 images per block.
7 blocks total: baseline block, 2 training blocks, test block, 2 more training blocks, test block. Each of those blocks had 20 images and 3 bubble questions. Each block took me right around 3 minutes to complete. After each of the two training blocks, there is a short section where you review the pictures and then answer a few bubbles.
At the end, there is one more page asking about your opinion/experience. One short written response (no character minimum) and some bubbles.

All in all, this shouldn't take much longer than 30-40 minutes.
$5 bonus if you are in the top 30% in the two testing blocks.
4 hour timer provided.

Cons

A little boring, but you can take breaks in between blocks if you need to.
Jun 27, 2022

Charlie Turksalot Relaxed Pace
Reviews: 432
Points: 928
Ratings: 153
Study a book, analyze group dynamics (~900 minutes) - $300.00 + $20.00 bonus (Confirmed!)

Reward: Generous | Communication: Unrated | Status: Approved

$32.00 / hour | 10:00:00 completion time

Pros

Great pay for an interesting study.
One week timer. You can stretch this out over several days without issue.
Bonus mentioned here and there for reimbursement of purchased material.

Cons

Time is a very rough estimate and will vary widely based on your reading and writing speed.
Plenty to think about and write about.
Requires a book purchase (hint: check your local library).
Multiple places to log in and work, but not too bad to manage.
Rating system is kind of arbitrary and it was hard to categorize things fully based on the subject matter I had. There are several different topics in different iterations of this study, so YMMV.
Dec 5, 2021


Requester ID: A202UOPNF08JH1

Recently Reviewed HITs


  • Decision making and email security
  • Predict performance of a baseball image classifier
  • predict performance of an automated image classifier
  • Questionnaire about current events ($6 // 30 mins.)
  • Respond to short videos ($3 // 15 mins.)

Ratings Legend

Wage Aggregates

Reward Sentiment

Communication Scores

Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then map that rate onto a simple range based on US minimum-wage standards and color-code it so the numbers are easy to digest.

Color  | Pay Range (Hourly) | Explanation
RED    | < $7.25 / hr       | Hourly averages below the US federal minimum wage
ORANGE | $7.25-$10.00 / hr  | Hourly averages between the federal and highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr      | Hourly averages above all US minimum-wage standards
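The calculation above can be sketched in a few lines of Python. This is an illustrative reconstruction of the described formula and color bands, not TurkerView's actual implementation; the function names are made up for the example.

```python
# Sketch of the wage-aggregate color coding described above (assumed logic,
# not TurkerView's actual code).

def hourly_rate(reward_usd, completion_seconds):
    """Average hourly rate from the reward amount and completion time."""
    return reward_usd / (completion_seconds / 3600)

def color_code(rate):
    """Map an hourly rate onto the RED/ORANGE/GREEN bands in the table above."""
    if rate < 7.25:        # below US federal minimum wage
        return "RED"
    elif rate <= 10.00:    # between federal and highest statewide (CA) minimums
        return "ORANGE"
    else:                  # above all US minimum-wage standards
        return "GREEN"

# Example: the $10.00 HIT above, completed in 00:27:43
rate = hourly_rate(10.00, 27 * 60 + 43)
print(f"${rate:.2f}/hr -> {color_code(rate)}")  # prints "$21.65/hr -> GREEN"
```

Note that the $21.65/hr figure matches the wage shown on the first review above, so the aggregate appears to be exactly this reward-over-time ratio.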

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Rating | Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and they're worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Rating | Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to gauge how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've based our ratings around those data points.

Rating    | Score | Approval Time
Very Slow | 1 / 5 | Over 2 weeks
Slow      | 2 / 5 | ~1-2 weeks
Average   | 3 / 5 | ~3-7 days
Fast      | 4 / 5 | ~1-3 days
Very Fast | 5 / 5 | ~24 hours or less
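The buckets above reduce to a simple threshold check. A minimal sketch, assuming cutoffs fall exactly at the stated boundaries (the site's real thresholds may differ, and the function name is invented for the example):

```python
# Sketch of the approval-time rating buckets from the table above
# (assumed exact cutoffs; TurkerView's real thresholds may differ).

def approval_rating(days_pending):
    """Map days-until-approval to the 1-5 scale described above."""
    if days_pending <= 1:       # ~24 hours or less
        return 5, "Very Fast"
    elif days_pending <= 3:     # ~1-3 days
        return 4, "Fast"
    elif days_pending <= 7:     # ~3-7 days
        return 3, "Average"
    elif days_pending <= 14:    # ~1-2 weeks
        return 2, "Slow"
    else:                       # over 2 weeks
        return 1, "Very Slow"

# MTurk's default auto-approval is 3 days; the maximum is 30.
print(approval_rating(3))   # prints "(4, 'Fast')"
print(approval_rating(30))  # prints "(1, 'Very Slow')"
```

Under these assumed cutoffs, a Requester who simply lets the 3-day default auto-approval run lands at "Fast", while one relying on the 30-day maximum bottoms out at "Very Slow".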


© 2025 TurkerView