TurkerView



Darlene Walsh

No Institutional Affiliation
Reviews: 271 | HITs: 50

Darlene Walsh Ratings


  • Workers feel this requester pays well
  • Poor Communication
  • Approves Quickly
  • Rejections Reported
  • No Blocks

Darlene Walsh Wage History



Top Worker Reviews


Yoyo (Average Pace)
Reviews: 68
Points: 1,899
Ratings: 186
Social Media Consumption (P&M 2) (~ 7 minutes) - $0.90

Low

Unacceptable

Rejected (Unfair)

$10.06 / hour

00:05:22 / completion time
  • HIT Rejected

Pros

Cons

Many attention checks, which require you to remember obscure information hidden in questions on prior pages.

For example, on the first few pages there's an entire paragraph about a UK brand that is not available in the US. You'll later be answering some questions about it from memory, which you cannot afford to fail. Okay.

However, you're not actually told what the product IS, just information about the brand. Later on, you're asked a single question about your opinion on body wash, and then later on, you're asked what the product that the brand makes is.

Through insinuation you can guess that the brand makes body wash, because several pages back you were asked a question about it, but you're never actually told what the brand makes, so this is just an educated guess. It's also the only correct answer.

Really sneaky stuff.

Worse yet, even after completing the attention checks correctly, I received a rejection with the reasoning being "Sorry duplicate entries are not eligible for compensation".

I only did one submission. In fact, I looked back through my work history via MTS Tracker and found that not only had I never submitted any similar HIT, but I've only submitted one other HIT to this Requester at all...7 months ago.

Edit: I recently accepted another study from this Requester with the same exact title. This means it definitely IS possible to submit duplicates of this same study, because they don't bother to filter out people who have already worked on their identical HITs. Really lazy, and even more concerning given how quick they already are to reject for duplicates without checking.

I've still not heard back from the Requester since contacting them last week, so they get a bad mark in communication from me as well.

Advice to Requester

Be better about your attention checks. Attention checks that require memory to recall the correct answer are already pushing it; attention checks where the correct answer has to be guessed based on memory are insane.

Also, be more careful about actually verifying that multiple submissions were made before rejecting for it. The only thing I can think of that may have caused my rejection for "multiple submissions" is that I had to open the HIT again in another tab to go back and re-read all the information to pass the attention checks. But again, that is the fault of you, the Requester. If you set up attention checks that require memory and guessing, don't be surprised when people are unable to pass them without going back and rereading. You've set up that fail state, and Workers shouldn't be punished for taking the necessary steps to avoid it.
Jan 23, 2024 | 1 worker found this helpful.

Raymond James (Proficient Worker)
Reviews: 5,403
Points: 2,907
Ratings: 110
Perception, Situations and Consumers Insights (~ 10 minutes) - $1.00

Unrated

Unrated

Approved

$11.54 / hour

00:05:12 / completion time

Pros

Interesting questions.

Cons

There are 3 writing prompts to start off the HIT. There is a 3-question attention check at the end of the survey that you must pass before you are given a completion code. Remember the details of the HIT, or you could spend 10 minutes working on it and not be compensated.
Jul 20, 2022

bugeekman (Proficient Worker)
Reviews: 28,195
Points: 17,533
Ratings: 1,047
Smartphone Apps, Choices and Consumer Insights (~ 7 minutes) - $0.70

Unrated

Unrated

Pending

$14.91 / hour

00:02:49 / completion time

Pros

Cons

Poorly placed attention checks. They put them at the very end of the survey, after you complete demographics, when you think the survey is over.

Advice to Requester

Place your little pop quiz after the main survey, then ask for demographics. It's quite possible to forget some details of your generic survey while doing the demographics section.
May 12, 2022 | 1 worker found this helpful.


Requester ID: A2JIAWOI6GCBDO

Recently Reviewed HITs

  • Behavior in a shopping experience (E.1) (~ 15 minutes)
  • Brand Perceptions (~ 10 minutes)
  • Brand Perceptions (~ 3 minutes)
  • Brand-Related User Generated Content (~ 7 minutes)
  • Brand-Related User Generated Content S2 (~ 7 minutes)

Ratings Legend


Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards to color-code the data so it's easy to digest.

Color    Pay Range (Hourly)     Explanation
RED      < $7.25 / hr           Hourly averages below US Federal minimum wage
ORANGE   $7.25 - $10.00 / hr    Hourly averages between Federal & highest statewide (CA) minimum wages
GREEN    > $10.00 / hr          Hourly averages above all US minimum wage standards
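The calculation above can be sketched in a few lines. This is a minimal illustration, not TurkerView's actual code; the function names `hourly_rate` and `pay_color` are my own:

```python
def hourly_rate(reward_usd, completion_seconds):
    """Average hourly wage: reward divided by completion time in hours."""
    return reward_usd / (completion_seconds / 3600)

def pay_color(rate):
    """Color band per the US-minimum-wage thresholds in the table above."""
    if rate < 7.25:
        return "RED"     # below US Federal minimum wage
    if rate <= 10.00:
        return "ORANGE"  # between Federal and highest statewide (CA) minimums
    return "GREEN"       # above all US minimum wage standards

# The first review above: $0.90 reward, 00:05:22 completion time
rate = hourly_rate(0.90, 5 * 60 + 22)
print(f"${rate:.2f} / hour -> {pay_color(rate)}")  # prints: $10.06 / hour -> GREEN
```

This reproduces the $10.06 / hour figure shown on the first review on this page.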

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even at $10/hr many workers would appreciate the heads up on such a task. The Reward Sentiment rating helps connect workers beyond the hard data.

Underpaid (1 / 5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2 / 5)
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair (3 / 5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4 / 5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5 / 5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Unacceptable (1 / 5)
  • No response at all
  • Rude response without a resolution
Poor (2 / 5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3 / 5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4 / 5)
  • Prompt response
  • Positive resolution
Excellent (5 / 5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've tried to base our ratings around those data points.

Icon        Rating   Approval Time
Very Slow   1 / 5    Over 2 weeks
Slow        2 / 5    ~1 - 2 weeks
Average     3 / 5    ~3 - 7 days
Fast        4 / 5    ~1 - 3 days
Very Fast   5 / 5    ~24 hours or less
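The bands above can be expressed as a simple lookup. Note the table's edges are approximate ("~3 - 7 days" and "~1 - 3 days" both touch day 3), so the exact cutoffs below are my assumption, and `approval_speed` is a name of my own, not a TurkerView API:

```python
def approval_speed(days_pending):
    """Map time-to-approval in days to the 1-5 speed rating above.

    Band edges in the table are approximate; the inclusive upper
    bounds used here are one reasonable reading of them.
    """
    if days_pending <= 1:
        return 5  # Very Fast: ~24 hours or less
    if days_pending <= 3:
        return 4  # Fast: ~1 - 3 days
    if days_pending <= 7:
        return 3  # Average: ~3 - 7 days
    if days_pending <= 14:
        return 2  # Slow: ~1 - 2 weeks
    return 1      # Very Slow: over 2 weeks
```

For example, a HIT left on MTurk's 3-day default auto-approval would land in the "Fast" band under this reading.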


© 2025 TurkerView