TurkerView

Glen Zuska

No Institutional Affiliation
Reviews: 68 · HITs: 1113

Glen Zuska Ratings

  • Workers feel this requester pays generously
  • Poor Communication
  • Approves Quickly
  • Rejections Reported
  • No Blocks

Glen Zuska Wage History

(wage history chart not reproduced)

Top Worker Reviews

Yoyo (Average Pace)
Reviews: 68 · Points: 1,899 · Ratings: 186
ImageCoding_UvA_446 - $5.00

Pay: Good · Communication: Unrated · Approval: Approved

$16.62 / hour · 00:18:03 completion time

Pros

Doesn't take that long to do, if you're careful enough.

Pay is good.

Cons

Images load slowly and are hard to see. The page formatting is bad: the images scroll along with the page for some reason, so they cover up the text as you scroll down.

Attention checks are phrased particularly badly. For example, one reads "1=Please select #1; 10= select #1". It turns out you're supposed to select answer 1 on question 10, but you'd be forgiven for thinking you're supposed to select answer 1 or answer 10 for question 1. That misreading could lead you to select a different answer for question 10, which automatically fails you out of the survey...on the very last question, and warns you not to take the survey again. Talk about 10 minutes wasted!

Of course, given how unfairly that attention check is set up, I did risk it and took the survey again in an Incognito tab - there are other checks as well, and they're easy to miss, so be careful! - so we'll see what happens.

Also, if you do any other HITs by the same Requester you open yourself up to rejection, as they seem to consider doing more than one study with similar titles to be taking the same study twice. For example, if you do "ImageCoding_UvA_439", don't do "ImageCoding_UvA_433" or anything with a similar title.

Because they've released more than 400 versions of this HIT, each with its own set of unique images to rate, it's easy to do them more than once. The bad formatting of the study should tip you off, though. So just be careful not to repeat them! And if you have submitted one before and end up finishing another, just don't submit. I know it wastes a bit of time, but it's not worth the rejection.

Advice to Requester

Release all of your work under a single HIT, instead of spreading it out among hundreds of unique HITs. Spreading it out increases the chances of Workers doing multiple of your HITs, (apparently) causing headaches for you and ruining your research, and it hurts Workers in the process. It also gums up scrapers and other HIT-tracking programs by flooding MTurk with similarly named HITs over and over.

Please fix your attention checks; they're so poorly phrased that they're easy to get wrong. The formatting of your study also needs work: as it's set up now, the images we're supposed to rate are tiny - I had to open them in new tabs to see them properly - and they scroll WITH the page, covering up the questions we're trying to answer. For someone so concerned with the quality of their data, these things greatly impact the data you're getting back and should be fixed ASAP.
Jan 9, 2023 | 6 workers found this helpful.

johnxyz (Proficient Worker)
Reviews: 12,535 · Points: 13,357 · Ratings: 1,280
Music Project (1) (2) - $0.55

Pay: Underpaid · Communication: Unacceptable · Approval: Rejected (Unfair)

$24.75 / hour · 00:01:20 completion time

Pros

One page

Cons

- Rejected because I've completed a HIT by this requester from group C. There is no mention of group C in the instructions, only a warning not to repeat groups 1 or 2 (see screenshot below)
- No response to any messages (one sent per week). A month has now passed, and the rejection can no longer be reversed.
- Comprehension checks
- Update: now rejects based on not spending enough time!

Advice to Requester

Your instructions are not consistent with your rejection policy. They say "don't do more than one HIT from group 1 or 2", but I have only done a HIT from group C, which the instructions never mention.

Here is a screenshot: https://i.imgur.com/bMWsHTk.png
Notice that there is NO MENTION of Project (C).

You've reposted this HIT and now added an additional radio button that says "I will not do this HIT if I've completed Project (1) or (2)" but still can't respond to any emails?

Please do not reject workers for not spending enough time on this HIT. It was a fairly simple task and many of us read faster than others.
Dec 1, 2018 | 6 workers found this helpful.

Troy (Average Pace)
Reviews: 9,095 · Points: 9,888 · Ratings: 1,150
Image Coding - 4 - $3.00

Pay: Generous · Communication: Unrated · Approval: Approved

$19.39 / hour · 00:09:17 completion time

Pros

Pretty mindless
Progress bar

Cons

1) This requester LOVES to reject people for taking their surveys twice, even though the surveys may appear very different from others you've done for them in the past. So be VERY careful in taking this one.
2) If only there were some sort of setting or code within the MTurk system that blocks duplicate submissions... if only?
3) ACs
4) The progress bar seems to drag when you start, but there are only 5 or 6 pics you need to evaluate, so it goes quicker than you're initially led to believe
Jun 9, 2020 | 7 workers found this helpful.

Glen Zuska

Requester ID: A1OQOUPPKOU4QJ

Recently Reviewed HITs

  • Answer a short survey about music genre recombinations
  • Image Coding - 105
  • Image Coding - 131
  • Image Coding - 135
  • Image Coding - 168

Ratings Legend

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then place that number in a simple range based on US minimum wage standards to color-code the data so it's easy to digest.

Color  | Pay Range (Hourly)  | Explanation
RED    | < $7.25 / hr        | Hourly averages below US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between Federal & highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr       | Hourly averages above all US minimum wage standards
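The calculation above can be sketched in a few lines. This is an illustrative reconstruction, not TurkerView's actual code: the thresholds come from the table, and the function names are made up for the example.

```python
# Sketch of the wage color-coding described above. The thresholds come from
# the table; the helper names are illustrative, not TurkerView's actual code.

FEDERAL_MIN_WAGE = 7.25     # US Federal minimum wage, $/hr (RED below this)
STATE_MAX_MIN_WAGE = 10.00  # highest statewide (CA) minimum wage, $/hr

def hourly_rate(reward_usd: float, completion_seconds: int) -> float:
    """Average hourly rate: reward divided by completion time in hours."""
    return reward_usd / (completion_seconds / 3600)

def color_code(rate: float) -> str:
    """Bucket an hourly rate into the RED / ORANGE / GREEN ranges."""
    if rate < FEDERAL_MIN_WAGE:
        return "RED"
    if rate <= STATE_MAX_MIN_WAGE:
        return "ORANGE"
    return "GREEN"

# The first review above: $5.00 reward, 00:18:03 completion time.
rate = hourly_rate(5.00, 18 * 60 + 3)
print(f"${rate:.2f} / hour -> {color_code(rate)}")  # $16.62 / hour -> GREEN
```

Note that this reproduces the $16.62 / hour figure shown in the first review on this page.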

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Rating | Suggested Guidelines
Underpaid (1 / 5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2 / 5)
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair (3 / 5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4 / 5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5 / 5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions and a well-designed HIT

Communication Ratings

Communication is an underrated aspect of MTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Rating | Suggested Guidelines
Unacceptable (1 / 5)
  • No response at all
  • Rude response without a resolution
Poor (2 / 5)
  • Responsive, but unhelpful
  • Required IRB or other extra intervention
Acceptable (3 / 5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4 / 5)
  • Prompt response
  • Positive resolution
Excellent (5 / 5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've tried to base our ratings around those data points.

Rating    | Score | Approval Time
Very Slow | 1 / 5 | Over 2 weeks
Slow      | 2 / 5 | ~1 - 2 weeks
Average   | 3 / 5 | ~3 - 7 days
Fast      | 4 / 5 | ~1 - 3 days
Very Fast | 5 / 5 | ~24 hours or less


© 2025 TurkerView