
Computer Vision Turk

No Institutional Affiliation
  • Reviews: 185
  • HITs: 17

Computer Vision Turk Ratings

  • Workers feel this requester pays fairly
  • Poor Communication
  • Approves Quickly
  • No Rejections
  • Blocks Reported

Computer Vision Turk Wage History

(wage history chart not captured)

Top Worker Reviews


sarin (Unreliably Fast)
Reviews: 10,091
Points: 10,085
Ratings: 1,705
Answer Simple Question about Image - $0.12

Generous | Unrated | Approved
$39.27 / hour | 00:00:11 completion time

Pros

Actual hourly: $42.79/hr over 758 HITs (yes this is real)

TL;DR: big brainless bountiful batch

This batch pays well for the dumbest reason. The requester is throwing all of their object names into each image and seeing which one applies, which results in 99% of the given objects not being in the image. This means that most of the time, you can comfortably select 'No' for all the objects in the image.

There is only one image in each HIT, and most of the images are typical everyday images. They are simple most of the time, and you can almost always figure out the setting of the image, which can make determining objects even easier.

They provide keybinds for the HIT, and you can rifle through the objects super quick with these keybinds. With multiple tabs and the provided keybinds, you can make this batch extremely lucrative.

The requester has consistently posted work around 10AM and 9PM CST every day. When they post work, they drop big batches and smaller batches for several hours at a time, which means it should be available for everyone.

All of the above make the batch pay as well as it does, and it makes for a good batch to increase your numbers on as well.

The requester uses a somewhat closed qual system. You're given a 'tutorial' HIT, then the HIT after that is what they use to screen for good workers. If you don't submit adequate work, you'll be locked out of the requester's HITs until they post a new HIT with [NEW BATCH]. This requester generally knows what they're doing and how to get the best data out of mturk.

Cons

TL;DR: minor annoyances, misleading pay, annoying keybinds

The hourly fluctuates a lot between images. Sometimes you're given really easy images with a few objects, other times you're given images of crowds or food with tens of objects in them, which requires more careful looking.

When you miss a keybind or don't answer, there will be an annoying fullscreen popup telling you that you didn't select an answer, which can ruin your flow.

The keybinds themselves are very annoying and straining to use. The keybinds you'll use the most are Ctrl+N to go to the next object, and Ctrl+B to go back to an object. These keybinds require you to stretch out your hand and constantly press N or B, which strains your hand very quickly. This can be worked around using macros (a rough sketch follows below), or by having a stack of coins or something similar to hold the Ctrl key down.
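
A minimal macro sketch of that workaround, assuming the HIT really does respond to Ctrl+N / Ctrl+B as described; the pynput dependency and the F1/F2 key choices are assumptions for illustration, not anything the requester provides.

```python
# Remap single keys to the HIT's Ctrl+N / Ctrl+B bindings so you don't have to
# hold Ctrl and stretch for N or B. Requires the third-party pynput package.
from pynput import keyboard

kb = keyboard.Controller()

def send_ctrl(char):
    # Hold Ctrl and tap the given character, so one tap replaces the two-key stretch.
    with kb.pressed(keyboard.Key.ctrl):
        kb.tap(char)

# F1 -> Ctrl+N (next object), F2 -> Ctrl+B (previous object); key choices are arbitrary.
with keyboard.GlobalHotKeys({
        '<f1>': lambda: send_ctrl('n'),
        '<f2>': lambda: send_ctrl('b'),
}) as hotkeys:
    hotkeys.join()
```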

Every 20-25 HITs or so, you'll get the 'gentle reminder' that adds an unnecessary click to the HIT. This reminder doesn't really mean anything, since there isn't much to be careful about on an image where you can answer 'No' to 95% of the objects anyway.

The instructions say you'll only have to answer 8 questions, but in reality you're answering 10 every time, so the HIT is somewhat misleading. At the advertised 8 questions, the $0.12 reward works out to about 1.5 cents per question, so in a perfect world they would pay 15 cents for each 10-question HIT.

Sometimes, the image will blur and move around in an annoying, nauseating way. According to some workers, this can be fixed by blocking an element that causes the blur.

Very rarely, you will come across an image with scantily clad women. This may be a pro if you're into that, but otherwise this is somewhat adult content that isn't disclosed by the requester.
Mar 5, 2019 | 20 workers found this helpful.

sarin (Unreliably Fast)
Reviews: 10,091
Points: 10,085
Ratings: 1,705
Spot All Objects of a Particular Type - $0.20

Good | Unrated | Approved
$16.00 / hour | 00:00:45 completion time

Pros

Actual hourly: $16.32/hr over 782 HITs

This batch is posted consistently around the morning and evening on weekdays
Thousands of HITs when they're posted, so you can grind them out for hours
Can reload instead of return to get a new keyword to label with if you don't want to hurt your return rate for some reason
Good interface; you can usually locate what's labeled, and the zoom-in feature can help you properly label objects
Usually fast to load pictures
Can go back and fix your work if you miss something
Get to laugh at idiot workers that label pizza as a book, for example
Easy keywords can boost your hourly
Same day approval
No qualifications needed apart from 98% approval rate
1 hour timer, so you can fill your queue with these

Good keyword examples:
laptop, monitor, keyboard, computer mouse, plate, toilet, any large animal

Cons

Instructions page adds an unnecessary click to start the HIT, and the 'gentle reminder' adds another click
Pictures with large filesizes will kill your hourly as they take forever to load
Pictures load one at a time as you progress, which hurts the hourly somewhat
Pictures with large crowds of people, groups of objects, or small/thin objects are hard/tedious to label, and you're better off just F5'ing
Rarely get the 'Index of /' page, forcing you to F5 or return
Small pictures cause your spotting tool to place the spot away from where you click, which is annoying to get used to
The initial object label is red, which can blend into red objects easily, turning the HIT into the worst scavenger hunt
When labeling, your spots are blue, which also blends into blue objects, making it harder to verify your work and whether you're on the object or not
Only US turkers can work on this batch
When rejecting a photo for not having an object, it will sometimes not add to the 5 photo count for the HIT, which wastes time
Very rarely you will go through a softcore sexual image, but not explicit enough to require the adult worker classification
Hard keywords tank your hourly and kill your morale, and it's especially demotivating when you're forced to either label 20+ objects for the last photo or reload and waste the effort you put into the other 4 photos
Inconsistent on how many objects you need to label
Lots of horror stories about people getting softbanned after doing 2 HITs from this batch
Mind numbing after an hour

Hard keyword examples:
ski, ski pole, signpost, lightbulb, sunglasses, ANYTHING FRUIT/VEGETABLES, book, shoe, knob

Advice to Requester

Make the 'show hint' button available when you need to verify the initial label, as it will save workers a lot of headache trying to find labels that blend into the object
Jan 9, 2019 | 12 workers found this helpful.

scoot412 (Fast Reader)
Reviews: 1,210
Points: 5,460
Ratings: 737
Spot Objects in Image - $0.15

Underpaid | Unrated | Pending
$2.25 / hour | 00:04:00 completion time

Pros

none

Cons

The requester's statement that these HITs pay "well above an average mturk HIT" is absurd. I did a handful of these when they first posted (and no items had been marked yet). For starters, images take too long to load. It was nearly impossible to get them done in under two minutes, which makes the pay rate less than $5/hour. Out of curiosity, I tried two later on (after many items had been marked); between having to skip images because there was nothing left to mark and hunting for the few obscure things that could still be marked, the two I did took me roughly 7 minutes each. Total garbage made worse by the requester touting how well he pays.
I've adjusted the completion time to reflect an average and, frankly, am probably being generous.

Advice to Requester

You should test these HITs yourself and then explain how they're so well paid. Images often take several seconds each to load (on a fast, wired internet connection); waiting for ten separate images to load takes too long and makes the pay not worth it. Factor in the typing and tagging (x10) and this is some of the most poorly paid batch work on mTurk.
Dec 15, 2018 | 10 workers found this helpful.


Computer Vision Turk
Requester ID: A1BEQYW3DRR3BR

Recently Reviewed HITs


[EFFICIENCY IMPROVEMENTS] Answer Simple Question about Image
[FIXED][NEW BATCH] Trace Object Boundaries
[INCREASED PAY] Verify All Instances of Object Have Boundaries
[INCREASED PAY] Verify Object Boundaries
[NEW BATCH] Answer Simple Question about Image

Ratings Legend

  • Wage Aggregates
  • Reward Sentiment
  • Communication Scores
  • Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then place that number on a simple range based on US minimum wage standards to color-code the data so it's easy to digest (see the sketch after the table below).

Color | Pay Range (Hourly) | Explanation
RED | < $7.25 / hr | Hourly averages below the US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the Federal & highest statewide (CA) minimum wages
GREEN | > $10.00 / hr | Hourly averages above all US minimum wage standards
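
A rough sketch of that calculation, assuming (my assumption) the hourly figure is simply reward divided by completion time, scaled to an hour; the function names are illustrative, not TurkerView's actual code. The thresholds mirror the table above.

```python
# Compute a HIT's implied hourly rate and bucket it by the color ranges above.
def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
    """Average hourly rate implied by one HIT's reward and completion time."""
    return reward_usd / completion_seconds * 3600

def wage_color(hourly: float) -> str:
    """Color-code an hourly rate against US minimum wage thresholds."""
    if hourly < 7.25:
        return "RED"      # below US Federal minimum wage
    if hourly <= 10.00:
        return "ORANGE"   # between Federal and highest statewide (CA) minimums
    return "GREEN"        # above all US minimum wage standards

# Example: the $0.12 image HIT reviewed above at ~11 seconds per HIT
print(wage_color(hourly_rate(0.12, 11)))  # ~$39.27/hr -> GREEN
```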

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Rating (Score) | Suggested Guidelines

Underpaid (1 / 5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2 / 5)
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair (3 / 5)
  • Minimum wages for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4 / 5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5 / 5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions, well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of them. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Rating (Score) | Suggested Guidelines

Unacceptable (1 / 5)
  • No response at all
  • Rude response without a resolution
Poor (2 / 5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3 / 5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4 / 5)
  • Prompt response
  • Positive resolution
Excellent (5 / 5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've tried to base our ratings around those data points.

Rating (Score) | Approval Time
Very Slow (1 / 5) | Over 2 weeks
Slow (2 / 5) | ~1 - 2 weeks
Average (3 / 5) | ~3 - 7 days
Fast (4 / 5) | ~1 - 3 days
Very Fast (5 / 5) | ~24 hours or less

