TurkerView

HI-Research

No Institutional Affiliation
Reviews: 301 | HITs: 43

HI-Research Ratings


  • Workers feel this requester pays well
  • Poor Communication
  • Approves Quickly
  • Rejections Reported
  • No Blocks

HI-Research Wage History

[wage history chart omitted]

Top Worker Reviews


Yoyo (Average Pace)
Reviews: 68 | Points: 1,899 | Ratings: 186
Analyzing Spatial Relationships in AI-Generated Images - $0.15

  • Reward: Generous
  • Communication: Unrated
  • Approval: Rejected (Some)
  • Wage: $36.00 / hour
  • Completion time: 00:00:15
  • HIT Rejected

Pros

Can be done quickly and easily.

Cons

Requester is rejection-happy. It's better to answer randomly than accurately.

The timer is only 5 minutes long, meaning you have little time to do these as batch work.

I was rejected on 2 HITs for not filling in one of the 5 answer choices. In this HIT you're presented with an AI-generated image, usually of 2 objects, and you rate it on how realistic it looks (versus cartoonish), how realistic it is (for these objects to be presented as they are), and the dimensions of the objects.

For example, in one a toothbrush was above and to the left of a carrot, so I marked those choices: * Toothbrush to left of carrot / * Toothbrush above carrot

Easy enough. You also have to check whether the objects are merged together. You can select N/A on the questions about whether they're merged or separate, or where they're located, if the suggested objects don't appear in the image (for example, an image might have a zebra and nothing else, in which case you'd confirm the zebra is there, but not whatever else is supposedly in the image, like a potted plant).

The trouble is that on a couple of them both items were there. I got one that featured an image of an alarm clock and an orange, except they were fully merged together. Instead of the hands and face of the clock where you would see the time, you saw the inside of an orange. It was great surrealist art, but impossible to check correctly.

The orange was neither to the left, right, above, nor below the clock. It was inside it. However, I couldn't mark N/A either, because that option is only to be used if the object isn't in the image, and the orange is. So my solution was to leave it blank. And remember, you only have a 5-minute timer, so you'll have to decide what to do fairly quickly.

Anyhow, the Requester didn't like that, and they rejected my work on the 2 HITs where I encountered an issue like that, citing my failure to select an answer, even though choosing a correct answer was impossible.

I truly do hate when Requesters don't consider the nuances of the work they put out, and toss out rejections like candy at Halloween.

I did 35 HITs, and got 2 rejections, which makes these HITs riskier than they have any right to be for 15 cents with a 5-minute timer.

Careful with these.

Advice to Requester

Add more options to choose from if you want quality work; there aren't enough choices to properly judge some of your images.

Increase the timer. 5 minutes is hardly enough if you want unrushed, quality work.
Oct 25, 2022 | 3 workers found this helpful.

dmcal813 (Fast Reader)
Reviews: 50 | Points: 248 | Ratings: 38
Analyzing Spatial Relationships in AI-Generated Images - $0.15

  • Reward: Good
  • Communication: Unacceptable
  • Approval: Rejected (Some)
  • Wage: $23.48 / hour
  • Completion time: 00:00:23
  • HIT Rejected

Pros

These are somewhat fun, but not worth it (see Cons).

Cons

I've been doing these HITs for a while with no issue, and now this requester is suddenly very rejection-happy. I got the same rejection reason on two HITs from this batch as everyone else here: "Answers to questions are missing." I don't think you can submit the HIT without checking all of the boxes, so I'm not even sure how this is possible. However, it is sometimes impossible to choose the correct answer, as none of the options match the direction of the target object, so you just have to take your best guess. The rejection reason wasn't specific enough for me to tell which it was.

But like I said, I've done these before, didn't change how I was doing them, and this was never a problem until now, so be careful if you decide to do these. If the requester has a new way they'd like us to do these HITs, they should clearly state that in the instructions!

Very short timer, and as I mentioned, sometimes the target objects don't match up with any of the four options to choose from. The direction will be in front or behind, but there's no option for that.

I would avoid this requester at this point. I know I will.

Advice to Requester

Use a longer timer, give clearer instructions if there's a new way you expect us to do these, and add correct options for target objects.
Oct 25, 2022 | 1 worker found this helpful.

TheWerkz (Average Pace)
Reviews: 210 | Points: 1,130 | Ratings: 111
Classify a recipe based on its ingredients - $0.05 + $0.10 bonus (Confirmed)

  • Reward: Generous
  • Communication: Unrated
  • Approval: Approved
  • Wage: $20.77 / hour
  • Completion time: 00:00:26

Pros

Finally, watching Food Network for way too many hours on end for way too many years is starting to pay off!
These are great if you really know your food ingredients and you cook; otherwise you probably won't get the $0.10 bonus for each recipe type you classify, and then the pay is only $0.05 with no bonus.
These can run the gamut between really great pay and mediocre or worse; it all depends on how many ingredients are listed and how fast you can recognize the cuisine type out of 10 possible types of cuisine AND be correct more often than not.
About 4 days afterwards, your mailbox gets a bunch of $0.10 bonus notifications for all of the recipes you correctly classified.

Cons

60-second timer! You cannot go ham on these!
Some cuisines' ingredients are very similar to each other, and there are no measurements for the ingredients or cooking directions, which would sometimes be helpful.
Jul 16, 2019 | 1 worker found this helpful.


HI-Research
Requester ID: A2Y8KDPWV85C53

Recently Reviewed HITs


  • (WARNING: This HIT may contain adult content. Worker discretion is advised.) Assess harmfulness of text from humans and AI.
  • [30mins, $8] (Event detector) Work with an AI-powered email management system to complete a series of tasks
  • [30mins, $8] (Search Tool) Work with an AI-powered email management system to complete a series of tasks
  • [30mins, $8] Work with an AI-powered email management system to complete a series of tasks
  • [30s, $.15] Label URLs for COVID-19 vaccine intent

Ratings Legend

  • Wage Aggregates
  • Reward Sentiment
  • Communication Scores
  • Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then place that number into a simple range based on US minimum wage standards, color-coding the data so the numbers are easy to digest.

Color  | Pay Range (Hourly)   | Explanation
RED    | < $7.25 / hr         | Hourly averages below the US federal minimum wage
ORANGE | $7.25 - $10.00 / hr  | Hourly averages between the federal and the highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr        | Hourly averages above all US minimum wage standards
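
As a rough sketch of the calculation described above (the thresholds come from the table; the function names and boundary handling are illustrative assumptions, not TurkerView's actual code):

    # Hourly-rate calculation and color bucketing, per the table above.
    def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
        """Average hourly rate: reward divided by completion time in hours."""
        return reward_usd / (completion_seconds / 3600.0)

    def wage_color(rate: float) -> str:
        """Color-code an hourly rate against US minimum wage standards."""
        if rate < 7.25:        # below the federal minimum wage
            return "RED"
        if rate <= 10.00:      # between federal and highest statewide (CA) minimums
            return "ORANGE"
        return "GREEN"         # above all US minimum wage standards

    # Example: the $0.15 spatial-relationships HIT above, finished in 15 seconds.
    rate = hourly_rate(0.15, 15)                      # 36.0
    print(f"${rate:.2f}/hr -> {wage_color(rate)}")    # $36.00/hr -> GREEN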

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. Ten hours locked in Inquisit? Even at $10/hr, many workers would appreciate a heads-up on such a task. The Reward Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wage for the task (consider self-employment taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've based our ratings around those data points.

Icon      | Rating | Approval Time
Very Slow | 1 / 5  | Over 2 weeks
Slow      | 2 / 5  | ~1-2 weeks
Average   | 3 / 5  | ~3-7 days
Fast      | 4 / 5  | ~1-3 days
Very Fast | 5 / 5  | ~24 hours or less
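
For illustration, a minimal sketch of how an approval time might map onto these buckets; only the ranges come from the table, while the function name and the exact boundary handling are assumptions:

    # Approval-time rating buckets, per the table above.
    # Boundary handling (e.g. exactly 7 days) is an assumption.
    def approval_rating(days_to_approve: float) -> tuple[int, str]:
        """Map an approval time in days to a (rating, label) pair."""
        if days_to_approve <= 1:
            return 5, "Very Fast"   # ~24 hours or less
        if days_to_approve <= 3:
            return 4, "Fast"        # ~1-3 days (MTurk's default auto-approval is 3 days)
        if days_to_approve <= 7:
            return 3, "Average"     # ~3-7 days
        if days_to_approve <= 14:
            return 2, "Slow"        # ~1-2 weeks
        return 1, "Very Slow"       # over 2 weeks

    print(approval_rating(3))       # (4, 'Fast')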

