TurkerView


IC3 AI team

No Institutional Affiliation
  • Overview
  • Reviews 511
  • HITs 644

IC3 AI team Ratings


  • Workers feel this requester pays fairly
  • Poor Communication
  • Approves Quickly
  • Rejections Reported
  • No Blocks

IC3 AI team Wage History


Heads up! We'll never hide reviews unless they violate our Terms of Service.

Top Worker Reviews


Yoyo (Average Pace)
Reviews: 68
Points: 1,899
Ratings: 186
Listen to audio files and rate their quality - $1.50

Underpaid

Unacceptable

Rejected (All)

$4.12 / hour

00:21:52 / completion time
  • HIT Rejected

Pros

Cons

Listen to 18 audio clips that are 17 seconds long each (and each clip gets listened to three times), meaning you'll be listening for a total of 16 minutes and 15 seconds minimum. They must be played fully each time, and you can't rate a clip until it has finished playing; otherwise it pauses and gives you a warning message, which you have to click away and then start the audio back up again, wasting further time.

16 minutes and 15 seconds is the absolute minimum it would take to complete this. However, that doesn't account for the other clips where you have to listen to several distorted voices and write down the three numbers they say in text boxes, or the clips where you have to judge between two different pairs of 7-second clips to see which has better audio quality. At least you don't have to listen to those last ones fully, otherwise that would be an additional full minute of audio, but even so, you're looking at roughly 20 minutes minimum to complete the HIT.

Once you've done it once, you only have to rate 12 audio clips 3 times each for the next hour, which saves you just under 6 minutes, meaning the HIT can be completed in about 15 minutes minimum instead of 20. But once the hour is over, training begins again, bumping the completion time back up.

So in reality, it takes you 20 minutes to do the first HIT, and then you can fit in additional HITs within the hour before you have to redo training. But that's only in a best-case scenario. Realistically, you're looking at being able to complete 3 HITs in an hour before training restarts, meaning you're making between $4.50 and $6 an hour max, with $6 being the absolute best case where you complete 4 HITs in under an hour, which isn't realistically possible.
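
A quick sketch of the arithmetic above, using only the figures quoted in this review (about 20 minutes for the first HIT including training, about 15 minutes per HIT afterward, $1.50 per HIT, and training that must be redone after an hour):

    # Sketch of the reviewer's math; all numbers come from the review above.
    REWARD = 1.50       # dollars per HIT
    FIRST_HIT = 20      # minutes, including training
    LATER_HIT = 15      # minutes, once training is done
    WINDOW = 60         # minutes before training has to be redone

    completed, elapsed = 0, 0
    while True:
        cost = FIRST_HIT if completed == 0 else LATER_HIT
        if elapsed + cost > WINDOW:
            break
        elapsed += cost
        completed += 1

    print(completed, "HITs fit in the hour")               # 3
    print(f"${completed * REWARD:.2f}/hr realistic")       # $4.50/hr
    print(f"${4 * REWARD:.2f}/hr best case with 4 HITs")   # $6.00/hr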

Edit: Rejected, with the reason being "Major failure in gold questions which their answers is known to us". I asked them what this means, as I looked back at the HIT and there are no "gold questions", but of course, there was no answer.

I cleared all the attention check questions in the main body of the work and in the training - they're easy to do because although you have to listen to the same clip three times, it always says the same thing, which is something like "This is an attention check, please select the number 1 now.", which you do for all three options.

This means the only other "gold" questions I could have failed are these: the first question asks you to add up the numbers 2, 3, and 1, which is 6, so that can't be it. I suppose it's possible that I missed some of the numbers in the tests where you're blasted with white noise and can halfway make out 3 numbers. It says something like, "The digits...ssshhh...are...ssshhh...3...ssshhh...2...ssshhh...9", which can be a bit difficult to hear, but I've never had any trouble with them in the past.

That leaves only the three questions where you judge the audio quality of 8 audio files and choose which one sounds better. They're 90% identical, so I always choose the one with less white noise, which has always worked out for me in the past. I guess now they've become extra strict, which is part of the reason I stopped doing HITs for them in the first place.

See, a couple of years back IC3 used to pay well, and the work was decent. Then I did a $7 HIT for them which required you to record multiple audio files (and took several hours to do) and send them to them via email. All of the work was done outside of MTurk. And predictably, with a HIT like that, they rejected me for arbitrary reasons (some of the audio clips were about a second shorter than they would have liked), getting to keep all the audio files I sent them without having to pay me. They were also combative when I reached out to them, souring my impression of them even more.

I swore them off for a long time after that, but recently I started giving them a chance again. The last HIT I did for them was extremely underpaid, but I gave them another shot with this one. This one paid even WORSE, and they're just as quick to hand out arbitrary rejections as ever.

I'll be avoiding them completely from this point on.

Advice to Requester

Ditch the training, or pay people a better wage. Your HITs used to be fair a couple years ago, but over time they've gotten worse and worse and worse.

You also need to be MUCH clearer in why you're rejecting people. Saying they had a major failure with gold questions tells them nothing. For example, what is a gold question? What was the failure? How can this be avoided?

It would be even better if you didn't reject people so randomly either, but I feel that's probably asking too much.
Sep 24, 2022 | 31 workers found this helpful.

Troy (Average Pace)
Reviews: 9,095
Points: 9,888
Ratings: 1,150
Record noisy speech - $10.00

Generous

Unrated

Approved

$16.80 / hour

00:35:43 / completion time

Pros

-So to be brief, this can pay very well depending on how your audio equipment is set up and how well prepared you are for recording. They ask you to use as many audio recording sources as possible, but technology is good enough these days that background noise was hard to pick up using the devices plugged directly into my desktop. I mean, how am I going to take my desktop out to my car to record noise? Either way, I picked up a program called Easy Voice Recorder for my Android phone after getting frustrated with my recording results, since my Astro A40 mic and main desktop mic weren't picking up enough sound. This program allows you to record with the correct specs that they wanted with Audacity, but on your phone; then you use Audacity to trim the audio. So realistically, this took me 80 minutes to get set up right while I figured out how to do this effectively, but in the final 15 minutes or so I was able to knock out 4 recordings (of 7) easily, since I was now using my phone and bringing the mic directly to the source while I read the text.

-10-15 seconds of voice recording while ambient noise is going on (7 times). Again, depending on your equipment setup, this can go either really quickly or really slowly. But the length of each clip is short.

Cons

-Voice and sound recording.
-You have to figure out how you want to do this one since they request multiple recording devices, some of which you'll only have one of. At least with the phone option, you will be able to provide 2 sources.
Apr 5, 2020 | 10 workers found this helpful.

ashjanine10 (Careful Reader)
Reviews: 216
Points: 555
Ratings: 98
Payment for failed HITs for Personalized, Emotional and Noisy Testset - $30.00

Fair

Unacceptable

Approved

$47.00 / hour

00:38:18 / completion time

Pros

Enter at your own risk is all I can say, especially on the high-paying HITs that are like 30 dollars...

Cons

I was finally paid, but it took months with barely any communication from their side. They issued me a qualification for a repayment HIT, then rejected it a week later saying they didn't have my work; none of it makes any sense. So my suggestion would be to stay far away from this requester. Literally only ONE competent, professional person works on this team, and no one communicates with anyone else over there, so one person will have your work and send you a repayment HIT, then another will come along and reject it thinking they don't have your work. Super ridiculous.
Nov 17, 2021 | 14 workers found this helpful.


IC3 AI team


Requester ID: A2SAUCU1B6BDTS

Recently Reviewed HITs


  • 12 pairs of videos to be watched and rate their quality
  • 12 pairs of videos to watch and rate their quality
  • Android -- Record noisy speech
  • Answer a survey about your opinions
  • Audio Labeling: Listen to audio clips and rate their quality and characteristics

Ratings Legend

  • Wage Aggregates
  • Reward Sentiment
  • Communication Scores
  • Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then place that number into a simple range based on US minimum wage standards and color-code the data so it's easy to digest at a glance; a small sketch of this calculation follows the table below.

Color  | Pay Range (Hourly)  | Explanation
RED    | < $7.25 / hr        | Hourly averages below US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the Federal & highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr       | Hourly averages above all US minimum wage standards
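
As a rough illustration only (this isn't TurkerView's actual code, and the function names are made up for the example), the hourly-rate calculation and color banding described above work out like this in Python:

    # Illustrative sketch of the wage-aggregate logic described above.
    def hourly_rate(reward_usd, completion_seconds):
        # Average hourly rate: reward divided by completion time, scaled to an hour.
        return reward_usd / completion_seconds * 3600

    def pay_color(rate):
        # Color-code a rate against the US minimum-wage bands in the table above.
        if rate < 7.25:
            return "RED"      # below US Federal minimum wage
        if rate <= 10.00:
            return "ORANGE"   # between Federal & highest statewide (CA) minimums
        return "GREEN"        # above all US minimum wage standards

    # Example using the first review on this page: $1.50 reward, 21:52 completion time.
    rate = hourly_rate(1.50, 21 * 60 + 52)
    print(f"${rate:.2f}/hr -> {pay_color(rate)}")   # roughly $4.12/hr -> RED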

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon and rating, with suggested guidelines:
Underpaid (1 / 5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2 / 5)
  • Below US minimum wage ($7.25/hr)
  • No redeeming qualities to make up for the pay
Fair (3 / 5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4 / 5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5 / 5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions and a well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Icon and rating, with suggested guidelines:
Unacceptable (1 / 5)
  • No response at all
  • Rude response without a resolution
Poor (2 / 5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3 / 5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4 / 5)
  • Prompt response
  • Positive resolution
Excellent (5 / 5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know roughly how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days; we've tried to base our ratings around those data points. A small sketch of the banding follows the table below.

Icon      | Rating | Approval Time
Very Slow | 1 / 5  | Over 2 weeks
Slow      | 2 / 5  | ~1 - 2 weeks
Average   | 3 / 5  | ~3 - 7 days
Fast      | 4 / 5  | ~1 - 3 days
Very Fast | 5 / 5  | ~24 hours or less
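
For illustration only (approval_score is a made-up name for this sketch, not TurkerView's code), the bands above map an approval time onto the 1-5 scale roughly like this:

    # Illustrative mapping from approval time (in days) to the score bands above.
    def approval_score(days_to_approve):
        if days_to_approve <= 1:
            return 5   # Very Fast: ~24 hours or less
        if days_to_approve <= 3:
            return 4   # Fast: ~1 - 3 days
        if days_to_approve <= 7:
            return 3   # Average: ~3 - 7 days
        if days_to_approve <= 14:
            return 2   # Slow: ~1 - 2 weeks
        return 1       # Very Slow: over 2 weeks

    # MTurk's default auto-approval is 3 days and the maximum is 30 days,
    # so anything past the auto-approval window quickly drops down the scale.
    print(approval_score(2))    # 4
    print(approval_score(30))   # 1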
