TurkerView

Angel Chang

No Institutional Affiliation
  • Overview
  • Reviews 7
  • HITs 18

Angel Chang Ratings


Workers feel this requester pays poorly

Good Communication

Approves Quickly

No Rejections

No Blocks

Angel Chang Wage History



Top Worker Reviews


Yoyo Average Pace
Reviews: 68
Points: 1,899
Ratings: 187
Annotate words or phrases in sentences with 3d object labels - $2.00

Underpaid

Unrated

Approved

$4.52 / hour

00:26:34 / completion time

Pros

Cons

It actually took well over half an hour to complete. I did not accept the HIT immediately, because I took the time to read all of the instructions first. Including that time, this was extremely underpaid.

The interface is very bad. The 3D areas where you annotate objects are sluggish to move around and hard to navigate without them jumping around at random.

Some objects are also not selectable, making the work impossible to complete properly.

As an example of this, there might be a sentence that reads something like "The orange sofa with a cushion on it sits next to the coffee table. It also sits next to a chair on the left."

In that case, you would highlight "The orange sofa" and the two instances of "it" and connect them to the 3D render of the sofa. You would also take the phrase "the chair" and link it to the chair sitting next to it, by highlighting the text and then double-clicking the 3D object.

Seems simple enough, but you would also be required to highlight "a cushion" and select it...but the cushion may not be a 3D object you can select. You can search for all objects in the environment from a sidebar as well, so you can know for sure if the object is selectable or not. If it's not...you're at risk.

Even worse is when the text mentions objects that aren't even in the scene. One example was that I had to highlight a bed...in what looked like a kitchen. There was no bed, so I had to skip that entirely, even though, of course, that puts you at risk of rejection. (I left them feedback explaining all of this, of course.)

They're VERY specific about this work, despite paying so low for it. The instructions say that even if you get one or two things wrong they will reject your work. So I was very careful not to make any mistakes and to highlight any instance of an object, or the word "it" or "that" or whatnot, which could refer to a noun. (You also have to make sure you highlight descriptors like "the long table", but not descriptors that come after, like "the table that is long", which would just be highlighted as "the table" instead. Or selecting "green chair" would be incorrect, but "a green chair" is acceptable. That's how specific they are.)

Still, whatever, I finished the HIT. It was annoying to use, underpaid, and broken, but whatever, I finished it. A day later I got a threatening email. It told me that of the 10 scenes I annotated, only 4 were acceptable to them. They said I did not read the instructions, I did not link objects correctly, and I did not highlight determiners and pronouns such as "the northmost one", "the leftmost one", "the one", "another", "this", and so on. (This was very odd, as none of the sentences I encountered used phrases like this; they were usually very straightforward, reading more along the lines of "The rectangular end table is to the left of the door." instead.)

They explained that normally they don't reject work, but mine was so bad they don't have a choice. They hate to do it, but I was just awful, you see.

"Luckily", they went on to give me a forum I could fill out, where I would agree to complete another round of the task for free, and they wouldn't reject me. So long as I agreed to annotate AT LEAST another 10 rooms at minimum, they may reconsider. However, if I did not agree within 24 hours they would reject me without argument.

Now, obviously, this screams blackmail and sounds insane... but it gets weirder. Two minutes later they sent me another email saying they approved my work. What. I didn't even know I was at risk of getting rejected or that any of this was going on until about 12 hours later, when I saw both emails!

I have no idea if that email was meant for another Worker and was sent to me by mistake, or if I was approved by mistake, or what, but the fact that they're trying to coerce free, additional work out of people does not sit well with me.

Please, please avoid this Requester for your own sake.

Advice to Requester

1. Don't blackmail your Workers to get extra work. Seriously. That shouldn't even need to be said.
2. Better communication. I have no idea if these emails were even meant for me, but they came off as crazy. And again, giving someone 24 hours to respond to your threat is not good communication, even if what you were asking was reasonable (which it was not)!
3. Clearer instructions. If you feel that these instructions are so difficult for people to follow, then that's on you for not making them clear enough. It takes over 10 minutes just to get through them, then another 30 minutes to do the task, and you're still going to blame the Worker, despite underpaying them by a ton.
4. Pay people a fair wage! If a job takes 30+ minutes to complete, you think it's complex, AND you still want quality work, then you have to pay for it! This should pay at least twice what it does at minimum.
5. Fix the interface. Part of the reason the HIT is so frustrating to use is how slow everything runs, and how difficult it is to navigate. Everything from highlighting the text to selecting objects to moving around is more difficult than it needs to be, and often results in errors that need to be fixed before moving on, further slowing things down.
6. Check your HITs before you submit them for Workers to work on. If you're asking people to annotate a room and match the text to the 3D objects, then you'd better be certain those objects are selectable or are ACTUALLY in the room.
Jul 28, 2022

jclark19 Average Pace
Reviews: 410
Points: 766
Ratings: 93
Annotate words or phrases in sentences with 3d object labels - $2.00

Low

Good

Approved

$7.21 / hour

00:16:38 / completion time

Pros

It's different and fairly easy to do.

Cons

Update: Got an email from the requester stating that while my work was "good" I had missed a few things to annotate, like "the leftmost one"...and then they threatened to reject future submissions if any annotations are left out. Okay...this is underpaid anyway, so I won't be working on these anymore.

Underpaid for the amount of work; it should pay at least $3.00 to make the pay a little more fair. I can only do two or three before I have to stop and give my wrist a break.

Advice to Requester

Up the pay a bit or make the HIT shorter, and don't threaten good workers.
Jun 29, 2022

fjleon Fast Reader
Reviews: 230
Points: 680
Ratings: 100
Annotate words or phrases in sentences with 3d object labels - $2.00

Underpaid

Unrated

Approved

$5.04 / hour

00:23:48 / completion time

Pros

quick approval. unique, interesting task

Cons

Takes too long, the interface is hard to navigate to the correct angle, it's underpaid, and they threaten to reject in the future, even with an 8/10 score, which is what I got.

Advice to Requester

this should pay 6 dollars
Nov 2, 2021 | 4 workers found this helpful.


Angel Chang

Requester ID: A2AZGZEDP36L1A

Recently Reviewed HITs


Annotate words or phrases in sentences with 3d object labels
Refer to the object in yellow box in natural language

Ratings Legend

Wage Aggregates

Reward Sentiment

Communication Scores

Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then apply that number to a simple range based on US minimum wage standards and color-code the data so the numbers are easy to digest.

Color  | Pay Range (Hourly)  | Explanation
RED    | < $7.25 / hr        | Hourly averages below US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between Federal & highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr       | Hourly averages above all US minimum wage standards
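
As a rough illustration only (this is not TurkerView's actual code; the thresholds simply restate the table above), the hourly-rate calculation and color bucketing could be sketched like this:

```python
# Minimal sketch of the wage-aggregate calculation described above.
# Not TurkerView's implementation; the thresholds just restate the legend table.

def hourly_rate(reward_usd: float, completion_seconds: int) -> float:
    """Average hourly rate: reward divided by completion time in hours."""
    return reward_usd / (completion_seconds / 3600)

def pay_color(rate: float) -> str:
    """Color-code an hourly rate against the US minimum-wage reference points."""
    if rate < 7.25:       # below US Federal minimum wage
        return "RED"
    if rate <= 10.00:     # between Federal and highest statewide (CA) minimum wage
        return "ORANGE"
    return "GREEN"        # above all US minimum wage standards

# Example from the first review above: a $2.00 HIT finished in 26:34.
rate = hourly_rate(2.00, 26 * 60 + 34)
print(f"${rate:.2f}/hr -> {pay_color(rate)}")  # ~$4.52/hr -> RED
```

Plugging in the second review ($2.00 in 16:38) gives roughly $7.21/hr, which lands in the ORANGE band.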

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know roughly how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've tried to base our ratings around those data points.

Icon      | Rating | Approval Time
Very Slow | 1 / 5  | Over 2 weeks
Slow      | 2 / 5  | ~1 - 2 weeks
Average   | 3 / 5  | ~3 - 7 days
Fast      | 4 / 5  | ~1 - 3 days
Very Fast | 5 / 5  | ~24 hours or less
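
Purely as an illustrative sketch (not any official TurkerView logic; the cut-offs just mirror the table above), the mapping from an observed approval time to the 1-5 rating might look like this:

```python
# Illustrative mapping from an observed approval time (in days) to the
# 1-5 approval-speed rating above. The cut-offs mirror the legend table;
# this is not TurkerView's actual code.

def approval_rating(days: float) -> tuple[int, str]:
    if days <= 1:
        return 5, "Very Fast"   # ~24 hours or less
    if days <= 3:
        return 4, "Fast"        # ~1 - 3 days
    if days <= 7:
        return 3, "Average"     # ~3 - 7 days
    if days <= 14:
        return 2, "Slow"        # ~1 - 2 weeks
    return 1, "Very Slow"       # over 2 weeks

print(approval_rating(3))   # (4, 'Fast') -- the default 3-day auto-approval window
print(approval_rating(30))  # (1, 'Very Slow') -- the 30-day maximum
```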

