TurkerView


Textual Choreography

No Institutional Affiliation
  • Reviews: 153
  • HITs: 191

Textual Choreography Ratings


  • Workers feel this requester pays fairly
  • Poor Communication
  • Approves Quickly
  • Rejections Reported
  • No Blocks

Textual Choreography Wage History

[wage history chart]

Top Worker Reviews


Naria (Proficient Worker)
Reviews: 522
Points: 452
Ratings: 53
judging meaning similarity of English sentences - $0.22

Reward: Generous
Communication: Unacceptable
Status: Rejected

$22.63 / hour
00:00:35 completion time

Pros

Cons

Context: these are HITs comparing sentences for intent similarity; you compare a sentence to a given example sentence, using a slider ranging from "nothing alike" to "identical meaning" (or some equivalent phrasing). The instructions state that if you are given two literally identical sentences, you should mark them as "identical meaning", which makes sense. However, with sliders the answer is going to be ballparked a little; that's the nature of sliders, and I was not aware that they needed to be an exact 100/100 or else, which is absurd to begin with.

I had mass rejections for these being in the 90-100 range instead of dead on 100.

Actually 1000+ rejections for that reason.

Upon contacting them they basically (in a wordy way) just said they would not be overturning it, and that my data was unusable because of that small inconsistency.

The rejection note makes it seem a lot more cut and dried than it was, but ambiguous wording got me a massive number of rejections on a closed qual, and they don't seem to care... so consider this your warning label.

Advice to Requester

Avoid mass rejections of account-ruining proportions over ambiguous rules.
Feb 22, 2019 | 12 workers found this helpful.
Textual Choreography (Requester) Rejection Feedback:

Thank you for doing our HITs. However, we clearly state in the instructions that you need to give sentences identical to the reference 100/100, as a check of attentiveness. You failed all such checks. We have to exclude your submissions to e


jessers (Average Pace)
Reviews: 1,610
Points: 2,143
Ratings: 202
Judgments about English sentences - $2.00

Reward: Low
Communication: Unrated
Status: Approved

$6.00 / hour
00:20:00 completion time

Pros

Possibility of obtaining a qual for future work.

Cons

Threats of a block if they don't like your answers. Yucky.

Advice to Requester

I left a message on the HIT about how detrimental mturk blocks are, and encouraged them to use a qualification for a block rather than mturk's system.
Sep 24, 2017 | 3 workers found this helpful.

dgrochester55 (Average Pace)
Reviews: 668
Points: 2,322
Ratings: 508
Subjective Probability Estimation - $0.15

Reward: Unrated
Communication: Unrated
Status: Approved

$13.17 / hour
00:00:41 completion time

Pros

Easy to get the hang of. Closed qual, batch lasts a long time, approval often within 24 hours.

Cons

Some questions can be hard to estimate.
Jan 21, 2019 | 1 worker found this helpful.


Requester ID: A11J530JMR0EVG

Recently Reviewed HITs


[new interface]- sorting hypothesis according to their probability
[qualification] - sorting hypothesis according to their probability
Accept or reject chatbot responses
Amazon Review Sentiment Task - dual 1d vas
Amazon Review Sentiment Task - dual ordinal annotation

Ratings Legend

  • Wage Aggregates
  • Reward Sentiment
  • Communication Scores
  • Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then place that number on a simple range based on US minimum wage standards and color-code it so the data is easy to digest.

  • RED (< $7.25/hr): hourly averages below the US Federal minimum wage
  • ORANGE ($7.25 - $10.00/hr): hourly averages between the Federal and the highest statewide (CA) minimum wage
  • GREEN (> $10.00/hr): hourly averages above all US minimum wage standards
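
In code form, the wage aggregate calculation looks roughly like the sketch below. The thresholds come from the color legend above, but the function names and the handling of exact boundary values (e.g., precisely $10.00/hr) are illustrative assumptions, not TurkerView's actual implementation.

```python
def hourly_rate(reward: float, completion_seconds: float) -> float:
    """Average hourly rate implied by a task's reward and completion time."""
    return reward / completion_seconds * 3600

def wage_color(rate: float) -> str:
    """Color-code an hourly rate against the US minimum-wage bands above."""
    if rate < 7.25:       # below the US Federal minimum wage
        return "RED"
    elif rate <= 10.00:   # between the Federal and highest statewide (CA) minimums
        return "ORANGE"
    else:                 # above all US minimum wage standards
        return "GREEN"

# Example: the $0.22 HIT reviewed above, completed in 35 seconds
rate = hourly_rate(0.22, 35)
print(f"${rate:.2f}/hr -> {wage_color(rate)}")  # $22.63/hr -> GREEN
```

The same arithmetic reproduces the hourly figures shown on each review, e.g. $2.00 over 20 minutes comes out to $6.00/hr.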

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. Ten hours locked in Inquisit? Even at $10/hr, many workers would appreciate the heads-up on such a task. The Reward Sentiment rating helps connect workers beyond the hard data.

Underpaid (1/5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2/5)
  • Below US minimum wage ($7.25/hr)
  • No redeeming qualities to make up for the pay
Fair (3/5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4/5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5/5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions, well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Unacceptable (1/5)
  • No response at all
  • Rude response without a resolution
Poor (2/5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3/5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4/5)
  • Prompt response
  • Positive resolution
Excellent (5/5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This gives a more straightforward sense of how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've based our ratings around those data points.

  • Very Slow (1/5): over 2 weeks
  • Slow (2/5): ~1 - 2 weeks
  • Average (3/5): ~3 - 7 days
  • Fast (4/5): ~1 - 3 days
  • Very Fast (5/5): ~24 hours or less
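
As with the wage bands, this mapping amounts to a small threshold function. Here is a minimal sketch under the same caveats: the function name and the exact boundary handling (the table's ranges are approximate) are illustrative assumptions, not TurkerView's actual code.

```python
def approval_rating(hours_pending: float) -> tuple[int, str]:
    """Bucket an observed approval time (in hours) into the 1-5 scale above."""
    if hours_pending <= 24:          # ~24 hours or less
        return 5, "Very Fast"
    elif hours_pending <= 3 * 24:    # ~1 - 3 days
        return 4, "Fast"
    elif hours_pending <= 7 * 24:    # ~3 - 7 days
        return 3, "Average"
    elif hours_pending <= 14 * 24:   # ~1 - 2 weeks
        return 2, "Slow"
    else:                            # over 2 weeks
        return 1, "Very Slow"

# Example: a HIT approved after 5 days
print(approval_rating(5 * 24))  # (3, 'Average')
```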
