TurkerView

edirewrite

No Institutional Affiliation
  • Reviews: 175
  • HITs: 137

edirewrite Ratings


  • Workers feel this requester pays well
  • Poor Communication
  • Approves Quickly
  • Rejections Reported
  • No Blocks

edirewrite Wage History



Top Worker Reviews


AfterDarkMark Average Pace
Reviews: 1,229
Points: 4,923
Ratings: 483
Ed-Judge the relevance of a machine-translated document summary (2019-01-08) - $0.30

Good

Unrated

Pending

$10.00 / hour

00:01:48 / completion time

Pros

Can be done faster. Did a bunch, so not really sure what my average is. READ the cons. Might save ya some trouble that I almost had.

These are good tasks though. Masters. Certainly pay fairly/well. You speed up once you get the hang of it, though there are some tricky ones to work out.

Helps if you can skim quickly or you might be slower than I was at these. Again, not sure on my average time. Probably did some in under a minute. Some in over 2 minutes, more at first.

Cons

BE CAREFUL! Within this same batch (there was a different 20 cent one up at the same time, but just talking about this one), the instructions can vary hit to hit!!!

There was a set of orange instructions or pink instructions on the first page of each hit at the bottom, but within the same batch they change frequently! And don't go simply based off the word at the top (which is colored orange or pink) as sometimes that stays pink, even though the instructions at the bottom have changed to orange.

Here's why that's so important. For the orange ones, we were looking for the exact phrase or a synonym in the same form - the example the requester gave is that lavatory is okay for bathroom, if that's the word, but bathrooms (plural) is not. Or purchased is okay for bought (because both are past tense/synonyms), but buy is not... So you could only mark the scale if the word was exact or a synonym in the exact form. In this version, if submarines was in the document and the word was submarine, then "none found" should be marked.

For the pink instructions, the form did not matter. So, if the word was navel, then belly button, belly buttons, and navels were okay. Or if the word was buy, then bought, buys, purchased, etc. are all okay. So any form of the word should lead to us marking the main scale and not the "none found" (or whatever it says) bubble.

That's really tricky and sneaky (though not intentionally, I don't believe). I didn't notice until I had done quite a few, so I'm actually expecting some rejections, possibly. Hopefully not, but some of the questions - I think 1 or 2 out of the 6 in each hit - are quality control questions (usually easy to tell which ones), and I fear reading the instructions wrong, or not realizing they were changing, may have led me to screw some up.

This might not make sense until you actually do the hit, but it's really important. I'm so used to batch instructions staying the same throughout that I didn't reread the instructions every task, or was just mindlessly skipping to the 2nd page where the actual task begins. Guess it's a good lesson to be more careful if I do get the expected rejects.
Jan 10, 2019 | 9 workers found this helpful.

klee Average Pace
Reviews: 9,743
Points: 5,663
Ratings: 428
Evaluate summaries of reviews - $0.80

Good

Unrated

Rejected (Some)

$14.12 / hour

00:03:24 / completion time
  • HIT Rejected

Pros

Cons

1 out of 5 of my submissions was rejected with a message "Thank you for your time and effort. However, your submission contains incorrect answers to sub-tasks to which answers are known in advance."

I was very thorough in making sure I marked the ones that were obvious with the correct response. I suppose I could have missed one, but I really took my time with these hits to try and make sure I didn't. So perhaps they weren't as obvious as I thought and these rejections are subjective. I'm not going to keep doing these; even though the pay is okay, it's just not worth the risk.
Apr 21, 2020 | 2 workers found this helpful.

johnxyz Proficient Worker
Reviews: 12,502
Points: 13,311
Ratings: 1,251
Rating fluency of English questions - $3.00

Generous

Unrated

Rejected (Unfair)

$26.67 / hour

00:06:45 / completion time
  • HIT Rejected

Pros

All inside frame

Cons

Can be subjective
Some questions had no answer that made sense, but you are forced to pick one
REJECTED: "Failed control sample: selected output that was exactly the same as original as being the most dissimilar." It's not fair because all questions must be answered, and thus we have to pick one as the most dissimilar (even if they are exactly the same).
Jan 22, 2021 | 2 workers found this helpful.


edirewrite
Requester ID: AP3RB93L4B1OD

Recently Reviewed HITs


  • adjust the text to the data in the table
  • Answer a few questions based on a document summary
  • Answer a few questions based on a summary
  • Customer reviews summarisation
  • Ed-Judge the relevance of a machine-translated document summary (2018-12-12/v2)

Ratings Legend

  • Wage Aggregates
  • Reward Sentiment
  • Communication Scores
  • Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards, color-coding the data so it's easy to digest at a glance (a rough sketch of the calculation follows the table below).

Color  | Pay Range (Hourly)  | Explanation
RED    | < $7.25 / hr        | Hourly averages below the US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the Federal and the highest statewide (CA) minimum wage
GREEN  | > $10.00 / hr       | Hourly averages above all US minimum wage standards
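A minimal sketch of that calculation, assuming the reward is in dollars and the completion time is in seconds; the thresholds mirror the table above, and the function names are illustrative rather than TurkerView's actual implementation:

```python
def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
    """Average hourly rate implied by a single task."""
    return reward_usd / completion_seconds * 3600


def wage_color(rate_per_hour: float) -> str:
    """Color-code an hourly rate against US minimum wage standards."""
    if rate_per_hour < 7.25:       # below the US Federal minimum wage
        return "RED"
    elif rate_per_hour <= 10.00:   # between Federal and highest statewide (CA) minimums
        return "ORANGE"
    else:                          # above all US minimum wage standards
        return "GREEN"


# Example using the second review above: $0.80 reward, 3:24 (204 s) completion time
rate = hourly_rate(0.80, 204)                     # ~14.12
print(f"${rate:.2f} / hr -> {wage_color(rate)}")  # $14.12 / hr -> GREEN
```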

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well-designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of the interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know roughly how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days; we've tried to base our ratings around those data points (a small bucketing sketch follows the table below).

Rating    | Score | Approval Time
Very Slow | 1 / 5 | Over 2 weeks
Slow      | 2 / 5 | ~1 - 2 weeks
Average   | 3 / 5 | ~3 - 7 days
Fast      | 4 / 5 | ~1 - 3 days
Very Fast | 5 / 5 | ~24 hours or less
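As a rough illustration of how those buckets could line up in code, assuming approval time is measured in days (the function name and signature are illustrative, not TurkerView's actual implementation):

```python
def approval_rating(approval_days: float) -> tuple[int, str]:
    """Map an observed approval time (in days) to the 1-5 buckets above."""
    if approval_days <= 1:
        return 5, "Very Fast"   # ~24 hours or less
    elif approval_days <= 3:
        return 4, "Fast"        # ~1 - 3 days
    elif approval_days <= 7:
        return 3, "Average"     # ~3 - 7 days
    elif approval_days <= 14:
        return 2, "Slow"        # ~1 - 2 weeks
    else:
        return 1, "Very Slow"   # over 2 weeks


print(approval_rating(3))  # (4, 'Fast') -- the default MTurk auto-approval window
```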


© 2025 TurkerView · Privacy · Terms · Blog · Contact