TurkerView

MLText

University of Pittsburgh
  • Reviews: 94
  • HITs: 29

MLText Ratings


  • Workers feel this requester pays well
  • Okay Communication
  • Approves Quickly
  • No Rejections
  • No Blocks

MLText Wage History



Top Worker Reviews


SolidWork (Careful Reader)
Reviews: 28
Points: 129
Ratings: 15
One-off payment task for worker A6FQHZ58DQ0S6 - $12.00

Pay: Fair
Communication: Poor
Approved
$13.84 / hour
00:52:01 completion time

Pros

- Writing work.
- Competitive pay for Mturk.
- Interactive program work.

Cons

- 4+ months of worker time, hundreds of hours and thousands of dollars monthly. It was great work at times.
- MLText's AI program failed during 10%+ of my work time, so comp HITs were necessary.
- In the software development business this is normally a Quality Assurance (QA) expense, but this requester usually only pays for conversation HITs that work.
- For most comp HITs, after you emailed the requester documenting the issue, they would email back asking you to explain what happened, repeatedly, in between saying "I trust you, I'll pay." This was confusing and inconsistent.
- I stopped Anthropic / MLText work this week because time lost to the failed AI program went unpaid, again.
- I requested pay a second time for an hour's unpaid work, because they asked for more worker time on their Slack server today.
- As a direct result, the requester blocked my MTurk quals today, blocked my access to their Slack work environment, and parted ways in response.
- In my experience, the AI work environment displayed unconscious bias around gender, race, and socioeconomic demographic categories, especially in the AI's responses. Workers should not have to repeatedly request pay for their work, for example.
- It's an easy thing to fix, and they chose a different direction. If you ask workers to keep working for you on your Slack server today, you should pay for outstanding work so we can continue. That is a fair and reasonable expectation.
- This project is funded by Anthropic / MLText; their requester name on MTurk is MLText.

Advice to Requester

Loved the work, although their treatment of MTurk workers needs improvement. A worker should never be blocked after asking to be paid for time lost to the requester's failed AI program. If this could be adjusted, I'd continue working with this group. They're developing a great product, and diverse voices in that development only strengthen the software.
Feb 22, 2022

MirrorMan (Careful Reader)
Reviews: 1,109
Points: 1,754
Ratings: 403
Talk to AI assistant - $7.50

Pay: Low
Communication: Good
Approved
$11.17 / hour
00:40:17 completion time

Pros

Kind of a fun chat task with an AI.
Can maybe be done quicker if you get the system to give you short, concise answers without rambling on.

Cons

AI. types. out. answers. one. word. at. a. time. and. you. have. to. wait. for. it. to. complete. typing. before. you. can. select. anything.
The timer is only an hour, so you need to start pretty much as soon as you accept it to keep it from timing out while you are working on it.
Sometimes the Assistant will lock up and not respond to your message. I have had this happen a few times. I messaged the requester and they worked out a comp HIT for me the one time, but the other times I did not hear back from them. So, be cautious.

Advice to Requester

Get that AI to stop with the single word typing and the HIT would take half as long.
Sep 22, 2021 | 4 workers found this helpful.

fartarsenal (Proficient Worker)
Reviews: 964
Points: 1,642
Ratings: 157
Make an AI assistant say bad things - $7.50

Pay: Fair
Communication: Unrated
Approved
$14.11 / hour
00:31:54 completion time

Pros

reliable requester

Cons

What have they done to my boy? The AI now seems to generate text blocks completely before showing them, but it takes a long time (upwards of 1 minute minimum) regardless of response length, effectively doubling the time to complete these HITs.
Jan 4, 2022 | 2 workers found this helpful.


MLText


Requester ID: A3B0Y7IBBOL0DH

Recently Reviewed HITs


  • Answer 6 anonymous demographic questions, for aggregate information about study participants
  • Evaluate an AI Research Assistant
  • Have five conversations with a text assistant ($1.50 per conversation)
  • Make an AI assistant say bad things
  • One-off payment task for worker A1XUZFDVKP95VC

Ratings Legend

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then place that number into a simple range based on US minimum wage standards to color-code the data, making it easy to digest at a glance. A minimal sketch of this calculation appears after the table below.

Color  | Pay Range (Hourly)  | Explanation
RED    | < $7.25 / hr        | Hourly averages below the US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the Federal and highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr       | Hourly averages above all US minimum wage standards
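
To make the color-coding concrete, here is a minimal Python sketch of that calculation, using the review above ($12.00 reward, 00:52:01 completion time) as the worked example; the function names are illustrative and the thresholds are simply taken from the table, not from TurkerView's actual code.

    # Illustrative sketch only -- not TurkerView's implementation.
    def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
        """Scale the reward up to a full hour of work."""
        return reward_usd * 3600.0 / completion_seconds

    def wage_color(rate: float) -> str:
        """Color-code an hourly rate against US minimum wage reference points."""
        if rate < 7.25:        # below the US Federal minimum wage
            return "RED"
        elif rate <= 10.00:    # between Federal and highest statewide (CA) minimums
            return "ORANGE"
        else:                  # above all US minimum wage standards
            return "GREEN"

    rate = hourly_rate(12.00, 52 * 60 + 1)              # ~13.84
    print(f"${rate:.2f} / hour -> {wage_color(rate)}")  # $13.84 / hour -> GREEN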

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of the interaction between Requesters and Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know roughly how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days; we've tried to base our ratings around those data points. A small sketch of this bucketing follows the table below.

Rating    | Score | Approval Time
Very Slow | 1 / 5 | Over 2 weeks
Slow      | 2 / 5 | ~1 - 2 weeks
Average   | 3 / 5 | ~3 - 7 days
Fast      | 4 / 5 | ~1 - 3 days
Very Fast | 5 / 5 | ~24 hours or less
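
As a rough illustration, the bucketing above could be expressed as a small Python helper; the cutoffs simply mirror the table and are not TurkerView's actual code.

    # Illustrative sketch only -- cutoffs taken from the table above.
    def approval_rating(approval_days: float) -> tuple[int, str]:
        """Map an observed approval time (in days) to a 1-5 rating and label."""
        if approval_days <= 1:      # ~24 hours or less
            return 5, "Very Fast"
        elif approval_days <= 3:    # ~1 - 3 days
            return 4, "Fast"
        elif approval_days <= 7:    # ~3 - 7 days
            return 3, "Average"
        elif approval_days <= 14:   # ~1 - 2 weeks
            return 2, "Slow"
        else:                       # over 2 weeks
            return 1, "Very Slow"

    print(approval_rating(3))   # (4, 'Fast') -- the default 3-day auto-approval
    print(approval_rating(30))  # (1, 'Very Slow') -- the 30-day maximum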

