TurkerView


Speechfeedback

No Institutional Affiliation
  • Reviews: 604
  • HITs: 806

Speechfeedback Ratings


Workers feel this requester pays well

Unrated

Approves Quickly

Rejections Reported

Blocks Reported

Speechfeedback Wage History



Top Worker Reviews


Mouse Proficient Worker
Reviews: 20
Points: 151
Ratings: 21
Rate the quality of computer-generated speech - German@Germany native only [#00ab] - $1.80

Generous

Unrated

Approved

$19.94 / hour

00:05:25 / completion time

Pros

Clear UI that always works for me
Good to generous pay on about half of their hits (judging the German ones in this case, but I'd imagine it to be the same for other languages)

Cons

Very detail-oriented work: you need to listen for subtle vocal changes and judge accordingly.
Yup, some of their HITs are atrociously underpaid because they all pay the same amount. Some HITs have 72 voice files (<.<), others have 24 (incredible pay); the average seems to be 36, which still allows for generous pay if you don't have to go back and re-rate some audio files. Some HITs also present 2 audio files to rate in tandem while others present them one after another, so the ones with 2 audio files are usually not worth it unless, again, it's only 24 pages.

Advice to Requester

Raise the pay for HITs with more than 36 audio files, especially the ones with 72. The pay for 2 audio files per page and 1 audio file per page should also differ.
Sep 19, 2019 | 5 workers found this helpful.

marcusdavvid Fast Reader
Reviews: 7,081
Points: 3,326
Ratings: 144
Rate the quality of computer-generated speech - English@United States native only [#6e61] - $1.20

Generous

Unrated

Rejected

$14.50 / hour

00:04:58 / completion time
  • Account Blocked
  • HIT Rejected

Pros

Cons

Rejected for not listening to the full audio clips, even though listening to every clip in full would make the HIT highly underpaid. I've done a bunch of HITs from this requester and have received no notice about the quality of my work.

Edit: Apparently they blocked me for this as well.

Advice to Requester

Please be more consistent with your approval and rejection policy.
Apr 25, 2019 | 12 workers found this helpful.

HardWorkingTurker Fast Reader
Reviews: 523
Points: 2,128
Ratings: 122
Rate the quality of computer-generated speech - English@United States native only [#7f2b] - $1.20

Good

Unrated

Approved

$12.59 / hour

00:05:43 / completion time

Pros

Usually easy to do and I enjoy these. Sometimes you get a lot of audio files to rate and other times you're lucky and you only have to rate a few. So, the hourly varies, but for the two I caught, the hourly was good. These usually approve quickly, within 24 hours.

Cons

The 3-question test at the beginning was annoying, and you're given a very short window to respond. It's easy to mess up if you're unprepared, so think fast and be ready.
Dec 5, 2018 | 2 workers found this helpful.


Speechfeedback


Requester ID: A2DPU6JE37X0YV

Recently Reviewed HITs


Compare quality of audio samples - English@United States native only (non-natives rejected) [#02f0]
Compare quality of audio samples - English@United States native only (non-natives rejected) [#05a2]
Compare quality of audio samples - English@United States native only (non-natives rejected) [#07cd]
Compare quality of audio samples - English@United States native only (non-natives rejected) [#132c]

Ratings Legend


Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available), calculate the average hourly rate for the task, and then map that number onto a simple range based on US minimum-wage standards so the data can be color-coded and digested at a glance.

Color  | Pay Range (Hourly)  | Explanation
RED    | < $7.25 / hr        | Hourly averages below US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the Federal and highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr       | Hourly averages above all US minimum wage standards
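The calculation above can be sketched in a few lines of Python. The function names are mine for illustration (TurkerView's actual implementation isn't public), but the thresholds follow the table, and the sample values come from the reviews on this page:

```python
def hourly_rate(reward_usd: float, completion_seconds: int) -> float:
    """Average hourly rate: the reward scaled up to a full hour."""
    return round(reward_usd * 3600 / completion_seconds, 2)

def wage_color(rate: float) -> str:
    """Color-code an hourly rate against US minimum-wage thresholds."""
    if rate < 7.25:       # below US Federal minimum wage
        return "RED"
    if rate <= 10.00:     # between Federal and highest statewide (CA) minimums
        return "ORANGE"
    return "GREEN"        # above all US minimum wage standards

# Checked against reviews on this page:
print(hourly_rate(1.80, 5 * 60 + 25))  # 19.94 ($1.80 in 00:05:25)
print(hourly_rate(1.20, 4 * 60 + 58))  # 14.5  ($1.20 in 00:04:58)
print(wage_color(19.94))               # GREEN
```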

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants approval-time ratings mixed with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This gives you a straightforward sense of how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days and the maximum is 30 days, so we've based our ratings around those data points.

Icon      | Rating | Approval Time
Very Slow | 1 / 5  | Over 2 weeks
Slow      | 2 / 5  | ~1 - 2 weeks
Average   | 3 / 5  | ~3 - 7 days
Fast      | 4 / 5  | ~1 - 3 days
Very Fast | 5 / 5  | ~24 hours or less
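The buckets above can be expressed as a simple lookup. This is a sketch under my own naming, and the exact boundary handling is an assumption since the listed ranges are approximate:

```python
def approval_rating(hours: float) -> tuple[int, str]:
    """Map an observed approval time (in hours) to a 1-5 rating bucket."""
    if hours <= 24:            # within a day
        return (5, "Very Fast")
    if hours <= 3 * 24:        # up to the 3-day default auto-approval
        return (4, "Fast")
    if hours <= 7 * 24:        # within a week
        return (3, "Average")
    if hours <= 14 * 24:       # within two weeks
        return (2, "Slow")
    return (1, "Very Slow")    # over 2 weeks, up to the 30-day maximum

print(approval_rating(20))       # (5, 'Very Fast')
print(approval_rating(30 * 24))  # (1, 'Very Slow') -- the auto-approval maximum
```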


© 2025 TurkerView