TurkerView


Fondazione Bruno Kessler

No Institutional Affiliation
  • Overview
  • Reviews 13
  • HITs 27

Fondazione Bruno Kessler Ratings


Workers feel this requester pays very badly

Poor Communication

Approves Quickly

Rejections Reported

No Blocks

Fondazione Bruno Kessler Wage History



Top Worker Reviews


Larsanix Unreliably Fast
Reviews: 333
Points: 933
Ratings: 127
Social media hate speech annotation in tweets (explicit content) - $0.03

Low

Unacceptable

Rejected (Unfair)

$8.31 / hour

00:00:13 / completion time
  • HIT Rejected

Pros

Cons

Edit: 5/22 after several attempts the requester still hasn't responded.

They claim they will reject you if you miss their "gold standard question", which could be any of them, since more than one reads as pure gibberish. The threat is also counterproductive to the task, since it appears to be subjective in nature.
At just about $0.02 more these would be worth it, especially given the threats of rejection.

Edit: They instantly rejected just one of my HITs, claiming I missed the said golden question, so I went back to review it. There were only two offensive sentences in the HIT; the rest were not. They're rejecting HITs that are totally subjective in nature. I'm pretty sure they'll keep my data too.
As a matter of fact, this is the question they ask, further justifying my claim.

The screenshot does not reveal anything that would help anyone blindly complete the HITs: https://gyazo.com/84bd1cb4477b015e5e733d4d3fdceda9

After doing some research, it appears this may be the person behind these HITs, as there are several papers that mention the content used in the HITs:
https://scholar.google.com/citations?user=7b95vjEAAAAJ&hl=en

With all of this under their belt, it is shameful that they are putting out NLP HITs that are subjective in nature and then rejecting based on whether our opinions match their own. If that is the standard, it would be best for them to just use data based on what they think and avoid posting anything on this platform.

Advice to Requester

Raise the price, don't threaten rejections for missing something so mundane. If you're really worried about quality, make a qual test for these instead.
May 8, 2020 | 1 worker found this helpful.

Marla Relaxed Pace
Reviews: 75
Points: 176
Ratings: 11
Hate speech - $0.05

Low

Unacceptable

Rejected (Unfair)

$1.73 / hour

00:01:44 / completion time
  • HIT Rejected

Pros

Stay away from this requester.
I completed 12 HITs.
11 were approved in less than an hour. Then I suddenly got 3 rejections, even though I had followed the instructions to the letter.
I keep them open on my second screen and look at them for reference.

It says I missed 'the golden question'.

I sent an email to the requester. If I hear back I will update, but I doubt I will.

Cons

Vague rejection reasons.

Advice to Requester

It seems you are rejecting based on your subjective standards.
Apr 21, 2022 | 1 worker found this helpful.

TheOneTrueChuck Fast Reader
Reviews: 52
Points: 47
Ratings: 15
Offensive content detection - Covid-19 - $0.05

Unrated

Unrated

Approved

$3.67 / hour

00:00:49 / completion time

Pros

Cons

Low-paying garbage.
Rejects submissions on subjective (opinion-based) HITs.

Advice to Requester

If you're going to be picky, raise your rates. Your 82% approval rate on subjective HITs suggests you're rejecting people for not agreeing with your own opinion.
Jan 21, 2021 | 3 workers found this helpful.


Requester ID: AS4Z9T8D863WS

Recently Reviewed HITs


Hate speech
Hate speech detection
Hate speech identification
HateSpeech Classification
Offensive content detection - BLM

Ratings Legend

Wage Aggregates

Reward Sentiment

Communication Scores

Approval Tracking

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards, color-coding the data so it's easy to digest at a glance.

Color  | Pay Range (Hourly)  | Explanation
RED    | < $7.25 / hr        | Hourly averages below the US federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the federal and highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr       | Hourly averages above all US minimum wage standards
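As a minimal sketch, the calculation and thresholds described above can be expressed in a few lines of Python. The function names here are illustrative, not TurkerView's actual implementation:

```python
def hourly_rate(reward_usd, completion_seconds):
    """Average hourly rate from a HIT's reward and completion time."""
    return reward_usd / completion_seconds * 3600

def wage_color(rate):
    """Color-code an hourly rate against US minimum-wage thresholds."""
    if rate < 7.25:    # below the federal minimum wage
        return "RED"
    if rate <= 10.00:  # between federal and highest statewide (CA) minimums
        return "ORANGE"
    return "GREEN"     # above all US minimum wage standards

# Example: the $0.05 HIT completed in 49 seconds reviewed above
rate = hourly_rate(0.05, 49)
print(f"${rate:.2f} / hour -> {wage_color(rate)}")  # $3.67 / hour -> RED
```

The $8.31 / hour figure in the first review works out the same way: $0.03 over 13 seconds is 0.03 / 13 × 3600 ≈ $8.31, which lands in the ORANGE band.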

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of them. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days, and the maximum is 30 days; we've based our ratings around those data points.

Icon      | Rating | Approval Time
Very Slow | 1 / 5  | Over 2 weeks
Slow      | 2 / 5  | ~1 - 2 weeks
Average   | 3 / 5  | ~3 - 7 days
Fast      | 4 / 5  | ~1 - 3 days
Very Fast | 5 / 5  | ~24 hours or less
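Taking the table's approximate (~) ranges as literal cutoffs, which is an assumption, the mapping from average approval time to the 1-5 scale can be sketched like this; `approval_rating` is an illustrative name, not a TurkerView API:

```python
def approval_rating(avg_approval_hours):
    """Map an average approval time (in hours) to the 1-5 scale above.

    Boundary handling is an assumption: the legend only gives
    approximate (~) ranges.
    """
    if avg_approval_hours <= 24:       # ~24 hours or less
        return 5                       # Very Fast
    if avg_approval_hours <= 3 * 24:   # ~1 - 3 days
        return 4                       # Fast
    if avg_approval_hours <= 7 * 24:   # ~3 - 7 days
        return 3                       # Average
    if avg_approval_hours <= 14 * 24:  # ~1 - 2 weeks
        return 2                       # Slow
    return 1                           # Very Slow: over 2 weeks
```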


© 2025 TurkerView