TurkerView

Ai2 Israel

No Institutional Affiliation
Reviews: 56 | HITs: 340

Ai2 Israel Ratings

  • Workers feel this requester pays fairly
  • Good Communication
  • Approves Quickly
  • No Rejections
  • No Blocks

Ai2 Israel Wage History
Heads up! We'll never hide reviews unless they violate our Terms of Service.

Top Worker Reviews


Nikoratos (Average Pace)
Reviews: 85 | Points: 298 | Ratings: 27
AI research - Decompose complex questions into steps. Random guesses not approved! - $0.30

Reward Sentiment: Underpaid | Communication: Unrated | Approved
$3.00 / hour | 00:06:00 completion time

Pros

It's not a bad task, but it needs a better interface and higher pay to be worthwhile.

Cons

The interface is not good. See the Advice to Requester section. If I'm just being dense and not doing it right, please let me know and maybe show me where I'm going wrong.

Advice to Requester

The interface needs to be more flexible. A few suggestions:
1) Eliminate the need to type the # every time.
2) When I type #1 and hit Enter, #12 is what I get. To get #1, I have to type #1, press the up arrow, and then press Enter. You have to use #1 in every single task, and that gets aggravating in a hurry.
3) A delete function for every step instead of just the last in the sequence.
4) The ability to reorder already-written Steps. Drag and drop preferably, but I'd settle for being able to do it by changing the Step's number manually. That field can't be changed now.
5) Drag and drop on the bubbles for list entries would be nice too, as would being able to delete them individually instead of having to delete everything that comes after them on the line.

The task is interesting. Unfortunately, the interface makes it not worth doing. If there's something you can do about that, it would increase the appeal considerably.
Apr 20, 2019 | 4 workers found this helpful.

xqyzt (New Reviewer)
Reviews: 2 | Points: 10 | Ratings: 5
AI research - Decompose complex questions into steps. Random guesses not approved! - $0.30

Reward Sentiment: Fair | Communication: Excellent | Approved
$5.02 / hour | 00:03:35 completion time

Pros

The requester is not as rejection-happy as the HIT description would have you believe. They want you to do well and give you ample information to do so, including feedback and reviews of submitted HITs. The hourly can really vary depending on how complex the command is.

Cons

The pay is on the lower side. Some I've done were around $16/hr, others closer to $5/hr.

Advice to Requester

It would be nice to have these sorted by difficulty and have the more difficult ones pay more for the time they take.
Apr 19, 2019 | 4 workers found this helpful.

Alexandra in VT (Average Pace)
Reviews: 8,350 | Points: 5,826 | Ratings: 671
Teach-Your-AI game: fool the AI and teach it by asking yes/no questions. - $0.70

Reward Sentiment: Underpaid | Communication: Unrated | Approved
$4.03 / hour | 00:10:25 completion time

Pros

Cons

"To verify that Workers actually complete 100 points, we require each Worker to enter a unique 200 point completion code to your HIT. The code will appear when you reach 200 points in the game."

This is unclear. First it says this, but then, above where you enter the completion code, it asks for the 100-point code. I went to 200, just in case.
Dec 1, 2020 | 1 worker found this helpful.


Requester ID: A367KVG928LL2U

Recently Reviewed HITs

  • **UPDATED QUALIFICATIONS** Write creative information-seeking questions for a particular person-of-interest
  • AI research - Answer Questions using Wikipedia Evidence
  • AI research - Answering SIMPLE Questions using Wikipedia Evidence
  • AI research - Decompose complex questions into steps. Random guesses not approved!
  • Creative Question Writing

Ratings Legend


Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then place that number into a simple range based on US minimum wage standards, color-coding the data so it's easy to digest at a glance (a quick code sketch of this mapping follows the table below).

Color  | Pay Range (Hourly) | Explanation
RED    | < $7.25/hr         | Hourly averages below the US Federal minimum wage
ORANGE | $7.25 - $10.00/hr  | Hourly averages between the Federal and highest statewide (CA) minimum wages
GREEN  | > $10.00/hr        | Hourly averages above all US minimum wage standards
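
As a concrete illustration, here's a minimal Python sketch of the calculation described above. The function names (`hourly_rate`, `wage_color`) and the exact boundary handling are assumptions for illustration only, since TurkerView's actual implementation isn't published; the thresholds come straight from the table.

```python
# Illustrative sketch only: TurkerView's actual code isn't published.
# Thresholds are taken from the color table above.

FEDERAL_MIN_WAGE = 7.25     # RED below this (US Federal minimum wage)
TOP_STATE_MIN_WAGE = 10.00  # GREEN above this (highest statewide minimum, CA)

def hourly_rate(reward: float, completion_seconds: float) -> float:
    """Average hourly rate: reward divided by completion time in hours."""
    return reward / (completion_seconds / 3600.0)

def wage_color(rate: float) -> str:
    """Map an hourly rate onto the RED/ORANGE/GREEN bands."""
    if rate < FEDERAL_MIN_WAGE:
        return "RED"
    if rate <= TOP_STATE_MIN_WAGE:
        return "ORANGE"
    return "GREEN"

# First review above: $0.30 reward, 00:06:00 completion time.
rate = hourly_rate(0.30, 6 * 60)
print(f"${rate:.2f}/hr -> {wage_color(rate)}")  # $3.00/hr -> RED
```

Plugging in the first review above ($0.30 reward, six minutes) yields $3.00/hr, which lands in the RED band, matching that reviewer's Underpaid rating.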

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. Ten hours locked in Inquisit? Even at $10/hr, many workers would appreciate a heads-up on such a task. The Reward Sentiment rating helps connect workers beyond the hard data.

Underpaid (1/5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2/5)
  • Below US minimum wage ($7.25/hr)
  • No redeeming qualities to make up for the pay
Fair (3/5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4/5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5/5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions, well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of the interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Unacceptable (1/5)
  • No response at all
  • Rude response without a resolution
Poor (2/5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3/5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4/5)
  • Prompt response
  • Positive resolution
Excellent (5/5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've tried to base our ratings around those data points.

Rating    | Score | Approval Time
Very Slow | 1/5   | Over 2 weeks
Slow      | 2/5   | ~1 - 2 weeks
Average   | 3/5   | ~3 - 7 days
Fast      | 4/5   | ~1 - 3 days
Very Fast | 5/5   | ~24 hours or less
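
For illustration, here's a minimal Python sketch of how a measured approval delay could be bucketed into these bands. The `approval_rating` function and its exact boundary handling are hypothetical; only the band edges come from the table above.

```python
# Hypothetical bucketing of a measured approval delay into the bands above.
# Only the band edges come from the table; everything else is illustrative.

def approval_rating(hours: float) -> tuple[int, str]:
    """Return (score, label) for an approval delay measured in hours."""
    if hours <= 24:           # ~24 hours or less
        return 5, "Very Fast"
    if hours <= 3 * 24:       # ~1 - 3 days
        return 4, "Fast"
    if hours <= 7 * 24:       # ~3 - 7 days
        return 3, "Average"
    if hours <= 14 * 24:      # ~1 - 2 weeks
        return 2, "Slow"
    return 1, "Very Slow"     # over 2 weeks

# MTurk's default auto-approval (3 days) lands at the Fast/Average boundary:
print(approval_rating(3 * 24))   # (4, 'Fast')
# The 30-day maximum would always read as Very Slow:
print(approval_rating(30 * 24))  # (1, 'Very Slow')
```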

