
James Forrest

No Institutional Affiliation
Reviews: 38 | HITs: 39

James Forrest Ratings

  • Workers feel this requester pays generously
  • Communication: Unrated
  • Approves Quickly
  • No Rejections
  • No Blocks

James Forrest Wage History

(Wage history chart not reproduced here.)

Top Worker Reviews


Marj Relaxed Pace
Reviews: 33
Points: 195
Ratings: 20
Automatically Explaining Machine Learning Decisions - $4.00

Unrated

Unrated

Approved

$12.89 / hour

00:18:37 / completion time

Pros

none

Cons

Be extremely careful of this requester.

Claimed I was rejected for the following:

  • Your answers to 'Which variable is most important in the model's decision and explain why' were all too short, being only a feature name.
  • Your answers to the questions 'Please explain why the model decided to refuse credit' & 'How could the subject of the decision change the values of their data, to make the model change its decision from rejecting to accepting the credit application?' were cut and pasted from my text, not your own words.

Additionally:

  • Your work was suspiciously quick (6 minutes, not my piloted time of around 20 minutes).
  • Your work was very similar to another MTurker's HIT.

I counted words; it took a full 20 minutes. I lost my internet briefly while working on the HIT and had to reopen it. Apparently they did not pay attention to the timer on their survey answers.

I wrote every answer myself, restating the question, not cutting and pasting as we were told we should.

They claim my answers were "similar to another" worker's. What BS; we all work independently.

This requester has no idea what they are doing.

Avoid like the plague if you want to get paid for the work you do.

EDIT: The requester has contacted back saying there is a tech issue and the rejection will be overturned.

Advice to Requester

Have some ethics. They are sorely lacking.
Sep 7, 2021 | 1 worker found this helpful.

Lumius Proficient Worker
Reviews: 1,379
Points: 3,289
Ratings: 336
Comprehending Explanations of Machine Learning Decisions - D1 - $7.50

Generous

Unrated

Approved

$67.33 / hour

00:06:41 / completion time

Pros

Quick approval.
Write 3 sentences of at least 10 words, explaining a scenario, then do this two more times.
9 sentences of required writing, plus 3 optional sentences, a few bubbles, then done.
Brief.
Probably worth baking.
>99% approval.

Cons

Writing.
Threatens rejection for "low effort"
Nov 12, 2021 | 1 worker found this helpful.

skittles Careful Reader
Reviews: 695
Points: 749
Ratings: 231
Comprehending Explanations of Machine Learning Decisions - E Qualifier - $7.50

Unrated

Unrated

Approved

$14.93 / hour

00:30:09 / completion time

Pros

Run Forrest Run!

Cons

Look at three examples and explain why a credit application was refused. Threats of rejection if explanations are less than ten words. 9 sentences in total.
Nov 5, 2021 | 1 worker found this helpful.


Requester ID: A3MOWK2PSLSY5

Recently Reviewed HITs


Automatically Explaining Machine Learning Decisions
Comprehending Explanations of Machine Learning Decisions - B
Comprehending Explanations of Machine Learning Decisions - B1
Comprehending Explanations of Machine Learning Decisions - B2
Comprehending Explanations of Machine Learning Decisions - B3

Ratings Legend


Wage Aggregate Tracking

This is fairly straightforward: we take the completion time & the reward amount (where available) and calculate the average hourly rate for the task. We then place that number into a simple range based on US minimum wage standards and color-code it so the numbers are easy to digest at a glance; a minimal code sketch of the calculation follows the table below.

  • RED (< $7.25 / hr): hourly averages below the US federal minimum wage
  • ORANGE ($7.25 - $10.00 / hr): hourly averages between the federal and the highest statewide (CA) minimum wage
  • GREEN (> $10.00 / hr): hourly averages above all US minimum wage standards
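Here's a minimal sketch of the idea in Python, assuming the reward is given in US dollars and the completion time in seconds. It's illustrative only: the thresholds come from the table above, but the function names and structure are simplified, not the exact code we run.

# Minimal sketch of the wage-aggregate calculation described above.
# Illustrative only: thresholds are from the table, the rest is simplified.

def hourly_rate(reward_usd, completion_seconds):
    """Average hourly rate: reward divided by hours spent on the task."""
    return reward_usd / (completion_seconds / 3600.0)

def wage_color(rate_per_hour):
    """Color-code an hourly rate against US minimum-wage thresholds."""
    if rate_per_hour < 7.25:      # below the US federal minimum wage
        return "RED"
    if rate_per_hour <= 10.00:    # between federal & highest statewide (CA) minimums
        return "ORANGE"
    return "GREEN"                # above all US minimum wage standards

# Example: a $4.00 HIT completed in 18 minutes 37 seconds.
rate = hourly_rate(4.00, 18 * 60 + 37)                # roughly $12.89 / hour
print(f"${rate:.2f} / hour -> {wage_color(rate)}")    # GREEN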

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads-up on such a task. The Reward Sentiment rating helps connect workers to that context beyond the hard data.

Underpaid (1 / 5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2 / 5)
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair (3 / 5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4 / 5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5 / 5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions, well-designed HIT

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of them. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Unacceptable (1 / 5)
  • No response at all
  • Rude response without a resolution
Poor (2 / 5)
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable (3 / 5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4 / 5)
  • Prompt response
  • Positive resolution
Excellent (5 / 5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it, no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This gives you a straightforward sense of how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days and the maximum is 30 days, and we've tried to base our ratings around those data points. A small code sketch of the mapping follows the table below.

  • Very Slow (1 / 5): over 2 weeks
  • Slow (2 / 5): ~1 - 2 weeks
  • Average (3 / 5): ~3 - 7 days
  • Fast (4 / 5): ~1 - 3 days
  • Very Fast (5 / 5): ~24 hours or less
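As a rough illustration, mapping an observed approval time onto this scale could look like the Python sketch below; the thresholds come from the table above, and the function itself is just an example, not the exact code we run.

# Illustrative mapping from a HIT's observed approval time to the 1-5
# approval-speed rating above (thresholds from the table; example only).
from datetime import timedelta

def approval_rating(approval_time):
    """Return (score, label) for an approval time given as a timedelta."""
    days = approval_time.total_seconds() / 86400.0
    if days <= 1:
        return 5, "Very Fast"
    if days <= 3:
        return 4, "Fast"
    if days <= 7:
        return 3, "Average"
    if days <= 14:
        return 2, "Slow"
    return 1, "Very Slow"

print(approval_rating(timedelta(hours=20)))   # (5, 'Very Fast')
print(approval_rating(timedelta(days=10)))    # (2, 'Slow')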

