TurkerView



Jessica Echterhoff

No Institutional Affiliation
Reviews: 60
HITs: 35

Jessica Echterhoff Ratings


  • Workers feel this requester pays fairly
  • Unrated
  • Approves Quickly
  • Rejections Reported
  • No Blocks

Jessica Echterhoff Wage History


[Wage history chart]
Heads up! We'll never hide reviews unless they violate our Terms of Service.

Top Worker Reviews


Yoyo (Average Pace)
Reviews: 68
Points: 1,899
Ratings: 186
Movie Browsing - $0.80

Unrated

Unrated

Rejected (Unfair)

$3.78 / hour

00:12:41 / completion time
  • HIT Rejected

Pros

Cons

This HIT is completely broken. Not only did it take nearly 13 minutes to complete, due to how many times I had to restart it, but in the end my work was rejected with the excuse "wrong/no code."

Well, I put in the code the study gave me, so which is it? Was it wrong, or did I put no code in?

To explain the HIT: you were sent to a page where you would see a movie poster and a description of the film. You would then rate the film on a scale of 0 to 10 based on how much you wanted to see it (if you chose 0 for any, that was an automatic rejection, stupidly).

After going through as many as you wanted, you'd select one you wanted to see, and then fill out a little survey at the end. After going through 18 movies, I selected one and finished everything up...only for the HIT to get stuck in a loading cycle, unable to generate a code. After waiting a couple of minutes, I reloaded the page and tried again.

I had to do the same thing another 3 times (not going through nearly as many movies, of course) before I finally got it to work and got my code.

In addition to putting in the code, I also had to write down the name of the movie I chose, as well as an explanation for why I chose it in two separate boxes provided on the HIT page. You would think this would also qualify as proof in the event that there was something wrong with the code, but no.

I would avoid any HITs that use the "selectmovie.streamlit.app" URL, because it is clearly buggy and broken. I think it's a custom website that may have been created specifically for this study, but either way, you should not use it, because it does not work.

Advice to Requester

Make sure your websites actually work.

Also, if there is proof beyond that code that the Worker did the required work (for example, they wrote a paragraph about why they chose one of the specific movies on the list, and named the movie by its title), maybe treat that as a sign they actually went through the process of completing it before you hand out rejections like candy.
Jan 26, 2024

DareAngel3 (Fast Reader)
Reviews: 12,539
Points: 11,042
Ratings: 1,408
Item Ratings For Books - $2.50

Generous

Unrated

Pending

$26.55 / hour

00:05:39 / completion time

Pros

books

Cons

50 to rate; could take longer if you aren't a fast reader or haven't read the book(s) in question
May 23, 2021

turker4hire (Relaxed Pace)
Reviews: 731
Points: 1,202
Ratings: 101
Contrastive Sentences - $0.01

Unrated

Unrated

Approved

$4.50 / hour

00:00:08 / completion time

Pros

Short, easy HITs, but another penny or two would make these nice fillers.

Cons

Feb 21, 2022


Requester ID: A2ZJ7PBO5BH411

Recently Reviewed HITs


  • Contrastive Sentences
  • Decide on your preference of recommendation
  • Does the text scene explanation accurately describe what's happening in the scene?
  • Item Ratings For Books
  • Item Ratings For Home and Kitchen Products

Ratings Legend


Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then map that number onto a simple range based on US minimum wage standards and color-code it, so the data is easy to digest at a glance.

Color | Pay Range (Hourly) | Explanation
RED | < $7.25 / hr | Hourly averages below US Federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the Federal & highest statewide (CA) minimum wages
GREEN | > $10.00 / hr | Hourly averages above all US minimum wage standards
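
For illustration only, here is a minimal Python sketch of that calculation (the function names are our own, not anything TurkerView publishes). It reproduces the hourly figures in the reviews above, e.g. a $0.80 reward completed in 12:41 works out to about $3.78/hr, in the RED band:

```python
def hourly_rate(reward_usd: float, completion_seconds: int) -> float:
    """Scale the task's reward up to a full hour of work."""
    return reward_usd * 3600 / completion_seconds

def wage_color(rate: float) -> str:
    """Color-code an hourly rate against US minimum wage standards."""
    if rate < 7.25:       # below US Federal minimum wage
        return "RED"
    if rate <= 10.00:     # between Federal and highest statewide (CA) minimums
        return "ORANGE"
    return "GREEN"        # above all US minimum wage standards

# $0.80 reward, 00:12:41 completion time
rate = hourly_rate(0.80, 12 * 60 + 41)
print(f"${rate:.2f}/hr -> {wage_color(rate)}")  # $3.78/hr -> RED
```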

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. 10 hours locked in Inquisit? Even for $10/hr many workers would appreciate the heads up on such a task. The Pay Sentiment rating helps connect workers beyond the hard data.

Icon Rating Suggested Guidelines
Underpaid 1 / 5
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low 2 / 5
  • Below US min-wage ($7.25/hr)
  • No redeeming qualities to make up for pay
Fair 3 / 5
  • Minimum wages for task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good 4 / 5
  • Pay is above minimum wage, or compensates better than average for the level of effort required.
  • The overall work experience makes up for borderline wages
Generous 5 / 5
  • Pay is exceptional.
  • Interesting, engaging work or work environment
  • Concise instructions, well designed HIT.

Communication Ratings

Communication is an underrated aspect of mTurk. Clear, concise directions, a fast response to a clarification question, or a resolution to a workflow suggestion can all be valuable aspects of interaction between Requesters & Workers, and it's worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, there will be an actual human getting back to you to solve the issue.

Icon Rating Suggested Guidelines
Unacceptable 1 / 5
  • No response at all
  • Rude response without a resolution
Poor 2 / 5
  • Responsive, but unhelpful
  • Required IRB or extra intervention
Acceptable 3 / 5
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction.
Good 4 / 5
  • Prompt Response
  • Positive resolution
Excellent 5 / 5
  • Prompt response time
  • Friendly & Professional
  • Helpful / Solved Issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants to mix approval-time ratings with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This gives you a more straightforward sense of how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've tried to base our ratings around those data points.

Icon | Rating | Approval Time
Very Slow | 1 / 5 | Over 2 weeks
Slow | 2 / 5 | ~1 - 2 weeks
Average | 3 / 5 | ~3 - 7 days
Fast | 4 / 5 | ~1 - 3 days
Very Fast | 5 / 5 | ~24 hours or less
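
As a sketch of how those cut-offs could be applied (our own illustration, not TurkerView's code), bucketing an observed approval delay into the 1-5 rating might look like:

```python
from datetime import timedelta

def approval_rating(delay: timedelta) -> tuple[int, str]:
    """Map an observed approval delay to the 1-5 approval-time rating."""
    days = delay.total_seconds() / 86400
    if days <= 1:
        return 5, "Very Fast"  # ~24 hours or less
    if days <= 3:
        return 4, "Fast"       # ~1 - 3 days
    if days <= 7:
        return 3, "Average"    # ~3 - 7 days
    if days <= 14:
        return 2, "Slow"       # ~1 - 2 weeks
    return 1, "Very Slow"      # over 2 weeks

print(approval_rating(timedelta(days=2)))  # (4, 'Fast')
```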

