Full Text Available


Bias in the Loop: How Humans Evaluate AI-Generated Suggestions

Bibliographic Details
Published in:Harvard Data Science Review
Format: Online Article (RSS Article)
Published: 2026
Record ID: rss_article:32192
Institution: FRELIP
Collection: WordPress RSS (FRELIP Feed Integration)
Genre: Journal Article
Discipline: Technology & Engineering — Computing
Subjects: Data Mining; Technology & Engineering — Computing; Technology & Engineering
URL: https://hdsr.mitpress.mit.edu/pub/nrcn4h7d