A graduate student uses an AI research tool to generate summaries of 38 journal articles for their literature review. The tool provides one-paragraph summaries stating “key findings” and “study implications.” The student incorporates these summaries into the literature review without consulting the original articles.

During the thesis defense, a committee member questions a claim that “research shows first-generation students benefit most from peer mentoring compared to faculty mentoring.” The student cites an article via the AI summary. Checking the original article, the committee member finds:

- The article actually compared peer mentoring to no mentoring, not to faculty mentoring.
- The article’s conclusion was specific to a particular population (STEM majors at selective institutions), but the AI summary generalized broadly.
- The article examined mentoring satisfaction, not actual student outcomes.
- The original title was “Perceived Benefits of Peer Mentoring in STEM”; the AI summary had dropped the word “perceived.”

The student responds: “The AI tool is reliable for summarizing research. Reading all the articles would be inefficient. This is a reasonable use of technology in research.”

The fundamental problem with the student’s approach is: