Abstract
Generative AI agents (e.g., ChatGPT) provide a private, low-latency environment for knowledge acquisition, distinct from public online communities where social evaluation is prevalent. This study investigates how the removal of social evaluation threats in AI interactions alters learners' help-seeking strategies. We analyzed a matched corpus of 30,000 dialogue turns from LMSYS-Chat-1M (human-AI) and Stack Exchange (human-human) using computational linguistic methods (LIWC-22 and RoBERTa). Results show that in AI interactions, learners almost entirely abandon the defensive impression-management strategies (such as hedging) and politeness markers that are obligatory in human communities. Moreover, contrary to the expectation that users would "confess" ignorance to an AI, learners adopt an authoritative "Director" stance rather than a humble "Petitioner" role. These findings suggest that AI is not merely a social substitute but a functional tool that lets users bypass the cognitive costs of social negotiation. This shift implies a trade-off: AI maximizes information-retrieval efficiency but may reduce the "desirable difficulties" associated with problem formulation, leaving learners with less practice in the cognitive processes traditionally required to structure ambiguity in collaborative settings.