Abstract
Large language models have transformed scientific writing: they facilitate drafting and revising text, but they also introduce ethical and epistemological risks. Although their use promotes linguistic equity, their lack of transparency and their potential for information manipulation threaten academic integrity. AI detectors (such as Originality.ai, ZeroGPT, and Turnitin) show variable effectiveness and do not yield conclusive results, especially against "text humanizers." AI-generated texts are characterized by formal coherence, but also by predictability and stylistic uniformity. Detection must therefore be combined with critical, ethical evaluation by human readers, with the understanding that true scientific integrity depends on intellectual judgment rather than technological automation.