Abstract
ROUGE is a common training objective for extractive summarization because its n-gram overlap rewards align naturally with sentence-level selection. However, models optimized solely for ROUGE tend to select sentences with similar content, producing summaries with redundant information. We propose DiCo-EXT, a training framework that integrates two new loss terms into a standard extractive model: a semantic consistency term and a diversity penalty. The consistency term encourages the selected sentences to stay close to the document-level meaning, while the diversity penalty reduces semantic overlap within the summary. Both components are fully differentiable and can be optimized jointly with the base loss, without extra heuristics or multi-stage post-processing. Experiments on CNN/DailyMail, XSum, and WikiHow show lower redundancy and higher lexical diversity, while ROUGE remains comparable to a strong baseline. These results indicate that simple training objectives can balance coverage and redundancy without increasing model size or architectural complexity.
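To make the two auxiliary terms concrete, the following is a minimal illustrative sketch, not the paper's actual formulation: it assumes the consistency term can be approximated as a cosine distance between a selection-weighted summary embedding and a document embedding, and the diversity penalty as the selection-weighted mean pairwise cosine similarity among sentence embeddings. The names (consistency_loss, diversity_penalty, total_loss) and the coefficients lambda_c and lambda_d are hypothetical.

import torch
import torch.nn.functional as F

def consistency_loss(sent_emb: torch.Tensor,   # (num_sents, dim) sentence embeddings
                     doc_emb: torch.Tensor,    # (dim,) document embedding
                     probs: torch.Tensor) -> torch.Tensor:  # (num_sents,) selection probabilities
    # Pull the probability-weighted summary representation toward the document meaning.
    summary_emb = (probs.unsqueeze(-1) * sent_emb).sum(dim=0) / probs.sum().clamp_min(1e-8)
    return 1.0 - F.cosine_similarity(summary_emb, doc_emb, dim=0)

def diversity_penalty(sent_emb: torch.Tensor, probs: torch.Tensor) -> torch.Tensor:
    # Penalize semantic overlap among sentences that are likely to be selected together.
    normed = F.normalize(sent_emb, dim=-1)
    sim = normed @ normed.T                              # pairwise cosine similarities
    weights = probs.unsqueeze(0) * probs.unsqueeze(1)    # joint selection weights
    off_diag = 1.0 - torch.eye(sim.size(0), device=sim.device)
    return (sim * weights * off_diag).sum() / (weights * off_diag).sum().clamp_min(1e-8)

def total_loss(base_loss, sent_emb, doc_emb, probs, lambda_c=0.5, lambda_d=0.5):
    # Joint objective: base extractive loss plus the two differentiable terms.
    return (base_loss
            + lambda_c * consistency_loss(sent_emb, doc_emb, probs)
            + lambda_d * diversity_penalty(sent_emb, probs))

Because both terms operate only on embeddings and selection probabilities, they add no parameters and can be backpropagated through alongside the base loss, which is the property the abstract emphasizes.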