Abstract
This work introduces a novel question answering (QA) framework that integrates commonsense knowledge from ConceptNet with deep contextual embeddings from BERT, using a graph neural network for structured reasoning. For each question-answer pair, the system constructs a relevant subgraph from ConceptNet, which is then processed with a Graph Attention Network v2 (GATv2) to capture semantic relationships among concepts. In parallel, BERT encodes the question-answer pair to provide contextual language representations. These two representations are fused into a joint embedding that combines structured knowledge with unstructured text understanding, enabling richer inference. On the CommonsenseQA and OpenBookQA benchmarks, the framework achieves accuracies of 82.3% and 86.21%, respectively, surpassing existing leading methods. These results highlight the effectiveness of combining knowledge graphs with language models for complex QA tasks that require commonsense reasoning.
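As a rough illustration of the fusion described above, the following is a minimal sketch assuming PyTorch, PyTorch Geometric, and HuggingFace Transformers. The module names, dimensions, pooling choice, and scoring head are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the GATv2 + BERT fusion described in the abstract.
# Library choices (PyTorch Geometric, HuggingFace Transformers) and all
# hyperparameters are assumptions; the paper does not specify them here.
import torch
import torch.nn as nn
from torch_geometric.nn import GATv2Conv, global_mean_pool
from transformers import BertModel

class KnowledgeFusedQA(nn.Module):
    def __init__(self, node_dim=100, hidden_dim=200, heads=4):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Two GATv2 layers over the retrieved ConceptNet subgraph.
        self.gat1 = GATv2Conv(node_dim, hidden_dim, heads=heads)
        self.gat2 = GATv2Conv(hidden_dim * heads, hidden_dim, heads=1)
        # Joint embedding: BERT [CLS] (768-d) concatenated with the pooled
        # graph representation, scored per answer choice.
        self.scorer = nn.Sequential(
            nn.Linear(768 + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, input_ids, attention_mask, x, edge_index, batch):
        # Contextual representation of the question-answer pair.
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        # Structured reasoning over the ConceptNet subgraph.
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        g = global_mean_pool(h, batch)  # one vector per subgraph
        # Fuse structured knowledge with unstructured text understanding.
        return self.scorer(torch.cat([cls, g], dim=-1))
```

At inference time, one such plausibility score would be computed for each candidate answer and the highest-scoring choice selected, which matches the multiple-choice format of CommonsenseQA and OpenBookQA.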