In this paper, we present Pre-CoFactv3, a comprehensive framework comprising Question Answering and Text
Classification components for fact verification. Leveraging In-Context Learning, fine-tuned Large Language Models
(LLMs), and the FakeNet model, we address the challenges of fact verification. Our experiments explore diverse
approaches, comparing different Pre-trained LLMs, introducing FakeNet, and implementing various ensemble methods.
Notably, our team, Trifecta, secured first place in the AAAI-24 Factify 3.0 Workshop, surpassing the baseline accuracy
by 103% and leading the second-place team by 70%. This success underscores the efficacy of our approach and
its potential contributions to advancing fact verification research.