
project-qa

A rule-based information extraction Python script to answer questions about sample stories

inspiration

Builds upon work done by Dr. Ellen Riloff in A Rule-based Question Answering System for Reading Comprehension Tests, with marginal improvements in precision and recall.

setup

Run the commands in the included QA-script.txt to install the required packages and download the spaCy corpus. Gensim relies on pre-built word2vec vectors. These can be built from any sufficiently large corpus, but in this case I recommend the Google News dataset. They should be parsed into Gensim's KeyedVectors model and saved to ./data/training/word2vec.vectors.

approach

This script assigns a score to each sentence based on:

  • Distributional similarity of word embeddings
  • Named entity recognition
  • Sentence root identification

It then attempts to filter the top-scoring sentence down to relevant noun phrases or named entities. A detailed explanation is contained in these slides
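As an illustration only (the actual script works over spaCy parses and gensim vectors; the dictionary format, weights, and function name here are hypothetical), the three scoring signals might be combined like this:

```python
import numpy as np


def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))


def score_sentence(question, sentence, vectors, weights=(1.0, 0.5, 0.5)):
    """Hypothetical weighted combination of the three signals described above.

    `question` and `sentence` are dicts with "tokens", "entities", and "root";
    `vectors` maps tokens to embedding arrays. Assumes at least one token of
    each side is in the vocabulary.
    """
    w_sim, w_ner, w_root = weights

    # 1. Distributional similarity: cosine between averaged word embeddings.
    q_vec = np.mean([vectors[t] for t in question["tokens"] if t in vectors], axis=0)
    s_vec = np.mean([vectors[t] for t in sentence["tokens"] if t in vectors], axis=0)
    similarity = cosine(q_vec, s_vec)

    # 2. Named entity overlap between question and sentence.
    entity_overlap = len(set(question["entities"]) & set(sentence["entities"]))

    # 3. Root match: does the sentence's syntactic root appear in the question?
    root_match = 1.0 if sentence["root"] in question["tokens"] else 0.0

    return w_sim * similarity + w_ner * entity_overlap + w_root * root_match
```

Sentences are ranked by this score, and the highest-scoring one is passed on to the answer-filtering step.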

testing

This script is built to answer questions of any type given a relevant document. Sample input formats are shown in data/testset1. A scoring program is included that outputs the precision, recall, and F-score of the produced answers.
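The included scorer's exact rules may differ, but a common token-overlap definition of these metrics for a single answer is:

```python
def prf(predicted, gold):
    """Token-overlap precision, recall, and F-score for one answer."""
    pred_tokens = predicted.lower().split()
    gold_tokens = gold.lower().split()

    # Count each shared token at most min(pred count, gold count) times.
    overlap = sum(min(pred_tokens.count(t), gold_tokens.count(t))
                  for t in set(pred_tokens))
    if overlap == 0:
        return 0.0, 0.0, 0.0

    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score


print(prf("the red dog", "red dog"))  # precision 2/3, recall 1.0, F-score 0.8
```

Under this definition, over-long answers are penalized through precision, while answers that omit part of the gold span are penalized through recall.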

performance

Running QA has significant memory overhead, as it loads the word2vec vectors and the spaCy corpus into memory before beginning. Afterwards, reading and answering each story takes on the order of one second, the majority of which is spent parsing the story into a spaCy nlp model. On a final test set of documents and questions, this system produced the following results:

  • RECALL = 0.5511
  • PRECISION = 0.3849
  • F-MEASURE = 0.4533
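These figures are internally consistent: the F-measure is the harmonic mean of precision and recall, up to rounding of the reported values:

```python
precision, recall = 0.3849, 0.5511

# F-measure as the harmonic mean of precision and recall.
f_measure = 2 * precision * recall / (precision + recall)
print(round(f_measure, 4))  # 0.4532 (the reported 0.4533 reflects rounding of P and R)
```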

tools

Special thanks to Dr. Ellen Riloff at the University of Utah.