Based on Maslow's pyramid and looking at the concepts from [ConceptNet](https://
The approach of [Paul & Frank (2019)](https://www.aclweb.org/anthology/N19-1368/) is based on narrative texts. As their dataset they use the ROCStories dataset ([Mostafazadeh, 2016]()), a collection of narrative texts. We extend this approach by applying it to argumentative texts and adapting the procedure to our use case.
## Approach
First of all, we look for a suitable [data set](#dataset). It is important that these are argumentative texts. The selected data set must then be prepared in the format required by the model. To do this, we begin by manually annotating all four hundred essays from our data set with one Maslow and one Reiss motive. <br> After that, we compute Fleiss' kappa to measure the inter-annotator agreement. To produce a gold standard, each of us annotated an additional 75 essays, resulting in 100 essays in total which have been annotated by all 4 members. *E.g., if I annotated essays 1-101 in the first pass, I now annotate the first 25 of each of 102-202, 203-303, and 304-404.* <br> Since our selected data was already separated into train and test, we only had to convert our train and test files into the correct format (see files attached). <br> For our project we only looked at the last paragraph of each essay, as this is the closing argument and provides a good overview of the author's opinions and standpoints (more details under [data set](#dataset)). For this, we selected the last paragraph of each essay of our dataset using *Comparer.py* and compared each word in it with the concepts from ConceptNet to generate a list of concepts for the sentence. <br> After that, we followed the steps from [this repository](https://github.com/debjitpaul/Multi-Hop-Knowledge-Paths-Human-Needs). Because some of the steps and attached files from Debjit Paul's GitHub did not work for us (see project_report, problems), we changed a few things, which are documented in our README.md. After constructing the subgraphs for every sentence and extracting the relevant knowledge paths, we extract the human needs from the created knowledge paths and assign them to the essays (*Human_needs_assigner.py*). <br> The last thing we do is evaluate and assess our obtained results. <br> For textual analysis we wanted to use OpenFraming, but unfortunately the tool was still not working as of 25 March 2021 due to an internal server error.
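The inter-annotator agreement step can be sketched as a small Fleiss' kappa computation. This is a minimal pure-Python sketch, not our actual annotation tooling; the `fleiss_kappa` helper and its input format (one list of category labels per essay, one label per annotator) are illustrative assumptions:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for a fixed number of raters per item.

    ratings: list of items, each a list of category labels,
             one label per annotator (e.g. 4 labels per essay).
    """
    N = len(ratings)        # number of items (essays)
    n = len(ratings[0])     # annotators per item
    category_totals = Counter()
    P_bar = 0.0
    for item in ratings:
        counts = Counter(item)
        category_totals.update(counts)
        # Observed agreement for this item
        P_bar += (sum(v * v for v in counts.values()) - n) / (n * (n - 1))
    P_bar /= N
    # Chance agreement from the overall category distribution
    total = N * n
    P_e = sum((v / total) ** 2 for v in category_totals.values())
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement on two essays by four annotators yields kappa = 1.0
print(fleiss_kappa([["safety"] * 4, ["love"] * 4]))  # -> 1.0
```

Values near 1 indicate strong agreement; values at or below 0 indicate agreement no better than chance.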
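The per-word concept lookup performed by *Comparer.py* can be sketched as follows. This is a simplified illustration, assuming the ConceptNet concepts are available as a plain set of lowercase strings; the `match_concepts` helper and its naive whitespace tokenization are assumptions, not the script's actual implementation:

```python
def match_concepts(paragraph, concept_set):
    """Return the words of a paragraph that appear as ConceptNet concepts."""
    # Naive tokenization: split on whitespace, strip punctuation, lowercase
    words = [w.strip(".,!?;:\"'()").lower() for w in paragraph.split()]
    return [w for w in words if w in concept_set]

# Hypothetical example with a tiny concept set
concepts = {"education", "freedom", "safety"}
print(match_concepts("Education gives people freedom.", concepts))
# -> ['education', 'freedom']
```

In practice, a lemmatizer and multi-word concept matching would catch more concepts than this word-by-word comparison.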