Commit 9f7d678e authored by pirapakaran
Update project_report.md

<br>
After extracting the knowledge paths, we found that ConceptNet also has a problem with assigning Reiss motives. While the Maslow motives were often assigned well, ConceptNet showed a clear preference for certain Reiss motives. For example, the Reiss motive *social* was assigned much more often, even in places where a human annotator would already have tended towards *love/belonging*. ConceptNet thus has difficulty distinguishing *social* from concrete feelings such as love.

#### Assigning the human needs
After extracting the knowledge paths, we found that the neural model of @DebjitPaul was not immediately executable without errors. Again there were many version problems: the versions given in the [gitlab]() are outdated, and every version change caused a conflict with another package. Since we did not want to invest all our time in finding the correct versions for yet another project, and @debjitpaul could not help either, we decided to develop our own method. Since the knowledge paths are already returned ranked by their expressiveness, we use them in *humans_needs_assginer.py* by assigning the human needs of the first path found.
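The first-path heuristic can be sketched roughly as follows (a minimal illustration, not the exact code of *humans_needs_assginer.py*; the function name and the path format are assumptions for this sketch):

```python
# Minimal sketch of the first-path heuristic: since the knowledge paths
# arrive ranked by expressiveness, we simply take the human need of the
# top-ranked path. The (path, human_need) tuple format is illustrative.

def assign_human_need(knowledge_paths):
    """Return the human need of the best-ranked knowledge path,
    or None if no path was found for this sentence."""
    if not knowledge_paths:
        return None
    _path, human_need = knowledge_paths[0]
    return human_need

# Example: two candidate paths, the first (most expressive) one wins.
paths = [
    ("person -> desires -> company", "social"),
    ("person -> related_to -> food", "physiological"),
]
print(assign_human_need(paths))  # -> social
```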
<br>
 
#### Evaluation of results
After the human needs have been successfully assigned for all essays, we evaluate the results. In *human_needs_evaluation.py* we implemented the evaluation: we calculate precision, accuracy, recall and F1-score and analyze the results with these measures. Concrete comments and visualisations of the evaluation can be found in the Jupyter notebook.
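For illustration, the four measures can be computed per label roughly like this (a self-contained sketch under assumed toy data, not the actual code of *human_needs_evaluation.py*):

```python
# Sketch of the evaluation measures for a single label (here: "social").
# gold/pred are illustrative toy lists, not our real essay annotations.

def evaluate(gold, predicted, label):
    pairs = list(zip(gold, predicted))
    tp = sum(1 for g, p in pairs if g == label and p == label)
    fp = sum(1 for g, p in pairs if g != label and p == label)
    fn = sum(1 for g, p in pairs if g == label and p != label)
    accuracy = sum(1 for g, p in pairs if g == p) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

gold = ["social", "esteem", "social", "love"]
pred = ["social", "social", "social", "love"]
print(evaluate(gold, pred, "social"))  # accuracy, precision, recall, F1
```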

## The Tools and how they worked
### Problems that occurred

In the end, we found that the following complications were to blame: <br>
- and TensorFlow 2.0 has so many technical changes that it is not compatible with our models
> The final versions that worked for us can be found in README.md
<br> 
All the above mentioned points had to be determined and improved in the course of development. This very often set us back in the overall process. <br> 
<br>
Unfortunately, there were also problems with the attached files in the [github](https://github.com/debjitpaul/Multi-Hop-Knowledge-Paths-Human-Needs). For example, for weeks we kept getting the namespace error "'Namespace' object has no attribute 'txtfile'", which prevented us from executing the code, and we could never get rid of it completely. We raised this problem in the Zoom meetings with @debjitpaul, until it turned out that a typo in his original file was responsible for the error: instead of "txtfile", the argument should have been called "inputfile". We are aware that these are minor issues. However, this accumulation of problems led to significant time losses in our project, and in the last weeks we had to put in more time than we had planned.
<br> 
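The error can be reproduced in a few lines of `argparse` (a hypothetical reconstruction of the typo for illustration, not @debjitpaul's actual file; the default filename is made up):

```python
import argparse

# The argument is registered under one name...
parser = argparse.ArgumentParser()
parser.add_argument("--inputfile", default="essays.txt")
args = parser.parse_args([])

# ...but the code accessed a different attribute, which raises the
# AttributeError we saw: 'Namespace' object has no attribute 'txtfile'
try:
    args.txtfile
except AttributeError as err:
    print(err)

print(args.inputfile)  # the corrected attribute name works
```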
Due to a lack of capacity on our own computers, we turned to our institute's technology group for cluster access, which they granted. After everything went well at first, we again reached a point where we lacked the necessary rights to continue executing our project, while also encountering a number of errors. Unfortunately, despite contacting them several times, we never heard back from the technology group, so we had to work on our own computers again and continue from where we had left off. Even though we divided the work among ourselves, the program needed on average about one hour per sentence to construct a subgraph, and especially long sentences with big context lists took up to 3 hours. With 400 essays to work through, this took us quite a long time.
<br> <br>
As already mentioned briefly in our project_proposal, we wanted to do a textual analysis with the OpenFraming tool. Again, we had already prepared our data to match the model and were ready to work with the tool. Unfortunately, we received an error message right at the first step. After we contacted the OpenFraming team, they initially suspected that the error must lie with us and gave us instructions on what the files should look like (CSV file, comma-separated). But despite several attempts to change the files, the error could not be solved. After weeks, the OpenFraming team determined that it was an internal error that only they could fix. However, they soon realized that it was a somewhat bigger problem and offered us the alternative of working with Docker files. But even here there were several errors on their part, which is why we never got to work with their tool. Although several members of the OpenFraming team contacted us by mail afterwards and let us know that they were working on it, at some point they broke off contact. To date (25 March 2021), the tool is still not working. Of course we don't blame anyone for this; we are well aware that everyone has their own work to do. We only regret that we invested so much time and effort into the tool and in the end could not work with it. The textual analysis complementary to our obtained results would certainly have been very interesting.


### Lessons learned

Encountering all these problems taught us many things over the course of this software project.