Attentive Recursive Tree (AR-Tree)

This repository is the PyTorch implementation of the paper

Learning to Embed Sentences Using Attentive Recursive Trees.

Jiaxin Shi, Lei Hou, Juanzi Li, Zhiyuan Liu, Hanwang Zhang.

In this paper, we propose the Attentive Recursive Tree model (AR-Tree), in which words are dynamically positioned in the tree according to their importance for the task. Specifically, we construct the latent tree for a sentence with a proposed important-first strategy that places more attentive words nearer to the root; thus, AR-Tree inherently emphasizes important words during the bottom-up composition of the sentence embedding. If you find this code useful in your research, please cite

@InProceedings{jiaxin_ARTree,
  author = {Shi, Jiaxin and Hou, Lei and Li, Juanzi and Liu, Zhiyuan and Zhang, Hanwang},
  title = {Learning to Embed Sentences Using Attentive Recursive Trees},
  booktitle = {AAAI},
  year = {2019}
}
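
To make the important-first strategy concrete, here is a minimal, illustrative sketch (not the repository's actual code): each word receives a scalar attention score, the highest-scoring word in a span becomes the root of that span's subtree, and the construction recurses on the left and right sub-spans.

# Illustrative sketch of important-first tree construction; Node and
# build_ar_tree are hypothetical names, not part of this repository.
class Node:
    def __init__(self, word, left=None, right=None):
        self.word, self.left, self.right = word, left, right

def build_ar_tree(words, scores):
    # The most attentive word in the span becomes the subtree root,
    # so important words end up nearer to the overall root.
    if not words:
        return None
    i = max(range(len(words)), key=lambda k: scores[k])
    return Node(words[i],
                left=build_ar_tree(words[:i], scores[:i]),
                right=build_ar_tree(words[i+1:], scores[i+1:]))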

Requirements

  • python==3.6
  • pytorch==0.4.0
  • ete3
  • torchtext
  • nltk

Preprocessing

Before training the model, you need to prepare the data. First, download the GloVe 300d pretrained vectors, which we use for initialization in all experiments. After unzipping, convert the txt file to a pickle file by

python pickle_glove.py --txt </path/to/840B.300d.txt> --pt </output/file/name>
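
Conceptually, this step reads each GloVe line into a word-to-vector mapping and serializes it with pickle. Below is a minimal sketch of that conversion, assuming a plain dict output; the actual pickle_glove.py may store a different structure.

# Hedged sketch of a GloVe txt-to-pickle conversion; the real
# pickle_glove.py may use a different output format.
import pickle

def convert_glove(txt_path, pt_path):
    glove = {}
    with open(txt_path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            glove[parts[0]] = [float(x) for x in parts[1:]]  # word -> 300-d vector
    with open(pt_path, 'wb') as f:
        pickle.dump(glove, f)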

Next, we prepare the training corpora.

SNLI

  1. Download the SNLI 1.0 corpus.
  2. Preprocess the original SNLI corpus and create the cache file with the following command:
python snli/dump_dataset.py --data </path/to/the/corpus> --out </path/to/the/output/file>

The output file will be used in the data loader when training or testing.

SST

  1. There is nothing to prepare manually: the torchtext package downloads and processes the SST corpus automatically.
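
For reference, the legacy torchtext API (of the era pinned in the requirements) can fetch and split SST as shown below; the Field configuration here is an illustrative assumption, not the repository's exact setup.

# Sketch of SST loading with the legacy torchtext API; Field options
# here are illustrative assumptions.
from torchtext import data, datasets

TEXT = data.Field(lower=True)
LABEL = data.Field(sequential=False)
# fine_grained=True yields the 5-class SST5 labels; False gives SST2.
train, val, test = datasets.SST.splits(TEXT, LABEL, fine_grained=True)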

Age

  1. We attach this corpus as the file age/age2.zip; unzip it first.
  2. Create the cache file with the following command:
python age/dump_dataset.py --glove-path </path/to/840B.300d.txt> --data-dir </path/to/unzipped/folder> --save-path </output/file/name>

Train

You can directly run these scripts to train the AR-Tree on different datasets:

  • snli/run_snli.sh to train on SNLI.
  • sst/run_sst2.sh to train on SST2.
  • sst/run_sst5.sh to train on SST5.
  • age/run_age.sh to train on Age. Note that you should set the --data-path, --glove-path, and --save-dir arguments according to your own directories.

We implement two training strategies, selected via the --model-type argument. The reinforcement learning approach described in our paper is selected by --model-type RL. An alternative implementation, --model-type STG, uses the straight-through Gumbel-Softmax estimator instead of REINFORCE. In addition, --model-type Choi corresponds to Choi's TreeLSTM model, which serves as a baseline in our paper.
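
As background for the STG option, here is a minimal sketch of the straight-through Gumbel-Softmax trick (not the repository's exact code): the forward pass uses a hard one-hot sample, while gradients flow through the soft relaxation.

# Straight-through Gumbel-Softmax sketch: hard sample forward,
# soft-relaxation gradient backward.
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, temperature=1.0):
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbel) / temperature, dim=-1)
    index = y_soft.max(dim=-1, keepdim=True)[1]
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
    # (y_hard - y_soft).detach() + y_soft equals y_hard in value but
    # carries the gradient of y_soft.
    return (y_hard - y_soft).detach() + y_soft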

Test

You can run evaluate.py for testing:

python evaluate.py --ckpt </path/to/checkpoint> --data-path </path/to/data> --mode <vis|val>

Note that --mode vis visualizes the learned tree structures, while --mode val computes the accuracy on the test set.
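
For example, to visualize trees from a trained SNLI model (both file names below are hypothetical):

python evaluate.py --ckpt snli_best.pt --data-path snli_cache.pkl --mode vis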

Acknowledgement

We refer to code from several other repositories.
