Abstract: When a sequence-to-sequence (seq2seq) network is used to generate abstractive summaries, the model sometimes fails to capture the main points of the source text. One reason is that the input sequence is too long, causing the model to lose important lexical feature information during learning. To address this problem, this paper proposes a model framework based on seq2seq with attention, in which the encoder is a bidirectional GRU and the context vector is built from both the encoder outputs and the word feature vectors extracted by a convolutional neural network; these lexical features include n-gram and part-of-speech information. A unidirectional GRU then decodes the context vector to generate the summary. The model relies mainly on the convolutional neural network to control lexical information, and it proves effective compared with other models.
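
To make the described architecture concrete, the following is a minimal PyTorch sketch of the pipeline outlined above: word and part-of-speech embeddings feed a bidirectional GRU encoder, a 1-D convolutional module extracts n-gram-style lexical features, the two are concatenated into per-position annotations that attention pools into a context vector, and a unidirectional GRU decodes. All class names, dimensions, kernel sizes, and the shared-embedding choice are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLexicalFeatures(nn.Module):
    """1-D convolutions over embeddings, approximating the paper's n-gram feature extractor."""
    def __init__(self, in_dim, n_filters, kernel_sizes=(2, 3)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim, n_filters, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):                         # x: (batch, src_len, in_dim)
        L = x.size(1)
        x = x.transpose(1, 2)                     # (batch, in_dim, src_len)
        # Even kernel sizes yield one extra position; trim back to src_len.
        feats = [F.relu(c(x))[:, :, :L] for c in self.convs]
        return torch.cat(feats, dim=1).transpose(1, 2)  # (batch, src_len, n_filters * n_kernels)

class CNNAttnSeq2Seq(nn.Module):
    """Bi-GRU encoder + CNN lexical features -> attention context -> uni-GRU decoder."""
    def __init__(self, vocab_size, pos_size, emb_dim=128, hid_dim=256, n_filters=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(pos_size, emb_dim)          # part-of-speech embedding
        self.cnn = ConvLexicalFeatures(2 * emb_dim, n_filters)
        self.encoder = nn.GRU(2 * emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.ann_dim = 2 * hid_dim + 2 * n_filters              # annotation = enc output ++ CNN features
        self.attn_score = nn.Linear(self.ann_dim + hid_dim, 1)
        self.decoder = nn.GRUCell(emb_dim + self.ann_dim, hid_dim)
        self.bridge = nn.Linear(2 * hid_dim, hid_dim)           # init decoder state from final enc states
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, src_pos, tgt):
        # src, src_pos: (batch, src_len); tgt: (batch, tgt_len), used with teacher forcing
        emb = torch.cat([self.word_emb(src), self.pos_emb(src_pos)], dim=-1)
        enc_out, enc_h = self.encoder(emb)                      # enc_out: (batch, src_len, 2*hid_dim)
        ann = torch.cat([enc_out, self.cnn(emb)], dim=-1)       # (batch, src_len, ann_dim)
        s = torch.tanh(self.bridge(torch.cat([enc_h[0], enc_h[1]], dim=-1)))  # (batch, hid_dim)
        logits = []
        for t in range(tgt.size(1)):
            # Additive-style attention: score each annotation against the decoder state.
            q = s.unsqueeze(1).expand(-1, ann.size(1), -1)
            a = F.softmax(self.attn_score(torch.cat([ann, q], dim=-1)).squeeze(-1), dim=-1)
            ctx = torch.bmm(a.unsqueeze(1), ann).squeeze(1)     # (batch, ann_dim) context vector
            s = self.decoder(torch.cat([self.word_emb(tgt[:, t]), ctx], dim=-1), s)
            logits.append(self.out(s))
        return torch.stack(logits, dim=1)                       # (batch, tgt_len, vocab_size)

if __name__ == "__main__":
    model = CNNAttnSeq2Seq(vocab_size=30000, pos_size=50)
    src = torch.randint(0, 30000, (4, 40))
    pos = torch.randint(0, 50, (4, 40))
    tgt = torch.randint(0, 30000, (4, 12))
    print(model(src, pos, tgt).shape)  # torch.Size([4, 12, 30000])
```

In this sketch the CNN features are concatenated with the encoder outputs rather than replacing them, so the attention mechanism can weight both recurrent context and local n-gram/part-of-speech evidence when forming the context vector; the abstract does not specify the exact combination operator, so concatenation is an assumption here.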