With the ongoing integration of convolutional neural networks (CNNs) and FPGA technology, a growing number of FPGA-based CNN architectures have been studied. Addressing the problems of miniaturizing and parallelizing convolutional neural networks, this paper proposes a high-speed, general-purpose convolutional-layer IP core, designed in VHDL according to the characteristics of CNNs and FPGA devices. Unlike the traditional approach of customizing a convolutional layer for a specific FPGA, this design greatly improves the portability of the convolution module by encapsulating the convolutional layer as an IP core. In this design, the size of the convolution window is changed by changing the transfer parameters, and the convolution operation is carried out in a pipeline. Simulation results show that the convolutional-layer IP core structure not only increases the portability of the convolution module but also preserves computing speed, providing a feasible method for implementing convolutional neural networks on miniaturized devices.
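A window size set through transfer parameters is naturally expressed in VHDL with generics. The entity declaration below is a minimal hypothetical sketch of such an interface (the names `conv_layer`, `KERNEL_SIZE`, and `DATA_WIDTH` are assumptions for illustration, not the paper's actual IP core):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical sketch: a convolutional-layer IP core whose window size
-- is fixed at instantiation time via generics, so the same RTL can be
-- reused for 3x3, 5x5, ... kernels without rewriting the module.
entity conv_layer is
  generic (
    KERNEL_SIZE : positive := 3;  -- window is KERNEL_SIZE x KERNEL_SIZE
    DATA_WIDTH  : positive := 8   -- bit width of each input pixel/weight
  );
  port (
    clk       : in  std_logic;
    rst       : in  std_logic;
    pixel_in  : in  std_logic_vector(DATA_WIDTH - 1 downto 0);
    valid_in  : in  std_logic;
    -- widened output leaves headroom for the multiply-accumulate result
    pixel_out : out std_logic_vector(2 * DATA_WIDTH - 1 downto 0);
    valid_out : out std_logic
  );
end entity conv_layer;
```

Under this scheme, instantiating the core with a different `KERNEL_SIZE` generic changes the convolution window, and the multiply-accumulate stages inside the architecture would be pipelined so that, after an initial latency, one result is produced per clock cycle.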