This paper introduces 'DEEPSELF,' an open-source Deep Self End-to-end Learning Framework that serves as a toolkit for deep end-to-end learning on multi-modal signals.

It is one of the first public toolkits to assemble a series of state-of-the-art deep learning technologies.

The framework has four key features:

1. It can be used to analyze a variety of multi-modal signals, including images, audio, and single or multi-channel sensor data.

2. It provides multiple options for pre-processing, e.g., filtering or spectrum image generation via Fourier or wavelet transformation (a minimal pre-processing sketch follows this list).

3. Multiple network topologies, including plain NNs, 1D/2D/3D CNNs, and RNNs (LSTM/GRU), can be customized, and a series of pre-trained 2D CNN models (e.g., AlexNet, VGGNet, ResNet) can be used with little effort; a fine-tuning sketch follows this list.

4. It can be used flexibly, not only as a single model but also as a fusion of multiple models (see the late-fusion sketch after this list).
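
To make the pre-processing option in item 2 concrete, here is a minimal sketch of turning a 1-D signal into a log-magnitude spectrum image with a short-time Fourier transform. It uses plain NumPy/SciPy and is not the DEEPSELF API; the function name and parameters are illustrative assumptions, and a wavelet-based variant could be built analogously (e.g., with PyWavelets).

```python
import numpy as np
from scipy import signal


def signal_to_spectrogram(x, fs, n_fft=512, hop=128):
    """Return a log-magnitude spectrogram (freq x time) of a 1-D signal."""
    # Short-time Fourier transform; a wavelet transform could be used instead.
    _, _, Zxx = signal.stft(x, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    return 20.0 * np.log10(np.abs(Zxx) + 1e-10)


# Toy usage: one second of a 440 Hz tone sampled at 16 kHz.
fs = 16_000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
spec = signal_to_spectrogram(x, fs)
print(spec.shape)  # (n_fft // 2 + 1, number of frames)
```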
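
For item 3, the sketch below shows one common way to reuse a pre-trained 2D CNN on spectrum images: load a torchvision backbone and swap its classification head. This illustrates the general technique rather than the toolkit's own interface; the class count and input shape are assumed.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # assumed number of target classes

# Load an ImageNet pre-trained ResNet-18 and replace its classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Dummy forward pass with a batch of 3-channel spectrum images.
images = torch.randn(8, 3, 224, 224)
logits = model(images)
print(logits.shape)  # torch.Size([8, 4])
```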
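
For item 4, one simple form of model fusion is late fusion, where class probabilities from independently trained models are combined by a weighted average. The sketch below illustrates that idea with stand-in classifiers; it is not drawn from the DEEPSELF code base.

```python
import torch
import torch.nn as nn


class LateFusion(nn.Module):
    """Weighted average of the class probabilities of two models."""

    def __init__(self, model_a, model_b, weight_a=0.5):
        super().__init__()
        self.model_a, self.model_b = model_a, model_b
        self.weight_a = weight_a

    def forward(self, x_a, x_b):
        p_a = torch.softmax(self.model_a(x_a), dim=-1)
        p_b = torch.softmax(self.model_b(x_b), dim=-1)
        return self.weight_a * p_a + (1.0 - self.weight_a) * p_b


# Toy usage with two small stand-in classifiers over different feature sizes.
fused = LateFusion(nn.Linear(128, 4), nn.Linear(64, 4), weight_a=0.6)
probs = fused(torch.randn(2, 128), torch.randn(2, 64))
print(probs.sum(dim=-1))  # each row sums to ~1
```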

Paper: https://arxiv.org/abs/2005.06993