DeepViT: Towards Deeper Vision Transformer


This repo is the official implementation of "DeepViT: Towards Deeper Vision Transformer". It is built on the timm library (https://github.com/rwightman/pytorch-image-models) by Ross Wightman.

Deep Vision Transformer is initially described in an arXiv paper, which observes the attention collapse phenomenon when training deep vision transformers: in this paper, we show that, unlike convolutional neural networks (CNNs), which can be improved by stacking more convolutional layers, the performance of ViTs saturates quickly when scaled deeper. More specifically, we empirically observe that this scaling difficulty is caused by the attention collapse issue: as the transformer goes deeper, the attention maps gradually become similar and even nearly identical after certain layers. In other words, the feature maps tend to be identical in the top layers of deep ViT models.
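A minimal sketch of the measurement behind this observation: attention collapse can be quantified by comparing attention maps from different layers, e.g. with cosine similarity between the flattened maps. The code below is illustrative only (the function names, the synthetic softmax-style maps, and the blending used to mimic "collapsed" deep layers are all assumptions, not the paper's implementation), but it shows the kind of cross-layer similarity check involved.

```python
import numpy as np

def attention_similarity(attn_a, attn_b):
    """Cosine similarity between two flattened attention maps."""
    a = attn_a.ravel()
    b = attn_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def random_attention(rng, tokens):
    """Synthetic softmax-normalized attention map (rows sum to 1)."""
    logits = rng.normal(size=(tokens, tokens))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
tokens = 16

shallow = random_attention(rng, tokens)
deep_a = random_attention(rng, tokens)
# Mimic attention collapse: a deeper layer whose map is
# nearly identical to the previous layer's map.
deep_b = 0.95 * deep_a + 0.05 * random_attention(rng, tokens)

print(f"shallow vs deep_a: {attention_similarity(shallow, deep_a):.3f}")
print(f"deep_a  vs deep_b: {attention_similarity(deep_a, deep_b):.3f}")
```

In a real diagnostic, the maps would come from the attention weights of each transformer block on actual inputs; high similarity between consecutive deep layers is the signature of the collapse described above.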
