# Efficient Neural Architecture Search (ENAS) in PyTorch

PyTorch implementation of *Efficient Neural Architecture Search via Parameter Sharing*. ENAS reduces the computational cost (GPU-hours) of Neural Architecture Search (NAS) by 1000x via parameter sharing between models that are subgraphs within a large computational graph, and achieves state-of-the-art results on Penn Treebank language modeling.

## Prerequisites

- Python 3.6+
- PyTorch==0.3.1
- tqdm, scipy, imageio, graphviz, tensorboardX

## Usage

Install prerequisites with:

```shell
conda install graphviz
pip install -r requirements.txt
```

To train ENAS to discover a recurrent cell for an RNN:

```shell
python main.py --network_type rnn --dataset ptb --controller_optim adam […]
```
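The parameter-sharing idea behind the 1000x speedup — every candidate architecture is a subgraph of one large graph, so all candidates draw their weights from a single shared bank instead of training from scratch — can be sketched roughly as follows. This is a hypothetical toy illustration in NumPy, not the repo's actual code; the names (`shared_w`, `sample_architecture`, `forward`) and the 4-node DAG are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_NODES = 4   # nodes in the search DAG (toy size)
HIDDEN = 8      # hidden dimension
ACTIVATIONS = {
    "tanh": np.tanh,
    "relu": lambda x: np.maximum(x, 0.0),
    "identity": lambda x: x,
}

# Shared weight bank: one matrix per possible edge (i -> j) in the DAG.
# Every sampled child model reuses entries from this same bank.
shared_w = {
    (i, j): rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
    for i in range(NUM_NODES)
    for j in range(i + 1, NUM_NODES)
}

def sample_architecture():
    """Stand-in for the controller RNN: for each node, pick a
    previous node to connect from and an activation function."""
    arch = []
    for j in range(1, NUM_NODES):
        src = int(rng.integers(0, j))
        act = str(rng.choice(list(ACTIVATIONS)))
        arch.append((src, j, act))
    return arch

def forward(arch, x):
    """Run one sampled subgraph using only the shared weights
    its edges select — no per-architecture parameters exist."""
    nodes = [x]
    for src, j, act in arch:
        nodes.append(ACTIVATIONS[act](nodes[src] @ shared_w[(src, j)]))
    return nodes[-1]

x = rng.standard_normal(HIDDEN)
out = forward(sample_architecture(), x)
print(out.shape)  # (8,)
```

In the real implementation the controller is trained with REINFORCE to sample better subgraphs, while the shared weights are trained by SGD on the sampled child models in alternation.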