Blog


Anaconda Navigator Launcher Icon

posted Oct 20, 2018, 7:21 PM by MUHAMMAD MUN`IM AHMAD ZABIDI   [ updated Oct 20, 2018, 7:23 PM ]

First, get a suitable icon. One is already on your drive, but it's buried so deep in the system that it's easier to just Google for it. For example, get it from here:

$ wget https://www.psych.mcgill.ca/labs/mogillab/anaconda2/pkgs/anaconda-navigator-1.4.3-py27_0/lib/python2.7/site-packages/anaconda_navigator/static/images/anaconda-icon-1024x1024.png

There you go!



Next, move it to the pixmaps directory:

# mv anaconda-icon-1024x1024.png /usr/share/pixmaps/anaconda-navigator.png

Create a desktop entry file in your favorite text editor, for example:

$ atom Anaconda.desktop

The contents follow. The Exec option is the full path to the executable, and the Icon option is the base name of the file in the pixmaps directory.

[Desktop Entry]
Name=Anaconda
Exec=/home/raden/anaconda3/bin/anaconda-navigator
Terminal=false
Type=Application
Categories=Development;Science;IDE;Qt;
Icon=anaconda-navigator

Then move the file into the applications 'registry':

# mv Anaconda.desktop /usr/share/applications

Test it by hitting the Super (Windows) key on your Ubuntu keyboard and searching for Anaconda.



Deep NLP

posted Oct 20, 2018, 5:08 PM by MUHAMMAD MUN`IM AHMAD ZABIDI   [ updated Oct 20, 2018, 5:09 PM ]

curl & wget

posted Oct 20, 2018, 3:20 AM by MUHAMMAD MUN`IM AHMAD ZABIDI   [ updated Oct 20, 2018, 5:11 PM ]

curl is a tool to transfer data from or to a server, using one of the supported protocols (DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, TELNET and TFTP). The command is designed to work without user interaction.

  • Basic usage is fairly simple - just pass the URL as input to the curl command, and redirect the output to a file.

curl http://releases.ubuntu.com/18.04/ubuntu-18.04-desktop-amd64.iso.torrent > test.torrent

  • To force curl to use the name of the file being downloaded as the local file name, use the -O command line option.

curl -O http://releases.ubuntu.com/18.04/ubuntu-18.04-desktop-amd64.iso.torrent

  • By default, curl does not follow redirects. If you want curl to follow a redirect, use the -L command line option.

curl -L http://www.oneplus.com

  • To resume a download from the point of interruption, use the -C - option:

curl -C - -O http://releases.ubuntu.com/18.04/ubuntu-18.04-desktop-amd64.iso

wget is similar to curl. On Linux, wget is more common than curl, whereas curl is the tool that ships with macOS.

  • To download a whole directory recursively, wget is better:

wget -r http://releases.ubuntu.com/18.04/

word2vec

posted Oct 18, 2018, 11:32 PM by MUHAMMAD MUN`IM AHMAD ZABIDI   [ updated Oct 20, 2018, 5:12 PM ]

Google's Word2Vec is a deep-learning inspired method that focuses on the meaning of words. Word2Vec attempts to understand meaning and semantic relationships among words. It works in a way that is similar to deep approaches, such as recurrent neural nets or deep neural nets, but is computationally more efficient.
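To make this concrete, here is a minimal sketch of training word vectors with the gensim library (the toy corpus and parameter values are placeholders; note that gensim 3.x calls the dimension argument size, while gensim 4.x renamed it to vector_size):

from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (placeholder data).
sentences = [
    ["word2vec", "learns", "the", "meaning", "of", "words"],
    ["similar", "words", "get", "similar", "vectors"],
    ["neural", "nets", "learn", "representations"],
]

# Train a small skip-gram model (sg=1); min_count=1 keeps every word.
model = Word2Vec(sentences, size=50, window=2, min_count=1, sg=1)

# Inspect the learned vector for a word and its nearest neighbours.
print(model.wv["meaning"])
print(model.wv.most_similar("words"))

Words that appear in similar contexts end up with nearby vectors, which is what lets most_similar recover semantic relationships.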

Observe TensorFlow speedup on GPU relative to CPU

posted Sep 29, 2018, 5:40 AM by MUHAMMAD MUN`IM AHMAD ZABIDI   [ updated Oct 18, 2018, 5:09 PM ]



https://colab.research.google.com/notebooks/gpu.ipynb#scrollTo=3IEVK-KFxi5Z
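A quick illustration of the same idea (my own sketch, assuming TensorFlow 2.x with eager execution; the matrix size and repetition count are arbitrary):

import time
import tensorflow as tf

# Report whether TensorFlow can see a GPU at all.
print("GPU device:", tf.test.gpu_device_name() or "none found")

def time_matmul(device, n=4000, reps=10):
    # Pin the tensors and the matmul to one device explicitly.
    with tf.device(device):
        x = tf.random.normal((n, n))
        tf.matmul(x, x).numpy()  # warm-up, so setup cost is not timed
        start = time.perf_counter()
        for _ in range(reps):
            tf.matmul(x, x).numpy()  # .numpy() forces the op to finish
        return (time.perf_counter() - start) / reps

cpu_time = time_matmul("/cpu:0")
print("CPU: %.3f s per matmul" % cpu_time)
if tf.test.gpu_device_name():
    gpu_time = time_matmul("/gpu:0")
    print("GPU: %.3f s per matmul (%.0fx speedup)" % (gpu_time, cpu_time / gpu_time))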

My Other Work Machine

posted Sep 19, 2018, 8:45 PM by MUHAMMAD MUN`IM AHMAD ZABIDI   [ updated Sep 19, 2018, 8:45 PM ]

munim@SpeedyGonzalez:~$ nvidia-smi
Fri Sep 21 11:45:43 2018      
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.48                 Driver Version: 390.48                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1050    Off  | 00000000:01:00.0  On |                  N/A |
|  0%   53C    P8    N/A /  70W |    610MiB /  1992MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                              
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       994      G   /usr/lib/xorg/Xorg                           273MiB |
|    0      1145      G   /usr/bin/gnome-shell                         172MiB |
|    0      1764      G   ...opt/mendeleydesktop/bin/mendeleydesktop     2MiB |
|    0      1823      G   ...-token=97671117E5B60B147CB2265A7234769B   159MiB |
+-----------------------------------------------------------------------------+

munim@SpeedyGonzalez:~$ xset led 3

My Work Machine

posted Aug 29, 2018, 3:16 AM by MUHAMMAD MUN`IM AHMAD ZABIDI   [ updated Sep 19, 2018, 8:46 PM ]

raden@moon-rocket:~$ nvidia-smi
Wed Aug 29 18:10:43 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.77                 Driver Version: 390.77                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:01:00.0  On |                  N/A |
| 11%   48C    P5    27W / 250W |    602MiB / 11175MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1105      G   /usr/lib/xorg/Xorg                            18MiB |
|    0      1152      G   /usr/bin/gnome-shell                          49MiB |
|    0      1446      G   /usr/lib/xorg/Xorg                           205MiB |
|    0      1590      G   /usr/bin/gnome-shell                         142MiB |
|    0      2223      G   ...-token=279DE13EC5DAB0D9D06AE25019692603   183MiB |
+-----------------------------------------------------------------------------+

For Newcomers to AI

posted Aug 26, 2018, 3:52 PM by MUHAMMAD MUN`IM AHMAD ZABIDI   [ updated Aug 26, 2018, 3:53 PM ]

https://montrealartificialintelligence.com/academy/#Getting-Started-Readings-Source-Code-and-Science

Why Small Deep Neural Nets

posted Aug 26, 2018, 12:40 AM by MUHAMMAD MUN`IM AHMAD ZABIDI   [ updated Aug 26, 2018, 2:19 AM ]


Published by Microsoft Research Sep 18, 2017

Differences between NNs in gadgets and NNs in datacenters:
  • Usually safety-critical (except smartphones) vs rarely safety-critical
  • Low power is required vs nice-to-have
  • Real-time operation is required vs merely preferable
Desirable properties of NNs on gadgets:
  • sufficiently high accuracy
  • low computational complexity
  • low energy usage
  • small model size

Advantages of small models:
  1. Fewer parameters mean bigger opportunities for scaling training: FireCaffe (CVPR 2016) reports a 145x speedup on 256 GPUs, and a 47x speedup for GoogLeNet
  2. Enables complete on-chip integration of the CNN model with its weights, with no need for off-chip memory -> dramatically reduces inference energy, enables up-close/personal data gathering and integration with the sensor
  3. Enables continuous wireless updates of models if retraining is required
Seven ways to squeeze:
  1. Replace fully-connected (FC) layers with convolutional layers
  2. Kernel reduction: reduce the height x width of filters, e.g. 3x3 -> 1x1
  3. Channel reduction: reduce the number of filters and channels
  4. Evenly spaced downsampling: early vs late vs evenly spaced (gradual) downsampling
  5. Depthwise separable convolutions: split a standard convolution into a per-channel spatial convolution followed by a 1x1 pointwise convolution (see the sketch after this list)
  6. Shuffle layer: lets the channels from ideas 2 & 5 talk to each other for the first time (channel shuffle, as in ShuffleNet)
  7. Distillation & compression: refer to the paper on Deep Compression; there are many ways to do it
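To make idea 5 concrete, here is a minimal sketch (my own example; the layer shapes are arbitrary) comparing the parameter counts of a standard convolution and its depthwise separable counterpart in Keras:

import tensorflow as tf

# Standard 3x3 convolution: 64 input channels -> 128 output channels.
standard = tf.keras.Sequential([
    tf.keras.layers.Conv2D(128, 3, padding="same", input_shape=(32, 32, 64)),
])

# Depthwise separable version: a 3x3 per-channel (depthwise) convolution
# followed by a 1x1 pointwise convolution that mixes the channels.
separable = tf.keras.Sequential([
    tf.keras.layers.SeparableConv2D(128, 3, padding="same", input_shape=(32, 32, 64)),
])

print("standard: ", standard.count_params())   # 3*3*64*128 + 128 = 73,856
print("separable:", separable.count_params())  # 3*3*64 + 64*128 + 128 = 8,896

Same input and output shape, but roughly 8x fewer parameters; this is the trick behind MobileNet-style architectures.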
