posted Jan 21, 2019, 10:40 PM by MUHAMMAD MUN`IM AHMAD ZABIDI [updated Jan 21, 2019, 10:46 PM]
Guess what it does?
module smart (input [2:0] x, output [7:0] y);
  assign y = 1 << x;
endmodule
It's a 3:8 decoder!
Here is how it's normally done:
module naive (sel, res);
  input  [2:0] sel;
  output [7:0] res;
  reg    [7:0] res;

  always @(sel) begin
    case (sel)
      3'b000 : res = 8'b00000001;
      3'b001 : res = 8'b00000010;
      3'b010 : res = 8'b00000100;
      3'b011 : res = 8'b00001000;
      3'b100 : res = 8'b00010000;
      3'b101 : res = 8'b00100000;
      3'b110 : res = 8'b01000000;
      default: res = 8'b10000000;
    endcase
  end
endmodule
Now why would you want to do it the normal way?
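If you don't trust the one-liner, a quick self-checking testbench settles it. This is only a sketch I'm adding here (the testbench name and messages are mine); it assumes both modules above are compiled together:

module tb_decoder;
  reg  [2:0] x;
  wire [7:0] y_smart, y_naive;

  smart u_smart (.x(x), .y(y_smart));
  naive u_naive (.sel(x), .res(y_naive));

  integer i;
  initial begin
    for (i = 0; i < 8; i = i + 1) begin
      x = i;   // drive all eight input codes
      #1;      // let the combinational logic settle
      if (y_smart !== y_naive)
        $display("MISMATCH at x=%b: smart=%b naive=%b", x, y_smart, y_naive);
      else
        $display("x=%b -> y=%b", x, y_smart);
    end
    $finish;
  end
endmodule

All eight patterns should print one-hot outputs with no mismatch.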
posted Jan 6, 2019, 5:40 AM by MUHAMMAD MUN`IM AHMAD ZABIDI [updated Jan 12, 2019, 12:04 AM]
posted Jan 6, 2019, 5:36 AM by MUHAMMAD MUN`IM AHMAD ZABIDI [updated Jan 10, 2019, 2:00 AM]
Task: Datapath for Naive Multiplier. Submitted by Groups 1, 2, 4 & 5.
Group 1 - Jamil, Hazin, Azrin & Fatin:
Group 2 - Hafizuddin, Syahir & Zulazwan:
Group 4 - Md Shukri & Wan Naim:
Group 5 - Wan Ahmad:
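Since the submissions themselves were diagrams, here is a minimal Verilog sketch of one common shift-and-add datapath, for orientation only. The 4-bit operand width, the register names, and the load/shift_add control inputs are all my assumptions, not taken from any group's design; a separate control FSM would drive them:

module mult_datapath (
  input            clk,
  input            load,       // load operands, clear product
  input            shift_add,  // perform one shift-and-add step
  input      [3:0] a, b,
  output reg [7:0] product
);
  reg [7:0] multiplicand;  // shifted left every step
  reg [3:0] multiplier;    // shifted right every step

  always @(posedge clk) begin
    if (load) begin
      multiplicand <= {4'b0000, a};
      multiplier   <= b;
      product      <= 8'b0;
    end else if (shift_add) begin
      if (multiplier[0])                    // add only when the current
        product <= product + multiplicand;  // multiplier bit is 1
      multiplicand <= multiplicand << 1;
      multiplier   <= multiplier >> 1;
    end
  end
endmodule

After a load and four shift_add cycles, product holds a times b.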
posted Jan 6, 2019, 5:28 AM by MUHAMMAD MUN`IM AHMAD ZABIDI [updated Jan 9, 2019, 10:03 PM]
Task: Sequence Detector with Finite State Machine. Submitted by Groups 1-5.
Group 1 - Jamil, Hazin, Azrin & Fatin:
Group 2 - Hafizuddin, Syahir & Zulazwan:
Group 3 - Gautham, Rais & Zulhilmi:
Group 4 - Syukri & Wan Mohd Naim:
Group 5 - Wan Muhammad Fadzli:
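For anyone revisiting the lab, below is a hypothetical FSM of this kind. The target pattern ("1011", detected with overlap) and every name in it are illustrative assumptions; the actual sequence came from the task handout:

module seq_detect (
  input      clk, rst, din,
  output reg found
);
  // States track how much of "1011" has been seen so far
  localparam S0 = 2'd0, S1 = 2'd1, S10 = 2'd2, S101 = 2'd3;
  reg [1:0] state;

  always @(posedge clk) begin
    found <= 1'b0;
    if (rst)
      state <= S0;
    else case (state)
      S0:   state <= din ? S1   : S0;
      S1:   state <= din ? S1   : S10;
      S10:  state <= din ? S101 : S0;
      S101: if (din) begin
              found <= 1'b1;   // full "1011" seen
              state <= S1;     // the final 1 may start a new match
            end else
              state <= S10;    // "10" is still a useful prefix
      default: state <= S0;
    endcase
  end
endmodule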
posted Jan 6, 2019, 5:16 AM by MUHAMMAD MUN`IM AHMAD ZABIDI [updated Jan 6, 2019, 5:08 PM]
Task: Basic I/O with EPM240 CPLD Board
Group 1 - Jamil, Hazin, Azrin & Fatin:
Group 2 - Hafizuddin, Syahir & Zulazwan:
Group 3 - Zulhilmi, Gautham & Rais:
Group 4:
Group 5 - Wan Muhammad Fadzli:
Group 6 - Fara Hanis, Khairul Izwan & Nik Mohd Hazlin:
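The usual first design for such a board just wires inputs to outputs. The sketch below is mine, not any group's submission, and it assumes a 4-bit bank of active-low pushbuttons and LEDs; the real polarities and pin assignments come from the EPM240 board documentation:

module basic_io (
  input  [3:0] sw,   // pushbuttons, pressed = 0 (assumed)
  output [3:0] led   // LEDs, lit = 0 (assumed)
);
  assign led = sw;   // pressing a button lights its LED directly
endmodule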
posted Jan 5, 2019, 10:28 PM by MUHAMMAD MUN`IM AHMAD ZABIDI [updated Jan 6, 2019, 5:02 PM]
Task: Accumulator-based counter with reset and 7-segment display.
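As a starting point, here is a minimal sketch of one way to build it: a register that feeds back through an adder (the accumulator) plus a combinational 7-segment decoder. The single-digit 0-9 range, the active-low gfedcba segment coding, and all names are my assumptions:

module acc_counter (
  input            clk,
  input            rst,   // synchronous reset
  output reg [3:0] count,
  output reg [6:0] seg    // segments g..a, active low (assumed)
);
  // Accumulator: register value feeds back through a +1 adder
  always @(posedge clk) begin
    if (rst || count == 4'd9)
      count <= 4'd0;           // wrap at 9 for a single digit
    else
      count <= count + 4'd1;
  end

  // 7-segment decoder, gfedcba order, 0 = segment on
  always @(*) begin
    case (count)
      4'd0: seg = 7'b1000000;
      4'd1: seg = 7'b1111001;
      4'd2: seg = 7'b0100100;
      4'd3: seg = 7'b0110000;
      4'd4: seg = 7'b0011001;
      4'd5: seg = 7'b0010010;
      4'd6: seg = 7'b0000010;
      4'd7: seg = 7'b1111000;
      4'd8: seg = 7'b0000000;
      4'd9: seg = 7'b0010000;
      default: seg = 7'b1111111; // blank
    endcase
  end
endmodule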
posted Dec 5, 2018, 2:42 AM by MUHAMMAD MUN`IM AHMAD ZABIDI [updated Dec 5, 2018, 2:51 AM]
Language modeling is key to many interesting problems such as speech recognition, machine translation, or image captioning. The goal is to fit a probabilistic model that assigns probabilities to sentences. It does so by predicting the next word in a text given the history of previous words.
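In symbols (standard notation; this equation is my addition, not from the original post), such a model factorizes the probability of a sentence w_1, ..., w_n by the chain rule:

P(w_1, \dots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \dots, w_{i-1})

so fitting the model amounts to learning the next-word distribution P(w_i \mid w_1, \dots, w_{i-1}).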
posted Dec 3, 2018, 10:22 PM by MUHAMMAD MUN`IM AHMAD ZABIDI [updated Dec 3, 2018, 10:28 PM]
clinfo is a simple command-line application that enumerates all possible (known) properties of the OpenCL platform and devices available on the system. Let's install it and see what I have.
$ cat /proc/cpuinfo | grep 'name' | uniq
model name : AMD FX(tm)-8350 Eight-Core Processor
$ sudo apt install clinfo
$ clinfo
Number of platforms 1
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 1.2 CUDA 9.1.84
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer
Platform Extensions function suffix NV
Platform Name NVIDIA CUDA
Number of devices 1
Device Name GeForce GTX 1080 Ti
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 1.2 CUDA
Driver Version 390.87
Device OpenCL C Version OpenCL C 1.2
Device Type GPU
Device Topology (NV) PCI-E, 01:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 28
Max clock frequency 1582MHz
Compute Capability (NV) 6.1
Device Partition (core)
Max number of sub-devices 1
Supported partition types None
Max work item dimensions 3
Max work item sizes 1024x1024x64
Max work group size 1024
Preferred work group size multiple 32
Warp size (NV) 32
posted Nov 2, 2018, 8:34 AM by MUHAMMAD MUN`IM AHMAD ZABIDI [updated Nov 2, 2018, 8:37 AM]
posted Nov 2, 2018, 8:32 AM by MUHAMMAD MUN`IM AHMAD ZABIDI [updated Dec 3, 2018, 8:55 PM]
A popular idea in modern machine learning is to represent words by vectors. These vectors capture hidden information about a language, like word analogies or semantics (a worked analogy follows the reading list below).
- An introduction to word embeddings
- Introduction to Word Embeddings: Problems and Theory
- [Hamilton 2016] Hamilton, William L., et al. “Inducing domain-specific sentiment lexicons from unlabeled corpora.” arXiv preprint arXiv:1606.02820 (2016).
- [Kusner 2015] Kusner, Matt, et al. “From word embeddings to document distances.” International Conference on Machine Learning. 2015.
- [Mikolov 2013a] Mikolov, Tomas, Wen-tau Yih, and Geoffrey Zweig. “Linguistic regularities in continuous space word representations.” HLT-NAACL. Vol. 13. 2013.
- [Mikolov 2013b] Mikolov, Tomas, et al. “Efficient estimation of word representations in vector space.” arXiv preprint arXiv:1301.3781 (2013).
- [Mikolov 2013c] Mikolov, Tomas, et al. “Distributed representations of words and phrases and their compositionality.” Advances in neural information processing systems. 2013.
- [Mikolov 2013d] Mikolov, Tomas, Quoc V. Le, and Ilya Sutskever. “Exploiting similarities among languages for machine translation.” arXiv preprint arXiv:1309.4168 (2013).
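As a concrete taste of the word analogies these vectors support, the classic example from [Mikolov 2013a] can be written as vector arithmetic (standard notation, not quoted from the paper):

v(\text{king}) - v(\text{man}) + v(\text{woman}) \approx v(\text{queen})

That is, the vector nearest to the left-hand side is the embedding of “queen.”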