ML - Recurrent Neural Network (RNN)
RNN
- A type of ANN in which hidden nodes are connected by directed edges, forming a cyclic structure.
- Well suited to data that arrives sequentially along the time axis, such as speech or text.
- Input and output lengths are not fixed.
- The structure can be composed flexibly and in diverse ways as needed.
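The recurrence described above (a hidden state fed back into itself at every time step, over a sequence of any length) can be sketched in a few lines of NumPy. This is not from the lecture; it is a minimal illustration, with all weight names (`W_xh`, `W_hh`, `b_h`) chosen here for clarity.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: h_t = tanh(W_xh x_t + W_hh h_prev + b_h)."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

def rnn_forward(xs, h0, W_xh, W_hh, b_h):
    """Apply the same cell over a sequence of arbitrary length."""
    h, hs = h0, []
    for x_t in xs:
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)
        hs.append(h)
    return hs

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5
xs = [rng.standard_normal(input_dim) for _ in range(seq_len)]
W_xh = rng.standard_normal((hidden_dim, input_dim)) * 0.1
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
b_h = np.zeros(hidden_dim)
hs = rnn_forward(xs, np.zeros(hidden_dim), W_xh, W_hh, b_h)
print(len(hs), hs[-1].shape)  # 5 (4,)
```

Because the same cell is reused at every step, the same code handles sequences of any length, which is exactly why the I/O length is free.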
FNN vs RNN
Basic Structure of RNN
From Vanilla RNN to LSTM
- Its activation function (tanh) causes vanishing gradients during backpropagation.
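To see why tanh leads to vanishing gradients, note that backpropagation through time multiplies a factor of tanh'(z) = 1 - tanh(z)^2 ≤ 1 per step, so the product over many steps shrinks toward zero. A small sketch (random pre-activations chosen here purely for illustration):

```python
import numpy as np

# Backprop through T tanh steps scales the gradient by a product of
# per-step factors tanh'(z_t) = 1 - tanh(z_t)^2, each at most 1.
rng = np.random.default_rng(1)
zs = rng.standard_normal(50)        # pre-activations along 50 time steps
factors = 1.0 - np.tanh(zs) ** 2    # per-step derivative factors, all <= 1
running = np.cumprod(factors)       # gradient scale after 1, 2, ..., 50 steps
print(running[0], running[-1])      # the product collapses toward 0
```

The recurrent weight matrix contributes further factors, but the tanh derivatives alone are enough to make long-range gradients vanish, which is the motivation for LSTM's gated, additive cell state.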
LSTM
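The LSTM cell addresses this with forget, input, and output gates around an additive cell state, so gradients can flow through the cell state without repeated tanh squashing. A minimal sketch of the standard gate equations (parameter names `W`, `U`, `b` are this sketch's convention, not the lecture's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b stack the parameters of the forget (f),
    input (i), output (o) gates and the candidate update (g)."""
    H = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b      # shape (4H,)
    f = sigmoid(z[0:H])               # forget gate: keep old cell state?
    i = sigmoid(z[H:2*H])             # input gate: admit new information?
    o = sigmoid(z[2*H:3*H])           # output gate: expose cell state?
    g = np.tanh(z[3*H:4*H])           # candidate cell update
    c = f * c_prev + i * g            # additive cell-state update
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(2)
D, H = 3, 4
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for _ in range(5):
    h, c = lstm_step(rng.standard_normal(D), h, c, W, U, b)
print(h.shape, c.shape)  # (4,) (4,)
```

The key design choice is the `c = f * c_prev + i * g` line: the cell state is updated by addition rather than by passing through a squashing nonlinearity at every step.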
Sequence-to-Sequence Model (Seq2Seq)
A typical many-to-many encoder-decoder model.
The encoder abstracts the inputs into one accumulated (context) vector.
The decoder uses the accumulated vector produced by the encoder to generate the outputs.
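The encoder-decoder split above can be sketched with two vanilla RNN cells: the encoder folds the whole input into one vector, and the decoder unrolls from that vector, feeding its own output back in. A toy illustration (untrained random weights, names invented here), showing that input and output lengths are decoupled:

```python
import numpy as np

def step(x_t, h, Wx, Wh, b):
    return np.tanh(Wx @ x_t + Wh @ h + b)

def encode(xs, h0, Wx, Wh, b):
    """Encoder: fold the whole input sequence into one context vector."""
    h = h0
    for x_t in xs:
        h = step(x_t, h, Wx, Wh, b)
    return h                           # the accumulated (context) vector

def decode(context, n_steps, Wx, Wh, b, Wo):
    """Decoder: start from the context vector and emit n_steps outputs,
    feeding each output back in as the next input (greedy unrolling)."""
    h, y, outs = context, np.zeros(Wo.shape[0]), []
    for _ in range(n_steps):
        h = step(y, h, Wx, Wh, b)
        y = Wo @ h
        outs.append(y)
    return outs

rng = np.random.default_rng(3)
D = H = 4
Wx_e, Wh_e = rng.standard_normal((H, D)) * 0.1, rng.standard_normal((H, H)) * 0.1
Wx_d, Wh_d = rng.standard_normal((H, D)) * 0.1, rng.standard_normal((H, H)) * 0.1
Wo, b = rng.standard_normal((D, H)) * 0.1, np.zeros(H)
ctx = encode([rng.standard_normal(D) for _ in range(6)], np.zeros(H), Wx_e, Wh_e, b)
outs = decode(ctx, 3, Wx_d, Wh_d, b, Wo)
print(len(outs))  # input length 6, output length 3
```

The single fixed-size `ctx` vector is the bottleneck of this design: everything the decoder knows about the input must fit in it, which is the problem attention addresses next.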
Attention
Many input words have similar meanings to one another.
So attention was created to let the decoder pick out the words most relevant to the encoder's inputs.
The process uses inner products of vectors -> softmax, which yields a near-one-hot weight distribution over the inputs.
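The inner-product -> softmax pipeline can be shown in a few lines. In this toy example (orthogonal key vectors chosen here so the result is easy to read), a query aligned with one input makes the softmax weights nearly one-hot, selecting that input:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention(query, keys, values):
    """Dot-product attention: score each input against the query with an
    inner product, normalize with softmax, and mix the values by those
    weights. When one score dominates, the weights approach one-hot."""
    scores = keys @ query              # inner products
    weights = softmax(scores)          # near one-hot when one score dominates
    return weights @ values, weights

keys = np.eye(4)                       # four toy "input word" vectors
query = 10.0 * keys[2]                 # query strongly aligned with input 2
ctx, w = attention(query, keys, keys)
print(np.argmax(w))  # 2: the decoder attends to the matching input
```

With real (trained) embeddings the weights are soft rather than exactly one-hot, so the decoder receives a relevance-weighted mixture of the encoder states instead of a single one.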
All images are taken from lecture slides by Professor Kim Hak-su (김학수) of the Department of Computer Science and Engineering, Konkuk University.