- Logistic Regression
- VGGNet
- BERT
- Clustering
- textmining
- Clustering
- cross domain
- LSTM
- RNN
- TFX
- Attention
- Support Vector Machine
- Gradient Descent
- ResNet
- Python
- NER
- Self-Organizing Map
- stemming
- Generative model
- gaze estimation
- Ann
- tensorflow
- Transfer Learning
- Binary classification
- nlp
- Gradient Descent
- NMF
- AI Ethics
- SOMs
- MLOps
Category: Artificial Intelligence (64)
juooo1117

Tangent Lines The slope of the tangent line gives the instantaneous rate of change. This is also called the derivative of the function at that point, written 𝑓'(𝑎). Derivative Formula Exponents Exponents count how many times a factor repeats in a number. 3^4 is pronounced "three to the fourth power" or "three to the fourth" * 4^2 : can be pronounced "four to the second" but also "four square..
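The derivative described above can be checked numerically: the slope of the secant line approaches 𝑓'(𝑎) as the step shrinks. A minimal sketch, using a hypothetical function f(x) = x² whose derivative at a = 3 is 6:

```python
# Numerical check of the derivative definition f'(a) = lim_{h->0} (f(a+h) - f(a)) / h.
# The function f and the point a = 3 are hypothetical examples, not from the post.

def difference_quotient(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a+h, f(a+h))."""
    return (f(a + h) - f(a)) / h

f = lambda x: x ** 2
for h in (0.1, 0.01, 0.001):
    # The secant slope approaches the tangent slope f'(3) = 6 as h shrinks.
    print(h, difference_quotient(f, 3.0, h))
```

Shrinking h makes the secant line converge to the tangent line, which is exactly the instantaneous rate of change.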

Cartesian Plane Axes and quadrants Distance Formula Clustering Consider a set S = {O, B, D}. The nearest neighbor of A in S is D, the second nearest neighbor of A in S is O, and the third nearest neighbor of A in S is B. Demystify formulas for equations of lines; point-slope form & slope-intercept form m > 0 : positive slope (increasing) m < 0 : negative slope (decreasing) *y-intercept Fun..
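The distance formula and the nearest-neighbor ranking above can be sketched together. The point coordinates below are hypothetical (the excerpt does not give them); they are chosen so the ranking D, then O, then B comes out as stated:

```python
import math

# Euclidean distance on the Cartesian plane, then nearest-neighbor ranking.
# All coordinates are made-up illustrations.

def distance(p, q):
    """Distance formula: sqrt((x1-x2)^2 + (y1-y2)^2)."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

A = (0.0, 0.0)
S = {"O": (1.0, 1.0), "B": (4.0, 3.0), "D": (0.5, 0.5)}

# Rank the members of S by their distance to A.
ranked = sorted(S, key=lambda name: distance(A, S[name]))
print(ranked)  # ['D', 'O', 'B']
```

Sorting by distance gives the first, second, and third nearest neighbors in one pass.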

Sets 2 ∈ A : "2 is an element of A" 8 ∉ A : "8 is not an element of A" Cardinality: the cardinality (size) of a set is the number of elements in it. |A| = 4 : "there are 4 elements in A, so the cardinality is 4" Example using set theory: suppose X = the set of people in a clinical trial and VBS = very bad syndrome. Then S = {𝒳 ∈ X : 𝒳 has VBS} H = {𝒳 ∈ X : 𝒳 does not have VBS} (assuming X = S ∪ H and S ∩ H = ∅..
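The set notation above maps directly onto Python's built-in sets. A minimal sketch; the trial population and the VBS-positive subset are hypothetical stand-ins:

```python
# Membership and cardinality.
A = {2, 4, 6, 7}
assert 2 in A         # 2 ∈ A
assert 8 not in A     # 8 ∉ A
assert len(A) == 4    # |A| = 4

# Clinical-trial example: names and diagnoses are made up.
X = {"ann", "bob", "cho", "dan"}          # people in the trial
has_vbs = {"ann", "cho"}                  # the diagnosed subset
S = {x for x in X if x in has_vbs}        # S = {x ∈ X : x has VBS}
H = {x for x in X if x not in has_vbs}    # H = {x ∈ X : x does not have VBS}
assert S | H == X and S & H == set()      # X = S ∪ H and S ∩ H = ∅
print(len(S), len(H))  # 2 2
```

The two comprehensions partition X exactly as the set-builder definitions do.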

Part 3. Logistic Regression & Artificial Neural Network Linear Regression vs Logistic Regression Logistic regression is optimized for solving binary classification, where the target variable is given as 0 or 1. - Linear Regression : predicted Y can exceed the 0-to-1 range - Logistic Regression : predicted Y lies within the 0-to-1 range Logistic Regression is used when the independent variables have a nonlinear effect. Expression; the transformation the score goes through so the classes are separated by a lying-S (sigmoid) curve Confusion Matrix (measuring classification accurac..
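The contrast above can be shown directly: the linear score w·x + b is unbounded, but passing it through the sigmoid (the "lying S" curve) keeps every prediction strictly between 0 and 1. The weights below are hypothetical, not fitted values:

```python
import math

def sigmoid(z):
    """The logistic function: maps any real score into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

w, b = 2.0, -1.0  # made-up coefficients for illustration
for x in (-10.0, 0.0, 10.0):
    z = w * x + b           # linear part: can be any real number
    p = sigmoid(z)          # logistic part: always strictly between 0 and 1
    print(x, round(z, 2), round(p, 4))
```

The linear score z ranges from -21 to 19 here, while the sigmoid output stays inside (0, 1), which is why logistic regression suits binary targets.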

Part 2. A Recommendation Algorithm that Suggests the Best Products to Customers Recommender System 3 Methodologies to Implement a Recommender System (1) Content-based Recommendations (= content-based filtering) A method of finding similar products and recommending those items based on the user's past product-preference history; a methodology for finding products similar to the ones the target user wants. When finding similar products, use - the target user's content-based profile (e.g...
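Content-based filtering, as described above, can be sketched as cosine similarity between the user's content-based profile and each item's feature vector. The two-feature vectors below are hypothetical illustrations:

```python
import math

# Content-based filtering sketch: recommend the item whose feature vector
# is most similar to the target user's profile. All vectors are made up.

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

user_profile = [0.9, 0.1]  # built from the user's past product preferences
items = {"laptop": [0.8, 0.2], "sofa": [0.1, 0.9]}

best = max(items, key=lambda name: cosine(user_profile, items[name]))
print(best)  # 'laptop'
```

The item whose features point in the same direction as the profile scores highest, which is exactly the "find similar products" step in the excerpt.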

Part 1. B2B Customer-Behavior Prediction Methodology, B2B version. Targeting Product matching Right time Expected Revenue WHO? → Binary Classification - Logistic regression - ANN - Decision tree - K-nearest neighbor - SVM - RFM (CRM) WHAT? → Recommendations - Content-based filtering - Collaborative filtering WHEN? → Purchase interval HOW MUCH? → Demand forecasting Data Analytics Descriptive Analytics (understanding the past and present) Describes the ..

Part 1. Causality What is Causality? Causality is influence by which one event, process, state, or object (a cause) contributes to the production of another event, process, state, or object (an effect), where the cause is partly responsible for the effect and the effect is partly dependent on the cause. → one thing influencing the production of another (causality)! Machine learning : learning the correlations in the data. By changing the environment, toward the desired final state ..

Part 6. Self-Supervised Learning and Large-Scale Pre-Trained Models Self-Supervised Learning? : Given unlabeled data, hide part of the data and train the model to predict that hidden part from the remaining data. Transfer Learning from a Self-Supervised Pre-trained Model Models pre-trained with a particular self-supervised task can be fine-tuned to improve the accuracy..

Part 5. How the Transformer Model Works An attention module can work as a sequence encoder and a decoder in seq2seq with attention. In other words, RNNs or CNNs are no longer necessary; all we need is attention modules. Transformer → solving the Long-term Dependency Problem Scaled dot-product attention - As 𝑑𝑘 gets large, the variance of 𝑞ᵀ𝑘 increases. (The dot products are between higher-dimensional vectors, so their variance grows.) - Some value..
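Scaled dot-product attention as described above can be sketched in a few lines: score each key against the query, divide by √𝑑𝑘 to keep the score variance in check, softmax the scores, and take a weighted average of the values. The query, keys, and values below are toy data:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, keys, values):
    """Scaled dot-product attention: softmax(q.k / sqrt(d_k)) weighted values."""
    d_k = len(q)
    # Dividing by sqrt(d_k) counters the variance growth of dot products
    # in high dimensions, so the softmax does not saturate.
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in keys]
    weights = softmax(scores)
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, keys, values))  # output leans toward the first value vector
```

Because the query matches the first key more closely, the first value vector dominates the weighted average.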

Part 4. Seq2Seq with Attention for Natural Language Understanding and Generation RNN (Recurrent Neural Networks) Given sequence data, we recursively run the same function over time. → a form specialized for sequence data. We can process a sequence of vectors x by applying a recurrence formula at every time step → the same function is called repeatedly. (If the input arrives sequentially over time, the current and previous time steps' hidden state v..
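The recurrence described above, applying one function repeatedly so each hidden state depends on the previous one, can be sketched with scalar weights. A minimal sketch; the weights and input sequence are hypothetical, and real RNNs use weight matrices rather than scalars:

```python
import math

def rnn_step(h_prev, x, w_h=0.5, w_x=1.0):
    """One RNN time step: h_t = tanh(w_h * h_{t-1} + w_x * x_t)."""
    return math.tanh(w_h * h_prev + w_x * x)

h = 0.0                      # initial hidden state
for x in [1.0, -0.5, 0.25]:  # a toy input sequence over time
    h = rnn_step(h, x)       # the SAME function is reused at every step
    print(round(h, 4))
```

The key point from the excerpt is visible in the loop: one recurrence formula, called identically at each time step, carries information forward through the hidden state.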