juooo1117

Part 2. Bias and Variance

Generalization in ML describes a machine-learning algorithm's capability: an ML model's ability to perform well on new, unseen data rather than just the data it was trained on. Performing well on data not seen during training matters more than raw training accuracy. A learning algorithm maximizes accuracy on the training examples, which is strongly related to the concept of overfitting (overfitting = poor generalization). To further improve generalization to new unseen data..
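The train/test gap described above can be seen directly by fitting models of different capacity to the same noisy data. The sketch below (a minimal illustration, not from the course material; the toy data and polynomial degrees are arbitrary choices) fits polynomials of increasing degree and compares training error with error on held-out data: the high-degree fit drives training error toward zero while held-out error stays high, i.e., it overfits and generalizes poorly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise, split into train and held-out sets
x = rng.uniform(0, 3, 30)
y = np.sin(x) + rng.normal(0, 0.2, 30)
x_train, y_train = x[:20], y[:20]
x_test, y_test = x[20:], y[20:]

def poly_mse(degree):
    # Fit a polynomial of the given degree on the training split,
    # then measure mean squared error on both splits
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for d in (1, 3, 15):
    tr, te = poly_mse(d)
    print(f"degree={d:2d}  train MSE={tr:.3f}  test MSE={te:.3f}")
```

Degree 1 underfits (high bias), degree 15 overfits (high variance): its training error is near zero, but its held-out error is dominated by the noise it has memorized. Regularization, the topic of this post's category, is one way to constrain such high-capacity models.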
Artificial Intelligence/LG Aimers: AI Expert Course
2024. 1. 7. 12:38