[Kaggle DAY06] Real or Not? NLP with Disaster Tweets!
Kaggle / Real or Not? NLP with Disaster Tweets
솜씨좋은장씨 · 2020. 2. 19. 02:18
Day 6 of the Kaggle challenge!
Today I'm going to try a simpler neural network model.
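The snippets below reuse `max_words`, `X_train_vec`, and `y_train` from the preprocessing in the earlier posts of this series, which isn't repeated here. A minimal numpy-only sketch of what those variables might look like (the toy tweets, the vocabulary size, and the helper names are my assumptions, not the original code):

```python
import numpy as np

# Toy stand-ins for the competition tweets (assumption: the real code loads train.csv)
texts = ["forest fire near la ronge", "i love this song so much"]
labels = [1, 0]  # 1 = real disaster, 0 = not

max_words = 1000   # vocabulary size passed to Embedding (assumed value)
max_len = 23       # matches input_length=23 in the models below

# Index words in order of first appearance, reserving id 0 for padding
vocab = {}
for text in texts:
    for word in text.split():
        vocab.setdefault(word, len(vocab) + 1)

# Convert each tweet to integer ids and left-pad to a fixed length,
# mimicking keras pad_sequences(maxlen=23)
def encode(text):
    ids = [vocab[w] for w in text.split()][:max_len]
    return [0] * (max_len - len(ids)) + ids

X_train_vec = np.array([encode(t) for t in texts])

# One-hot labels, because the output layer below is Dense(2)
y_train = np.eye(2)[labels]
```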
First submission
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense

model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23))  # embedding dimension: 100
model2.add(Flatten())
model2.add(Dense(128, activation='relu'))
model2.add(Dense(2, activation='sigmoid'))
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model2.fit(X_train_vec, y_train, epochs=3, batch_size=32, validation_split=0.1)
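A side note: `Dense(2, activation='sigmoid')` combined with `binary_crossentropy` only behaves sensibly when `y_train` is one-hot encoded, and it then matches the more conventional single-sigmoid setup whenever the two outputs are complementary. A small numpy check (my own illustration, not from the post):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred):
    # Elementwise BCE averaged over all outputs, as Keras computes it
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# One-hot label for class 1, two sigmoid outputs (the model in this post)
loss_two_unit = binary_crossentropy(np.array([0.0, 1.0]), np.array([0.1, 0.9]))

# Equivalent single-sigmoid setup: one output unit, scalar 0/1 label
loss_one_unit = binary_crossentropy(np.array([1.0]), np.array([0.9]))
```

When the two outputs sum to 1 (here 0.1 and 0.9), both losses reduce to -log(0.9), so the two formulations agree; they diverge only when the two sigmoid outputs are not complementary.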

Results

Second submission
model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23))  # embedding dimension: 100
model2.add(Flatten())
model2.add(Dense(128, activation='relu'))
model2.add(Dense(2, activation='sigmoid'))
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Same model as the first submission, but trained for a single epoch
history = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Results

Third submission
model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23))  # embedding dimension: 100
model2.add(Flatten())
model2.add(Dense(64, activation='relu'))  # hidden layer shrunk from 128 to 64 units
model2.add(Dense(2, activation='sigmoid'))
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Results

Fourth submission
model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23))  # embedding dimension: 100
model2.add(Flatten())
model2.add(Dense(32, activation='relu'))  # hidden layer shrunk further, 64 to 32 units
model2.add(Dense(2, activation='sigmoid'))
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Results

Fifth submission
model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23))  # embedding dimension: 100
model2.add(Flatten())
model2.add(Dense(32, activation='relu'))
model2.add(Dense(2, activation='sigmoid'))
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Same model as the fourth submission, but with the batch size halved to 16
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=16, validation_split=0.1)

Results
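The post doesn't show how the predictions are turned into a Kaggle submission file. A minimal sketch, assuming `model2.predict` returns one row of class probabilities per test tweet (the probability values and ids here are illustrative stand-ins, not real outputs):

```python
import csv
import numpy as np

# Stand-in for model2.predict(X_test_vec): one row of class probabilities
# per test tweet (assumption -- the real values come from the trained model)
pred_probs = np.array([[0.2, 0.8],
                       [0.9, 0.1],
                       [0.4, 0.6]])
test_ids = [0, 2, 3]  # illustrative; the real ids come from test.csv

# The competition asks for a 0/1 "target" per id, so take the argmax class
targets = pred_probs.argmax(axis=1)

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "target"])
    for i, t in zip(test_ids, targets):
        writer.writerow([i, int(t)])
```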
