[Kaggle DAY06]Real or Not? NLP with Disaster Tweets!

솜씨좋은장씨 2020. 2. 19. 02:18

Kaggle challenge, day 6!

Today I'm going to try a simpler neural network model.
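All of the models below take `X_train_vec`, an integer-encoded input padded to length 23 (that is the `input_length=23` in every `Embedding` layer). A minimal sketch of how such an input could be prepared — the tokenizer settings and sample texts here are assumptions, not the exact preprocessing from the earlier posts in this series:

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

texts = [
    "Forest fire near La Ronge Sask. Canada",
    "On the plus side look at the sky last night, it was ablaze",
]

max_words = 1000  # assumed vocabulary size
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)

# integer-encode each tweet, then pad/truncate to length 23
sequences = tokenizer.texts_to_sequences(texts)
X_train_vec = pad_sequences(sequences, maxlen=23)
print(X_train_vec.shape)  # (2, 23)
```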

 

First submission

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense

model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23)) # embedding vector dimension: 100
model2.add(Flatten())
model2.add(Dense(128, activation='relu'))
model2.add(Dense(2, activation='sigmoid')) # two output units, one per class (y_train assumed one-hot)
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model2.fit(X_train_vec, y_train, epochs=3, batch_size=32, validation_split=0.1)
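For the Kaggle submission itself, the trained model's two sigmoid scores per tweet need to be converted into a single class label. A hedged sketch with a stand-in prediction array — in the real notebook `pred` would come from `model2.predict(X_test_vec)`, and `X_test_vec` and the test ids are assumptions, not code shown above:

```python
import numpy as np
import pandas as pd

# Stand-in for model2.predict(X_test_vec): each row holds the two
# sigmoid scores produced by the Dense(2) output layer.
pred = np.array([[0.9, 0.2],
                 [0.1, 0.8]])

# Pick the higher-scoring class: 0 = not a disaster, 1 = disaster
labels = np.argmax(pred, axis=1)

# Kaggle expects an id/target CSV; the ids here are placeholders
submission = pd.DataFrame({"id": [0, 2], "target": labels})
submission.to_csv("submission.csv", index=False)
```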

Result

 

Second submission

model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23)) # embedding vector dimension: 100
model2.add(Flatten())
model2.add(Dense(128, activation='relu'))
model2.add(Dense(2, activation='sigmoid')) 
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 
history = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Result

 

Third submission

model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23)) # embedding vector dimension: 100
model2.add(Flatten())
model2.add(Dense(64, activation='relu'))
model2.add(Dense(2, activation='sigmoid')) 
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Result

 

Fourth submission

model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23)) # embedding vector dimension: 100
model2.add(Flatten())
model2.add(Dense(32, activation='relu'))
model2.add(Dense(2, activation='sigmoid')) 
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Result

 

Fifth submission

model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23)) # embedding vector dimension: 100
model2.add(Flatten())
model2.add(Dense(32, activation='relu'))
model2.add(Dense(2, activation='sigmoid')) 
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=16, validation_split=0.1)

Result
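The five submissions above differ only in three hyperparameters: epochs (3 vs. 1), hidden units (128, 64, 32), and batch size (32 vs. 16). Such a sweep can also be written as a small loop over a builder function — a sketch with a hypothetical helper, not the author's code, and `max_words=1000` is a placeholder:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense

def build_model(max_words, hidden_units):
    # Same architecture as the submissions above, parameterized
    # by the width of the hidden Dense layer.
    model = Sequential([
        Embedding(max_words, 100, input_length=23),
        Flatten(),
        Dense(hidden_units, activation='relu'),
        Dense(2, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

for units in (128, 64, 32):
    model = build_model(max_words=1000, hidden_units=units)
    # model.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)
```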
