[Kaggle DAY04] Real or Not? NLP with Disaster Tweets!

솜씨좋은장씨 2020. 2. 16. 15:05

Day 4 of my Kaggle challenge!

Today I went back to the CNN-LSTM model from last time, whose final layer used a softmax activation instead of sigmoid, and tried it again with the activation changed to sigmoid.
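For context, the code below uses max_words, X_train_vec, and y_train without showing how they were built. Here is a minimal preprocessing sketch, assuming a Keras Tokenizer over the tweet text, padding to length 23 (the input_length used below), and one-hot labels to match the 2-unit output layer; the vocabulary size of 20,000 is only a placeholder.

# Hypothetical preprocessing sketch -- not shown in the original post.
import pandas as pd
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

train_df = pd.read_csv("train.csv")                  # competition training data

max_words = 20000                                    # assumed vocabulary size
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(train_df["text"])

sequences = tokenizer.texts_to_sequences(train_df["text"])
X_train_vec = pad_sequences(sequences, maxlen=23)    # 23 matches input_length below
y_train = to_categorical(train_df["target"])         # two columns, matching Dense(2)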

 

First submission

# Imports were omitted in the original post; standalone Keras is assumed here
# (tensorflow.keras works the same way).
from keras.models import Sequential
from keras.layers import Embedding, Dropout, Conv1D, MaxPooling1D, LSTM, Dense

model = Sequential()
model.add(Embedding(max_words, 100, input_length=23))
model.add(Dropout(0.2))
model.add(Conv1D(128, 3, padding='valid', activation='relu', strides=1))
model.add(MaxPooling1D(pool_size=4))
model.add(LSTM(128))
model.add(Dense(2, activation='sigmoid'))   # sigmoid instead of the previous softmax
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train_vec, y_train, epochs=3, batch_size=32, validation_split=0.1)
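The post does not show how the submission file itself was created, so here is a hedged sketch of that step, assuming the competition's test.csv and sample_submission.csv and reusing the tokenizer, pad_sequences, and model from above; the two sigmoid outputs are reduced to a 0/1 label with argmax.

# Hypothetical submission sketch -- reuses tokenizer, pad_sequences and model from above.
import numpy as np
import pandas as pd

test_df = pd.read_csv("test.csv")
test_seq = tokenizer.texts_to_sequences(test_df["text"])
X_test_vec = pad_sequences(test_seq, maxlen=23)      # same padding length as training

pred = model.predict(X_test_vec)                     # shape (n_samples, 2)
submission = pd.read_csv("sample_submission.csv")
submission["target"] = np.argmax(pred, axis=1)       # higher of the two class scores
submission.to_csv("submission.csv", index=False)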

Results

 

Second submission

The same model as above, but trained for one epoch instead of three.

model2 = Sequential() 
model2.add(Embedding(max_words, 100, input_length=23)) 
model2.add(Dropout(0.2)) 
model2.add(Conv1D(128, 3, padding='valid', activation='relu', strides=1)) 
model2.add(MaxPooling1D(pool_size=4)) 
model2.add(LSTM(128)) 
model2.add(Dense(2, activation='sigmoid')) 
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 
history = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Results

 

Third submission

The LSTM units are reduced from 128 to 64.

model2 = Sequential() 
model2.add(Embedding(max_words, 100, input_length=23)) 
model2.add(Dropout(0.2)) 
model2.add(Conv1D(128, 3, padding='valid', activation='relu', strides=1)) 
model2.add(MaxPooling1D(pool_size=4)) 
model2.add(LSTM(64)) 
model2.add(Dense(2, activation='sigmoid')) 
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Results

 

Fourth submission

The dropout rate is lowered to 0.1 and the LSTM units are reduced to 32.

model2 = Sequential() 
model2.add(Embedding(max_words, 100, input_length=23)) 
model2.add(Dropout(0.1)) 
model2.add(Conv1D(128, 3, padding='valid', activation='relu', strides=1)) 
model2.add(MaxPooling1D(pool_size=4)) 
model2.add(LSTM(32)) 
model2.add(Dense(2, activation='sigmoid')) 
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Results

 

Fifth submission

The same model as the fourth submission, with the batch size halved to 16.

model2 = Sequential() 
model2.add(Embedding(max_words, 100, input_length=23)) 
model2.add(Dropout(0.1)) 
model2.add(Conv1D(128, 3, padding='valid', activation='relu', strides=1)) 
model2.add(MaxPooling1D(pool_size=4)) 
model2.add(LSTM(32)) 
model2.add(Dense(2, activation='sigmoid')) 
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) 
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=16, validation_split=0.1)

Results
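Each call to fit() returns a Keras History object (history / history2 above), so one quick way to compare these variants before submitting is to look at the validation accuracy recorded during training. A minimal sketch; note that the metric key is 'val_acc' in older standalone Keras and 'val_accuracy' in newer tf.keras.

# Sketch: read the last recorded validation accuracy from a History object.
def last_val_accuracy(h):
    key = "val_accuracy" if "val_accuracy" in h.history else "val_acc"   # key name depends on Keras version
    return h.history[key][-1]

print("validation accuracy of the latest run:", last_val_accuracy(history2))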
