[Kaggle DAY03] Real or Not? NLP with Disaster Tweets!

By 솜씨좋은장씨, 2020-02-15 15:54

Kaggle challenge, round 3!

The data preprocessing is identical to rounds 1 and 2; only the model was changed, from a Bi-LSTM to a CNN-LSTM.
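The actual preprocessing code is in the round 1 and 2 posts, but for context, here is a minimal pure-Python sketch of what the models below assume: `X_train_vec` holds integer word-id sequences padded to length 23 (matching `input_length=23`), and `y_train` is one-hot encoded for the 2-unit softmax output. The helper names and sample tweets are illustrative, not the author's actual pipeline.

```python
# Minimal sketch (NOT the author's actual round 1/2 pipeline): build a word
# index, convert tweets to integer sequences, and pad/truncate to length 23.
from collections import Counter

def build_vocab(texts, max_words):
    # Reserve id 0 for padding; keep the (max_words - 1) most frequent tokens.
    counts = Counter(w for t in texts for w in t.lower().split())
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common(max_words - 1))}

def vectorize(texts, vocab, maxlen=23):
    seqs = []
    for t in texts:
        ids = [vocab[w] for w in t.lower().split() if w in vocab]
        ids = ids[:maxlen] + [0] * (maxlen - len(ids))  # truncate, then zero-pad
        seqs.append(ids)
    return seqs

tweets = ["Forest fire near La Ronge Sask. Canada", "I love fruits"]  # illustrative
labels = [1, 0]                                  # 1 = real disaster, 0 = not
vocab = build_vocab(tweets, max_words=1000)
X_train_vec = vectorize(tweets, vocab)
y_train = [[1 - y, y] for y in labels]           # one-hot for Dense(2, softmax)
```

In the real notebook the Keras `Tokenizer` and `pad_sequences` utilities would typically play these roles.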

 

First submission

from keras.models import Sequential
from keras.layers import Embedding, Dropout, Conv1D, MaxPooling1D, LSTM, Dense

model = Sequential()
model.add(Embedding(max_words, 100, input_length=23))  # 100-dim embeddings over padded length-23 sequences
model.add(Dropout(0.2))
model.add(Conv1D(128, 3, padding='valid', activation='relu', strides=1))
model.add(MaxPooling1D(pool_size=4))
model.add(LSTM(128))
model.add(Dense(2, activation='softmax'))  # two classes: real / not real
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train_vec, y_train, epochs=3, batch_size=32, validation_split=0.1)
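After training, a Kaggle submission file pairs each test-set id with a predicted label. A hedged sketch of that step, with `test_ids` and `probs` as illustrative stand-ins (in the real notebook the probabilities would come from `model.predict(X_test_vec)`):

```python
# Illustrative only: hand-written ids and softmax outputs stand in for
# test.csv ids and model.predict(X_test_vec).
import csv

test_ids = [0, 2, 3]
probs = [[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]]

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "target"])
    for tid, p in zip(test_ids, probs):
        writer.writerow([tid, int(p[1] > p[0])])  # argmax over the two classes
```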

Result

 

Second submission

# Same architecture as the first submission, trained for 1 epoch instead of 3.
model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23))
model2.add(Dropout(0.2))
model2.add(Conv1D(128, 3, padding='valid', activation='relu', strides=1))
model2.add(MaxPooling1D(pool_size=4))
model2.add(LSTM(128))
model2.add(Dense(2, activation='softmax'))
model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Result

 

Third submission

# LSTM units reduced from 128 to 64.
model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23))
model2.add(Dropout(0.2))
model2.add(Conv1D(128, 3, padding='valid', activation='relu', strides=1))
model2.add(MaxPooling1D(pool_size=4))
model2.add(LSTM(64))
model2.add(Dense(2, activation='softmax'))
model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Result

 

Fourth submission

# Dropout lowered to 0.1 and LSTM units reduced to 32.
model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23))
model2.add(Dropout(0.1))
model2.add(Conv1D(128, 3, padding='valid', activation='relu', strides=1))
model2.add(MaxPooling1D(pool_size=4))
model2.add(LSTM(32))
model2.add(Dense(2, activation='softmax'))
model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=32, validation_split=0.1)

Result

 

Fifth submission

# Same as the fourth submission, with batch_size reduced from 32 to 16.
model2 = Sequential()
model2.add(Embedding(max_words, 100, input_length=23))
model2.add(Dropout(0.1))
model2.add(Conv1D(128, 3, padding='valid', activation='relu', strides=1))
model2.add(MaxPooling1D(pool_size=4))
model2.add(LSTM(32))
model2.add(Dense(2, activation='softmax'))
model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history2 = model2.fit(X_train_vec, y_train, epochs=1, batch_size=16, validation_split=0.1)

Result
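Since the five submissions share one architecture and differ only in a few hyperparameters, the whole sweep can be stated compactly (values copied from the code above):

```python
# Hyperparameters varied across the five submissions; everything else
# (Embedding 100-dim, Conv1D 128 filters, kernel 3, MaxPooling1D 4) is fixed.
configs = [
    {"submission": 1, "dropout": 0.2, "lstm_units": 128, "epochs": 3, "batch_size": 32},
    {"submission": 2, "dropout": 0.2, "lstm_units": 128, "epochs": 1, "batch_size": 32},
    {"submission": 3, "dropout": 0.2, "lstm_units": 64,  "epochs": 1, "batch_size": 32},
    {"submission": 4, "dropout": 0.1, "lstm_units": 32,  "epochs": 1, "batch_size": 32},
    {"submission": 5, "dropout": 0.1, "lstm_units": 32,  "epochs": 1, "batch_size": 16},
]
for c in configs:
    print(c)
```

A list like this also makes it easy to loop over the configurations and build each model from one parameterized function instead of copy-pasting the block five times.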
