Tracking channel states using Machine Learning

Susan on 22 Jul 2019
Commented: Susan on 25 Jul 2019
I am new to AI and would like to apply machine learning to estimate channel states. I have a data set: a 10000×8 matrix. Each row corresponds to a time step, i.e., the 1st row is the current time step (t), the 2nd row is the next time step (t+1), and so on. Each column corresponds to one transmitter, and I have 8 different transmitters. At each time step, each transmitter estimates the channel status and assigns one value from the set {-1, 0, 1, 2}. For example, the n-th row has the form [-1 0 0 0 1 0 0 2].
Knowing the channel states at time step t, I would like to predict the channel states at time step (t+1).
I used an MLP and got an MSE of 0.04, but the activation function I used is either 'tansig' or 'logsig', so the outputs lie in [0, 1] or [-1, 1], and I don't know how to convert them to {-1, 0, 1, 2}. I have also applied an LSTM, but its MSE is 0.31, and I don't know why it is so large. Any suggestions would be greatly appreciated.
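One way to sidestep the output-range problem is to treat each state as a class index rather than a real value, so the network predicts one of four classes per transmitter instead of a continuous number. A minimal sketch of the forward and inverse mapping, where raw_y and probs are hypothetical stand-ins for a target row and a network's per-class output:
import numpy as np
# hypothetical example row of channel states, one entry per transmitter
raw_y = np.array([-1, 0, 0, 0, 1, 0, 0, 2])
# forward mapping: state value -> class index (states {-1,0,1,2} shift to {0,1,2,3})
y_idx = raw_y + 1
# inverse mapping: an (8, 4) array of per-transmitter class probabilities
# (e.g. softmax outputs) back to state values via argmax
probs = np.random.rand(8, 4)                # stand-in for network output
pred_state = np.argmax(probs, axis=-1) - 1  # class index -> state value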

Answers (1)

Vimal Rathod on 25 Jul 2019
A multi-layer perceptron is generally used for regression, whereas your use case is classification: predicting the channel state at the next time step. For such time-series classification, LSTM networks are the better option. As for the high MSE, it can be reduced in the following ways (see the sketch after this list):
  1. Tuning the hyperparameters of the model layers.
  2. Increasing the amount of input data.
  3. Increasing the number of layers and the number of neurons per layer.
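Since the states are categorical, one option is a softmax head over the 4 possible states for each of the 8 transmitters, trained with cross-entropy rather than MSE/MAE. A minimal Keras sketch under those assumptions; the layer sizes and training settings are illustrative, not tuned:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Reshape, Activation
n_transmitters, n_states = 8, 4  # states {-1, 0, 1, 2} mapped to class indices {0, 1, 2, 3}
model = Sequential()
model.add(LSTM(200, input_shape=(1, n_transmitters)))
model.add(Dense(n_transmitters * n_states))
model.add(Reshape((n_transmitters, n_states)))
model.add(Activation('softmax'))  # softmax over the 4 states, separately per transmitter
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# train_X: (samples, 1, 8) floats; train_y + 1: (samples, 8) integer class indices
# model.fit(train_X, train_y + 1, epochs=100, batch_size=72)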
  1 comment
Susan on 25 Jul 2019
Thank you very much, Vimal, for your reply.
As you suggested, I tried increasing the number of neurons per layer and the amount of input data. It helps for sure, but not much.
Here is my code, and I have attached the data. Would you please take a look and tell me what I am doing wrong? I appreciate your time and help.
from math import sqrt
from numpy import concatenate
from matplotlib import pyplot
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import Dense, Softmax, Dropout, Flatten, Activation
from keras.layers import LSTM
import numpy as np
# convert series to supervised learning
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = concat(cols, axis=1)
    agg.columns = names
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg
# load dataset
dataset = read_csv('Channel_states_m0.csv', header=None)
nRow = len(dataset) - 1
values = dataset.values
reframed = series_to_supervised(values, 1, 1)
print(reframed.head())
# split into train and test sets
values = reframed.values
print(values)
n_train = int(30/100 * nRow)
print(n_train)
train = values[:n_train, :]
test = values[n_train:, :]
# split into input and outputs
train_X, train_y = train[:, :8], train[:, 8:]
test_X, test_y = test[:, :8], test[:,8:]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
# design network
model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(8))
model.compile(loss='mae', optimizer='adam')
# fit network
history = model.fit(train_X, train_y, epochs=100, batch_size=72, validation_data=(test_X, test_y), verbose=2, shuffle=False)
# plot history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
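With the regression head above (a linear Dense(8) output), the continuous predictions still need to be snapped back onto the valid state set. A minimal sketch, reusing model and test_X from the script and assuming nearest-value rounding as the discretization rule:
# predict, then snap each continuous output to the nearest valid state in {-1, 0, 1, 2}
yhat = model.predict(test_X)
yhat_disc = np.clip(np.rint(yhat), -1, 2).astype(int)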

