Converting a binary matrix to decimal.

1 view (last 30 days)
level1 on 23 Oct 2021
Commented: level1 on 24 Oct 2021
I convert each 3-bit binary matrix (a, e) to decimal -> convert that decimal back to a 4-bit binary matrix -> add the converted a and e to get A.
The following error occurs:
bin2dec:
The input argument must be a character vector, a string, or a cell array of character vectors.
How can I correct this error?
Here is the code:
function A = Xasum(X)
N = 64;
for k = 1 : N
    x = X(k, :);
    a = x(:, [1,2,3]);
    e = x(:, [4,5,6]);
    bin2dec(a);
    dec2bin((a),4);
    bin2dec(e);
    dec2bin((e),4);
    A = (a + e)';
end
end
X = [0 0 0 0 0 0;
0 0 0 0 0 1;
0 0 0 0 1 0;
0 0 0 0 1 1;
0 0 0 1 0 0;
0 0 0 1 0 1;
0 0 0 1 1 0;
0 0 0 1 1 1;
0 0 1 0 0 0;
0 0 1 0 0 1;
0 0 1 0 1 0;
0 0 1 0 1 1;
0 0 1 1 0 0;
0 0 1 1 0 1;
0 0 1 1 1 0;
0 0 1 1 1 1;
...
...
...
1 1 1 1 0 1;
1 1 1 1 1 0;
1 1 1 1 1 1;
];

Answers (1)

Jan on 23 Oct 2021
Edited: Jan on 23 Oct 2021
The error message is clear: bin2dec requires a CHAR vector as input or a cell string. See:
doc bin2dec
You provide a numerical vector consisting of 1s and 0s.
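For example, a minimal sketch of turning a numeric 0/1 row vector into the char input that bin2dec expects:
a = [1 0 1];             % numeric bits, as in the question's code
aChar = char(a + '0');   % gives the char vector '101'
aDec = bin2dec(aChar)    % returns 5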
This part of the code does not assign the results to a variable, so the computations are lost and are only a waste of time:
bin2dec(a);
dec2bin((a),4);
bin2dec(e);
dec2bin((e),4);
What do you want to achieve? a and e are not changed here.
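A hedged sketch of what keeping the results could look like (aDec and aBin4 are placeholder names):
aDec  = bin2dec(char(a + '0'));   % store the decimal value of the 3-bit half
aBin4 = dec2bin(aDec, 4);         % store its 4-bit char representation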
The result A is overwritten in each iteration:
A = (a + e)';
At the end, A contains the value of the last iteration k=N only. Is this wanted?
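One possible fix, assuming the intent is one 4-bit result row per input row:
N = size(X, 1);
A = zeros(N, 4);                  % preallocate one 4-bit row per input row
for k = 1:N
    s = bin2dec(char(X(k, 1:3) + '0')) + bin2dec(char(X(k, 4:6) + '0'));
    A(k, :) = dec2bin(s, 4) - '0';   % indexed assignment keeps every iteration
end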
What is the wanted output for:
X = [0 0 0 0 0 0; ...
0 0 0 0 0 1; ...
0 0 0 0 1 0];
3 comments
Jan on 24 Oct 2021
Again: What is the wanted output for the given X?
Splitting the array is easy:
X = [0 0 0 0 0 0; ...
0 0 0 0 0 1; ...
0 0 0 0 1 0];
X1 = X(:, 1:3)
X2 = X(:, 4:6)
Expanding this to 4 bits means inserting columns with zeros:
nRow = size(X, 1);
Y = [zeros(nRow, 1), X1, zeros(nRow, 1), X2];
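As a quick usage check, with the 3-row X above, Y is 3x8, each half zero-padded to 4 bits:
disp(Y)
% 0 0 0 0 0 0 0 0
% 0 0 0 0 0 0 0 1
% 0 0 0 0 0 0 1 0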
level1 on 24 Oct 2021
Sorry, I didn't know you were asking me to explain the result with code.
D = [0 0 0 0;
0 0 0 1;
0 0 1 0;
0 0 1 1;
0 1 0 0;
0 1 0 1;
0 1 1 0;
0 1 1 1;
1 0 0 0;
1 0 0 1;
1 0 1 0;
1 0 1 1;
1 1 0 0;
1 1 0 1;
1 1 1 0;
1 1 1 1;
];
I divide the 6-bit input into two 3-bit halves. After that, I add the two 3-bit values, and I want to print that result as 4 bits.
There are 64 results.
I'm not good at English, so I'll describe it with numbers:
6-bit input (64 rows) -> neural network system -> 4-bit output (64 rows)
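Based on this description, a possible sketch of the whole conversion without bin2dec (the weight vector w is just one way to do it):
w = [4; 2; 1];                          % place values of a 3-bit number
dSum = X(:, 1:3) * w + X(:, 4:6) * w;   % decimal sum of the two halves, one value per row
A = dec2bin(dSum, 4) - '0';             % 64x4 numeric matrix of 4-bit results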
