Combining matrices of different sizes

211 views (last 30 days)
Syed
Syed on 23 Jul 2013
Commented: kevin harianto on 6 Apr 2022
Hi
How can I combine two matrices of different sizes?
e.g.,
x = [1 2 3 4;5 6 7 8;9 10 11 12;13 14 15 16]; y = [1 2 3;4 5 6;7 8 9];
and
z = [x y]; % where z is also a matrix
How should I combine x and y to get z?
I tried using the cat function, but it gives an error: "Dimensions of matrices being concatenated are not consistent".
  5 comments
Syed
Syed on 23 Jul 2013
z should be like this:
z = [1 2 3 4 1 2 3;5 6 7 8 4 5 6;9 10 11 12 7 8 9;13 14 15 16 0 0 0]
Gabriel Mühlebach
Gabriel Mühlebach on 25 Jul 2016
Or you can use a cell array. For example: a = [1,2;3,4]; b = ones(3,3); c = {a,b}; That way you can store matrices of different sizes in the same variable.
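Gabriel's suggestion above, written out as a runnable sketch:

```matlab
a = [1 2; 3 4];   % 2x2 matrix
b = ones(3);      % 3x3 matrix
c = {a, b};       % a cell array can hold matrices of different sizes
c{1}              % index with curly braces to retrieve each matrix
c{2}
```

Note that c is not a numeric matrix, so this only helps if you need to carry the two matrices around together, not if you need a single numeric z.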


Accepted Answer

Andrei Bobrov
Andrei Bobrov on 23 Jul 2013
Edited: Andrei Bobrov on 23 Jul 2013
x = [1 2 3 4;5 6 7 8;9 10 11 12;13 14 15 16];
y = [1 2 3;4 5 6;7 8 9];
% Build a subscript pair for every element of x and y, shifting y's columns
% past x; accumarray zero-fills any position that receives no value.
[i1,j1] = ndgrid(1:size(x,1),1:size(x,2));
[i2,j2] = ndgrid(1:size(y,1),(1:size(y,2))+size(x,2));
z = accumarray([i1(:),j1(:);i2(:),j2(:)],[x(:);y(:)]);
or
% Pad the matrix with fewer rows with zero rows so the row counts match.
sx = size(x);
sy = size(y);
a = max(sx(1),sy(1))
z = [[x;zeros(abs([a 0]-sx))],[y;zeros(abs([a,0]-sy))]]
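The zero-padding idea generalizes to a small helper; `padhorzcat` is a hypothetical name, not a built-in function:

```matlab
function z = padhorzcat(x, y)
% padhorzcat  Horizontally concatenate two matrices, zero-padding the one
% with fewer rows so the row counts match. (Illustrative helper, not built-in.)
    rows = max(size(x,1), size(y,1));
    x(end+1:rows, :) = 0;   % no-op when x already has enough rows
    y(end+1:rows, :) = 0;
    z = [x, y];
end
```

With the matrices from the question, padhorzcat(x, y) returns the 4-by-7 matrix Syed described, with zeros filling the last row under y.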
  7 comments
Akshat Rastogi
Akshat Rastogi on 30 Oct 2018
Edited: Akshat Rastogi on 30 Oct 2018
Hello,
I managed to solve the problem myself.
% Initialization
X = {region(1).points(:,1)};
Y = {region(1).points(:,end)};
% Loop over the set of coordinates
for i = 2:size(region,2)
    X = [X, {region(i).points(:,1)}];
    Y = [Y, {region(i).points(:,end)}];
end
Subsequently, X and Y can be used to create a polyshape object.
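A preallocated variant of the same loop (avoids growing the cell arrays on every iteration); the `region` struct array here is hypothetical sample data standing in for the one in the comment above:

```matlab
% Hypothetical sample data standing in for the 'region' struct array
region(1).points = [0 0; 1 0; 1 1];
region(2).points = [2 2; 3 2; 3 3; 2 3];
n = numel(region);
X = cell(1, n);   % preallocate instead of appending inside the loop
Y = cell(1, n);
for i = 1:n
    X{i} = region(i).points(:, 1);      % first column of each region
    Y{i} = region(i).points(:, end);    % last column of each region
end
```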
kevin harianto
kevin harianto on 6 Apr 2022
For some reason I'm still getting the "Arrays have incompatible sizes for this operation" error.
LocationNew = [[appendingArray;zeros(abs([a 0]-sA))], [Location;zeros(abs([a,0]-sL))]];
I am trying to combine the 1-D array with the 3-D array in order to match the imageInput size of 64 x 1856 x 3.
classdef LidarSemanticSegmentation < lidar.labeler.AutomationAlgorithm
% LidarSemanticSegmentation Automation algorithm performs semantic
% segmentation in the point cloud.
% LidarSemanticSegmentation is an automation algorithm for segmenting
% a point cloud using SqueezeSegV2 semantic segmentation network
% which is trained on Pandaset data set.
%
% See also lidarLabeler, groundTruthLabeler
% lidar.labeler.AutomationAlgorithm.
% Copyright 2021 The MathWorks, Inc.
% ----------------------------------------------------------------------
% Step 1: Define the required properties describing the algorithm. This
% includes Name, Description, and UserDirections.
properties(Constant)
% Name Algorithm Name
% Character vector specifying the name of the algorithm.
Name = 'Lidar Semantic Segmentation';
% Description Algorithm Description
% Character vector specifying the short description of the algorithm.
Description = 'Segment the point cloud using SqueezeSegV2 network.';
% UserDirections Algorithm Usage Directions
% Cell array of character vectors specifying directions for
% algorithm users to follow to use the algorithm.
UserDirections = {['ROI Label Definition Selection: select one of ' ...
'the ROI definitions to be labeled'], ...
'Run: Press RUN to run the automation algorithm. ', ...
['Review and Modify: Review automated labels over the interval ', ...
'using playback controls. Modify/delete/add ROIs that were not ' ...
'satisfactorily automated at this stage. If the results are ' ...
'satisfactory, click Accept to accept the automated labels.'], ...
['Accept/Cancel: If the results of automation are satisfactory, ' ...
'click Accept to accept all automated labels and return to ' ...
'manual labeling. If the results of automation are not ' ...
'satisfactory, click Cancel to return to manual labeling ' ...
'without saving the automated labels.']};
end
% ---------------------------------------------------------------------
% Step 2: Define properties you want to use during the algorithm
% execution.
properties
% AllCategories
% AllCategories holds the default 'unlabelled', 'Vegetation',
% 'Ground', 'Road', 'RoadMarkings', 'SideWalk', 'Car', 'Truck',
% 'OtherVehicle', 'Pedestrian', 'RoadBarriers', 'Signs',
% 'Buildings' categorical types.
AllCategories = {'unlabelled'};
% PretrainedNetwork
% PretrainedNetwork saves the pretrained SqueezeSegV2 network.
PretrainedNetwork
end
%----------------------------------------------------------------------
% Note: this method must be included for the lidarLabeler app to
% recognize the algorithm as using point clouds
methods (Static)
% This method is static to allow the apps to call it and check the
% signal type before instantiation. When users refresh the
% algorithm list, we can quickly check and discard algorithms for
% any signal that is not supported in a given app.
function isValid = checkSignalType(signalType)
isValid = (signalType == vision.labeler.loading.SignalType.PointCloud);
end
end
%----------------------------------------------------------------------
% Step 3: Define methods used for setting up the algorithm.
methods
function isValid = checkLabelDefinition(algObj, labelDef)
% Only Voxel ROI label definitions are valid for the Lidar
% semantic segmentation algorithm.
isValid = labelDef.Type == lidarLabelType.Voxel;
if isValid
algObj.AllCategories{end+1} = labelDef.Name;
end
end
function isReady = checkSetup(algObj)
% Is there one selected ROI Label definition to automate.
isReady = ~isempty(algObj.SelectedLabelDefinitions);
end
end
%----------------------------------------------------------------------
% Step 4: Specify algorithm execution. This controls what happens when
% the user presses RUN. Algorithm execution proceeds by first
% executing initialize on the first frame, followed by run on
% every frame, and terminate on the last frame.
methods
function initialize(algObj,~)
% Load the pretrained SqueezeSegV2 semantic segmentation network.
outputFolder = fullfile(tempdir, 'Pandaset');
pretrainedSqueezeSeg = load(fullfile(outputFolder,'trainedSqueezeSegV2PandasetNet.mat'));
% Store the network in the 'PretrainedNetwork' property of this object.
algObj.PretrainedNetwork = pretrainedSqueezeSeg.net;
end
function autoLabels = run(algObj, pointCloud)
% Setup categorical matrix with categories including
% 'Vegetation', 'Ground', 'Road', 'RoadMarkings', 'SideWalk',
% 'Car', 'Truck', 'OtherVehicle', 'Pedestrian', 'RoadBarriers',
% and 'Signs'.
autoLabels = categorical(zeros(size(pointCloud.Location,1), size(pointCloud.Location,2)), ...
0:12,algObj.AllCategories);
%A = zeros(10000,10000);
%filling in the minimum required resolution
% to meet the neural network's specification.
%(first iteration failed) pointCloud.Location = zeros(65,1856,5);
%Due to an error we must append the various point cloud data
%first.
%next we can add in the ptCloud locations
% Location(:,:,1) = pointCloud.Location;
% Location = zeros(65,1856,5);
% adding the additional elements to the array.
appendingArray = zeros(64,1856,3);
Location = [pointCloud];
%We now have to convert the location 1D array to 3D array
% permute(reshape(TheList, 300, 300, 400), [2 1 3]).
%reshape(Location, 1,[]);
% Location = permute(reshape(Location, 3, 1856, 64), [3 2 1]);
%using concatenation to add in the third dimension
%C = cat(3, A, B);
%Location = cat(3,Location, appendingArray)
%adding arrays of different sizes
% sx = size(x);
%sy = size(y);
%a = max(sx(1),sy(1))
%z = [[x;zeros(abs([a 0]-sx))],[y;zeros(abs([a,0]-sy))]]:
sA = size(appendingArray);
sL = size(Location);
a = max(sA(1),sL(1));
%Note: we are trying to add in the elements together to meet the imageInput
%size requirement.
LocationNew = [[appendingArray;zeros(abs([a 0]-sA))], [Location;zeros(abs([a,0]-sL))]];
% Location =[Location, appendingArray];
%This will also be applied to the pointCloud Intensity levels
% as these are also analyzed by the machine learning algorithm.
%(Pushed aside for later modifications) pointCloud.Intensity = zeros(64,1865);
pointCloud=LocationNew;
% Convert the input point cloud to five channel image.
I = helperPointCloudToImage(pointCloud);
% Predict the segmentation result.
predictedResult = semanticseg(I, algObj.PretrainedNetwork);
autoLabels(:) = predictedResult;
%using this area we would be able to continuously update the latest file on
% sending the output towards the CAN Network or at least ensure that the
% item is obtainable
% This area would work the best.
%first we must
end
end
end
function helperDisplayLabelOverlaidPointCloud(I,predictedResult)
% helperDisplayLabelOverlaidPointCloud Overlay labels over point cloud object.
% helperDisplayLabelOverlaidPointCloud(I,predictedResult)
% displays the overlaid pointCloud object. I is the 5 channels organized
% input image. predictedResult contains pixel labels.
ptCloud = pointCloud(I(:,:,1:3),Intensity = I(:,:,4));
cmap = helperPandasetColorMap;
B = ...
labeloverlay(uint8(ptCloud.Intensity),predictedResult,Colormap = cmap,Transparency = 0.4);
pc = pointCloud(ptCloud.Location,Color = B);
ax = pcshow(pc);
set(ax,XLim = [-70 70],YLim = [-70 70])
zoom(ax,3.5)
end
function cmap = helperPandasetColorMap
cmap = [[30 30 30]; % Unlabeled
[0 255 0]; % Vegetation
[255 150 255]; % Ground
[237 117 32]; % Road
[255 0 0]; % Road Markings
[90 30 150]; % Sidewalk
[255 255 30]; % Car
[245 150 100]; % Truck
[150 60 30]; % Other Vehicle
[255 255 0]; % Pedestrian
[0 200 255]; % Road Barriers
[170 100 150]; % Signs
[255 0 255]]; % Building
cmap = cmap./255;
end
function image = helperPointCloudToImage(ptcloud)
% helperPointCloudToImage converts the point cloud to 5 channel image
image = ptcloud.Location;
image(:,:,5) = ptcloud.Intensity;
rangeData = iComputeRangeData(image(:,:,1),image(:,:,2),image(:,:,3));
image(:,:,4) = rangeData;
index = isnan(image);
image(index) = 0;
end
function rangeData = iComputeRangeData(xChannel,yChannel,zChannel)
rangeData = sqrt(xChannel.*xChannel+yChannel.*yChannel+zChannel.*zChannel);
end


More Answers (1)

suresh s
suresh s on 23 Jul 2013
Hi Syed,
Only if the sizes of the x and y matrices are the same can you concatenate them; otherwise we can't concatenate the matrices in MATLAB.
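For example, horizontal concatenation needs equal row counts; with mismatched sizes MATLAB raises the error from the question, and zero-padding the shorter matrix is one way around it (a sketch):

```matlab
x = magic(4);            % 4x4
y = magic(3);            % 3x3
try
    z = [x y];           % fails: 4 rows vs 3 rows
catch err
    disp(err.identifier) % horzcat dimension-mismatch error
end
y(4, :) = 0;             % zero-pad y to 4 rows
z = [x y];               % now works: z is 4x7
```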
  4 comments
Dmitriy Antselevich
Dmitriy Antselevich on 27 Feb 2019
Ah yes, let's argue semantics on a Matlab forum.
People are not computers. If someone is asking a question, they clearly have trouble with the concept and need the answer to be helpful, not 1:1 correct.
Walter Roberson
Walter Roberson on 27 Feb 2019
If someone asks to do X, then us telling them that X is not possible is correct and helpful. There is no requirement that we guess at all the different things that they might maybe have wanted to do.
I've done that, you know: taken a vague question and listed off pages and pages of things that the person might have meant, and how to do each of the possibilities, and the advantages and disadvantages of each of them and the contexts in which you might need each one. The response I got back was "Thanks." No vote, no acceptance of my Answer, and in particular, no clarification of which one they had intended and no highlighting of what particular parts of what I said had especially clarified the situation for the person (so no feedback as to how my answer might have been improved). It took me hours to write up.
Other times, I have taken a few hours to write up responses to all the various things someone might have meant, only for the person to say the question was about something else completely that no-one could ever reasonably have perceived from what they had written.
I notice, Dmitriy, that you have posted no Answers at all. It is not clear that you have any significant experience in answering questions, deducing all of the various things that someone might mean, and explaining all of the possibilities.

