How do I plot a linear decision boundary between 2 classes?

3 views (last 30 days)
Murat Aydemir on 26 Feb 2017
Answered: Prateekshya on 8 Oct 2024
I have 2 datasets with different mu and sigma, and a vector X such as [1.8; 1.8]. I also know the prior probability of each class: P(ω1) = P(ω2) = 1/2.
I want to plot the linear decision boundary between these two datasets, but I don't have any idea how to do it. My code is below:
X = [1.8; 1.8];
u1 = [1;1]; u2 = [3;3];
s1 = [1 0;0 1]; s2 = [1 0;0 1];
Pr1 = 1/2;
Pr2 = 1/2;
r = mvnrnd(u1,s1,500);
plot(r(:,1), r(:,2), '+r');
hold on
r = mvnrnd(u2,s2,500);
plot(r(:,1), r(:,2), '+b');
hold on
grid on
W1 = (u1')/(s1(1,1))^2;
W10 = (u1'*u1)/(-2*s1(1,1)) + log(Pr1);
g1 = W1'.*X + W10;
W2 = (u2')/(s2(1,1))^2;
W20 = (u2'*u2)/(-2*s2(1,1)) + log(Pr2);
g2 = W2'.*X + W20;
Can anyone give me an idea, please?

Answers (1)

Prateekshya on 8 Oct 2024
Hello Murat,
To plot the linear decision boundary between two Gaussian-distributed datasets with given means and covariances, you can use the concept of discriminant functions. The decision boundary is where these discriminant functions are equal, i.e., g1(x) = g2(x).
Given your datasets and parameters, you can follow these steps:
  • Define the Discriminant Functions:
For each class i, the linear discriminant function can be expressed as:
gi(x) = Wi' * x + Wi0
where Wi = inv(Si) * ui and Wi0 = -0.5 * ui' * inv(Si) * ui + log(Pri).
  • Calculate the Coefficients:
Since your covariance matrices are identity matrices, the inverse is simply the same matrix, and the calculations simplify.
  • Find the Decision Boundary:
The decision boundary is found by setting g1(x) = g2(x), which simplifies to solving:
(W1 - W2)' * x + (W10 - W20) = 0
  • Plot the Decision Boundary:
Solve the above equation for x2 in terms of x1 to get the equation of a line, and plot this line.
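As a quick numerical check of these steps (sketched here in Python/NumPy rather than MATLAB, using the parameters from the question: u1 = [1;1], u2 = [3;3], identity covariances, equal priors), you can evaluate both discriminants at the query point X = [1.8; 1.8]:

```python
import numpy as np

# Parameters from the question (assumed): identity covariances, equal priors
u1 = np.array([1.0, 1.0])
u2 = np.array([3.0, 3.0])
S = np.eye(2)               # both covariance matrices are the identity
Pr = 0.5                    # P(w1) = P(w2) = 1/2
X = np.array([1.8, 1.8])    # query point from the question

# g_i(x) = W_i' x + W_i0, with W_i = inv(S_i) u_i and
# W_i0 = -0.5 * u_i' inv(S_i) u_i + log(Pr_i)
W1 = np.linalg.solve(S, u1)
W2 = np.linalg.solve(S, u2)
W10 = -0.5 * u1 @ np.linalg.solve(S, u1) + np.log(Pr)
W20 = -0.5 * u2 @ np.linalg.solve(S, u2) + np.log(Pr)

g1 = W1 @ X + W10
g2 = W2 @ X + W20
print(g1, g2)   # g1 > g2, so X is assigned to class 1
```

Since the priors are equal, their log terms cancel and the comparison depends only on the distance to each mean; X = [1.8; 1.8] lies closer to u1, so class 1 wins.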
Here is the updated code:
% Given data and parameters
u1 = [1; 1];
u2 = [3; 3];
s1 = [1 0; 0 1];
s2 = [1 0; 0 1];
Pr1 = 1/2;
Pr2 = 1/2;
% Generate random samples
r1 = mvnrnd(u1, s1, 500);
r2 = mvnrnd(u2, s2, 500);
% Plot the samples
plot(r1(:,1), r1(:,2), '+r');
hold on;
plot(r2(:,1), r2(:,2), '+b');
grid on;
% Calculate discriminant coefficients
% (Si \ ui is equivalent to inv(Si)*ui but numerically preferable)
W1 = s1 \ u1;
W10 = -0.5 * (u1' * (s1 \ u1)) + log(Pr1);
W2 = s2 \ u2;
W20 = -0.5 * (u2' * (s2 \ u2)) + log(Pr2);
% Calculate the decision boundary
W_diff = W1 - W2;
W0_diff = W10 - W20;
% Decision boundary line: W_diff' * X + W0_diff = 0
% Solve for X2 in terms of X1: W_diff(1)*X1 + W_diff(2)*X2 + W0_diff = 0
% X2 = -(W_diff(1)/W_diff(2)) * X1 - (W0_diff/W_diff(2))
x1_vals = linspace(min([r1(:,1); r2(:,1)]), max([r1(:,1); r2(:,1)]), 100);
x2_vals = -(W_diff(1)/W_diff(2)) * x1_vals - (W0_diff/W_diff(2));
% Plot the decision boundary
plot(x1_vals, x2_vals, '-k', 'LineWidth', 2);
xlabel('X1');
ylabel('X2');
title('Linear Decision Boundary');
legend('Class 1', 'Class 2', 'Decision Boundary');
hold off;
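As a sanity check (not part of the original answer), the same coefficients can be recomputed in Python/NumPy; with identity covariances and equal priors, the log-prior terms cancel and the boundary reduces to the line x1 + x2 = 4, the perpendicular bisector between the two means:

```python
import numpy as np

# Recompute the MATLAB code's coefficients to verify the boundary line
u1, u2 = np.array([1.0, 1.0]), np.array([3.0, 3.0])
s1 = s2 = np.eye(2)
Pr1 = Pr2 = 0.5

W1 = np.linalg.solve(s1, u1)
W10 = -0.5 * u1 @ np.linalg.solve(s1, u1) + np.log(Pr1)
W2 = np.linalg.solve(s2, u2)
W20 = -0.5 * u2 @ np.linalg.solve(s2, u2) + np.log(Pr2)

W_diff = W1 - W2        # expected [-2, -2]
W0_diff = W10 - W20     # expected 8.0 (log-prior terms cancel)

# Boundary: W_diff . x + W0_diff = 0  ->  -2*x1 - 2*x2 + 8 = 0  ->  x1 + x2 = 4
print(W_diff, W0_diff)
```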
The output of this code shows the two point clouds with the decision boundary drawn as a black line between them.
I hope this helps!
