File Exchange


Non Sorting Genetic Algorithm II (NSGA-II)

version 1.2 (432 KB) by Víctor Martínez-Cagigal
Readable and compact implementation of the Non Sorting Genetic Algorithm II (NSGA-II)

47 Downloads

Updated 25 Nov 2019

View License

This function performs the Non-dominated Sorting Genetic Algorithm II (NSGA-II) for minimizing continuous functions. The implementation is readable, computationally cheap, and compact (the algorithm only requires one file: NSGAIII.m). An 'example.m' script is provided to help users apply the implementation. The code is also highly commented to ease understanding. This implementation is based on the paper by Deb et al. (2002), "A fast and elitist multiobjective genetic algorithm: NSGA-II".
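As a sketch of typical usage (the function name, struct fields, and output names below are assumptions of mine; the actual interface is shown in 'example.m'), a run on the bi-objective Schaffer N.1 problem might look like:

```matlab
% Hypothetical usage sketch -- check example.m for the real interface.
% Schaffer N.1: minimize f1(x) = x^2 and f2(x) = (x-2)^2 for a scalar x.
MultiObj.fun     = @(x) [x(:,1).^2, (x(:,1)-2).^2];  % N-by-2 cost matrix
MultiObj.nVar    = 1;     % number of decision variables
MultiObj.var_min = -5;    % lower bound per variable
MultiObj.var_max =  5;    % upper bound per variable

params.Np     = 100;      % population size
params.maxgen = 100;      % number of generations

% Assumed call signature: returns the non-dominated solutions and their costs
[POS, POS_fit] = NSGAII(params, MultiObj);
plot(POS_fit(:,1), POS_fit(:,2), 'o');  % approximate Pareto front
```

The objective function is vectorized (it evaluates a whole N-by-nVar population at once), which is how the snippets in the comments below also treat the population.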

Cite As

Víctor Martínez-Cagigal (2020). Non Sorting Genetic Algorithm II (NSGA-II) (https://www.mathworks.com/matlabcentral/fileexchange/65494-non-sorting-genetic-algorithm-ii-nsga-ii), MATLAB Central File Exchange.

Comments and Ratings (17)

Sichen Lu

Hi Víctor,
Can I define the increment/step of a decision variable in this code?

Sichen Lu

Dear Víctor,
Thank you for your clarification!

Hi @Sichen Lu,

You don't need to load the data. That was only done in the examples to load the true Pareto fronts for comparison and visualization purposes. In a real problem, you would not have that data, since reaching that solution is precisely what you use the NSGA-II for.
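As a side note, for the Schaffer N.1 example the true Pareto front does not even have to be loaded from a .mat file, since it is known analytically (the variable names below are mine, not from the package):

```matlab
% Schaffer N.1: f1 = x^2, f2 = (x-2)^2; the Pareto-optimal set is x in [0,2].
x      = linspace(0, 2, 200)';
truePF = [x.^2, (x-2).^2];            % true Pareto front in objective space
plot(truePF(:,1), truePF(:,2), '-');  % overlay for visual comparison with the NSGA-II output
```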

Sichen Lu

Dear Víctor,
Thank you for submitting the code. I found your file very clean and helpful.
I wonder how I should change/modify "load('Schaffer.mat')". Why do I need to load the MATLAB data?
Many thanks!

Mahtab Munawar

Best code ever for handling integers.
Thanks a lot, sir.

Alexander Hagg

Awesome :)

Dear Alexander,

Thank you for pointing it out, but it was already solved! :) I included this when you suggested the Gaussian distribution:

mutatedPop = Q + ms.*repmat((var_max-var_min)',N,1).*randn(N,nVar); % Gaussian perturbation scaled by each variable's range
minVal = repmat(var_min',N,1);
maxVal = repmat(var_max',N,1);
mutatedPop(mutatedPop<minVal) = minVal(mutatedPop<minVal); % Clamp to lower bounds
mutatedPop(mutatedPop>maxVal) = maxVal(mutatedPop>maxVal); % Clamp to upper bounds

Regards

Alexander Hagg

I would like to post another correction, which only popped up after my fix. As var_max and var_min are used to determine the limits of the search space, my fix actually contains a bug.

mutStrength = 0.05;
mutatedPop = Q + mutStrength*repmat((var_max-var_min)',N,1).*randn(N,nVar);

Notice that the mutation operator, although it is now correctly perturbing the parents, can produce solutions that are not clamped between the var_min and var_max parameters. So here is a fix to my bugfix (vectorized):

mutStrength = 0.05;
mutatedPop = Q + mutStrength.*randn(N,nVar);

% Clamp population between minimum and maximum parameter values var_min
% and var_max
tooLarge = (mutatedPop > var_max');
clampedValues = tooLarge.*var_max';
mutatedPop(tooLarge) = clampedValues(tooLarge);

tooSmall = (mutatedPop < var_min');
clampedValues = tooSmall.*var_min';
mutatedPop(tooSmall) = clampedValues(tooSmall);

There might be a more elegant way to express this but at least it is readable.
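For what it's worth, the clamping above can indeed be written more compactly with MATLAB's element-wise min/max, relying on implicit expansion (R2016b or later); variable names are as in the snippet above:

```matlab
mutStrength = 0.05;
mutatedPop  = Q + mutStrength.*randn(N,nVar);  % Gaussian perturbation of the parents

% Clamp each column to its [var_min, var_max] bounds in one step
mutatedPop = min(max(mutatedPop, var_min'), var_max');
```

On releases older than R2016b, the same effect is obtained by wrapping var_min' and var_max' in repmat, as in the earlier snippets.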

Alexander Hagg

Glad to be of help! And very happy to see an active contributor :).

@Alexander Hagg, thank you for that ultra-useful comment! I was not aware of the Gaussian distribution approach until now. I have also tested your suggestion, and not only does it reach faster convergence, but it also yields better results, especially on functions like ZDT1. So I have implemented your suggestion for the mutation operator and updated the code (citing your comment, of course).

I appreciate it! :)

Alexander Hagg

Thanks for your submission. Using it for some experiments right now. Always great when people upload their code.

I noticed a mistake in your implementation compared to the original formulation of NSGA-II and most evolutionary algorithms. The mutation operator is supposed to take the old population and perturb it. This is usually done by drawing from a normal distribution and adding the mutation on top of the old population. However, you assign fresh random vectors to the population wherever mutation occurs.

Q = parent; % Children initialized
mutatedPop = repmat((var_max-var_min)',N,1).*rand(N,nVar) + repmat(var_min',N,1);
mut_idx = rand(N,nVar) < pm;
Q(mut_idx) = mutatedPop(mut_idx); % Children overwritten

This works on the Kursawe function you use by default in the example; however, it should take significantly longer than using correct mutations. Here is a suggestion for how you could improve your implementation: perturb the original population by adding a normally distributed perturbation on top of it. I have added a mutation strength (the standard deviation of the normal distribution) to allow parameterizing the strength of the mutation, which is often domain dependent (although 2-10% should work fine). (I am not forgetting crossover; I just left it out to keep this short.)

mutStrength = 0.05;
mutatedPop = Q + mutStrength*repmat((var_max-var_min)',N,1).*randn(N,nVar);

I've compared the performance of this mutation operator to that of your original implementation on the Kursawe function. I ran both implementations for 50 iterations and measured the average distance to the ground-truth Pareto front. Here is the figure: https://drive.google.com/open?id=1nxExexi3yqWfel-zje8btFnN1_OKJ5iG
Just for the post-Google era: your implementation takes about 40 iterations to converge, mine about 15. This difference will be much larger in high-dimensional parameter/genetic spaces, as your "random search" will not be able to find improvements easily in those vast search spaces.

Mermankey

Why does the algorithm not approach the optimal Pareto front of solutions?

Yuyang Wang

Works and easy to understand and modify.

Why should it reach better results? They are complementary methods, and they also have different parameters to adjust.

Andrés F. Abril

Why does the MOPSO generate better results than the NSGA-II? On the ZDT functions, the NSGA-II does not approach the optimal Pareto front of solutions.

Updates

1.2

The old mutation operator is replaced by a weighted normal-distribution approach, as suggested by Alexander Hagg, which achieves faster convergence.

1.1.1.0

True Pareto fronts for the examples are now uploaded.

MATLAB Release Compatibility
Created with R2017a
Compatible with any release
Platform Compatibility
Windows macOS Linux