Genetic Algorithms – NSGA-II

A popular multi-objective genetic algorithm is NSGA-II. Before we discuss the inner workings of the Non-Dominated Sorting Genetic Algorithm (NSGA-II), we need to understand a few pivotal concepts.

Genetic Algorithms

To understand the basics of how GAs work, you can start with this link here.

Multi Objective Optimization

When dealing with only one objective function, optimization feels like an easier task; however, when you have multiple objectives to satisfy, the complexity increases. For example, a single objective might be:

Maximize the output of a machine given a fixed budget.

However, there may be multiple objectives, such as:

Find a car with minimum cost but maximum comfort. What is interesting here is that these objectives conflict: as cars get more expensive, comfort is likely to increase as well, so minimizing cost works against maximizing comfort.

Non-Dominated (Pareto Optimal) & Dominated Solutions

Solution   Max (Z1, Z2)   Type of Solution
1          (10, 8)        Non-Dominated
2          (8, 6)         Dominated
3          (10, 5)        Dominated

Assuming there are two objectives (Z1, Z2) in a maximization problem, Pareto efficiency or Pareto optimality is a state of allocation in which no objective can be improved without making at least one other objective worse. Here (Z1 = 10, Z2 = 8) is such a solution: no other solution in the table is at least as good on both objectives. The Pareto front is the set of all Pareto-optimal solutions.
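Dominance between two solutions can be checked directly. A minimal sketch in Python (the function name `dominates` is mine, not from the article), using the three solutions from the table:

```python
def dominates(a, b):
    """Return True if solution a Pareto-dominates solution b under
    maximization: a is at least as good on every objective and
    strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# The three solutions from the table above
s1, s2, s3 = (10, 8), (8, 6), (10, 5)
print(dominates(s1, s2), dominates(s1, s3))   # True True: solutions 2 and 3 are dominated by 1
print(dominates(s2, s3), dominates(s3, s2))   # False False: 2 and 3 do not dominate each other
```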

When there are multiple objectives it becomes important to bring into consideration all the pareto optimal solutions. For example if in a maximization problem you have (Z1 = 12, Z2 = 1) and (Z1 = 15, Z2 = 2) as the pareto optimal solutions. In such a case the user would need the option to evaluate both these options and make a choice based on the priority of these objectives.

Elitist

Elitist selection is a selection strategy in which a limited number of individuals with the best fitness values are guaranteed a place in the next generation. That is, elitism involves copying a small proportion of the fittest candidates, unchanged, into the next generation.
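A minimal sketch of elitism, assuming a maximization fitness function (the helper name `elitist_survivors` is mine):

```python
def elitist_survivors(population, fitness, k):
    """Copy the k individuals with the best (largest) fitness values,
    unchanged, into the next generation."""
    return sorted(population, key=fitness, reverse=True)[:k]

# Keep the 2 fittest of 5 candidates (identity fitness, for illustration)
print(elitist_survivors([3, 1, 4, 1, 5], lambda x: x, 2))  # [5, 4]
```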

Now let’s begin!!!

Elitist Non-Dominated Sorting Genetic Algorithm (NSGA-II)

Non-dominated sorting is a ranking selection method. NSGA-II is similar to any other GA except for the way the selection operator works.

  • In NSGA-II, the steps of initialization and fitness evaluation work as usual: for the combined population of all parents and offspring, we calculate the fitness/objective value of every solution.
  • For selection:
    • We sort all solutions of the solution space according to their level of non-domination: the first front contains all non-dominated solutions, the second front contains solutions dominated only by members of the first front, and so on. If f1 and f2 are the fitness/objective functions being minimized, the Pareto frontier is the first front, containing the best solutions, and the next layer consists of solutions dominated only by that first front.
    • Assign Fitness rank based on the respective front the solution lies on. For Example:
Front   Fitness Rank
F1      1
F2      2
F3      3
F4      4
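The front-ranking step can be implemented with the fast non-dominated sort of Deb et al. A sketch for maximization (all names are mine); front 0 corresponds to fitness rank 1:

```python
def fast_non_dominated_sort(objs):
    """Sort solutions into Pareto fronts (maximization).

    objs: list of objective-value tuples, one per solution.
    Returns a list of fronts, each a list of solution indices;
    front 0 gets fitness rank 1, front 1 gets rank 2, and so on.
    """
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    n = len(objs)
    dominated = [[] for _ in range(n)]  # S_p: indices that solution p dominates
    count = [0] * n                     # n_p: how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(objs[p], objs[q]):
                dominated[p].append(q)
            elif dominates(objs[q], objs[p]):
                count[p] += 1
        if count[p] == 0:
            fronts[0].append(p)
    while fronts[-1]:
        nxt = []
        for p in fronts[-1]:
            for q in dominated[p]:
                count[q] -= 1
                if count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
    return fronts[:-1]

# Solutions 1, 2, 3 from the dominance table earlier
print(fast_non_dominated_sort([(10, 8), (8, 6), (10, 5)]))  # [[0], [1, 2]]
```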

Consider our population/solution space is:
{1,2,3,4,5,6,7,i,ii,iii,iv,v}

   Where: 
   Parent Population      {1,2,3,4,5,6,7} 
   Off-spring Population  {i,ii,iii,iv,v}

   Pareto Frontier 
   F1:           {1,2,iii,v}
   F2:           {3,5,i,iv}
   F3:           {4,7,ii}
   F4:           {6}

Let us define set P as the final set of selected solutions. At first this set is empty. As the parent population consists of 7 places, P has a maximum of 7 spots to fill.
> F1 has 4 solutions and these are the best solutions, so let P be {1,2,iii,v}.
> We still have 3 positions to fill; we can't include F2 directly as it has 4 solutions.
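The filling of P can be sketched as follows (the helper name `fill_by_fronts` is mine): whole fronts are accepted while they fit, and the first front that does not fit is returned so it can be truncated by crowding distance, described next.

```python
def fill_by_fronts(fronts, capacity):
    """Accept whole Pareto fronts in rank order while they fit;
    return (selected, overfull_front), where overfull_front must be
    truncated by crowding distance."""
    selected = []
    for front in fronts:
        if len(selected) + len(front) <= capacity:
            selected.extend(front)
        else:
            return selected, front
    return selected, []

# Fronts from the example above, with 7 parent spots to fill
fronts = [["1", "2", "iii", "v"], ["3", "5", "i", "iv"], ["4", "7", "ii"], ["6"]]
P, overfull = fill_by_fronts(fronts, 7)
print(P)         # ['1', '2', 'iii', 'v']
print(overfull)  # ['3', '5', 'i', 'iv']
```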

Crowding Distance (CD)

  • Calculate the Crowding Distance (CD) for each solution: crowding distance is a parameter that measures how densely populated the region around a solution is, i.e. the distance between the ith solution and its neighboring solutions. These steps are carried out for each front:
    • Initialize Distance to be 0 for all solutions of front F.
    • Sort these solutions based on the value of each objective function.
    • The distance of the boundary points is assigned as infinity.
    • For each of the remaining solutions, accumulate, objective by objective, the difference between the values of its two neighboring solutions, normalized by the objective's range.
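The steps above can be sketched as follows. Note that this uses the standard formulation (Deb et al.), which normalizes by each objective's range within the front; the article's worked example below instead assumes population-wide bounds and ranks in descending order, so the magnitudes and signs differ, but the relative ordering of the interior solutions (3 less crowded than iv) comes out the same.

```python
def crowding_distance(objs):
    """Crowding distance for one front.

    For each objective: sort the front, assign the boundary solutions an
    infinite distance, and add to each interior solution the difference
    between its two neighbours, normalized by the objective's range."""
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = objs[order[-1]][k] - objs[order[0]][k]
        if span == 0:
            continue
        for j in range(1, n - 1):
            if dist[order[j]] != float("inf"):
                dist[order[j]] += (objs[order[j + 1]][k] - objs[order[j - 1]][k]) / span
    return dist

# Front F2 from the worked example: solutions 3, 5, i, iv
print(crowding_distance([(10, 8), (5, 10), (15, 5), (8, 7)]))
# boundary solutions 5 and i get inf; solution 3 is less crowded than iv
```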

For the purpose of explanation assume that all these solutions fall on Front F2:

Solution   Max (Z1, Z2)   Rank k(Z1)   Rank k(Z2)   I(dk)(Z1)   I(dk)(Z2)   Final I(dk)
3          (10, 8)        2            2            -0.11       -0.21       -0.21
5          (5, 10)        4            1            Infinity    Infinity    Infinity
i          (15, 5)        1            4            Infinity    Infinity    Infinity
iv         (8, 7)         3            3            -0.16       -0.34       -0.34

In the above table, column 2 gives the objective values for the respective solutions in column 1. As our objective is to maximize, the ranking is carried out for both Z1 and Z2 in descending order. The crowding distance is then accumulated objective by objective, so the I(dk)(Z2) column already includes the Z1 contribution and equals the final value.

    For Solution 3, consider this example:
    Crowding Distance calculation:
    I(dk)(Z1) = 0 + (8 - 15)/(60 - 0) = -0.11
    (Assuming, over all solutions, Max Z1 = 60 and Min Z1 = 0)
    I(dk)(Z2) = -0.11 + (7 - 10)/(30 - 2) = -0.21
    (Assuming, over all solutions, Max Z2 = 30 and Min Z2 = 2)
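The arithmetic above can be replayed directly (the Max/Min bounds for Z1 and Z2 are the article's assumptions; the unrounded total differs slightly from the table because the article truncates intermediate steps to two decimal places):

```python
# Crowding-distance contribution for solution 3 on front F2,
# using the assumed population-wide bounds Z1 in [0, 60], Z2 in [2, 30]
d = 0.0
d += (8 - 15) / (60 - 0)   # Z1 neighbours of solution 3: iv (8) and i (15)
d += (7 - 10) / (30 - 2)   # Z2 neighbours of solution 3: iv (7) and 5 (10)
print(round(d, 4))  # -0.2238, truncated step by step in the article to -0.11 then -0.21
```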

Now, consider F2 ordered according to the final crowding distances (Final I(dk)) in the table: we get {5, i, 3, iv}.

Going back to set P, which is currently {1,2,iii,v} and still has three spots to fill: based on the crowding distance we pick {5,i,3}, the most diverse subset of F2.
  • Tournament Selection: the final step of selection. At this point P = {1,2,iii,v,5,i,3}. For tournament selection, consider these randomly created pairs as an example:
Solution Pairs   Fitness Rank (FR)   Crowding Distance (CD)   Criteria   Final Selection
5, 3             2, 2                Infinity, -0.21          CD         5
2, 5             1, 2                                         FR         2
iii, i           1, 2                                         FR         iii
  • Make k randomly selected pairs of solutions from set P; here k = 7.
    • For each pair, compare the fitness rank and the crowding distance.
    • If the pair has different fitness ranks (e.g. 2, 5), select the solution with the better (lower) rank.
    • If the solutions in the pair have the same fitness rank, i.e. they belong to the same Pareto front, compare the crowding distances and pick the one with the larger distance (e.g. 5 in the pair 5, 3).
  • After selection, the rest of the steps, namely crossover and mutation, follow as usual.
  • The termination criteria are also the same as defined in the article about GAs.
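The crowded-comparison rule used in the tournament can be sketched as follows (the function name is mine; the article does not give CD values for solutions 2 and iii, so placeholder values are used that do not affect these comparisons):

```python
def crowded_compare(a, b, rank, cdist):
    """Crowded-comparison operator: the solution with the better (lower)
    fitness rank wins; on a rank tie, the larger crowding distance wins."""
    if rank[a] != rank[b]:
        return a if rank[a] < rank[b] else b
    return a if cdist[a] >= cdist[b] else b

# Ranks and crowding distances for the pairs in the table above
rank = {"5": 2, "3": 2, "2": 1, "iii": 1, "i": 2}
cdist = {"5": float("inf"), "3": -0.21, "2": 0.0, "iii": 0.0, "i": float("inf")}
print(crowded_compare("5", "3", rank, cdist))    # 5   (same rank, larger CD)
print(crowded_compare("2", "5", rank, cdist))    # 2   (better rank)
print(crowded_compare("iii", "i", rank, cdist))  # iii (better rank)
```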

About Us

Data science discovery is a step on the path of your data science journey. Please follow us on LinkedIn to stay updated.

About the writers:

  • Ankit Gadi: A knack and passion for data science, coupled with a strong foundation in Operations Research and Statistics, helped me embark on my data science journey.
