1 Introduction
Automatic programming [1] is a machine learning technique by which computer programs are generated automatically in any arbitrary language. Swarm Programming (SP) [3] is an automatic programming technique that uses swarm intelligence (SI) algorithms as search engines or learning algorithms. Grammar-based SP is a type of SP in which a Context-Free Grammar (CFG) is used to generate computer programs in a target language. Genetic Programming (GP) [2] is an evolutionary algorithm in which a tree-structured genome is used to represent a computer program and a Genetic Algorithm (GA) is used as the learning algorithm. Grammatical Evolution (GE) [4, 5] is a variant of grammar-based GP (GGP) [6] in which a linear genome, i.e., an array of integer codons, represents the genotype, and the Backus-Naur Form (BNF) of a CFG is used to generate computer programs (i.e., phenotypes) from the genotype. Generally, a variable-length GA is used as the learning algorithm in GE. SP uses a GP-like tree-structured genome to represent a computer program, with SI algorithms as learning algorithms. O. Roux and C. Fonlupt [7] proposed ant programming, in which the Ant Colony Optimizer (ACO) [8] was used to generate computer programs. D. Karaboga et al. [9] proposed Artificial Bee Colony Programming (ABCP) for symbolic regression, with the Artificial Bee Colony (ABC) algorithm [10] as the learning algorithm. A. Mahanipour and H. Nezamabadi-pour [11] proposed Gravitational Search Programming (GSP), in which the Gravitational Search Algorithm (GSA) [12] was used as the learning algorithm. Grammar-based SP methods are variants of GE in which SI algorithms serve as search engines or learning algorithms through genotype-to-phenotype mapping using the BNF of a CFG. Grammatical Swarm (GS) [13, 14, 15], Grammatical Bee Colony (GBC) [17], and the Grammatical Fireworks algorithm (GFWA) [18] are grammar-based SP methods. Particle Swarm Optimization (PSO) [16] was used as the search engine in GS, the ABC algorithm in GBC, and the Fireworks Algorithm (FWA) [19] in GFWA. To date, to the best of the author's knowledge, Moth-Flame Optimization (MFO) [20] and the Whale Optimization Algorithm (WOA) [21] have not been used in automatic programming. Therefore, in the current work, two grammar-based SP algorithms, GMFO and GWO, are proposed. In the GMFO algorithm, MFO is used as the search engine to generate computer programs automatically through genotype-to-phenotype mapping using the BNF of a CFG; similarly, WOA is used as the search engine in GWO. The two proposed methods are applied to the Santa Fe Ant Trail, symbolic regression, and 3-input multiplexer problems, and their results are compared with those of GFWA and GBC. The experimental results demonstrate that the proposed methods can be used to generate computer programs automatically in any arbitrary language.

2 Materials & Methods
In this work, two grammar-based SP methods, GMFO and GWO, are proposed. In GMFO, the MFO algorithm is used as the search engine or learning algorithm to generate computer programs automatically through genotype-to-phenotype mapping. Each individual is an array of integer codons within a specified range; the individual represents the genotype, and the computer program derived from the genotype using the BNF of the CFG is known as the phenotype. The primary concern is therefore to define a problem-specific CFG in BNF. An example of a CFG in BNF is given below:
1. <expr> ::= (<expr><op><expr>) (0) | <var> (1)
2. <op>   ::= + (0) | - (1) | * (2) | / (3)
3. <var>  ::= x1 (0) | x2 (1)
The i-th individual of the search engine, i.e., a set of integer codons, is initialized as follows:

x_i = (x_i1, x_i2, ..., x_iD), where each codon x_ij is a random integer drawn from the codon range.   (1)
A mapping process selects rule numbers from the codons during the derivation of a computer program as follows:

rule = (codon integer value) MOD (number of rules for the current non-terminal)
The representation of the genotype, the phenotype, and the genotype-to-phenotype mapping is given in Figure 1. Similar to GMFO, in GWO the position of a whale is a set of integer codons in the same range, and the same genotype-to-phenotype mapping process as in GMFO is used to generate computer programs. During the derivation process, if the derivation runs out of codons, the process restarts from the beginning of the genome; this is called wrapping. If any non-terminal remains in the derived program after a certain number of wrappings, the corresponding individual is marked as invalid. An invalid individual is replaced by a valid individual later in the search process.
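As an illustration (a sketch, not the authors' implementation), the genotype-to-phenotype mapping with the example grammar above, the MOD rule, and the wrapping mechanism can be expressed as:

```python
import re

# Example grammar from above, as production lists indexed by rule number.
GRAMMAR = {
    "<expr>": ["(<expr><op><expr>)", "<var>"],
    "<op>":   ["+", "-", "*", "/"],
    "<var>":  ["x1", "x2"],
}

def map_genotype(codons, start="<expr>", max_wraps=3):
    """Derive a program from integer codons; returns None for an invalid individual."""
    program = start
    consumed = 0
    budget = max_wraps * len(codons)          # wrapping re-reads the codons from the start
    while True:
        m = re.search(r"<[a-z]+>", program)   # leftmost non-terminal
        if m is None:
            return program                    # fully derived: a valid phenotype
        if consumed >= budget:
            return None                       # non-terminals left after max wraps: invalid
        choices = GRAMMAR[m.group(0)]
        rule = codons[consumed % len(codons)] % len(choices)  # rule = codon MOD #rules
        program = program[:m.start()] + choices[rule] + program[m.end():]
        consumed += 1
```

For example, the codons [0, 1, 0, 0, 1, 1] derive `(x1+x2)`: the first codon picks rule 0 of `<expr>`, the second picks `<var>` for the leftmost `<expr>`, and so on.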
The search engine or learning algorithm is the other important part of GMFO and GWO. As mentioned above, MFO and WOA are used as the search engines in GMFO and GWO respectively. The MFO algorithm is based on transverse orientation, i.e., the navigation method of moths in nature. Moths fly at night by maintaining a fixed angle with respect to the moon, which allows them to travel in a straight line over long distances; however, they become trapped in a useless or deadly spiral around artificial light sources. This phenomenon is modelled in the MFO algorithm; a detailed description can be obtained from [20]. WOA is another nature-inspired metaheuristic algorithm, which mimics the social behaviour of humpback whales. Humpback whales live alone or in groups, and their favourite prey are krill and small schools of fish. Their foraging behaviour, the bubble-net feeding method, is performed by creating bubbles along a circle. The bubble-net attacking method (i.e., the exploitation phase) has two steps, namely the shrinking-encircling mechanism and the spiral position update, while the search for prey provides exploration of the search space. The details of WOA can be obtained from [21].

3 Experimental Setup
3.1 Benchmark Problems
Three benchmark problems, namely the Santa Fe Ant Trail (SFAT), symbolic regression, and the 3-input multiplexer, are chosen for the experiments. The objective of the SFAT problem is to find a program by which an ant eats all 89 pieces of food placed on a 32x32 grid within 600 time steps. The target function for the symbolic regression problem is f(x) = x + x^2 + x^3 + x^4, and the fitness cases are generated randomly in the range [-1, 1]. The 3-input multiplexer problem has 2^3 = 8 fitness cases. Detailed descriptions and the problem-specific grammars can be obtained from [17, 18].
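For concreteness, the fitness cases described above can be generated as follows. This is a sketch under stated assumptions: the number of regression sample points (20) and the multiplexer line convention (x1 as the address bit) are illustrative choices, not taken from the text.

```python
import random
from itertools import product

def target(x):
    # Target function of the symbolic regression problem: f(x) = x + x^2 + x^3 + x^4
    return x + x**2 + x**3 + x**4

# Symbolic regression: sample points drawn randomly from [-1, 1]
# (20 points is an assumed count; the text does not state it).
regression_cases = [(x, target(x)) for x in
                    (random.uniform(-1.0, 1.0) for _ in range(20))]

# 3-input multiplexer: all 2^3 = 8 input combinations.
# Assumed convention: x1 is the address bit selecting between data lines x2 and x3.
mux_cases = [((x1, x2, x3), x2 if x1 else x3)
             for x1, x2, x3 in product((0, 1), repeat=3)]
```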
3.2 Parameter Settings
The parameters of GMFO are set as follows: number of search agents (N) = , dimension = . The parameters of GWO are set as follows: number of search agents (N) = , dimension = . The other parameters of GMFO and GWO are dynamically controlled as in MFO [20] and WOA [21]. As the objective function is not evaluated for invalid individuals in these algorithms, the maximum number of generations or iterations is not fixed for the comparative study. Instead, each of the GFWA, GBC, GMFO, and GWO algorithms is allowed a maximum number of function evaluations (FEs) in a single run, and all algorithms terminate when they reach either the maximum FEs or the target error. The target errors are set to 0, 0.01, and 0 for the ant trail, symbolic regression, and 3-multiplexer problems respectively. The numbers of wrappings are set to , , and for the ant trail, symbolic regression, and 3-multiplexer problems respectively.
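The termination logic above amounts to the following bookkeeping (an illustrative sketch, not the authors' code; `search_step` and `evaluate` are hypothetical placeholders, and the FE budget is a parameter because its value is not given in the text):

```python
def run(search_step, evaluate, max_fes, target_error):
    """Run until either the FE budget or the target error is reached."""
    fes = 0
    best = float("inf")
    while fes < max_fes and best > target_error:
        program = search_step()      # next candidate phenotype from the search engine
        if program is None:          # invalid individual: not evaluated, so it
            continue                 # consumes no function evaluation
        error = evaluate(program)
        fes += 1
        best = min(best, error)
    return best, fes
```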
3.3 PC Configuration

- Operating System: Windows 10 Pro
- CPU: Intel(R) Core(TM) i7-9700K @ 3.6 GHz
- RAM: 64 GB
- Software: Matlab 2018a
4 Results & Discussion
The proposed GMFO and GWO algorithms are applied to the Santa Fe Ant Trail (SFAT), symbolic regression, and 3-input multiplexer problems. The experiments are repeated 30 times independently for each algorithm. The mean and standard deviation of the best-run-errors over the 30 independent runs are given in Table 1. The number of successful runs and the success rate (in %) over the 30 independent runs are given in Table 2. The success rate (SR) is calculated as follows:

SR = (number of successful runs / total number of runs) x 100

The mean and standard deviation of the FEs taken by the successful runs are given in Table 3. The results of GFWA and GBC are obtained from [18], as the current work is part of the same project.
Table 1: Mean (standard deviation) of the best-run-errors over 30 independent runs.

Algorithms | Santa Fe Ant Trail | Symbolic Regression | 3-Multiplexer
GFWA       | -                  | 6.65 (7.246)        | -
GBC        | -                  | -                   | 0.7 (0.4661)
GMFO       | -                  | -                   | -
GWO        | 23.57 (17.2880)    | -                   | -
Table 2: Number of successful runs (success rate in %) over 30 independent runs.

Algorithms | Santa Fe Ant Trail | Symbolic Regression | 3-Multiplexer
GFWA       | -                  | 15 (50.00%)         | -
GBC        | -                  | -                   | 9 (30.00%)
GMFO       | 2 (6.67%)          | -                   | -
GWO        | -                  | -                   | -
Table 3: Mean (standard deviation) of the FEs taken by the successful runs.

Algorithms | Santa Fe Ant Trail  | Symbolic Regression  | 3-Multiplexer
GFWA       | -                   | -                    | -
GBC        | -                   | -                    | 23549 (10420.00)
GMFO       | 28224.9 (6812.3395) | 21586.9 (13097.6074) | -
GWO        | -                   | -                    | -
From Table 1, it is observed that GWO performs better than the other algorithms on the SFAT problem, GFWA performs best on the symbolic regression problem, and GBC performs best on the 3-multiplexer problem. Comparing GMFO with GWO, GWO provides higher accuracy than GMFO on the SFAT and symbolic regression problems, whereas GMFO provides higher accuracy than GWO only on the 3-multiplexer problem.
From Table 2, it is observed that the success rate of GMFO is higher than that of all the other algorithms for the SFAT problem. GFWA provides a higher success rate than the others for the regression problem, whereas GBC provides a higher success rate than the others for the multiplexer problem. Comparing GMFO and GWO, GMFO has a higher success rate than GWO for the SFAT and multiplexer problems, and there is a tie for the regression problem.
From Table 3, it is observed that the mean FEs taken by GMFO are lower than those of the other algorithms for the SFAT and regression problems, whereas the mean FEs taken by GBC are the lowest for the multiplexer problem.
The computer programs evolved by GMFO and GWO are given below:
The successful ant program evolved by GMFO (the ant eats all 89 pieces of food):
if(foodahead()) if(foodahead()) if(foodahead()) if(foodahead()) if(foodahead()) move(); else if(foodahead()) if(foodahead()) left(); else if(foodahead()) if(foodahead()) left(); else left(); end; else if(foodahead()) right(); else if(foodahead()) if(foodahead()) if(foodahead()) left(); else left(); end; else move(); end; else move(); end; end; end; end; else if(foodahead()) if(foodahead()) if(foodahead()) if(foodahead()) if(foodahead()) right(); else left(); end; else left(); end; else move(); end; else left(); end; else move(); end; end; end; else right(); end; else if(foodahead()) move(); else move(); end; end; else left(); end; else left(); end; move(); left(); if(foodahead()) move(); else if(foodahead()) right(); else right(); end; end; right();
The ant program evolved by GWO (the ant eats only some of the 89 pieces of food):
if(foodahead()) left(); else right(); end; right(); if(foodahead()) move(); else left(); end; move(); left();
A successful program evolved by GMFO for the symbolic regression problem (absolute error = 0):
plus(times(x,plus(times(x,plus(x,times(x,times(pdivide(x,x),x)))), x)),x)
A successful program evolved by GWO for the symbolic regression problem (absolute error = 0):
times(plus(plus(times(minus(times(x,x),pdivide(x,x)),x),x),x), plus(x,pdivide(x,x)))
A successful program evolved by GMFO for the 3-multiplexer problem (absolute error = 0):
nor(nor(nor(x3,x1),nor(nor(x1,nor(x1,x1)),x2)),nor(nand(nor(x2,x1), nor(x1,x1)),x3))
A program evolved by GWO for the 3-multiplexer problem (absolute error = 1):
nand(or(x3,x1),x2)
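As a sanity check (not part of the paper), the two evolved regression programs can be evaluated directly. With protected division (`pdivide`, assumed here to return 1 when the denominator is zero), both reduce algebraically to the target x + x^2 + x^3 + x^4:

```python
def pdivide(a, b):
    # protected division: division by zero yields 1
    return a / b if b != 0 else 1.0

def plus(a, b):  return a + b
def minus(a, b): return a - b
def times(a, b): return a * b

def gmfo_program(x):
    # program evolved by GMFO, verbatim from above
    return plus(times(x, plus(times(x, plus(x, times(x, times(pdivide(x, x), x)))),
                              x)), x)

def gwo_program(x):
    # program evolved by GWO, verbatim from above
    return times(plus(plus(times(minus(times(x, x), pdivide(x, x)), x), x), x),
                 plus(x, pdivide(x, x)))

def target(x):
    return x + x**2 + x**3 + x**4

for x in (-1.0, -0.5, 0.25, 1.0):
    assert abs(gmfo_program(x) - target(x)) < 1e-12
    assert abs(gwo_program(x) - target(x)) < 1e-12
```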
From the discussion of the results, it is found that no single algorithm performs better than all the other algorithms on every problem in this study. The presented results provide experimental evidence that the proposed GMFO and GWO algorithms can be used for automatic computer program generation in any arbitrary language.
5 Conclusion
This paper presents two grammar-based swarm programming methods, namely GMFO and GWO. The proposed methods are applied to the SFAT, symbolic regression, and 3-input multiplexer problems. The experimental results demonstrate that the proposed methods can be used to generate computer programs automatically in any arbitrary language. In this study, the basic versions of MFO and WOA are utilized as search engines or learning algorithms with genotype-to-phenotype mapping; updated versions of these algorithms could be used to obtain better performance in automatic computer program generation. In the future, GMFO and GWO can be applied to real-world problems such as data classification and regression analysis.
References
 (1) Rich C, Waters R.C (1988) Automatic Programming: Myths and Prospects. IEEE Computer 21(8):40–51
 (2) Koza J.R (1992) Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press
 (3) Olmo J.L, Romero J.R, Ventura S (2014) Swarm-based metaheuristics in automatic programming: a survey. WIREs Data Mining Knowl Discov. doi:10.1002/widm.1138
 (4) Ryan C, Collins J.J, O'Neill M (1998) Grammatical Evolution: Evolving Programs for an Arbitrary Language. In: Banzhaf W, Poli R, Schoenauer M, Fogarty T.C (eds.) EuroGP 1998, LNCS, vol. 1391, Springer, Heidelberg:83–95
 (5) O'Neill M, Ryan C (2001) Grammatical Evolution. IEEE Transactions on Evolutionary Computation 5(4):349–358
 (6) Mckay R.I, Hoai N.X, Whigham P.A, Shan Y, O'Neill M (2010) Grammar-based genetic programming: a survey. Genetic Programming and Evolvable Machines 11:365–396
 (7) Roux O, Fonlupt C (2000) Ant programming: or how to use ants for automatic programming. In: International Conference on Swarm Intelligence (ANTS):121–129
 (8) Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern B Cybern 26:29–41. doi:10.1109/3477.484436
 (9) Karaboga D, Ozturk C, Karaboga N, Gorkemli B (2012) Artificial bee colony programming for symbolic regression. Inform Sci 209:1–15. doi:10.1016/j.ins.2012.05.002
 (10) Karaboga D (2005) An Idea Based On Honey Bee Swarm For Numerical Optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department
 (11) Mahanipour A, Nezamabadi-pour H (2019) GSP: an automatic programming technique with gravitational search algorithm. Appl Intell 49:1502–1516. doi:10.1007/s10489-018-1327-7
 (12) Rashedi E, Nezamabadi-pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179:2232–2248
 (13) O'Neill M, Brabazon A (2004) Grammatical swarm. In: Genetic and Evolutionary Computation Conference (GECCO):163–174
 (14) O'Neill M, Brabazon A (2006) Grammatical Swarm: The Generation of Programs by Social Programming. Natural Computing 5(4):443–462
 (15) O'Neill M, Leahy F, Brabazon A (2006) Grammatical swarm: a variable-length particle swarm algorithm. In: Swarm Intelligent Systems, Studies in Computational Intelligence, Springer:59–74
 (16) Kennedy J, Eberhart R (1995) Particle Swarm Optimization. In: IEEE International Conference on Neural Networks, Perth, Australia
 (17) Si T, De A, Bhattacharjee A.K (2013) Grammatical Bee Colony. In: Panigrahi B.K et al. (eds.) SEMCCO 2013, Part I, LNCS 8297:436–445
 (18) Si T (2016) Grammatical Evolution Using Fireworks Algorithm. In: Pant M et al. (eds.) Proceedings of Fifth International Conference on Soft Computing for Problem Solving, Advances in Intelligent Systems and Computing 436. doi:10.1007/978-981-10-0448-3_4
 (19) Tan Y, Zhu Y (2010) Fireworks Algorithm for Optimization. In: Tan Y et al. (eds.) ICSI 2010, Part I, LNCS 6145:355–364, Springer-Verlag, Berlin Heidelberg
 (20) Mirjalili S (2015) Moth-Flame Optimization Algorithm: A Novel Nature-inspired Heuristic Paradigm. Knowledge-Based Systems. doi:10.1016/j.knosys.2015.07.006
 (21) Mirjalili S, Lewis A (2016) The Whale Optimization Algorithm. Advances in Engineering Software 95:51–67