`library(CGNM)`

- Wish to fit a relatively complex parameterized model/curve/function to data but are unsure of an appropriate initial guess (e.g., the "start" argument in the nls function). => CGNM searches for the parameters from multiple initial guesses, so the user can specify a rough initial range instead of a point initial guess.
- When the practical identifiability (whether or not there is only one best-fit parameter set) is unknown. => CGNM finds multiple sets of best-fit parameters; hence, if the parameters are not practically identifiable, the multiple best-fit parameters found by CGNM will not converge to a point.
- When the model is a black box (i.e., cannot be explicitly or easily written out as a "formula") and the model may not be continuous with respect to the parameters. => CGNM makes minimal assumptions on the model, so all the user needs to provide is a function that takes the model parameters and returns the simulation (what happens inside the function, CGNM does not care about).

- When you already know approximately where the best-fit parameter is and you just need to find one best-fit parameter. => Simply use nls; it will be faster.
- When the model is relatively simple and computational cost is not an issue. => CGNM is designed to reduce the number of model evaluations during parameter estimation, so you may not see much advantage compared to conventional multi-start methods (e.g., repeatedly using nls from various "start" values).

To illustrate the use of CGNM, here we show how CGNM can be used to estimate two sets of best-fit parameters of a pharmacokinetic model for an orally administered drug (a situation known as flip-flop kinetics).
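As a brief sketch of why two distinct best-fit parameter sets are expected here, write \(k_\mathrm{e}=CL/V_1\) for the elimination rate constant; the oral one-compartment model used below is \[ C_\mathrm{p}(t)=\frac{k_\mathrm{a}\,F\cdot\mathrm{Dose}}{V_1\,(k_\mathrm{a}-k_\mathrm{e})}\left(e^{-k_\mathrm{e}t}-e^{-k_\mathrm{a}t}\right), \] and this expression is unchanged under the substitution \((k_\mathrm{a},\,V_1,\,CL)\mapsto(CL/V_1,\;CL/k_\mathrm{a},\;CL)\), which swaps the absorption and elimination rate constants. Hence, whenever \(k_\mathrm{a}\neq k_\mathrm{e}\), exactly two parameter sets produce the identical concentration curve.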

```
model_function=function(x){

  observation_time=c(0.1,0.2,0.4,0.6,1,2,3,6,12)
  Dose=1000
  F=1

  ka=10^x[1]
  V1=10^x[2]
  CL_2=10^x[3]
  t=observation_time

  Cp=ka*F*Dose/(V1*(ka-CL_2/V1))*(exp(-CL_2/V1*t)-exp(-ka*t))

  log10(Cp)
}
```

`observation=log10(c(4.91, 8.65, 12.4, 18.7, 24.3, 24.5, 18.4, 4.66, 0.238))`

Here we have specified the lower and upper bounds of the range from which the initial guesses are drawn.

```
CGNM_result=Cluster_Gauss_Newton_method(nonlinearFunction=model_function,
targetVector = observation,
initial_lowerRange = c(-2,-2,-2), initial_upperRange = c(2,2,2), saveLog=TRUE, num_minimizersToFind = 500)
#> [1] "checking if the nonlinearFunction can be evaluated at the initial_lowerRange"
#> [1] "Evaluation Successful"
#> [1] "checking if the nonlinearFunction can be evaluated at the initial_upperRange"
#> [1] "Evaluation Successful"
#> [1] "checking if the nonlinearFunction can be evaluated at the (initial_upperRange+initial_lowerRange)/2"
#> [1] "WARNING: nonlinearFunction evaluation at (initial_upperRange+initial_lowerRange)/2 NOT Successful."
#> [1] "Generating initial cluster. 498 out of 500 done"
#> [1] "Generating initial cluster. 500 out of 500 done"
#> [1] "Iteration:1 Median sum of squares residual=3.32738991108342"
#> [1] "Iteration:2 Median sum of squares residual=2.05925909328566"
#> [1] "Iteration:3 Median sum of squares residual=0.927177322210779"
#> [1] "Iteration:4 Median sum of squares residual=0.826709624370546"
#> [1] "Iteration:5 Median sum of squares residual=0.208309634832506"
#> [1] "Iteration:6 Median sum of squares residual=0.0101751539334935"
#> [1] "Iteration:7 Median sum of squares residual=0.00734944340121696"
#> [1] "Iteration:8 Median sum of squares residual=0.0073492355434049"
#> [1] "Iteration:9 Median sum of squares residual=0.00734923408846931"
#> [1] "Iteration:10 Median sum of squares residual=0.00734923408255183"
#> [1] "Iteration:11 Median sum of squares residual=0.0073492340824076"
#> [1] "Iteration:12 Median sum of squares residual=0.00734923408238793"
#> [1] "Iteration:13 Median sum of squares residual=0.00734923408238322"
#> [1] "Iteration:14 Median sum of squares residual=0.00734923408238244"
#> [1] "Iteration:15 Median sum of squares residual=0.00734923408238119"
#> [1] "Iteration:16 Median sum of squares residual=0.00734923408238107"
#> [1] "Iteration:17 Median sum of squares residual=0.00734923408238102"
#> [1] "Iteration:18 Median sum of squares residual=0.00734923408238093"
#> [1] "Iteration:19 Median sum of squares residual=0.00734923408238092"
#> [1] "Iteration:20 Median sum of squares residual=0.0073492340823809"
#> [1] "Iteration:21 Median sum of squares residual=0.00734923408238086"
#> [1] "Iteration:22 Median sum of squares residual=0.00734923408238085"
#> [1] "Iteration:23 Median sum of squares residual=0.00734923408238084"
#> [1] "Iteration:24 Median sum of squares residual=0.00734923408238083"
#> [1] "Iteration:25 Median sum of squares residual=0.00734923408238083"
```

```
head(acceptedApproximateMinimizers(CGNM_result))
#> [,1] [,2] [,3]
#> [1,] -0.03315193 1.280397 0.9946394
#> [2,] -0.28575783 1.027791 0.9946394
#> [3,] -0.28575781 1.027791 0.9946394
#> [4,] -0.03315194 1.280397 0.9946394
#> [5,] -0.28576125 1.027785 0.9946409
#> [6,] -0.28575781 1.027791 0.9946394
```

```
CGNM_bootstrap=Cluster_Gauss_Newton_Bootstrap_method(CGNM_result, nonlinearFunction=model_function)
#> [1] "checking if the nonlinearFunction can be evaluated at the initial_lowerRange"
#> [1] "Evaluation Successful"
#> [1] "checking if the nonlinearFunction can be evaluated at the initial_upperRange"
#> [1] "Evaluation Successful"
#> [1] "checking if the nonlinearFunction can be evaluated at the (initial_upperRange+initial_lowerRange)/2"
#> [1] "WARNING: nonlinearFunction evaluation at (initial_upperRange+initial_lowerRange)/2 NOT Successful."
#> Warning in dir.create(saveFolderName): 'CGNM_log_bootstrap' already exists
#> [1] "Generating initial cluster. 200 out of 200 done"
#> [1] "Iteration:1 Median sum of squares residual=0.0108969313242831"
#> [1] "Iteration:2 Median sum of squares residual=0.0105758136719659"
#> [1] "Iteration:3 Median sum of squares residual=0.0105754961410758"
#> [1] "Iteration:4 Median sum of squares residual=0.0105754951543413"
#> [1] "Iteration:5 Median sum of squares residual=0.0105754951543413"
#> [1] "Iteration:6 Median sum of squares residual=0.0105754951253053"
#> [1] "Iteration:7 Median sum of squares residual=0.010575492023262"
#> [1] "Iteration:8 Median sum of squares residual=0.010575492023262"
#> [1] "Iteration:9 Median sum of squares residual=0.010575492023262"
#> [1] "Iteration:10 Median sum of squares residual=0.0105754918922891"
#> [1] "Iteration:11 Median sum of squares residual=0.0105754914182058"
#> [1] "Iteration:12 Median sum of squares residual=0.0105754911695478"
#> [1] "Iteration:13 Median sum of squares residual=0.0105754911695478"
#> [1] "Iteration:14 Median sum of squares residual=0.0105754911564162"
#> [1] "Iteration:15 Median sum of squares residual=0.0105754910807011"
#> [1] "Iteration:16 Median sum of squares residual=0.0105754909244216"
#> [1] "Iteration:17 Median sum of squares residual=0.0105754909244216"
#> [1] "Iteration:18 Median sum of squares residual=0.0105754909244216"
#> [1] "Iteration:19 Median sum of squares residual=0.0105754909130323"
#> [1] "Iteration:20 Median sum of squares residual=0.0105754909130323"
#> [1] "Iteration:21 Median sum of squares residual=0.0105754909130323"
#> [1] "Iteration:22 Median sum of squares residual=0.0105754909122759"
#> [1] "Iteration:23 Median sum of squares residual=0.0105754909122759"
#> [1] "Iteration:24 Median sum of squares residual=0.0105754909116818"
#> [1] "Iteration:25 Median sum of squares residual=0.0105754909116818"
```

To use the plot functions, the user needs to load ggplot2 manually.

`library(ggplot2)`

Despite the robustness of the algorithm, not all approximate minimizers converge, so here we visually inspect how many of the approximate minimizers have an SSR similar to the minimum SSR. Currently, the algorithm automatically chooses the "acceptable" approximate minimizers based on Grubbs' test for outliers. If for whatever reason this criterion is not satisfactory, the user can manually set the indices of the acceptable approximate minimizers.

`plot_Rank_SSR(CGNM_result)`

`plot_paraDistribution_byHistogram(CGNM_bootstrap, ParameterNames=c("Ka","V1","CL_2"), ReparameterizationDef=c("10^x1","10^x2","10^x3"), bins = 50)`

`plot_goodnessOfFit(CGNM_result, plotType = 1, independentVariableVector = c(0.1,0.2,0.4,0.6,1,2,3,6,12), plotRank = seq(1,50))`

`plot_goodnessOfFit(CGNM_bootstrap, plotType = 1, independentVariableVector = c(0.1,0.2,0.4,0.6,1,2,3,6,12))`

```
plot_profileLikelihood(c("CGNM_log","CGNM_log_bootstrap"))
#> [1] "log saved in /private/var/folders/n8/mrjc8j9j45x93m7t_hmvg9nm0000gn/T/RtmpzLLDHR/Rbuild9b144a4bc6f5/CGNM/vignettes/CGNM_log is used to draw SSR/likelihood surface"
#> [1] "log saved in /private/var/folders/n8/mrjc8j9j45x93m7t_hmvg9nm0000gn/T/RtmpzLLDHR/Rbuild9b144a4bc6f5/CGNM/vignettes/CGNM_log_bootstrap is used to draw SSR/likelihood surface"
```

```
plot_2DprofileLikelihood("CGNM_log", numBins = 50, ParameterNames=c("log10(Ka)","log10(V1)","log10(CL_2)"), ReparameterizationDef = c("x1","x2","x3"))
#> [1] "log saved in /private/var/folders/n8/mrjc8j9j45x93m7t_hmvg9nm0000gn/T/RtmpzLLDHR/Rbuild9b144a4bc6f5/CGNM/vignettes/CGNM_log is used to draw SSR/likelihood surface"
#> Warning: stat_contour(): Zero contours were generated
#> Warning in min(x): no non-missing arguments to min; returning Inf
#> Warning in max(x): no non-missing arguments to max; returning -Inf
#> Warning: stat_contour(): Zero contours were generated
#> Warning in min(x): no non-missing arguments to min; returning Inf
#> Warning in max(x): no non-missing arguments to max; returning -Inf
#> Warning: stat_contour(): Zero contours were generated
#> Warning in min(x): no non-missing arguments to min; returning Inf
#> Warning in max(x): no non-missing arguments to max; returning -Inf
```

For the complete description and comparison with conventional algorithms, please see https://doi.org/10.1007/s11081-020-09571-2:

Aoki, Y., Hayami, K., Toshimoto, K., & Sugiyama, Y. (2020). Cluster Gauss–Newton method. Optimization and Engineering, 1-31.

The Cluster Gauss-Newton method is an algorithm for obtaining multiple minimisers of nonlinear least squares problems \[ \min_{\boldsymbol{x}}|| \boldsymbol{f}(\boldsymbol x)-\boldsymbol{y}^*||_2^{\,2} \] which do not have a unique solution (global minimiser), that is to say, there exist \(\boldsymbol x^{(1)}\neq\boldsymbol x^{(2)}\) such that \[ \min_{\boldsymbol{x}}|| \boldsymbol{f}(\boldsymbol x)-\boldsymbol{y}^*||_2^{\,2}=|| \boldsymbol{f}(\boldsymbol x^{(1)})-\boldsymbol{y}^*||_2^{\,2}=|| \boldsymbol{f}(\boldsymbol x^{(2)})-\boldsymbol{y}^*||_2^{\,2} \,. \] Parameter estimation problems of mathematical models can often be formulated as nonlinear least squares problems. Typically these problems are solved numerically using iterative methods. The local minimiser obtained using these iterative methods usually depends on the choice of the initial iterate; thus, the estimated parameter and subsequent analyses using it depend on that choice. One way to reduce the analysis bias due to the choice of the initial iterate is to repeat the algorithm from multiple initial iterates (i.e., use a multi-start method). However, this procedure can be computationally intensive and is not always used in practice. To overcome this problem, we propose the Cluster Gauss-Newton method (CGNM), an efficient algorithm for finding multiple approximate minimisers of nonlinear least squares problems. CGNM simultaneously solves the nonlinear least squares problem from multiple initial iterates. Then, CGNM iteratively improves the approximations from these initial iterates similarly to the Gauss-Newton method; however, it uses a global linear approximation instead of the Jacobian. The global linear approximations are computed collectively among all the iterates to minimise the computational cost associated with the evaluation of the mathematical model.
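The collective update described above can be illustrated with a toy numerical sketch (written in Python purely for illustration; this is not the package's implementation, and it omits details such as the weighting of the linear approximation and the handling of stalled iterates in the actual algorithm). The key idea shown is that one linear surrogate of \(\boldsymbol f\), fitted by least squares over the whole cluster of iterates, stands in for per-iterate Jacobians:

```python
import numpy as np

def cluster_gn_sketch(f, X, y, iterations=30, reg=1e-6):
    """Toy cluster Gauss-Newton-style iteration: a single linear surrogate
    f(x) ~ A x + b is fitted over all iterates and reused as a shared
    'Jacobian' in a regularised Gauss-Newton step for every iterate."""
    X = X.copy()
    ssr = np.array([np.sum((f(x) - y) ** 2) for x in X])
    for _ in range(iterations):
        Y = np.array([f(x) for x in X])          # evaluate model at every iterate
        # Global linear approximation, fitted collectively over the cluster.
        Xc = np.hstack([X, np.ones((len(X), 1))])
        coef, *_ = np.linalg.lstsq(Xc, Y, rcond=None)
        A = coef[:-1].T                          # shared slope matrix (m x n)
        for i, x in enumerate(X):
            r = y - Y[i]                         # residual of iterate i
            # Regularised Gauss-Newton step using the shared A.
            step = np.linalg.solve(A.T @ A + reg * np.eye(X.shape[1]), A.T @ r)
            cand = x + step
            cand_ssr = np.sum((f(cand) - y) ** 2)
            if cand_ssr < ssr[i]:                # keep only improving steps
                X[i], ssr[i] = cand, cand_ssr
    return X, ssr

# Toy problem with two exact minimisers, (0.3, 1.5) and (1.5, 0.3),
# due to swap symmetry in x -- mirroring the flip-flop example.
t = np.array([0.5, 1.0, 2.0, 4.0])
f = lambda x: np.exp(-x[0] * t) + np.exp(-x[1] * t)
y = f(np.array([0.3, 1.5]))
rng = np.random.default_rng(1)
X0 = rng.uniform(0.0, 2.0, size=(40, 2))
X_final, ssr_final = cluster_gn_sketch(f, X0, y)
```

Since a candidate step is kept only when it lowers that iterate's SSR, every iterate's SSR is non-increasing across iterations; the model evaluations per iteration are shared by the whole cluster, which is the source of CGNM's efficiency over restarting a single-iterate solver many times.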