CGNM: Cluster Gauss-Newton Method

library(CGNM)

When and when not to use CGNM

Use CGNM

When not to use CGNM

How to use CGNM

To illustrate the use of CGNM, we show how it can be used to estimate two sets of best-fit parameters of a pharmacokinetic model in which the drug is administered orally (a situation known as flip-flop kinetics). To show that the model can be defined flexibly, we use the RxODE package to define it.

Prepare the model (\(\boldsymbol f\))

library(RxODE)
#> Warning: package 'RxODE' was built under R version 4.0.5
#> RxODE 1.1.5 using 1 threads (see ?getRxThreads)
#>   no cache: create with `rxCreateCache()`
#> ========================================
#> RxODE has not detected OpenMP support and will run in single-threaded mode
#> This is a Mac. Please read https://mac.r-project.org/openmp/
#> ========================================

model_text="
d/dt(X_1)=-ka*X_1
d/dt(C_2)=(ka*X_1-CL_2*C_2)/V1"

# compile the model defined above
model=RxODE(model_text)

# define the model function, which takes in the parameter vector x
# and returns the model simulation
model_function=function(x){

  observation_time=c(0.1,0.2,0.4,0.6,1,2,3,6,12)

  theta <- c(ka=x[1],V1=x[2],CL_2=x[3])
  ev <- eventTable()
  ev$add.dosing(dose = 1000, start.time = 0)
  ev$add.sampling(observation_time)
  odeSol=model$solve(theta, ev)
  log10(odeSol[,"C_2"])
}
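For readers without RxODE installed, this one-compartment oral-absorption model also has a closed-form (Bateman) solution, so the model function can be sketched in base R. The helper name `model_function_analytic` is ours for illustration and is not part of CGNM:

```r
# Analytic solution of the model above:
#   d/dt(X_1) = -ka*X_1,  d/dt(C_2) = (ka*X_1 - CL_2*C_2)/V1,  dose 1000 at t=0.
# With ke = CL_2/V1:
#   C_2(t) = dose*ka / (V1*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))
model_function_analytic <- function(x) {
  observation_time <- c(0.1, 0.2, 0.4, 0.6, 1, 2, 3, 6, 12)
  dose <- 1000
  ka <- x[1]; V1 <- x[2]; CL_2 <- x[3]
  ke <- CL_2 / V1
  C_2 <- dose * ka / (V1 * (ka - ke)) *
    (exp(-ke * observation_time) - exp(-ka * observation_time))
  log10(C_2)
}
```

Evaluated at a best-fit parameter vector, this should reproduce essentially the same sum of squares residual against the observations as the ODE-based `model_function`.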

Prepare the data (\(\boldsymbol y^*\))

observation=log10(c(4.91, 8.65, 12.4, 18.7, 24.3, 24.5, 18.4, 4.66, 0.238))

Run Cluster_Gauss_Newton_method

Here we specify the lower and upper ranges for the initial guesses.

CGNM_result=Cluster_Gauss_Newton_method(nonlinearFunction=model_function,
                                        targetVector = observation,
                                        initial_lowerRange = c(0.1,0.1,0.1),
                                        initial_upperRange = c(10,10,10))
#> [1] "checking if the nonlinearFunction can be evaluated at the initial_lowerRange"
#> [1] "Evaluation Successful"
#> [1] "checking if the nonlinearFunction can be evaluated at the initial_upperRange"
#> [1] "Evaluation Successful"
#> [1] "checking if the nonlinearFunction can be evaluated at the (initial_upperRange+initial_lowerRange)/2"
#> [1] "Evaluation Successful"
#> [1] "Generating initial cluster. 212 out of 250 done"
#> [1] "Generating initial cluster. 239 out of 250 done"
#> [1] "Generating initial cluster. 249 out of 250 done"
#> [1] "Generating initial cluster. 250 out of 250 done"
#> [1] "Iteration:1  Median sum of squares residual=5.12740156302682"
#> [1] "Iteration:2  Median sum of squares residual=2.62980262339735"
#> [1] "Iteration:3  Median sum of squares residual=1.07510906993148"
#> [1] "Iteration:4  Median sum of squares residual=0.824038150744166"
#> [1] "Iteration:5  Median sum of squares residual=0.288472367029159"
#> [1] "Iteration:6  Median sum of squares residual=0.0457763129432868"
#> [1] "Iteration:7  Median sum of squares residual=0.00889787936291236"
#> [1] "Iteration:8  Median sum of squares residual=0.00738789663208529"
#> [1] "Iteration:9  Median sum of squares residual=0.00734952759411758"
#> [1] "Iteration:10  Median sum of squares residual=0.00734926734548004"
#> [1] "Iteration:11  Median sum of squares residual=0.00734926726925012"
#> [1] "Iteration:12  Median sum of squares residual=0.00734926724600045"
#> [1] "Iteration:13  Median sum of squares residual=0.00734926723255445"
#> [1] "Iteration:14  Median sum of squares residual=0.00734926720070736"
#> [1] "Iteration:15  Median sum of squares residual=0.00734926719653654"
#> [1] "Iteration:16  Median sum of squares residual=0.00734926719385136"
#> [1] "Iteration:17  Median sum of squares residual=0.00734926718549663"
#> [1] "Iteration:18  Median sum of squares residual=0.00734926542281674"
#> [1] "Iteration:19  Median sum of squares residual=0.00734926542281674"
#> [1] "Iteration:20  Median sum of squares residual=0.00734926542281674"
#> [1] "Iteration:21  Median sum of squares residual=0.00734926534956708"
#> [1] "Iteration:22  Median sum of squares residual=0.00734926534956708"
#> [1] "Iteration:23  Median sum of squares residual=0.00734926534956708"
#> [1] "Iteration:24  Median sum of squares residual=0.00734926534956708"
#> [1] "Iteration:25  Median sum of squares residual=0.00734926534956708"

Obtain the approximate minimizers

head(acceptedApproximateMinimizers(CGNM_result))
#>           [,1]     [,2]     [,3]
#> [1,] 0.5178936 10.66077 9.877346
#> [2,] 0.5178907 10.66072 9.877399
#> [3,] 0.9264896 19.07185 9.877276
#> [4,] 0.9265046 19.07202 9.877325
#> [5,] 0.5178940 10.66077 9.877331
#> [6,] 0.9264461 19.07098 9.877079
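The two clusters of minimizers above are the two branches of flip-flop kinetics: the absorption rate ka and the elimination rate CL_2/V1 swap roles between clusters, with V1 rescaled so the concentration curves coincide. This can be checked in base R using the model's closed-form (Bateman) solution, with parameter values taken from the output above:

```r
# Closed-form concentration for the model: with ke = CL_2/V1,
#   C_2(t) = dose*ka / (V1*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))
bateman <- function(t, ka, V1, CL_2, dose = 1000) {
  ke <- CL_2 / V1
  dose * ka / (V1 * (ka - ke)) * (exp(-ke * t) - exp(-ka * t))
}

t <- c(0.1, 0.2, 0.4, 0.6, 1, 2, 3, 6, 12)
c_A <- bateman(t, ka = 0.5178936, V1 = 10.66077, CL_2 = 9.877346)  # cluster 1
c_B <- bateman(t, ka = 0.9264896, V1 = 19.07185, CL_2 = 9.877276)  # cluster 2

max(abs(c_A - c_B) / c_A)  # tiny: the two parameter sets are indistinguishable
```

Both parameter sets produce practically the same concentration-time profile, which is why the least squares problem has no unique solution.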

Residual resampling bootstrap analyses can also be run using CGNM

CGNM_bootstrap=Cluster_Gauss_Newton_Bootstrap_method(CGNM_result, nonlinearFunction=model_function)
#> [1] "checking if the nonlinearFunction can be evaluated at the initial_lowerRange"
#> [1] "Evaluation Successful"
#> [1] "checking if the nonlinearFunction can be evaluated at the initial_upperRange"
#> [1] "Evaluation Successful"
#> [1] "checking if the nonlinearFunction can be evaluated at the (initial_upperRange+initial_lowerRange)/2"
#> [1] "Evaluation Successful"
#> [1] "Generating initial cluster. 200 out of 200 done"
#> [1] "Iteration:1  Median sum of squares residual=0.0125821872375042"
#> [1] "Iteration:2  Median sum of squares residual=0.0117948416513917"
#> [1] "Iteration:3  Median sum of squares residual=0.0113229591318635"
#> [1] "Iteration:4  Median sum of squares residual=0.0110482168349146"
#> [1] "Iteration:5  Median sum of squares residual=0.0108719427531259"
#> [1] "Iteration:6  Median sum of squares residual=0.0107965070514276"
#> [1] "Iteration:7  Median sum of squares residual=0.0107916541419828"
#> [1] "Iteration:8  Median sum of squares residual=0.0107916192914198"
#> [1] "Iteration:9  Median sum of squares residual=0.0107916192914198"
#> [1] "Iteration:10  Median sum of squares residual=0.0107916192914198"
#> [1] "Iteration:11  Median sum of squares residual=0.0107916192693381"
#> [1] "Iteration:12  Median sum of squares residual=0.0107916192693381"
#> [1] "Iteration:13  Median sum of squares residual=0.0107916188014728"
#> [1] "Iteration:14  Median sum of squares residual=0.0107916188014728"
#> [1] "Iteration:15  Median sum of squares residual=0.0107916188014728"
#> [1] "Iteration:16  Median sum of squares residual=0.0107916188014728"
#> [1] "Iteration:17  Median sum of squares residual=0.010791618715"
#> [1] "Iteration:18  Median sum of squares residual=0.0107916187133032"
#> [1] "Iteration:19  Median sum of squares residual=0.0107916187133032"
#> [1] "Iteration:20  Median sum of squares residual=0.0107916187133032"
#> [1] "Iteration:21  Median sum of squares residual=0.0107916187133032"
#> [1] "Iteration:22  Median sum of squares residual=0.0107916186833921"
#> [1] "Iteration:23  Median sum of squares residual=0.0107916186833921"
#> [1] "Iteration:24  Median sum of squares residual=0.0107916186833921"
#> [1] "Iteration:25  Median sum of squares residual=0.0107916186833921"

Visualize the CGNM model fit analysis results

To use the plot functions, the user needs to load ggplot2 manually.

library(ggplot2)
#> 
#> Attaching package: 'ggplot2'
#> The following object is masked from 'package:RxODE':
#> 
#>     facet_wrap

Inspect the distribution of SSR of approximate minimizers found by CGNM

Despite the robustness of the algorithm, not all approximate minimizers converge, so here we visually inspect how many of the approximate minimizers have an SSR similar to the minimum SSR. Currently the algorithm automatically chooses the “acceptable” approximate minimizers based on Grubbs’ test for outliers. If for whatever reason this criterion is not satisfactory, the user can manually set the indices of the acceptable approximate minimizers.
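To illustrate the idea behind this acceptance criterion, here is a base-R sketch of a one-sided Grubbs' test applied to a vector of SSR values; this is our illustrative implementation, not CGNM's internal code:

```r
# One-sided Grubbs' test for a single high outlier:
#   G = (max(x) - mean(x)) / sd(x)
# compared with the critical value
#   G_crit = ((n-1)/sqrt(n)) * sqrt(t^2 / (n - 2 + t^2)),
# where t is the (1 - alpha/(2n)) quantile of the t-distribution with n-2 df.
grubbs_high <- function(x, alpha = 0.05) {
  n <- length(x)
  G <- (max(x) - mean(x)) / sd(x)
  t_crit <- qt(1 - alpha / (2 * n), df = n - 2)
  G_crit <- ((n - 1) / sqrt(n)) * sqrt(t_crit^2 / (n - 2 + t_crit^2))
  G > G_crit  # TRUE if the largest value is flagged as an outlier
}

ssr_ok  <- seq(0.9, 1.1, length.out = 20)         # similar SSRs: none flagged
ssr_bad <- c(seq(0.9, 1.1, length.out = 19), 10)  # one clearly failed run
grubbs_high(ssr_ok)   # FALSE
grubbs_high(ssr_bad)  # TRUE
```

Repeatedly removing flagged values until none remain is one way to separate converged minimizers from failed runs.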

plot_Rank_SSR(CGNM_result)

plot_paraDistribution_byHistogram(CGNM_bootstrap,  ParameterNames=c("Ka","V1","CL_2"), ReparameterizationDef=c("x1","x2","x3"), bins = 50)

Visually inspect the goodness of fit of the top 50 approximate minimizers

plot_goodnessOfFit(CGNM_result, plotType = 1, independentVariableVector = c(0.1,0.2,0.4,0.6,1,2,3,6,12), plotRank = seq(1,50))

Plot the model prediction with uncertainty based on the residual resampling bootstrap analysis

plot_goodnessOfFit(CGNM_bootstrap, plotType = 1, independentVariableVector = c(0.1,0.2,0.4,0.6,1,2,3,6,12))

What is CGNM?

For the complete description and a comparison with conventional algorithms, please see (https://doi.org/10.1007/s11081-020-09571-2):

Aoki, Y., Hayami, K., Toshimoto, K., & Sugiyama, Y. (2020). Cluster Gauss–Newton method. Optimization and Engineering, 1-31.

The mathematical problem CGNM solves

Cluster Gauss-Newton method is an algorithm for obtaining multiple minimisers of nonlinear least squares problems \[ \min_{\boldsymbol{x}}|| \boldsymbol{f}(\boldsymbol x)-\boldsymbol{y}^*||_2^{\,2} \] which do not have a unique solution (global minimiser), that is to say, there exist \(\boldsymbol x^{(1)}\neq\boldsymbol x^{(2)}\) such that \[ \min_{\boldsymbol{x}}|| \boldsymbol{f}(\boldsymbol x)-\boldsymbol{y}^*||_2^{\,2}=|| \boldsymbol{f}(\boldsymbol x^{(1)})-\boldsymbol{y}^*||_2^{\,2}=|| \boldsymbol{f}(\boldsymbol x^{(2)})-\boldsymbol{y}^*||_2^{\,2} \,. \]

Parameter estimation problems for mathematical models can often be formulated as nonlinear least squares problems. Typically these problems are solved numerically using iterative methods. The local minimiser obtained with these methods usually depends on the choice of the initial iterate, so the estimated parameters, and any subsequent analyses using them, depend on that choice. One way to reduce this bias is to repeat the algorithm from multiple initial iterates (i.e., to use a multi-start method); however, this procedure can be computationally intensive and is not always used in practice.

To overcome this problem, we propose the Cluster Gauss-Newton method (CGNM), an efficient algorithm for finding multiple approximate minimisers of nonlinear least squares problems. CGNM solves the problem from multiple initial iterates simultaneously and iteratively improves the approximations from these initial iterates, similarly to the Gauss-Newton method. However, it uses a global linear approximation instead of the Jacobian. The global linear approximations are computed collectively among all the iterates to minimise the computational cost associated with evaluating the mathematical model.
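A minimal base-R toy example (ours, not from the paper) of a nonlinear least squares problem without a unique minimiser: with \(f(\boldsymbol x) = x_1 x_2\) and \(y^* = 2\), every point on the hyperbola \(x_1 x_2 = 2\) is a global minimiser, so different initial iterates of a local optimiser yield different, equally good solutions:

```r
f <- function(x) x[1] * x[2]           # model with a non-identifiable parameter pair
y_star <- 2
ssr <- function(x) (f(x) - y_star)^2   # sum of squares residual (single residual here)

# Two different initial iterates of a local optimiser (Nelder-Mead via optim)
# reach two different global minimisers on the hyperbola x1*x2 = 2.
fit1 <- optim(c(0.5, 3.0), ssr)
fit2 <- optim(c(3.0, 0.5), ssr)

fit1$par; fit2$par       # different parameter vectors...
fit1$value; fit2$value   # ...with (near-)zero SSR in both cases
```

This is exactly the situation CGNM targets: rather than reporting one of these minimisers as "the" estimate, it collects many of them at once.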