archive-org.com » ORG » N » NONMEM.ORG

  • [NMusers] MM elimination vs dose
    …79, respectively. But as soon as I put Dose on CL (OFV 13560.37), BSA on CL (OFV 13651.29), and AGE on CL (OFV 13942.51), the OFV decreases from the base-model value of 14138.61. Even taking into account the problems with the distribution of the likelihood ratio test and the FO approximation, these seem to be real decreases. So has anyone any ideas for why dose drops the OFV in a linear compartmental model, but a MM elimination model won't work? All models minimized successfully without any singularities. Is the nonlinearity so mild that it can't be supported by a nonlinear compartmental model? Thanks for your help. Pete Bonate

    Peter L. Bonate, PhD, FCP, Director, Pharmacokinetics, Genzyme Corporation, 4545 Horizon Hill Blvd, San Antonio, TX 78229; phone 210-949-8662; fax 210-949-8219; email peter.bonate@genzyme.com

    From: Sam Liao <sliao@pharmaxresearch.com>
    Subject: Re: [NMusers] MM elimination vs dose
    Date: Fri, 30 Sep 2005 13:31:15 -0400
    Dear Peter, I am not surprised to hear of the problem you had with the MM elimination model. Based on my experience, in order to estimate Vmax and Km reliably you will need a crossover study in which subjects received at least two different doses. Sam Liao, Pharmax Research

    From: Nick Holford <n.holford@auckland.ac.nz>
    Subject: Re: [NMusers] MM elimination vs dose
    Date: Sat, 01 Oct 2005 05:43:09 +1200
    Peter, nonlinear PK can influence other processes apart from elimination. Have you looked for nonlinear changes in volume of distribution? You may also want to consider changes in bioavailability with dose; there could be an issue with the IV formulation as doses increase. I doubt if your drug is eliminated via the skin, so why not look for a biologically…

    Original URL path: http://nonmem.org/nonmem/nm/99sep302005.html (2016-04-25)
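Sam Liao's point about needing at least two dose levels to estimate Vmax and Km can be illustrated numerically: when concentrations stay well below Km, Michaelis-Menten elimination is practically indistinguishable from linear elimination with CL ≈ Vmax/Km, so a single dose level cannot separate the two parameters. A minimal sketch (all parameter values are hypothetical, not from the thread):

```python
import numpy as np

# Hypothetical one-compartment IV bolus (values invented for illustration)
V = 10.0       # L
dose = 100.0   # mg -> C0 = 10 mg/L

def profile(vmax, km, t):
    """Euler simulation of dC/dt = -(Vmax/V) * C / (Km + C)."""
    c = dose / V
    out = []
    dt = t[1] - t[0]
    for _ in t:
        out.append(c)
        c += dt * (-(vmax / V) * c / (km + c))
    return np.array(out)

t = np.arange(0.0, 24.0, 0.01)
# Two (Vmax, Km) pairs with the same Vmax/Km ratio, both with Km >> C:
c1 = profile(vmax=50.0,  km=1000.0,  t=t)   # CL ~ Vmax/Km = 0.05 L/h
c2 = profile(vmax=500.0, km=10000.0, t=t)   # same ratio, 10x larger Km

# With C << Km the two profiles are nearly identical, so single-dose
# data cannot identify Vmax and Km separately -- only their ratio.
print(float(np.max(np.abs(c1 - c2) / c1)))  # small relative difference
```

Only a second, higher dose level that pushes concentrations toward Km would break this trade-off, which is consistent with the crossover-study advice above.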


  • machine=AMD-64
    …TRAN. Step 5d: Separate TRLD. Step 6: Compile source files. Continue? (y/n): y. Step 6a: Compile NONMEM (this may take a while). Compilation failed. Please look for error messages in /app/nonmem/nm/emsgs.txt.

    Compiler errors from /app/nonmem/nm/emsgs.txt (Tue Sep 27 13:55:10 EDT 2005):
    CN1.f: In subroutine 'cn1': CN1.f:134: warning: CALL SCALE(S,NV,OPSC) -- reference to unimplemented intrinsic SCALE (assumed EXTERNAL)
    CN.f: In subroutine 'cn': CN.f:92: warning: CALL SCALE(S,NV,OPSC) -- reference to unimplemented intrinsic SCALE (assumed EXTERNAL)
    GETETA.f: In subroutine 'geteta': GETETA.f:45: warning: 'SUBJECT DATA' -- missing comma in FORMAT statement
    INITL.f: In subroutine 'initl': INITL.f:535: warning: 'PRIOR' -- missing comma in FORMAT statement
    RESCL.f: In subroutine 'rescl': RESCL.f:48: warning: CALL SCALE(S,N,1) -- reference to unimplemented intrinsic SCALE (assumed EXTERNAL)
    /tmp/ccY739tN.o(.text+0x47b): In function 'commrg': undefined reference to 'pred'
    /tmp/ccY739tN.o(.text+0x1314): In function 'commrg': undefined reference to 'pred'
    /tmp/ccY739tN.o(.text+0x1617): In function 'commrg': undefined reference to 'pred'
    /tmp/ccMZPDsQ.o(.text+0x372): In function 'final': undefined reference to 'pred'
    /tmp/ccJAqvQt.o(.text+0x1c76): In function 'initl': undefined reference to 'pred'
    /tmp/ccJAqvQt.o(.text+0x2138): more undefined references to 'pred' follow
    collect2: ld returned 1 exit status

    Robert D. Johnson, Ph.D., Procter and Gamble Pharmaceuticals, Clinical Pharmacology and Pharmacokinetics, 513-622-1571

    From: Bachman, William (MYD) <bachmanw@iconus.com>
    Subject: [NMusers] RE: machine=AMD-64
    Date: Wed, 28 Sep 2005 13:35:35 -0400
    Bob, I see one minor error: the setup command should use a capital "oh" for the optimization option instead of lowercase: SETUP /app/nonmem f77 O /usr/bin/ar…

    Original URL path: http://nonmem.org/nonmem/nm/99sep282005.html (2016-04-25)

  • [NMusers] Dose and concentration unit differences
    …the concentrations as ln(mcg/L), the model doesn't run and has problems. However, if I change the units and use ng for dose and ln(ng/L) for concentrations, i.e. simply multiply both by 1000, the model runs and gives quite acceptable results. Could anybody explain this? I give a couple of lines of the data file below.

    The data without any manipulations:
    ID  DATE=DROP  TIME   AMT(mcg)  DV(ng/mL)  EVID
    4   5/15/2000  10:19  400       .          1
    4   5/15/2000  10:25  .         15.9       0
    4   5/15/2000  10:37  .         7.66       0
    4   5/15/2000  11:04  .         4.28       0
    4   5/15/2000  13:45  .         1.18       0

    Dose in mcg, conc in ln(mcg/L):
    ID  DATE=DROP  TIME   AMT(mcg)  DV         EVID
    4   5/15/2000  10:19  400       .          1
    4   5/15/2000  10:25  .         2.77       0
    4   5/15/2000  10:37  .         2.04       0
    4   5/15/2000  11:04  .         1.45       0
    4   5/15/2000  13:45  .         0.17       0

    Dose in ng, conc in ln(ng/L):
    ID  DATE=DROP  TIME   AMT       DV         EVID…

    Original URL path: http://nonmem.org/nonmem/nm/98sep222005.html (2016-04-25)
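The 1000x rescaling only shifts the log-concentrations by a constant, which suggests the problem is numerical rather than structural. A small check, using the DV values quoted in the post, confirming that the second data set is the natural log of the first (ng/mL = mcg/L numerically) and that rescaling mcg/L to ng/L adds ln(1000) ≈ 6.91 to every log-concentration:

```python
import math

# DV values from the post: raw concentrations in ng/mL (= mcg/L numerically)
raw_ng_per_ml = [15.9, 7.66, 4.28, 1.18]

# Second dataset in the post is ln(mcg/L):
ln_mcg_per_l = [round(math.log(c), 2) for c in raw_ng_per_ml]
print(ln_mcg_per_l)       # -> [2.77, 2.04, 1.45, 0.17], matching the post

# Rescaling concentrations by 1000 (mcg/L -> ng/L) only shifts the
# log scale by a constant: ln(1000 * C) = ln(C) + ln(1000)
offset = math.log(1000)   # ~6.908
ln_ng_per_l = [math.log(c) + offset for c in raw_ng_per_ml]
```

Since a constant additive shift cannot change the model structure, the likely culprit is the magnitude of the numbers (e.g. scaling/rounding inside the estimation) rather than the units themselves.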

  • [NMusers] Problems with recursive PRED models...
    …estimations on individual data seem fine also. Straightforward FO estimation leads to high condition numbers with the data I have; setting some OMEGA values to FIXED gets the condition number to less than 1000. FOCE methods often abort with a floating-point overflow error or rounding errors, even those with a corresponding FO estimation having a low condition number. I have had only one FOCE estimation run to completion. The results and the prediction are reasonable, and parameters are in the expected range. However, even small changes to the initial estimates cause floating-point overflow errors, and it is difficult to have confidence in the results when the estimation breaks so easily. Before diving deeper into the specifics of this problem, I think it is prudent to make sure that NONMEM can correctly implement the model. Can anyone tell me if this model is indeed a recursive PRED type, and whether or not the model can be correctly implemented in NONMEM? Thanks very much, Doug Eleveld

    The following control file aborts (crashes) after 2 iterations with a floating-point overflow error (Open Watcom compiler):

    $PROB Potentiation fitting
    $DATA potpd.prn IGNORE=C
    $INPUT ID TIME DV MDV CFLG AMT RATE CMT V1 V2 V3 CL Q2 Q3
    $SUBROUTINES ADVAN6 TOL=4
    $ABBREVIATED COMRES=1
    $MODEL
      COMP=(CENTRAL,DEFDOSE)
      COMP=(PERIF1,NOOFF,NODOSE)
      COMP=(PERIF2,NOOFF,NODOSE)
      COMP=(EFFECT,NOOFF,NODOSE)
      COMP=(POTENT,NOOFF)
    $PK
      S1   = V1
      KEO  = THETA(1)*EXP(ETA(1))
      EC50 = THETA(2)*EXP(ETA(2))
      GAMM = THETA(3)*EXP(ETA(3))
      POTR = ABS(THETA(4)+ETA(4))
      POTK = THETA(5)*EXP(ETA(5))
      SCAL = THETA(6)+ETA(6)
      PD1  = COM(1)**GAMM
      PD2  = EC50**GAMM
      MNMB = 1 - PD1/(PD1+PD2)
      F5   = POTR*MNMB*1000
    $DES
      C1 = A(1)/V1  ; Conc in V1
      C2 = A(2)/V2…

    Original URL path: http://nonmem.org/nonmem/nm/99sep222005.html (2016-04-25)
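The condition-number diagnostic Doug mentions (ratio of the largest to the smallest eigenvalue, with ~1000 as a common rule-of-thumb threshold for ill-conditioning) takes only a few lines of NumPy to compute. The matrices below are hypothetical illustrations, not from the thread:

```python
import numpy as np

def condition_number(mat):
    """Ratio of largest to smallest eigenvalue of a symmetric matrix.
    Values above ~1000 are commonly read as a sign of ill-conditioning,
    i.e. near-collinear / over-parameterised estimates."""
    eig = np.linalg.eigvalsh(mat)
    return eig.max() / eig.min()

# Hypothetical correlation matrices of parameter estimates:
well = np.array([[1.0, 0.1], [0.1, 1.0]])          # weakly correlated
ill  = np.array([[1.0, 0.9995], [0.9995, 1.0]])    # nearly collinear

print(condition_number(well))   # ~1.22
print(condition_number(ill))    # ~4000 -- well past the usual threshold
```

Fixing some OMEGAs, as described above, removes near-redundant dimensions and shrinks this ratio, which is why it brought the condition number under 1000.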

  • [NMusers] new networking group for Modeling and Simulation
    Our first event, a membership information reception followed by dinner, will be held on Friday, October 14, in Lawrenceville, NJ. Everyone in the modeling and simulation area is encouraged to attend: come learn about MoSAiC, meet your peers, and enjoy some great food and drinks. Please visit our website, http://mosaicnj.blogsome.com, for more information on MoSAiC and the details about the event. There will be links for you…

    Original URL path: http://nonmem.org/nonmem/nm/99sep212005.html (2016-04-25)

  • [NMusers] Calculation of the 90% CI values from pop PK model estimates
    …from the covariance matrix of estimates, rather than just using SEs, may be more appropriate, which could be what Immanuel implied in saying that his simple approach may not be good enough. Including the covariance of estimates would give tighter confidence intervals. If AUC is only dependent on CL and dose, the geometric mean AUC can be calculated at CL (eta = 0) without the need for simulation. To account for the parameter uncertainties in the weight effect, sampling from the covariance matrix of estimates could easily be done in almost any programming language, e.g. MATLAB (mvnrnd) or S-Plus/R (rmvnorm). To calculate the effect of DDI1 on CL, the 90% confidence interval for theta2 could simply be applied. Using likelihood profiling may be beneficial for the DDI1 effect, but if time and computer power allow, a stratified bootstrap could be more reliable for calculating both the DDI1 and weight effects.

    One problem may be that many different concomitant medications were considered as possible DDIs, which is implied by the name "drug-drug interaction 1". The estimate of the DDI1 effect is then often exaggerated due to selection bias. This is so even if the p-value for selection was corrected for the multiple comparisons of the many DDIs: one cannot know whether DDI1 was one of many small, clinically irrelevant interactions (e.g. a 15% effect) that randomly seemed more important in these ten individuals who were on the concomitant medication, OR whether the estimated effect (30%) is real. If the highest DDI1 effect within the 90% confidence interval (CL 50% lower) is judged not to be clinically relevant, the selection bias is not a big problem. It will still affect the predictive performance of the model, but predicting was not the current task.

    To fully account for the selection bias is very computer-intensive. One way is to apply the whole covariate-selection procedure to a HUGE number of bootstrap datasets to estimate the selection bias, i.e. the bias one could expect in the subset of datasets where the covariate has been selected. Does anyone know if this has been done in our area, or if anyone would actually want to do this? We have previously investigated the different contributions to selection bias and concluded that competition between correlated covariates does not increase the selection bias much more if statistical significance is already required for covariate selection. Given this, I would believe that it is enough to test only for selecting DDI1 on the bootstrap datasets, with the effect of weight on CL and any other covariate effects on other parameters already in the model. This would massively reduce the time for the covariate analysis of a bootstrap dataset, since no investigation of all the other covariates that were not selected from the original dataset is necessary. Taking this approach may also allow a reduction in the number of bootstrap datasets needed for a precise estimate of the selection bias. Any comments on this idea? Regards, Jakob

    Jakob Ribbing

    Original URL path: http://nonmem.org/nonmem/nm/98sep152005.html (2016-04-25)
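The sampling approach described above (MATLAB's mvnrnd, S-Plus/R's rmvnorm) translates directly to NumPy. A sketch with invented estimates (none of the numbers come from the thread), propagating uncertainty in CL and a fractional DDI1 effect into a 90% CI for AUC = Dose/CL at ETA = 0:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical final estimates and covariance matrix of estimates
# (stand-ins for what NONMEM's $COV step would report):
theta = np.array([10.0, 0.30])        # [CL (L/h), DDI1 fractional effect on CL]
cov   = np.array([[1.0,   0.005],
                  [0.005, 0.004]])
dose  = 100.0                         # mg

# Sample parameter vectors from the asymptotic multivariate normal
# (the NumPy equivalent of mvnrnd / rmvnorm):
samples = rng.multivariate_normal(theta, cov, size=10000)
cl_ddi  = samples[:, 0] * (1.0 - samples[:, 1])   # CL reduced by DDI1
auc     = dose / cl_ddi                           # AUC at ETA = 0

lo, hi = np.percentile(auc, [5, 95])              # empirical 90% CI
print(round(lo, 1), round(hi, 1))
```

Unlike applying the theta2 SE alone, this automatically carries the off-diagonal covariance between CL and the DDI1 effect into the interval.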

  • [NMusers] Interpreting IOV
    …Simulation, Global Clinical Pharmacokinetics and Clinical Pharmacology, Johnson & Johnson Pharmaceutical Research & Development, a Division of Janssen Pharmaceutica NV, Turnhoutseweg 30, B-2340 Beerse, Belgium; +32 (0)14 60 75 08; +32 (0)14 60 58 34; +32 (0)473 91 09 82; jperezru@prdbe.jnj.com

    From: j.bulitta@web.de
    Subject: Re: [NMusers] Interpreting IOV
    Date: Fri, 16 Sep 2005 02:27:16 +0200
    Dear Dr. Bonate, in case you have a rather short (15 min) duration of infusion and frequent samples within 5-20 min after the end of the infusion, you may observe IOV in V1, which may be influenced by the distribution of arterial and venous blood. I saw a few datasets where the peak concentration was actually 5-15 min after the end of the IV infusion, instead of at the end of infusion, for about 30% (depending on the drug) of the subjects. I did not have replicated doses for those datasets and could not include IOV. However, dosing the zero-order input into a mixing compartment ("gut") and not into the central compartment improved the objective function, typically by 50 points, for those datasets with frequent blood sampling. Best regards, Juergen

    Juergen Bulitta, M.Sc., Research Scientist, IBMP -- Institute for Biomedical and Pharmaceutical Research, Paul-Ehrlich-Strasse 19, 90562 Nuernberg-Heroldsberg, Germany

    From: Mats Karlsson <mats.karlsson@farmbio.uu.se>
    Subject: RE: [NMusers] Interpreting IOV
    Date: Fri, 16 Sep 2005 08:20:46 +0200
    Dear Pete, it would be easier to say if something about the drug, the study population, and the study circumstances was known. Changing protein binding or blood composition (with or without food) and different levels of physical activity could be explanations. On the other hand, it could always be modeling nonsense. Best regards, Mats

    Mats Karlsson, PhD, Professor of Pharmacometrics, Div…

    Original URL path: http://nonmem.org/nonmem/nm/99sep152005.html (2016-04-25)
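The mixing-compartment trick described above can be illustrated with a quick simulation: routing a short zero-order infusion through a first-order "mixing" compartment, instead of directly into the central compartment, shifts the predicted peak past the end of the infusion. All parameters below are invented for illustration only:

```python
import numpy as np

# Hypothetical one-compartment model (all values made up):
V, CL = 10.0, 5.0          # L, L/h
k10   = CL / V             # elimination rate constant (1/h)
kmix  = 6.0                # 1/h, transfer out of the mixing compartment
dur   = 0.25               # 15-min zero-order infusion (h)
dose  = 100.0              # mg

dt = 0.001
t  = np.arange(0.0, 2.0, dt)
a_mix = np.zeros_like(t)   # amount in mixing compartment
a_dir = np.zeros_like(t)   # central amount, direct infusion
a_via = np.zeros_like(t)   # central amount, dosed via mixing compartment

for i in range(1, len(t)):
    rate = dose / dur if t[i-1] < dur else 0.0
    a_dir[i] = a_dir[i-1] + dt * (rate - k10 * a_dir[i-1])
    a_mix[i] = a_mix[i-1] + dt * (rate - kmix * a_mix[i-1])
    a_via[i] = a_via[i-1] + dt * (kmix * a_mix[i-1] - k10 * a_via[i-1])

# Direct input peaks at end of infusion; the mixing-compartment route
# keeps filling the central compartment afterwards, delaying the peak.
print(t[np.argmax(a_dir)], t[np.argmax(a_via)])
```

This reproduces qualitatively the observation that peaks can occur minutes after the end of a short infusion, and why such a model can win tens of OFV points with frequent early sampling.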

  • [NMusers] Covariance: Matrix=S or Matrix=R
    …of the R or S matrix. The NONMEM repository has plenty of discussions on the relevance of estimated standard errors, which I won't bring up here. Steve

    Stephen Duffull, School of Pharmacy, University of Queensland, Brisbane 4072, Australia; Tel +61 7 3365 8808; Fax +61 7 3365 1688; Email sduffull@pharmacy.uq.edu.au; www: http://www.uq.edu.au/pharmacy/; Design: http://www.uq.edu.au/pharmacy/sduffull/POPT.htm; MCMC: http://www.uq.edu.au/pharmacy/sduffull/MCMC_eg.htm; University Provider Number 00025B

    From: mark.e.sale@gsk.com
    Subject: RE: Re[2]: [NMusers] Covariance: Matrix=S or Matrix=R
    Date: Wed, 14 Sep 2005 20:03:50 -0400
    Steve et al., to continue beating this horse: a saddle point is to be distinguished from a local minimum. All local search algorithms have the potential for local minima, period. A saddle point is different: it is a point at which the first derivative of the OBJ with respect to the parameters is close to zero (i.e., the function is locally flat), but one dimension is curving up and one dimension is curving down. Minima and saddle points can be distinguished by using the second derivative: is the surface curving up or down? If the second derivative (Hessian) is poorly defined, you can't be certain that the flatness isn't due to being at the top of a peak curving down (a maximum) in one dimension vs. being at the bottom curving up (a minimum) in another. My understanding, for what it is worth, is that modern nonlinear regression algorithms are pretty robust to not getting stuck in saddle points, depending of course on how well defined the surface is. If the surface is flat as far as the algorithm can see, it has a hard time telling whether this is a maximum or a minimum. But again, this is a known problem for nonlinear regression, and great effort has been applied to getting modern algorithms (which NONMEM actually uses) to address it robustly. There are nonlinear-regression-like algorithms that are more robust to local minima; they are complex, inefficient, and rarely used. Other algorithms that are robust to local minima include the convexity work from the USC group, and I suppose MCMC could be included as well (it seems to me it should not have a problem with local minima, but I'm not sure). I think the text is unclear: the R and S matrices tell you nothing about whether this is a local or a global minimum, only whether it is a minimum (of either kind) or a saddle point. I think the word "global" should be ignored. Mark

    Mark Sale, M.D., Global Director, Research Modeling and Simulation, GlaxoSmithKline, 919-483-1808, Mobile 919-522-6668

    From: Stephen Duffull <sduffull@pharmacy.uq.edu.au>
    Subject: RE: Re[2]: [NMusers] Covariance: Matrix=S or Matrix=R
    Date: Fri, 16 Sep 2005 08:49:23 +1000
    Hi Mark, Thanks for your…

    Original URL path: http://nonmem.org/nonmem/nm/99sep132005.html (2016-04-25)
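The second-derivative test Mark describes amounts to checking the signs of the Hessian's eigenvalues: all positive means a (local) minimum, mixed signs a saddle point, and near-zero eigenvalues a surface too flat to classify. A small sketch of that classification:

```python
import numpy as np

def classify_stationary_point(hessian, tol=1e-8):
    """Classify a stationary point (gradient ~ 0) from its Hessian:
    all eigenvalues > 0 -> minimum; all < 0 -> maximum; mixed signs ->
    saddle point; near-zero eigenvalues -> too flat to tell, which is
    the 'poorly defined second derivative' case in the discussion."""
    eig = np.linalg.eigvalsh(hessian)
    if np.any(np.abs(eig) < tol):
        return "ill-defined (flat direction)"
    if np.all(eig > 0):
        return "minimum"
    if np.all(eig < 0):
        return "maximum"
    return "saddle point"

# f(x, y) = x**2 - y**2 has a saddle at the origin:
print(classify_stationary_point(np.array([[2.0, 0.0], [0.0, -2.0]])))  # saddle point
# f(x, y) = x**2 + y**2 has a minimum there:
print(classify_stationary_point(np.array([[2.0, 0.0], [0.0, 2.0]])))   # minimum
```

Note that, as the thread emphasises, this test is purely local: a positive-definite Hessian certifies a minimum of either kind, not a global one.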
