
- [NMusers] 202 Fortran Error Message with LOG()

[Attached data file excerpt omitted: the post included a NONMEM dataset (records with subject ID, TIME, two observation columns, a covariate in the 293-303 range, and flag columns) whose decimal points and column layout were lost in archiving.]

Original URL path: http://nonmem.org/nonmem/nm/99jun222005.html (2016-04-25)

- [NMusers] NONMEM Tips #22 - June 20, 2005 - Upgrading to Intel 9.0 Fortran for Windows

your system. The environment variables may still have settings that point to previous versions of the compiler, so that if you run the compiler command ifort from a DOS window it may tell you that you are running version 8.0 instead of 9.0. To circumvent this issue, manually edit your system environment variables: remove paths to version 8.0 from the PATH, LIB, and INCLUDE variables and replace them as necessary with the correct paths for version 9.0. The ifortvars.bat found in the C:\Program Files\Intel\Compiler\Fortran\9.0\IA32\Bin directory (for my installation) will provide the information you need for the version 9.0 variables. Once you have modified your environment, test that you are getting version 9.0 by running ifort from a DOS window. The response should look something like:

C:\nmi9\run>ifort
Intel(R) Fortran Compiler for 32-bit applications, Version 9.0 Build 20050430Z Package ID: W_FC_P_9.0.018
Copyright (C) 1985-2005 Intel Corporation. All rights reserved.
ifort: Command line error: no files specified; for help type ifort /help

I then ran the following tests:

1. setup.bat (version 4.0), to test installing NONMEM using the new compiler. No problems were encountered with the installation of NONMEM:
setup a: c:\nmi9 ifort y link
Similarly, no problems were encountered with the CDsetup.bat found on the CD-ROM distribution of NONMEM:
D:\>cdsetup d: c:\nmi9cd ifort y link
2. Running the control5 test problem:
C:\nmi9\run>nmfe5 control5 report5.txt
3. Running PDx-Pop with the new compiler (the nonmemdir entry was changed in the PDx-Pop .ini to reflect the new directory that I used).

Tests were done on a system running Microsoft Windows 2000 Professional. No compatibility problems

Original URL path: http://nonmem.org/nonmem/nm/98jun202005.html (2016-04-25)

- [NMusers] Compaq Digital Fortran no more

Fortran and asked to receive news of product upgrades. Please do not reply to this message; it is sent from an automated mailer and replies will not be read. Hewlett-Packard plans to discontinue Compaq Visual Fortran on December 31st, 2005. We recommend our Compaq Visual Fortran customers take advantage of the migration path to Intel Visual Fortran 9.0. We have worked with Intel to make the migration as easy and attractive as possible. Intel has created a special web page at http://www.intel.com/software/products/compilers/upgrade_to_ivf.htm. It includes information on the migration path to Intel Visual Fortran Compilers, including additional support for migration, a white paper to help port applications from Compaq Visual Fortran to Intel Visual Fortran, information on special upgrade pricing for CVF users, and more. Please note that we will continue to provide technical support for Compaq Visual Fortran at the vf-support@hp.com e-mail address through June 30, 2006. We wish to thank you for having been a CVF customer.
Regards,
CVF Product Management, Hewlett-Packard

Peter L. Bonate, PhD, FCP
Director, Pharmacokinetics
Genzyme Oncology
4545 Horizon Hill Blvd
San Antonio, TX 78229
phone 210 949

Original URL path: http://nonmem.org/nonmem/nm/99jun202005.html (2016-04-25)

- [NMusers] Simulation vs. actual data

parameters, the degenerate posterior distribution for simulation without including uncertainty can give useful diagnostic information. In trying to use a consistent terminology for the various intervals used to describe the time course of response, I wonder if you would accept the following:

Confidence Interval: Describes the uncertainty in the mean response. It could be constructed by a non-parametric bootstrap, using the resulting parameters for each bootstrap run to predict the population response at each time point. The distribution of these population responses obtained from, say, 1000 bootstrap runs can be used to define the confidence interval. The confidence interval says nothing about PPV or RUV but reflects only the estimation uncertainty in the population parameters. I am not aware of any published use of this kind of interval applied to NONMEM analyses, but would be interested to hear of this application.

Prediction Interval: Describes the variation in individual response which is attributable to PPV and RUV. It may be obtained by a parametric bootstrap, e.g. using $SIM with NSUBPROBLEMS=1000 in NONMEM, based on the final parameter estimates for the model that is being evaluated. A 90% interval constructed from the empirical distribution of individual predicted responses (with residual error) at each time should contain 90% of the observed responses at that time. This is the interval and method that is used for the visual predictive check (VPC) (Holford 2005). The procedure is the same as the SPC and degenerate PPC. It has been frequently referred to as a posterior predictive check (e.g. Duffull 2000).

Tolerance Interval: Describes the uncertainty in the prediction interval by including uncertainty in the parameter estimates. This could be done using the same procedure as the SPC, but sampling from the covariance matrix of the estimate in addition to the variance-covariance matrix for OMEGA and SIGMA. I am not aware of anyone who has done this with NONMEM with both PPV and RUV, but would be
interested if someone could report any such experience.

In their definition of PPC, Yano et al. did not include the generation of an interval: "The PPC compares a statistic T computed on the observed data to the distribution of that statistic under a candidate model fitted to the data to derive a p-value, which we denote by pPPC." However, it is implicit in their methodology for calculating the probability of a response (pPPC). I would suggest that it might be better if the term PPC was restricted to the case where parameter uncertainty is included in the simulation process, because this explicitly recognizes the role of the non-degenerate posterior distribution.

I think that an interval which describes the variability in individual responses (prediction interval) is more commonly of interest than variability in the population response (confidence interval). A tolerance interval has some theoretical advantage over the prediction interval by being a bit more conservative (i.e. wider intervals), but most of the merits of this kind of model qualification approach will be seen in the computationally more convenient prediction interval. It is directly applicable for evaluating the performance of a model to describe existing observations, and for illustrating to non-pharmacometricians what might be expected in a typical patient population. The nomenclature could stand some improvement so that we can use these terms more precisely.

Nick

Duffull SB, Chabaud S, Nony P, Laveille C, Girard P, Aarons L. A pharmacokinetic simulation model for ivabradine in healthy volunteers. Eur J Pharm Sci 2000;10(4):285-94.
Gobburu J, Holford N, Ko H, Peck C. Two-step model evaluation (TSME) for model qualification. In: American Society for Clinical Pharmacology and Therapeutics Annual Meeting 2000, Los Angeles, CA, USA; 2000. Abstract.
Holford NHG. The Visual Predictive Check - Superiority to Standard Diagnostic (Rorschach) Plots. PAGE 2005. http://www.page-meeting.org/default.asp?id=26&keuze=abstract_view&goto=abstracts&orderby=author&abstract_id=
738
Yano Y, Beal SL, Sheiner LB. Evaluating pharmacokinetic/pharmacodynamic models using the posterior predictive check. J Pharmacokinet Pharmacodyn 2001;28(2):171-92.

From: Perez Ruixo, Juan Jose [PRDBE] <JPEREZRU@PRDBE.jnj.com>
Subject: RE: [NMusers] Simulation vs. actual data
Date: Tue, July 12, 2005, 2:52 pm

Dears,

Interesting discussion. For the tolerance interval, I think it would be more accurate to sample THETAs, OMEGAs, and SIGMAs from the non-parametric bootstrap replicates. For each replicate, NSUBPROBLEMS=100 should be implemented. Then the 90% tolerance interval can be calculated from the empirical distribution of all individual predicted responses (with residual error) at each time. That's how I implemented it previously for a simulation exercise (see reference below). The tolerance interval constructed as described above should contain 90% of the observed responses at each time. In my experience, the prediction interval as Nick described could be too narrow to include 90% of the observed responses at each time, especially if uncertainty is high relative to PPV and RUV.

Reference: Jolling K, Perez-Ruixo JJ, Hemeryck A, Vermeulen V, Greway T. Mixed-effects modelling of the interspecies pharmacokinetic scaling of pegylated human erythropoietin. Eur J Pharm Sci 2005;24:465-475.

Regards,
Juanjo

From: Nick Holford <n.holford@auckland.ac.nz>
Subject: Re: [NMusers] Simulation vs. actual data
Date: Tue, July 12, 2005, 3:59 pm

Juanjo,

Thanks for your comments and for the reference to your paper on pegylated EPO PK. In this paper you describe the process for generating a tolerance interval as follows: "In addition, the final model was used to simulate the pharmacokinetic profile of PEG-EPO after i.v. and s.c. administration of a single dose of 1000 ug to 100 male subjects. Uncertainty in fixed and random parameter estimates was considered during the simulations by replicating the above-mentioned process 30 times, using different values of fixed and random parameters randomly selected from the estimates of the bootstrap replicates."
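The two simulation schemes being contrasted in this exchange, a prediction interval from a single (final) parameter set versus a tolerance interval that resamples parameters per replicate, can be sketched in a few lines of Python. This is a minimal illustration only, using a hypothetical one-compartment bolus model with made-up parameter values; the thread's actual models, NONMEM control streams, and bootstrap replicates are not reproduced here.

```python
import math
import random

random.seed(0)

def one_compartment(t, cl, v, dose=100.0):
    """Concentration after an IV bolus: C(t) = (dose/V) * exp(-(CL/V)*t)."""
    return (dose / v) * math.exp(-cl / v * t)

def simulate_subject(theta_cl, theta_v, omega_sd, sigma_sd, t):
    """One individual prediction with between-subject (PPV) and residual (RUV) variability."""
    cl = theta_cl * math.exp(random.gauss(0.0, omega_sd))  # log-normal BSV on CL
    v = theta_v * math.exp(random.gauss(0.0, omega_sd))    # log-normal BSV on V
    conc = one_compartment(t, cl, v)
    return conc * math.exp(random.gauss(0.0, sigma_sd))    # proportional residual error

def percentile(xs, p):
    xs = sorted(xs)
    return xs[int(p * (len(xs) - 1))]

T = 2.0  # a single time point, for brevity

# Prediction interval: one final parameter set, many subjects
# (analogous to $SIM with NSUBPROBLEMS=1000 at the final estimates).
final = {"cl": 1.0, "v": 10.0, "om": 0.3, "sg": 0.1}
pred = [simulate_subject(final["cl"], final["v"], final["om"], final["sg"], T)
        for _ in range(1000)]
pi = (percentile(pred, 0.05), percentile(pred, 0.95))

# Tolerance interval: 30 parameter sets x 100 subjects each. Here the sets are
# drawn around the final estimates as a stand-in for non-parametric bootstrap
# replicates (an assumption for illustration; in practice you would read the
# replicate estimates from the bootstrap output).
tol = []
for _ in range(30):
    cl = final["cl"] * math.exp(random.gauss(0.0, 0.1))  # parameter uncertainty
    v = final["v"] * math.exp(random.gauss(0.0, 0.1))
    tol += [simulate_subject(cl, v, final["om"], final["sg"], T)
            for _ in range(100)]
ti = (percentile(tol, 0.05), percentile(tol, 0.95))

print("90% prediction interval:", pi)
print("90% tolerance interval: ", ti)
```

With this setup the tolerance interval pools 3000 simulated responses (30 x 100), mirroring the 30-replicate, 100-subject scheme described in the quoted paper.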
If I understand you correctly, this means simulating profiles in 3000 subjects, with each block of 100 subjects having a different set of fixed and random effect parameters sampled from the non-parametric bootstrap. You then used the empirical distribution of 3000 responses at each time point to construct the interval.

The experience you mention below agrees with the theoretical expectations I mentioned for the tolerance interval compared with the prediction interval, but I cannot find anything in your paper to support your statement experimentally. Your Figure 4 shows 90% tolerance intervals for extrapolations to humans from non-human species; it has no observations for a visual predictive check. Did you try generating tolerance and prediction intervals for the species you had data for? If so, what fraction of observed values would lie within the tolerance interval you generated, and what fraction would lie within a prediction interval generated with a single set of fixed and random effects parameters (e.g. using the final model parameter estimates)? I am interested in getting a concrete example of just how much different the tolerance and prediction intervals might be for a real NONMEM analysis.

On another topic, I note your final model used an empirical allometric model using a combination of total weight and brain weights. The brain weights were in fact just total weight multiplied by an empirical adjustment factor, and not truly individual brain weights. This makes it impossible to assert that brain weight itself is an independent covariate for the prediction of clearance. As there is no plausible mechanistic reason to believe that the brain is an important clearance organ for EPO, I would think the apparent benefit of adding brain weight to the model is an illustration of selection bias (Ribbing J, Jonsson EN. Power, selection bias and predictive performance of the population pharmacokinetic covariate model. Journal of Pharmacokinetics and Pharmacodynamics 2004;31(2):109-134). You did not
report results for a simple allometric model based on body weight alone, with the theoretical exponents of 3/4 for CL and Q and 1 for V1 and V2. You report a reference model with estimated allometric exponents. Did you try using a simple allometric model and test if the CIs for the exponents were different from the theoretical values?

Nick

From: Perez Ruixo, Juan Jose [PRDBE] <JPEREZRU@PRDBE.jnj.com>
Subject: RE: [NMusers] Simulation vs. actual data
Date: Wed, July 13, 2005, 7:40 am

Nick,

Your interpretation of the process described to generate a tolerance interval is correct. In theory, I think it's a better approach to sample from a non-parametric bootstrap distribution than just sampling from the covariance matrix of the estimate in addition to the variance-covariance matrix for OMEGA and SIGMA. I agree it's easier to use the prediction interval than the tolerance interval. Also, the theoretical advantage of the tolerance interval in some cases may not be relevant (low uncertainty), but in others it could be the only choice (high uncertainty). My suggestion is to test both intervals on a case-by-case basis; if both give similar results, then move forward with the prediction interval, otherwise use the tolerance interval.

At the risk of getting myself out of the tolerance interval of my wife, I did some simulations at home to answer your questions. However, I used other datasets where I had a model with between- and within-subject variability in several parameters. The model was built with dataset A, and the 80% prediction and tolerance intervals were calculated for each concentration in dataset B. The prediction and the tolerance intervals contained 81.4% and 79.4% of the observations, respectively. Probably both numbers are very similar because the RSEs for fixed and random effects are lower than 15% and 40%, respectively. Therefore, in this case (low uncertainty), I would continue the simulation and/or evaluation work without considering the uncertainty. At this stage I don't have any example where I can show you
the tolerance interval is superior to the prediction interval in terms of predictive performance. Perhaps someone else may have one, and it would be very interesting to see the results and the consequences. One interesting thing I learned from this exercise is that the tolerance interval can be narrower than the prediction interval. One potential reason is that the estimates for two random effects fall in the upper part of the non-parametric bootstrap distribution for the same parameter, but below the 90% confidence interval. So when the uncertainty is considered, more subproblems are simulated with lower variability and, as a consequence, the tolerance interval is narrower. Finally, I also wonder if anyone on the nmusers list would like to share any experience with prediction/tolerance intervals for categorical data. I guess the way to calculate those intervals is a bit more complex.

With respect to your comment on allometric scaling: as you know, allometric models are empirical and not all equations relate directly to physiology. In fact, body weight and brain weight have been commonly used to predict the clearance of drugs in humans (Mahmood I et al., Xenobiotica 1996). In particular, body weight and brain weight have recently been used to predict from animal to human the clearance of protein drugs such as rhuEPO and EPO-beta (Mahmood I, J Pharm Sci 2004). I agree that the brain is not an important clearance organ for EPO; however, brain weight was tested on the basis of the Sacher equation, which relates body weight and brain weight to the maximum lifetime potential (MLP). MLP is a measurement of the chronological time necessary for a particular physiological event to occur in a particular species. The shorter the MLP, the faster the biological cycles occur. One may, for instance, consider drug elimination as the physiological event, and then MLP (or brain weight) could be used to explain the difference in drug clearance across species with similar body weight. In fact, the brain weight in rabbit (0.56% of body weight) is
lower than the brain weight in monkey (1.80% of body weight). So, given the same body weight for both species (see Figure 3 of the paper), the MLP in rabbits is shorter relative to monkeys (0.76 x 10^5 h versus 1.62 x 10^5 h), and therefore the PEG-EPO clearance in rabbits is faster compared to monkeys.

The reference model we reported is a simple allometric model based on body weight alone. From the RSE you can see that the 95% CI was not different from the theoretical value. Even in the final model you can derive the real exponent of body weight for CL. In order to do that, it is necessary to consider the effect of brain weight, because of its proportionality to body weight within a particular species. Therefore, 1.030 (the apparent exponent of weight) cannot be directly compared to 0.75 without taking into account the exponent of brain weight. Doing so, 1.030 - 0.345 = 0.685 is obtained as the real exponent of weight, which is very similar to the expected 0.75. I understand the real exponent of body weight is 0.75.

Regards,
Juanjo

From: Kowalski, Ken <Ken.Kowalski@pfizer.com>
Subject: RE: [NMusers] Simulation vs. actual data
Date: Thu, July 14, 2005, 1:22 pm

Nick,

To make distinctions between the three different types of statistical intervals, we need only consider two major classes of variability: sampling variability and parameter uncertainty. Here sampling variability can have multiple sources of variation, due to sampling of subjects (BSV), different occasions within a subject (WSV), and sampling of observations (RUV). Moreover, parameter uncertainty can be multivariate, and can be thought of as the trial-to-trial variation, in that we would get a different set of population parameter estimates each time we repeat the trial. I still like my simple univariate normal mean model example, which has only one source of sampling variability and one parameter, to illustrate the differences in these intervals. In the univariate normal example the sampling variability is estimated by the sample standard deviation (SD); the population mean parameter
is estimated by the sample mean (Ybar), and the parameter uncertainty is the standard error of the mean (SE = SD/sqrt(N)). With these definitions, I'll take another stab at defining the general forms of the three types of statistical intervals for the univariate normal mean model case, and draw analogies to the more complex population modeling setting.

Degenerate Tolerance Interval: Ybar ± k*SD. This is the form of a degenerate tolerance interval, where we assume the population mean (mu) and standard deviation (sigma) are known without uncertainty, given by the estimates Ybar and SD respectively. It is degenerate in that we put all the probability mass on our point estimates Ybar and SD, where a degenerate distribution is one in which the distribution has only one value, with probability 1. The inference for this interval is on the individual Y's, since SD is the standard deviation of the individual Y's. If we assume the Y's are normal with Ybar and SD known without uncertainty, we know from our first course in probability and statistics that Ybar ± 1*SD, Ybar ± 2*SD, and Ybar ± 3*SD contain approximately 68%, 95%, and 99.7% of the individual Y values, respectively.

Degenerate Tolerance Interval - Population PK Example: To draw an analogy to population modeling and the construction of degenerate tolerance intervals, consider the following example. Suppose we conduct a single-dose PK study with N=12 healthy subjects at each of 5 doses, with dense sampling of n time points (observations) per subject. If we fit a population PK model to estimate thetas, Omega, and Sigma, then use the model and estimates without uncertainty to simulate individual Y values for, say, 1000 hypothetical subjects at each dose for each time point, we can then calculate Ybar and SD at each dose/time point across the 1000 hypothetical subjects and construct degenerate tolerance intervals of the form Ybar ± k*SD. However, typically what we do is compute Ybar as our point estimate and construct an asymmetric tolerance interval using the percentile
method; i.e., instead of using k*SD to determine the endpoints of the interval, the appropriate lower and upper quantiles from the empirical distribution of the simulated Y values are used.

Prediction Interval for a Single Future Observation: Ybar ± k*SD*sqrt(1 + 1/N). To calculate the variability in a single future observation (Ynew), we add the sampling variance (SD^2) for the new observation and the parameter uncertainty variance (SE^2) to obtain Var(Ynew) = SD^2 + SE^2 = SD^2*(1 + 1/N), and hence SD(Ynew) = SD*sqrt(1 + 1/N). Note that when we ignore parameter uncertainty (to obtain a degenerate prediction interval), or when N is very large such that SE is near 0, the resulting interval collapses to the same as a degenerate tolerance interval, since SD(Ynew) ≈ SD in this setting. This may in part explain some of the confusion we have with the terminology: the distinctions between these intervals collapse when we don't take into account parameter uncertainty.

Prediction Interval for a Single Future Observation - Population PK Example: To draw an analogy to population modeling using the population PK example I just described, consider taking into account parameter uncertainty by assuming the posterior distribution of the parameter estimates follows a multivariate normal distribution with the mean vector of parameter estimates and the covariance matrix of the estimates from the COV step output. Suppose now we obtain 1000 sets of population parameters (thetas, Omegas, and Sigmas) from a parametric simulation from this multivariate normal distribution. Furthermore, if we simulate a single hypothetical subject's profile for each of the 1000 sets of population parameters, we now have a set of 1000 individual Y's at each dose/time point. If we calculate SD(Ynew) across the 1000 individual values, we have taken into account both sampling variation (Omegas and Sigma) and parameter uncertainty (multivariate normal posterior distribution). The resulting SD(Ynew) will be larger than the SD calculated for the degenerate
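Kowalski's univariate normal formulas are easy to check numerically. The sketch below is a minimal Python illustration with made-up data (not part of the original post): it builds the degenerate tolerance interval Ybar ± k*SD and the prediction interval Ybar ± k*SD*sqrt(1 + 1/N), and shows that the latter is always the wider of the two, with the gap shrinking as N grows and SE approaches 0.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical sample standing in for the individual Y's in the example.
N = 12
y = [random.gauss(10.0, 2.0) for _ in range(N)]

ybar = statistics.mean(y)
sd = statistics.stdev(y)   # sampling variability of the individual Y's
se = sd / math.sqrt(N)     # parameter uncertainty of the mean (SE = SD/sqrt(N))

k = 1.96  # normal-approximation multiplier for ~95% coverage

# Degenerate tolerance interval: ignores parameter uncertainty entirely.
deg_tol = (ybar - k * sd, ybar + k * sd)

# Prediction interval for a single future observation:
# Var(Ynew) = SD^2 + SE^2 = SD^2 * (1 + 1/N).
sd_ynew = sd * math.sqrt(1.0 + 1.0 / N)
pred = (ybar - k * sd_ynew, ybar + k * sd_ynew)

print("degenerate tolerance interval:", deg_tol)
print("prediction interval:          ", pred)

# The prediction interval is strictly wider; with N=12 the inflation factor
# sqrt(1 + 1/12) is only about 1.04, illustrating why the two intervals are
# hard to tell apart when uncertainty is low.
assert pred[1] - pred[0] > deg_tol[1] - deg_tol[0]
```

As the thread notes, this is the univariate analogue of comparing a degenerate interval built from final estimates with one that also propagates the covariance of the estimates.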

Original URL path: http://nonmem.org/nonmem/nm/98jun142005.html (2016-04-25)

- [NMusers] Help!!!

PROJECT DIRECTORY: c:\pdxpop1.1j\example1
CHECK RUN ID'S: Y
CONTROL FILE: c:\pdxpop1.1j\example1\101.ctl
RESULT FILE: c:\pdxpop1.1j\example1\101.res
EXECUTE FLAG: Y
START DATE: Jun 14 2005 10:25
NONMEMDIR: c:\nmv
FORTRAN: INTEL
FOPTIONS: /nologo /nbs /w /4Yportlib /Gs /Ob1gyti /Qprec_div
WARNINGS AND ERRORS (IF ANY) FOR PROBLEM 1
WARNING 2) NM-TRAN INFERS THAT THE DATA ARE POPULATION.
ERRMSG: C:\pdxpop1.1j\main_p.exe 518 Can't run ifl
Errors have been reported; check GmniEvent Log
Errors returned from go5entry. Further processing of this run is skipped.
EXIT BEFORE LAST LINE MEANS AN EARLY EXIT OCCURRED
LASTLINE is 101. The line number indicator is 489.
OUTPUT IS COMPLETE
WAITING FOR NEXT COMMAND

From: Bachman, William (MYD) <bachmanw@iconus.com>
Subject: RE: [NMusers] Help!!!
Date: Tue, June 14, 2005, 12:09 pm

It means that PDx-Pop cannot find the Intel 7.x compiler (ifl.exe):
ERRMSG: C:\pdxpop1.1j\main_p.exe 518 Can't run ifl
The Intel compilers do not automatically set their own environment variables. If you type the compiler command from a DOS window, you should get something like the following if the environment

Original URL path: http://nonmem.org/nonmem/nm/99jun142005.html (2016-04-25)

- [NMusers] NONMEM mathematical background

users,

Brand new to NONMEM, I'm looking for literature references about the mathematical and statistical theory behind NONMEM. (I am a French statistician and PhD student.) I'm looking for information about both (1) the way the parameters are estimated, and (2) optimal experimental design in the very case treated by NONMEM, i.e. systems of nonlinear differential equations which have no closed-form solution. Any help much appreciated. Thank you very much.

Original URL path: http://nonmem.org/nonmem/nm/99jun132005.html (2016-04-25)

- [NMusers] From NMUser

Young <mcyoung0808@yahoo.com.cn>
Subject: [NMusers] From NMUser
Date: Fri, June 10, 2005, 10:38 pm

Dear NMUser,

Does anyone know how I can get the NM-WIN for

Original URL path: http://nonmem.org/nonmem/nm/99jun102005.html (2016-04-25)

- [NMusers] What does nonmem do after iteration stops?

$OMEGA 0.1 0.1 0.1 0.1 0.1 10
$SIGMA 10
$ESTIMATION MAX=9999 SIG=6 PRINT=1
$TABLE TIME KEO EC50 GAMM POTR POTK SCAL NOHEADER NOPRINT FILE=potent2.txt
$TABLE TIME CEFF DPOT NMB NOHEADER NOPRINT FILE=potent3.txt

From: Nick Holford <n.holford@auckland.ac.nz>
Subject: Re: [NMusers] What does nonmem do after iteration stops?
Date: Fri, June 3, 2005, 11:45 am

Doug,

Two things that might take time to compute after minimization is complete are (1) the covariance step and (2) POSTHOC parameter estimates. However, your control stream does not request either of these, so I am afraid I have no explanation for the long time taken to generate table output. What happens if you remove the $TABLE records?

Nick

From: Ekaterina Gibiansky <gibianskye@guilfordpharm.com>
Subject: Re: [NMusers] What does nonmem do after iteration stops?
Date: Fri, June 3, 2005, 12:08 pm

Doug,

How big are your output table files? Are you running NONMEM and writing the output to a drive on the same PC, or is the writing going through the network? Maybe it has something to do with the network speed of writing the files.

Katya

From: Eleveld DJ <d.j.eleveld@anest.umcg.nl>
Subject: RE: [NMusers] What does nonmem do after iteration stops? Workaround
Date: Fri, June 3, 2005, 1:54 pm

Thanks to everyone for their thoughtful suggestions. The delay seems to be related to the table step. The tables are fairly large (about 700k) and are written directly to the hard drive. If I remove the $TABLE lines, then NONMEM exits soon after it stops iterating, as expected. Interestingly, if the FILE option isn't used (i.e. the table is written to the output file), then there is no delay. I will

Original URL path: http://nonmem.org/nonmem/nm/99jun032005.html (2016-04-25)
