NMusers mailing-list archive (nonmem.org)

  • [NMusers] SAS program or SAS macro to prepare NonMem ready data
    […] OCPB, FDA.

    From: Sam Liao (sliao@pharmaxresearch.com)
    Date: Wed, December 8, 2004, 12:04 pm
    Hi Elliot. We used SAS in NONMEM data preparation, but each SAS program has to be adapted to the structure of the available SAS data sets and to the model of the study. I may have a SAS program that you can use as a template to develop a SAS program for your study, but it will require some experience in SAS programming. Please give me a call if you need a template.
    Sam Liao, PharMax Research, 732-302-9550

    From: Michael J. Fossler (gsk.com)
    Date: Wed, December 8, 2004, 12:08 pm
    Although I am sure that VB and SAS do a fine job of building NM datasets, if you are a new user looking to develop some programming skills in this area, I would recommend that you give S-Plus, or its open-source version R, a serious look. S was designed to perform data manipulation, particularly on data arranged in columns. The advantage of this data-vector orientation is that many simple one-step commands in S will do what a traditional programming language would need many lines (and many for-next loops) to accomplish. This is just my perception, but it seems to me that far too many modelers still build datasets by hand in Excel, particularly in small companies or in academia where there aren't SAS programmers for hire. Not only is this prone to error, it's way too much work. If you are new to population PK, take the time to learn a tool to help you with this important step; it will pay off quickly.
    Mike. Michael J. Fossler, Pharm.D., Ph.D., F.C.P., Principal Clinical Pharmacokineticist, Clinical Pharmacokinetics / Modeling & Simulation, GlaxoSmithKline. Tel 610-270-4797, fax 610-270-5598, cell 443-350-1194. Michael.J.Fossler@gsk.com

    From: Leonid Gibiansky (leonidg@metrumrg.com)
    Date: Wed, December 8, 2004, 12:14 pm
    Just checked: S 6.1 Release 1 for Windows on a 2-GHz processor took 3 sec to sort 1 million random numbers and 12 sec to sort 1 million character variables (the same numbers transformed to character variables). Not too bad; I have never had problems with data creation in S. Leonid

    From: Sam Liao (sliao@pharmaxresearch.com)
    Date: Wed, December 8, 2004, 12:39 pm
    Hi Yaning. Do you mind if I ask your opinion from the regulatory standpoint? I am not sure how we can be sure of the integrity of the data when other software is used in NONMEM data preparation. For a NONMEM analysis that will be part of a regulatory submission, SAS can provide a log file that keeps track of all the data manipulation and derivation.
    Sam Liao, PharMax Research

    From: Beasley, Bach-Nhi T. (beasleyn@cder.fda.gov)
    Date: Wed, December 8, 2004, 12:45 pm
    Hi Elliot. I use SAS to format datasets for NONMEM. There are a few lines that I consistently use, but I have not yet worked with one dataset that allowed me to reuse an entire code. These lines really won't save you any time, but you can contact me if you'd like. In addition to what's already been mentioned, another problem is that datasets are set up in so many different ways, and can be inconsistent within themselves (different formats used within one column), so I've always had to write a separate code for each NM dataset. Not to be cynical, but I just don't see it happening in the near future. Good luck. On another note, my personal bias is to stick with programs like SAS, S-Plus, etc. for your programming, since these programs leave you a record of your steps, as opposed to cutting and pasting in programs like Excel, where mistakes have a greater potential to happen.
    Nhi Beasley, Pharm.D., Pharmacometrics, FDA

    From: Wang, Yaning (wangya@cder.fda.gov)
    Date: Wed, December 8, 2004, 1:34 pm
    First of all, I want to clarify that all my posts here only reflect my personal opinion. As far as I know, there is no regulatory preference for any commercial software for data manipulation. On the other hand, it is good practice to use software that can keep track of all the data-manipulation steps, so that we can easily reproduce the results or make further changes. SAS may be the most popular software for data manipulation here at FDA, since all the data files are submitted as SAS transport files, but some of my colleagues use S because they are more familiar with S, and some use a program called JMP. Leonid: my earlier comment on the S sort function was based on my experience two years ago with S 2000 bundled with TS2. I was trying to sort a large simulated data frame on two columns (ID, TIME), and it was much slower than SAS at that time. Maybe I was not using the most efficient command, or S 6.1 improves this function. Anyway, I am more comfortable with SAS.

    From: Chapel, Sunny (sunny.chapel@pfizer.com)
    Date: Wed, December 8, 2004, 2:03 pm
    I use SAS and S-Plus for NONMEM datasets, but I just want to point out that WinNonlin has some functionality for NONMEM data creation. I don't know how good it is, though. S-Plus is convenient, but it doesn[…]
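The thread's practical advice — script the dataset assembly instead of hand-editing in Excel — is language-independent. As a minimal sketch (in Python rather than the S-Plus or SAS used by the posters, with made-up dosing and concentration records), merging event records into a time-sorted, NONMEM-style file might look like:

```python
# Hypothetical input records; in practice these would be read from source files.
doses = [{"ID": 1, "TIME": 0.0, "AMT": 100.0}]
concs = [{"ID": 1, "TIME": 1.0, "DV": 3.2},
         {"ID": 1, "TIME": 2.0, "DV": 1.9}]

# Turn both record types into NONMEM-style rows (AMT/DV blanks coded as ".").
rows = ([{"ID": d["ID"], "TIME": d["TIME"], "AMT": d["AMT"], "DV": ".", "EVID": 1}
         for d in doses] +
        [{"ID": c["ID"], "TIME": c["TIME"], "AMT": ".", "DV": c["DV"], "EVID": 0}
         for c in concs])

# Sort by subject and time, dosing records before observations at the same time.
rows.sort(key=lambda r: (r["ID"], r["TIME"], -r["EVID"]))

header = ["ID", "TIME", "AMT", "DV", "EVID"]
table = [",".join(str(r[h]) for h in header) for r in rows]
print("\n".join([",".join(header)] + table))
```

The point is not the particular language but that the whole derivation is recorded in a rerunnable script, which is exactly the audit-trail argument made above for SAS and S.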

    Original URL path: http://nonmem.org/nonmem/nm/99dec072004.html (2016-04-25)

  • [NMusers] posthoc step
    […] the discrepancy, the more shrinkage there is. Ken

    From: Nick Holford (n.holford@auckland.ac.nz)
    Date: Wed, December 8, 2004, 2:34 pm
    Leonid, I have also done something similar and found similar results. Without relying on any explicit calculation, it seems that if the prior for K is 10 with an SD of 2, and the data are 3 observations simulated with K=1, then the posterior estimate of K might be much closer to 10 than to 1. A value of 1 drawn from N(10, SD 2) is quite unlikely: NORMDIST(1,10,2,TRUE) is 3E-6. The POSTHOC estimate of 9.78, obtained using the NM-TRAN code and data Jerry supplied, supports this. The SAS MAP estimate of 0.9437 that Jerry reported seems unreasonable given such a strong prior (SD 2) for K=10. I have used NONMEM with both the pseudo-observation (DATA) method that Jerry mentioned and the undocumented PRIOR method in NONMEM V (METHOD=ZERO and METHOD=COND). The estimate of Khat for all methods remains stubbornly at 10 when the prior SD is 2. When the SD for the prior on K was increased to 9, Khat changed abruptly from 10 to 0.94. I was rather surprised not to find a gradual change in Khat as the SD for the prior on K was increased. As Leonid shows below, the transition from Khat=10 to Khat=1 happens over a very narrow range, between SD 8 and SD 9. In addition to this sharp transition, I also found spikes of Khat dropping to 1 at certain values of the SD of K. These spikes were not present when I used NONMEM VI with the DATA method of Bayesian estimation, and the transition SD was at a lower value (7.35); see the attached PDFs ("Bayesian Estimation with NONMEM V.pdf", "Bayesian Estimation with NONMEM VI.pdf").
    Nick Holford, Dept Pharmacology & Clinical Pharmacology, University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand. email n.holford@auckland.ac.nz, tel +64 9 373-7599 x86730, fax 373-7556, http://www.health.auckland.ac.nz/pharmacology/staff/nholford

    From: Steve Duffull
    Date: Wed, December 8, 2004, 3:19 pm
    Hi all. Nick wrote: "When the SD for the prior on K was increased to 9, then Khat changes abruptly from 10 to 0.94." I am not going to attempt to comment meaningfully on the findings to date, but I want to ask another question about them: if the prior for K is N(10, x) and the data were computed based on K_i = 1, then how could NONMEM (or SAS) have computed a Khat of […]
    www: http://www.uq.edu.au/pharmacy/sduffull/duffull.htm; PFIM: http://www.uq.edu.au/pharmacy/sduffull/pfim.htm; MCMC PK example: http://www.uq.edu.au/pharmacy/sduffull/MCMC_eg.htm

    From: Steve Duffull (sduffull@pharmacy.uq.edu.au)
    Date: Wed, December 8, 2004, 8:10 pm
    Hi. To continue the story a little on Jerry's example, I ran the model in WinBUGS (see code below):
        model { for (j in 1:3) { data[j] ~ dnorm(model[j], tau) […] } }
    [excerpt truncated] http://www.uq.edu.au/pharmacy/sduffull/duffull.htm; PFIM: http://www.uq.edu.au/pharmacy/sduffull/pfim.htm; MCMC PK example: http://www.uq.edu.au/pharmacy/sduffull/MCMC_eg.htm

    From: Nick Holford (n.holford@auckland.ac.nz)
    Date: Wed, December 8, 2004, 10:32 pm
    Steve, I don't understand why you say "When the prior variance was set to 4, Khat was estimated at 0.95. These are similar to the NONMEM and SAS results when variance was 4." I wrote earlier "The estimate of Khat for all methods remains stubbornly at 10 with SD of 2" (i.e. the prior variance on K was 4), and Leonid wrote "K goes up to 9.78 when you move OMEGA down to 4". It's not clear whether Leonid means OMEGA is the variance of the prior or the SD of the prior, but whichever convention one assumes, it is evident that we both get estimates of Khat very close to 10, not 0.95. It looks like WinBUGS and SAS think Khat is dominated by the data (simulated with K=1), whereas NONMEM prefers the prior of 10 (SD 2). It appears to me that NONMEM has a better sense of the prior on K because, as Leonid pointed out, there is only a 3-in-a-million chance that K=1.
    Nick Holford, Dept Pharmacology & Clinical Pharmacology, University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand. email n.holford@auckland.ac.nz, tel +64 9 373-7599 x86730, fax 373-7556, http://www.health.auckland.ac.nz/pharmacology/staff/nholford

    From: jerry.nedelman@pharma.novartis.com
    Date: Wed, December 8, 2004, 11:08 pm
    Friends, thanks to all for the interesting insights. I guess the bottom line is that something about how NONMEM handles the nonlinearity breaks down when the data are unlikely relative to the prior. For real examples this might mean that posthoc estimates for outlying subjects are shrunk substantially more than they should be, as Leonid pointed out. There seem to be some issues with the PRIOR method too, based on Nick's results. Steve shows that WinBUGS manages things OK. For those who thought the MAP estimate should have been close to the prior mean of 10 because the prior mass was concentrated away from the sparse data: consider the linear case, where SAS and NONMEM agreed. The same prior and data are used there, and there too the MAP estimate (1.1128) is close to the OLS estimate (1.1064) and far from the prior mean. One can actually find the MAP estimate for the linear case analytically. Let
        omega = prior variance (4 in the example),
        theta = prior mean (10 in the example),
        k_ols = OLS estimate of k (1.1064 in the example),
        v_ols = sampling variance of k_ols, i.e. the square of its standard error: 0.04/(1^2 + 2^2 + 3^2) = 1/350 in the example,
        k_map = MAP estimate of k (1.1128 in the example).
    Then
        k_map = k_ols - [v_ols/(v_ols + omega)]*(k_ols - theta).
    The amount of shrinkage is determined by v_ols/(v_ols + omega) = (1/350)/(1/350 + 4) = 1/1401. Thus, even though the data were generated by a very unlikely value of k relative to the prior, the precision of the least-squares estimate is so great relative to the prior that it dominates in the blending of the prior and the data to yield the posterior. The same thing should happen in the nonlinear case, and does happen with SAS and WinBUGS, but something breaks down in NONMEM's way of handling it. Jerry

    From: Wang, Yaning
    Date: Thu, December 9, 2004, 6:26 pm
    Dear all, it has been a very interesting topic; this discussion may lead to something quite significant. Jerry: I think it is too early to say NONMEM is worse than SAS. When you compared the SAS MAP estimate (khat = 0.944) with the NONMEM MAP estimate (khat = 9.78, independent of estimation method), you were only checking whether the NONMEM MAP estimate was following the method of weighted least squares with a pseudo-observation for the parameter. This was actually tested very well in the linear case in your example: NONMEM does seem to use this method to estimate khat, but it failed for the nonlinear case. Does this mean SAS is better at MAP estimation in nonlinear mixed modeling? Maybe not; see the following SAS PROC NLMIXED output. If I applied your SAS code for OLS and WLS (MAP) fitting in NONMEM, I could get exactly the same results: khat(OLS) = 0.94, khat(WLS) = 0.944. It seems to me that the unreasonable NONMEM MAP estimate (9.78) may be due to the linearization (integral approximation) used in NONMEM. I was really reluctant to think this way, because those approximation methods are used to estimate the structural parameters, and the empirical Bayesian estimates of the random effects should not be so complicated. But when I wrote down the linearized model for your nonlinear example and applied the same SAS (or NONMEM) WLS MAP code to this linear model, I got khat = 9.82. Given the following WinBUGS results (Nick: the WinBUGS results don't match the NONMEM results at all):
        omega2: 0.04   0.09   0.094  0.095  0.1    0.5     4
        khat:   9.992  9.984  9.984  1.165  1.146  0.9708  0.9472
    there is clearly something wrong. Then I tried SAS PROC NLMIXED to see whether SAS can do a better job. Surprisingly (or as expected), if the FIRO (first-order) method is used, khat = 9.82, identical to the result above based on the linearized model. If the GAUSS, HARDY or ISAMP method is used, khat = 9.78, identical to the original NONMEM MAP result. I also used your linear case to make sure the SAS code was working as I expected: under the linear model, SAS PROC NLMIXED also does the same thing as weighted least squares with a pseudo-observation for the parameter. It seems that NONMEM implements FO in a different way from SAS, at least for MAP estimates, and achieves MAP estimates similar to those of the more computation-intensive methods in SAS. But none of these nonlinear mixed-effect modeling tools, SAS or NONMEM (someone can try S and report the outcome here), is handling nonlinear models appropriately for MAP estimates under this sufficiently stressful situation. Given this observation, the impact of this surprising outcome may deserve more study.

    Original model: y = d*exp(-k*t). (I use d here to replace 10, to avoid confusion, because the prior mean of k is accidentally also 10.)
    Linearized model (first-order Taylor expansion around k = 10): y = d*exp(-10*t)*(1 - (k-10)*t).

    SAS code:
        proc nlin data=one;
          model wobs = dat*10*exp(-10*t)*(1-(k-10)*t)/0.2 + (1-dat)*k/2;
          parm k=0.1;
          output out=out1 pred=pred;
        run;

    NONMEM code:
        $PROB BAYES TEST
        $DATA data_nlWLS.CSV IGNORE=#
        $INPUT DAT ID T OBS WOBS=DV
        $PRED
          K=THETA(1)
          F=DAT*10*EXP(-10*T)*(1-(K-10)*T)/0.2 + (1-DAT)*K/2
          IPRED=F
          Y=F+ERR(1)
        $ESTIMATE MAXEVALS=9999
        $THETA 0.1
        $OMEGA 0.04
        $TABLE K IPRED FILE=WLS.FIT

    SAS NLMIXED procedure (when using the other methods, increase QPOINTS to 250):
        data oneb; set one; if dat=1; run;
        proc nlmixed data=oneb cov corr method=FIRO;
          parms TVK=10 s2k=4 s2=0.04;
          bounds [excerpt truncated]

    [From Nick Holford's attachment "Bayesian Estimation with NONMEM V RUV FIXED.pdf":]

    DATA method:
        $PROB BAYES TEST
        $DATA data.csv IGNORE=#
        $INPUT ID TIME OBS=DV DVID
        $ESTIMATE MAXEVALS=9990 METHOD=COND SLOW
        $THETA 10       ; K
        $OMEGA 0 FIX    ; PPVK
        $SIGMA 0.04 FIX ; RUV
        $SIGMA 4 FIX    ; Kprior uncertainty
        $PRED
          K=THETA(1)+ETA(1)
          C=10*EXP(-K*TIME)
          IF (DVID.EQ.1) THEN ; prior
            Y=THETA(1)+ERR(2)
          ENDIF
          IF (DVID.EQ.2) THEN ; obs
            Y=C+ERR(1)
          ENDIF

    PRIOR method:
        $PROB BAYES TEST
        $DATA prior.csv IGNORE=#
        $INPUT ID TIME OBS=DV
        $ESTIMATE MAXEVALS=9990 METHOD=COND SLOW
        $THETA 10       ; K
        $OMEGA 0 FIX    ; PPVK
        $SIGMA 0.04 FIX ; RUV
        ; prior
        $THETA 10 FIX   ; Kprior
        $OMEGA 4 FIX    ; Kprior uncertainty
        $SUBR PRIOR=prior.for
        $PRED
          K=THETA(1)+ETA(1)
          C=10*EXP(-K*TIME)
          Y=C+ERR(1)
    MAXEVAL=0 method:
        $PROB BAYES TEST
        $DATA prior.csv IGNORE=#
        $INPUT ID TIME OBS=DV
        $ESTIMATE MAXEVALS=0 METHOD=COND SLOW
        $THETA 10   ; K
        $OMEGA 4    ; Kprior uncertainty
        $SIGMA 0.04 ; RUV
        $PRED
          K=THETA(1)+ETA(1)
          C=10*EXP(-K*TIME)
          Y=C+ERR(1)
        $TABLE ID TIME K

    data.csv:
        ID,Time,Obs,DVID
        1,0,10,1
        1,1,3.87,2
        1,2,1.66,2
        1,3,0.44,2

    prior.csv:
        ID,Time,Obs
        1,1,3.87
        1,2,1.66
        1,3,0.44

    prior.for:
        SUBROUTINE PRIOR (ICALL,CNT,NTHP,NETP,NEPP)
        DOUBLE PRECISION CNT
        IF (ICALL.LE.1) THEN
          NTHP=1
          NETP=1
          NEPP=1
        ENDIF
        CALL NWPRI (CNT)
        RETURN
        END

    Nick Holford, Dept Pharmacology & Clinical Pharmacology, University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand. email n.holford@auckland.ac.nz, tel +64 9 373-7599 x86730, fax 373-7556, http://www.health.auckland.ac.nz/pharmacology/staff/nholford

    From: Ludden, Thomas (luddent@iconus.com)
    Date: Fri, December 10, 2004, 11:48 am
    Nick, Jerry, Leonid, Steve, Yaning et al.: The objective-function surface for this problem is rather ugly (see the tabulation below). It is generally flat around a K value of 10, but there is a minor maximum at about K = 7.2-7.25. Some gradient-search algorithms will have trouble with this. The Solver in Excel (obviously not the gold standard) gives a K estimate of 9.78 with initial estimates from 7.25 up to 10; with initial estimates of 1 to 7.2, the K estimate is 0.944. The initial estimates for NONMEM's POSTHOC ETA search are not readily accessed, so I have not tested NONMEM itself, but I have been in contact with Stuart Beal regarding this question. Jerry, would you please try the SAS procedure with an initial estimate of 10, with and without estimation of the residual variance? It would be interesting to see whether SAS's search procedure can get past the minor maximum.

        Initial K  Initial OFV  Final K  Final OFV
        10         448.0646546  9.781    448.06
        9          448.1637275  9.781    448.06
        8          448.5035677  9.781    448.06
        7.5        448.6452926  9.781    448.06
        7.25       448.6697797  9.781    448.06
        7.2        448.6687872  0.944    21.603
        7.1        448.6595588
        7          448.6393969  0.944    21.603
        6.5        448.3096192
        6          447.3663721  0.944    21.603
        5.5        445.3349748
        5          441.4403283
        4.5        434.4249225
        4          422.270891
        3.5        401.8019287
        3          368.1922873
        2.5        314.6254852
        2          233.1745355
        1.5        121.66397
        1          23.59852696  0.944    21.603
        0.9        23.0073974
        0.8        39.55006733
        0.7        83.27305134
        0.6        170.024752
        0.5        325.154084
        0.4        589.7760326
        0.3        1031.4946
        0.1        2973.924151  0.944    21.603

    From: jerry.nedelman@pharma.novartis.com
    Date: Sun, December 12, 2004, 11:50 pm
    Tom, SAS doesn't get past it either. That was with the residual SD fixed. When the residual SD was estimated (always starting from 0.2), the estimate of k was 10 for initial guesses from 10 down to 3, but the estimated residual SD was then weird: essentially plus-or-minus infinity. When the initial guess of k was 2 or 1, SAS failed to converge, although it stopped with k near 0.94. Jerry

    Results:
        Initial k  Final k  SSE    Res SD
        10         9.78     448.1  fixed
        7.25       9.78     448.1  fixed
        7.2        0.94     21.6   fixed

        Initial k  Final k  SSE       Initial Res SD  Final Res SD
        10         10       8.49E-14  0.2             14526239
        7          10       2.86E-19  0.2             7.92E+09
        3          10       3.61E-20  0.2             2.23E+10
        2          0.9847   21.0218   0.2             0.3671  (failure to converge)
        1          0.9578   21.7932   0.2             0.1957  (failure to converge)

    Code:
        data one;
          * dat: indicator for whether it is real data (1) or an extra observation for the parameter (0);
          * t:   the t of y = exp(-k*t);
          * obs: observation (real data) or theta;
          * real data generated by S-Plus: 10*exp(-c(1,2,3)) + 0.2*rnorm(3,0,1);
          input dat t obs;
          cards;
        1 1 3.87
        1 2 1.66
        1 3 0.44
        0 999 10
        ;
        run;
        title 'Weighted least squares MAP, initial 10, fixed residual SD';
        proc nlin data=one;
          y = dat*obs/0.2 + (1-dat)*obs/2;
          model y = dat*10*exp(-k*t)/0.2 + (1-dat)*k/2;
          parm k=10;
        run;
        title 'Weighted least squares MAP, initial 7.25, fixed residual SD';
        proc nlin data=one;
          y = dat*obs/0.2 + (1-dat)*obs/2;
          model y = dat*10*exp(-k*t)/0.2 + (1-dat)*k/2;
          parm k=7.25;
        run;
        title 'Weighted least squares MAP, initial 7.2, fixed residual SD';
        proc nlin data=one;
          y = dat*obs/0.2 + (1-dat)*obs/2;
          model y = dat*10*exp(-k*t)/0.2 + (1-dat)*k/2;
          parm k=7.2;
        run;
        title 'Weighted least squares MAP, initial 10, estimate residual SD';
        proc nlin data=one;
          y = dat*obs/res_sd + (1-dat)*obs/2;
          model y = dat*10*exp(-k*t)/res_sd + (1-dat)*k/2;
          parm k=10 res_sd=0.2;
        run;
        title 'Weighted least squares MAP, initial 7, estimate residual SD';
        proc nlin data=one;
          y = dat*obs/res_sd + (1-dat)*obs/2;
          model y = dat*10*exp(-k*t)/res_sd + (1-dat)*k/2;
          parm k=7.25 res_sd=0.2;
        run;
        title 'Weighted least squares MAP, initial 3, estimate residual variance';
        proc nlin data=one;
          y = dat*obs/res_sd + (1-dat)*obs/2;
          model y = dat*10*exp(-k*t)/res_sd + (1-dat)*k/2;
          parm k=3 res_sd=0.2;
        run;
        title 'Weighted least squares MAP, initial 2, estimate [excerpt truncated]
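The objective-function surface tabulated in this thread can be reproduced directly from the quantities the posters give (observations 3.87, 1.66, 0.44 at t = 1, 2, 3; residual SD 0.2; prior N(10, SD 2)). A short Python sketch of the weighted least-squares MAP objective shows the two basins and the minor maximum near K = 7.2-7.25 that traps gradient searches:

```python
import math

# Data and prior from Jerry's example: observations simulated with k = 1
# (residual SD 0.2) and a prior on K with mean 10, SD 2.
times = [1.0, 2.0, 3.0]
obs = [3.87, 1.66, 0.44]

def ofv(k):
    """Weighted least-squares MAP objective: data term plus prior penalty."""
    data_term = sum(((y - 10.0 * math.exp(-k * t)) / 0.2) ** 2
                    for t, y in zip(times, obs))
    prior_term = ((k - 10.0) / 2.0) ** 2
    return data_term + prior_term

print(round(ofv(0.944), 3))  # 21.603 -- the global minimum
print(round(ofv(9.781), 2))  # 448.06 -- the flat region near the prior mean
# Minor local maximum around K = 7.2-7.25 separating the two basins:
print(ofv(7.25) > ofv(7.1), ofv(7.25) > ofv(8.0))  # True True
```

The values match Tom Ludden's tabulation, confirming that the 9.78 and 0.944 answers are simply two local minima of the same objective.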

    Original URL path: http://nonmem.org/nonmem/nm/99dec062004.html (2016-04-25)

  • [NMusers] problem with simulation
    [end of Kai Wu's control stream; the operators in this $DES fragment were lost in the archived excerpt:] CortM 2 A1 Cortf CORTFM MW UPP CP 0.12 Cortf RRBC RRBE DOW EC50CL CP 0.12 Cortf RRBC RRBE EFFL 1 UPP DOW
        DADT(4) = KIN*EFFL - KOUT*A(4)
        $SIGMA 0 FIXED 0 FIXED 0 FIXED
    And the data file is basically set up as:
        ID TIME AMT  DV CMT EVID OCC
        1  0    .    .  4   0    1
        1  0    .    .  5   0    1
        1  24   1000 .  1   1    1
        1  24   .    .  2   0    1
        1  24   .    .  4   0    1
        1  24   .    .  5   0    1
    Thanks in advance. Kai Wu, Department of Pharmaceutics, University of Florida, Gainesville, FL. Office phone 352-846-2730.

    From: Nick Holford
    Date: Tue, November 30, 2004, 2:27 pm
    Kai Wu, I am not really sure what differences you find between the NONMEM and Scientist simulations, but it looks like the initial conditions are incorrect. This could explain why the AUCs in this CMT are low. In the fragment of data you give there is no AMT for CMT 4 and 5 at time zero, so it seems you are assuming that the initial state of these compartments is zero. That is unlikely for a physiological turnover model. So I suggest you add two records for each subject at time zero; an AMT of 1 is put in each compartment to initialize it, e.g.:
        ID TIME AMT DV CMT EVID OCC
        1  0    1   .  4   1    1   ; initialize CMT 4
        1  0    1   .  5   1    1   ; initialize CMT 5
        1  0    .   .  4   0    1   ; observation for CMT 4
        1  0    .   .  5   0    1   ; observation for CMT 5
    Then in $PK you should use the bioavailability-fraction trick to get the correct initial value in these compartments. You don't give the DADT for the second PD compartment (5), so I am guessing it is a simple turnover model:
        F4 = KIN/KOUT     ; compartment 4 initial value
        F5 = KIN5/KOUT5   ; compartment 5 initial value
    The amount in these compartments at time zero is then calculated from the AMT (with a nominal value of 1 at time 0) times the bioavailability fraction, i.e. the desired initial value. When using this model for simulation the run times are probably not very long, but if you use it for estimation you may be able to shorten run times by writing more efficient code. All the code in $DES is computed many times in order to solve the differential equations, so it is a good idea to keep all unnecessary calculations out of this block. E.g., I would write this in $PK and remove these constant assignments from $DES:
        IF (NEWIND.LE.1) THEN ; this is only executed once per subject
          KTC = 30000000
          KALB = 5000
          QTC = 0.0000007
          QALB = 0.00055
          MW = 362.47E-6
        ENDIF
    You could also compute DCP = CP*0.12 just once in $DES and use this value instead of multiplying CP by 0.12 in several different places in $DES.
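The point about initializing a turnover compartment can be illustrated numerically. A small Python sketch (hypothetical KIN and KOUT values, simple Euler integration — not NM-TRAN) shows that starting at A(0) = KIN/KOUT keeps the compartment at its baseline, while starting at zero forces it to climb toward steady state, which would bias early predictions and AUC:

```python
# Hypothetical turnover parameters: baseline = KIN/KOUT = 20.
KIN, KOUT = 10.0, 0.5

def simulate(a0, t_end=24.0, dt=0.001):
    """Euler integration of dA/dt = KIN - KOUT*A from the initial value a0."""
    a = a0
    for _ in range(int(t_end / dt)):
        a += (KIN - KOUT * a) * dt
    return a

a_ss = simulate(KIN / KOUT)  # initialized at steady state: stays at baseline
a_zero = simulate(0.0)       # initialized at zero: must climb toward baseline
print(round(a_ss, 6), round(a_zero, 6))
```

With the steady-state initial value the derivative is exactly zero, so the trajectory is flat; this is what the bioavailability-fraction trick (F4 = KIN/KOUT) achieves in the NONMEM data file.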

    Original URL path: http://nonmem.org/nonmem/nm/99nov302004.html (2016-04-25)

  • [NMusers] Determining metabolite clearance fraction
    […] I also tried restricting the fraction of K that arises from the non-M1, non-M2 metabolites as FM3 = 1 - FM1 - FM2. In both this approach and that below, I obtain estimates for FM1 + FM2 that are > 1.

    [Leonid Gibiansky replied:] In the other variant, if you use FM3 = 1 - FM1 - FM2, is this how it was coded? FM3 = 1 - FM1 - FM2 is incorrect: you must ensure that FM3 > 0. I would do the following:
        FM1 = THETA(3)/(1 + THETA(3) + THETA(5))
        FM2 = THETA(5)/(1 + THETA(3) + THETA(5))
        FM3 = 1/(1 + THETA(3) + THETA(5))  ; do not put this into the code, it is implicit
        DADT(2) = A(1)*KA - A(2)*K         ; uses FM1 + FM2 + FM3 = 1
        DADT(3) = A(2)*K*FM1 - A(3)*KM1    ; eq. for 4OH metabolite compartment
        DADT(4) = A(2)*K*FM2 - A(4)*KM2    ; eq. for NDESTAM metabolite compartment
    An alternative is to use
        K23 = THETA(.)   ; to metabolite M1
        K24 = THETA(.)   ; to metabolite M2
        K20 = THETA(.)   ; direct, or via other metabolites, elimination
        DADT(2) = A(1)*KA - A(2)*(K23 + K24 + K20)
        DADT(3) = A(2)*K23 - A(3)*KM1      ; eq. for 4OH metabolite compartment
        DADT(4) = A(2)*K24 - A(4)*KM2      ; eq. for NDESTAM metabolite compartment
    The drug effect at week 12 and random effects can be added to all or some of K23, K24, K20. The fractions F1, F2, F3 can be re-computed from the Kij values. This should solve the problem of F1 + F2 + F3 = 1. Leonid

    From: Girard, Pascal (pascal.girard@adm.univ-lyon1.fr)
    Date: Mon, November 29, 2004, 11:44 am
    Leonid is correct about FM3, which should be equal to 1/(1 + THETA(3) + THETA(5)) rather than 1 - FM1 - FM2, which I incorrectly copied and pasted from Paul's e-mail in my own previous message.
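Leonid's parameterization guarantees by construction that the three fractions are each in (0, 1) and sum to 1 for any positive THETA values. A quick Python check (with arbitrary, hypothetical THETA(3) and THETA(5) values):

```python
def fractions(th3, th5):
    """Leonid's parameterization: each fraction in (0, 1), summing to 1."""
    denom = 1.0 + th3 + th5
    return th3 / denom, th5 / denom, 1.0 / denom

fm1, fm2, fm3 = fractions(2.0, 0.5)  # hypothetical THETA(3), THETA(5)
print(round(fm1 + fm2 + fm3, 12))    # 1.0, by construction

# The naive coding FM3 = 1 - FM1 - FM2 gives an invalid (negative) fraction
# whenever the separately estimated FM1 and FM2 sum to more than 1:
naive_fm3 = 1.0 - 0.7 - 0.6
print(naive_fm3 < 0)                 # True
```

This is the same ratio-of-positives construction regardless of language; in NM-TRAN the positivity of THETA(3) and THETA(5) is enforced with lower bounds of 0.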

    Original URL path: http://nonmem.org/nonmem/nm/98nov292004.html (2016-04-25)

  • [NMusers] Model estimate of dose
    […] fax 373-7556, http://www.health.auckland.ac.nz/pharmacology/staff/nholford

    From: Johan Rosenborg (astrazeneca.com)
    Date: Tue, December 7, 2004, 9:39 am
    Dear Anthe, Sam and Nick, thank you for your concern in this matter and your suggestion to moderate the measured dose via F. Sorry for my late feedback. A further clarification of my question may be appropriate. A radioactive substance was administered via the lungs. Radioactivity was measured at 2-min intervals with a gamma camera on several occasions up to a few hours post-dose; the primary estimate of the amount deposited in the airways (the dose) was based on the first measurement. What I would like to do is fit a model to the gamma-camera measurements as a means to estimate the amount at time zero, by back-extrapolation so to say. If the radioactive tracer is deposited in CMT n, it should be possible to estimate the amount (the dose) at time zero as follows: A(n) = THETA(X)*EXP(ETA(Y))*AMT, where a unit dose is assigned to the deposition compartment in the data file, i.e. AMT=1 in the record where CMT=n, TIME=0 and EVID=4. Any suggestion on how to proceed with this idea?
    Johan Rosenborg, AstraZeneca R&D Lund, Experimental Medicine, S-221 87 Lund, Sweden. Tel +46 46 33 65 99, fax +46 46 33 71 91, e-mail johan.rosenborg@astrazeneca.com

    From: Bachman, William (bachmanw@iconus.com)
    Date: Tue, December 7, 2004, 10:57 am
    Maybe I've missed something in the previous discussions, but I still think Nick's explanation holds. Put your unit dose in the data file and estimate Fn; e.g., if you assume the dose goes to CMT 1:
        $INPUT C ID TIME DV AMT CMT
        $SUBROUTINE ADVAN2 TRANS2
        $PK
          TVCL=THETA(1)
          CL=TVCL*EXP(ETA(1))
          TVV=THETA(2)
          V=TVV*EXP(ETA(2))
          TVKA=THETA(3)
          KA=TVKA*EXP(ETA(3))
          S2=V
          TVF1=THETA(4)
          F1=TVF1*EXP(ETA(4))
    F1 is then the estimated dose. Appropriate constraints on the parameter estimates may be needed, but there is nothing that inherently dictates that F1 can't be greater than 1. Bill

    From: Nick Holford (n.holford@auckland.ac.nz)
    Date: Tue, December 7, 2004, 3:13 pm
    Johan, with the additional information you provide, the problem seems quite simple, but it does depend on what assumptions you make about what the gamma camera is measuring. If the gamma camera is calibrated in such a way that it measures the total AMOUNT of radioactivity, in the same units as the actually administered dose (e.g. curies or becquerels or whatever), then the problem can be solved using $PRED:
        $PRED
          K = THETA(k)*EXP(ETA(k))
          DOSE = THETA(dose)*EXP(ETA(dose))
          Y = DOSE*EXP(-K*TIME) + EPS(1)
    If the […]
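Nick's $PRED model is a mono-exponential whose intercept is the deposited amount, so the back-extrapolation Johan describes amounts to a log-linear regression. A Python sketch with made-up, noise-free gamma-camera readings (dose 100 in arbitrary units, K = 0.3 per hour — both hypothetical):

```python
import math

# Made-up, noise-free gamma-camera readings: A(t) = dose*exp(-k*t)
# with dose = 100 (arbitrary units) and k = 0.3 per hour.
times = [0.5, 1.0, 1.5, 2.0, 3.0]
amounts = [100.0 * math.exp(-0.3 * t) for t in times]

# Ordinary least squares on ln A(t) = ln(dose) - k*t.
n = len(times)
x_bar = sum(times) / n
logs = [math.log(a) for a in amounts]
y_bar = sum(logs) / n
slope = (sum((t - x_bar) * (y - y_bar) for t, y in zip(times, logs)) /
         sum((t - x_bar) ** 2 for t in times))
k_hat = -slope
dose_hat = math.exp(y_bar - slope * x_bar)  # back-extrapolated amount at t = 0
print(round(k_hat, 6), round(dose_hat, 6))  # 0.3 100.0
```

With real data the NONMEM formulation is preferable, since it estimates K and DOSE jointly with their between-occasion variability; this sketch only shows why the time-zero intercept is identifiable from the decay curve.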

    Original URL path: http://nonmem.org/nonmem/nm/99nov292004.html (2016-04-25)

  • [NMusers] Permutation test with small number of possible permutations
    […] New Zealand. email n.holford@auckland.ac.nz, tel +64 9 373-7599 x86730, fax 373-7556, http://www.health.auckland.ac.nz/pharmacology/staff/nholford

    From: Leonid Gibiansky
    Date: Fri, November 26, 2004, 8:47 pm
    Samer, actually there will be only 35 different permutations: the permutation 1234-versus-5678 is the same as 5678-versus-1234. The code below gives them all:
        N <- 0
        i <- 1
        for (j in (i+1):6)
          for (k in (j+1):7)
            for (n in (k+1):8)
              if (i < j && j < k && k < n) {
                N <- N + 1
                print(paste("Permutation", N, "class A subjects:", i, j, k, n))
              }
    A p-value based on many (say, 1000) random permutations should be approximately the same as a p-value based on all (35, in this case) permutations. Leonid

    From: Mouksassi, Mohamad-Samer (mohamad.samer.mouksassi@umontreal.ca)
    Date: Fri, November 26, 2004, 11:05 pm
    Leonid, first, thanks for the script. Is there a general formula for the number of permutations? If class A is assigned to 5678 or to 1234, it would be different, because we would then have
        A A A A B B B B
        B B B B A A A A
        (ID: 1 2 3 4 5 6 7 8)
    These are not equivalent — am I wrong? The p-value should be the same, but the variance of it would be much bigger, as it is p(1-p)/n: for a computed p of 0.05, sd(p) = 0.00689 for n = 1000; sd(p) = 0.036 for n = 35 (too large to be useful); sd(p) = 0.026 for n = 70. Nick: maybe I used the wrong word ("criticized"). Stu Beal said that this may not hold in all settings, and he added that "it is instructive to consider another idea for calibrating the critical value": the posterior predictive check. Samer

    From: Leonid Gibiansky (leonidg@metrumrg.com)
    Date: Sat, November 27, 2004, 1:48 am
    Samer, "A A A A B B B B" and "B B B B A A A A" are equivalent: the solutions will differ by the sign of the effect or, equivalently, by the class label only (for example, CL_A vs CL_B versus CL_B vs CL_A). A general formula for the number of permutations for two classes A and B is N!/(K!(N-K)!), where N is the number of subjects and K is the number of elements in class A. But in this particular case the numbers of elements in classes A and B are the same, which creates an additional symmetry. This symmetry can be used to reduce the number of permutations. You can use 70 permutations if you like, but the result will be the same. If you like to use 70, just replace the first line of the script (i <- 1) by "for (i in 1[…]
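Leonid's counting argument can be checked directly: there are C(8,4) = 70 ways to assign the class A label, halved to 35 by the A/B label symmetry (equivalently, by always keeping subject 1 in class A, as his script does). In Python:

```python
from itertools import combinations
from math import comb

# All ways to assign 4 of 8 subjects to class A:
splits = list(combinations(range(1, 9), 4))
print(len(splits), comb(8, 4))  # 70 70

# Swapping the A and B labels only flips the sign of the effect, so each
# split and its complement are equivalent; fixing subject 1 in class A
# enumerates exactly one member of each pair:
distinct = [s for s in splits if 1 in s]
print(len(distinct))            # 35
```

This also makes Samer's variance point concrete: the permutation p-value is resolved in steps of 1/35, so with only 35 distinct relabelings the test cannot distinguish p-values finer than about 0.03.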

    Original URL path: http://nonmem.org/nonmem/nm/99nov252004.html (2016-04-25)

  • [NMusers] Simulation problem
    [excerpt of the simulation data records, as archived (formatting lost):]
    0 2 0 1 0 1 0 2 0 2 0 1 0 3 0 1 0 1 0 3 0 2 0 1 0 4 0 1 0 1 0 4 0 2 0 1 0 5 0 1 0 1 0 5 0 2 0 1 0 6 0 1 0 1 0 6 0 2 0 1 0 7 0 1 0 1 0 7 0 2 0 1 0 8 0 1 0 1 0 8 0 2 0 1 0 9 0 1 0 1 0 9 0 2 0 1 0 10 0 1 0 1 0 10 0 2 0
    Venkatesh Atul Bhattaram, Pharmacometrics, DPE-1, OCPB, CDER, FDA

    From: Nick Holford (n.holford@auckland.ac.nz)
    Date: Mon, November 22, 2004, 2:05 pm
    Manoj, first of all, if you want help, please do not just say "it's not running, so obviously there is something wrong that I am doing" without giving details of what goes wrong. I cannot see anything obviously wrong with your code for simulation, but without any clues to the error I haven't looked very closely. If you are doing $SIM ... ONLYSIM, there is no need to add FIX to all the parameter records: simulation implicitly means that the parameters remain fixed at their original values. Simulation and estimation can be done at the same time by including both a $SIM and an $EST record in the control stream and removing ONLYSIMULATION from the $SIM record. If you remove all $OMEGA records, then NM-TRAN does not understand $SIGMA, because NONMEM only recognizes the second level of random effects (implied by $SIGMA) if there are $OMEGA records for the first level. If you remove all $OMEGA records and references to ETA, change $SIGMA to $OMEGA, and change ERR to ETA, then NM-TRAN will accept this as a non-population problem. However, the simplest way to simulate an individual is to fix all OMEGA estimates to zero. This fools NM-TRAN into thinking you are doing a population problem.
    Nick Holford, Dept Pharmacology & Clinical Pharmacology, University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand. email n.holford@auckland.ac.nz, tel +64 9 373-7599 x86730, fax 373-7556, http://www.health.auckland.ac.nz/pharmacology/staff/nholford

    From: Manoj Khurana (manoj2570@yahoo.com)
    Date: Mon, November 22, 2004, 2:13 pm
    Hello Dr. Atul, thanks for your prompt reply. It's good to know that this code is fine; I had set the data exactly as you suggested, but the run couldn't execute. I am trying it again to see if it is a machine problem. Thanks, Manoj

    From: Manoj Khurana (manoj2570@yahoo.com)
    Date: Mon, November 22, 2004, 3:02 pm
    Dr. Holford, thanks for your words; my future e-mails will be appropriate. Things are OK and I am able to run the code now. My […]

    Original URL path: http://nonmem.org/nonmem/nm/98nov222004.html (2016-04-25)

  • [NMusers] When should a long run be aborted?
    [end of Paul Hutson's control stream:]
        ... = THETA(3)*EXP(ETA(3))   ; VC
        FME = THETA(4)               ; FME
        KME = THETA(5)*EXP(ETA(4))   ; KME
        S2 = V2/1000                 ; scaling for parent
        S3 = V2/1000
        CL = K*V2
        AUC = AMT/CL
        CLF = CL/FME
        $DES
          DADT(1) = -A(1)*KA                ; gut
          DADT(2) = A(1)*KA - A(2)*K        ; parent
          DADT(3) = A(2)*K*FME - A(3)*KME   ; metabolite
        $ERROR
          FX = 0
          IF (F.EQ.0) FX = 1
          W = F + FX
          IPRED = F
          IRES = DV - IPRED
          IWRES = IRES/W
          Y = F*EXP(EPS(1)) + EPS(2)
          IF (CMT.EQ.3) THEN
            Y = F*EXP(EPS(3)) + EPS(4)
          ENDIF
        $THETA (1000 FIXED)          ; KA
        $THETA (0.001, 0.542, 10)    ; K
        $THETA (0.001, 1320, 100000) ; V2
        $THETA (0.001, 0.829, 1)     ; FME
        $THETA (0.01, 3.11, 100)     ; KME
        $OMEGA 0.189
        $OMEGA 0.194
        $OMEGA 0.459
        $OMEGA 0.01
        $SIGMA 0.619 0.343           ; SIGP
        $SIGMA 0.384 0.128           ; SIGM
        $ESTIMATION METHOD=1 SIGDIGITS=3 MAXEVAL=9999 POSTHOC PRINT=10 NOABORT MSFO=exem5.msf
        $COVR
        $TABLE ID TIME KA K V2 FME KME CL AUC CLF NOPRINT FILE=exem5.fit
        $SCAT ID VS DV CMT CL
        $SCAT DV VS TIME BY CMT
        $SCAT PRED VS TIME BY CMT
        $SCAT RES VS TIME BY CMT
        $SCAT IWRES VS DV BY CMT
        $SCAT PRED VS DV BY CMT UNIT

    Data (decimal points and missing-value dots lost in the archived excerpt; reconstructed):
        CID Subject TIME AMT II SS ADDL DV    CMT EVID
        28  1313    0    25  24 2  2    .     1   1
        28  1313    0    .   .  .  .    0.424 2   0
        28  1313    0    .   .  .  .    0.102 3   0
        28  1313    0.5  .   .  .  .    26.8  2   0
        28  1313    0.5  .   .  .  .    4.29  3   0
        28  1313    1    .   .  .  .    10.9  2   0
        28  1313    1    .   .  .  .    2.89  3   0
        28  1313    2    .   .  .  .    4.45  2   0
        28  1313    2    .   .  .  .    1.12  3   0
        28  1313    4    .   .  .  .    1.24  2   0
        28  1313    4    .   .  .  .    0.443 3   0
        28  1313    6    .   .  .  .    0.749 2   0
        28  1313    6    .   .  .  .    0.257 3   0
        28  1313    24   .   .  .  .    0.433 2   0
        28  1313    24   .   .  .  .    0.151 3   0
        28  1313    48   .   .  .  .    0.372 2   0
        28  1313    48   .   .  .  .    0.124 3   0
        10  1316    0    25  24 2  2    .     1   1
        10  1316    0    .   .  .  .    0.625 2   0
        10  1316    0    .   .  .  .    0.169 3   0
        10  1316    0.5  .   .  .  .    45    2   0
        10  1316    0.5  .   .  .  .    4.41  3   0
        10  1316    1    .   .  .  .    25.4  2   0
        10  1316    1    .   .  .  .    3.46  3   0
        10  1316    2    .   .  .  .    7.81  2   0
        10  1316    2    .   .  .  .    1.54  3   0
        10  1316    4    .   .  .  .    4.09  2   0
        10  1316    4    .   .  .  .    1.01  3   0
        10  1316    6    .   .  .  .    2.72  2   0
        10  1316    6    .   .  .  .    0.722 3   0
        10  1316    24   .   .  .  .    0.616 2   0
        10  1316    24   .   .  .  .    0.181 3   0
        10  1316    48   .   .  .  .    0.627 2   0
        10  1316    48   .   .  .  .    0.184 3   0

    Paul Hutson, Pharm.D., Associate Professor (CHS), UW School of Pharmacy, 777 Highland Avenue, Madison, WI 53705-2222. Tel 608-263-2496, fax 608-265-5421, pager 608-265-7000 #7856.

    From: Nick […]

    Original URL path: http://nonmem.org/nonmem/nm/99nov222004.html (2016-04-25)


