  • Calculating statistics << CPN Tools Homepage
    …and identically distributed (IID): see Independent and identically distributed values. All of the statistics mentioned above can be accessed using the Data collector functions. In the following, let x_i, i = 1, ..., n, be the values that are returned by the observation, initialization, and stop functions for a data collector monitor.

    Untimed statistics. If untimed statistics are to be calculated for the data collector, then the sum and average of the n values are calculated in the following way:

        Sum(n)  = x_1 + x_2 + ... + x_n
        Avrg(n) = Sum(n) / n

    The remaining statistics are calculated in a similar way. If a data collector observes the same value twice, then the value influences the statistics twice, as expected. The following figure shows an example of data values that are used to calculate untimed statistics; the data values in that figure are x_1, ..., x_15 = 0, 1, 0, 1, 1, 2, 1, 0, 0, 1, 0, 1, 1, 0, 0. For these values, sum = 9 and avrg = 0.6.

    Timed statistics. Timed statistics differ from untimed statistics in that an interval of time is used to weight each observed value. The figure below shows an example of the intervals of time that are associated with observed data values; the line segment after an observed value corresponds to the interval of time that is used to weight the observed value. Assume that data value x_i is extracted at time t_i, for i = 1, ..., n. The interval [t_i, t_(i+1)] is used to weight the value x_i, that is, the weight of the value x_i is t_(i+1) - t_i. At precisely time t_i, variable x_i has no… (a small CPN ML sketch of these calculations follows below)

    Original URL path: http://cpntools.org/documentation/tasks/performance/calculating_statistics (2016-04-26)
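    A minimal Standard ML (CPN ML) sketch of the calculations described above, using the fifteen example values x_1, ..., x_15. The names xs, ts, tEnd and timedAvrg are illustrative only and are not part of the CPN Tools API.

        (* Untimed statistics: Sum(n) = x_1 + ... + x_n and Avrg(n) = Sum(n)/n. *)
        val xs   = [0, 1, 0, 1, 1, 2, 1, 0, 0, 1, 0, 1, 1, 0, 0];
        val sum  = foldl (op +) 0 xs;              (* = 9   *)
        val avrg = real sum / real (length xs);    (* = 0.6 *)

        (* Timed statistics: the value x_i, observed at time t_i, is weighted by
           the length of the interval [t_i, t_(i+1)]; the end of the last
           interval (tEnd) has to be supplied separately. *)
        fun timedAvrg (xs : real list, ts : real list, tEnd : real) =
          let
            val weights  = ListPair.map (op -) (tl ts @ [tEnd], ts)
            val weighted = ListPair.map (fn (x, w) => x * w) (xs, weights)
          in
            foldl (op +) 0.0 weighted / foldl (op +) 0.0 weights
          end;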


  • Data collector functions << CPN Tools Homepage
    …observed values. DC.ci(i) returns a record containing the i% confidence interval for the average of the observed values; i must be 90, 95, or 99. DC.first() returns the value that was observed first. DC.last() returns the most recently observed value. DC.get_stat_strings() returns the ML name of the monitor together with a record with all of the statistics for the monitor as strings. The following functions are also available for data collector monitors that calculate timed statistics: DC.starttime() returns the model time at which the first value was observed; DC.lasttime() returns the model time at which the most recent value was observed; DC.interval() returns the amount of model time that has elapsed since the monitor was first updated (a short usage sketch follows below).

    Return types. The type of the value that is returned by the functions above often depends on the type of the values returned by the observation function for the monitor. Note that when an observation function returns integer values, the values are converted to infinite integers, i.e. IntInf.int, in order to avoid Overflow exceptions. This means that several of the functions mentioned above, such as DC.min and DC.sum, will return values of type IntInf.int rather than values of type int. The table below provides an overview of the types of the values returned by the functions above; the first row indicates the type of the values returned by the observation function of the monitor (real, or int/IntInf.int).

        Function   | real | int or IntInf.int
        DC.avrg    | real | real
        DC.count   | int  | int
        DC.first   | real | IntInf.int
        DC.last    | real | IntInf.int
        DC.max     | real | IntInf.int
        DC.min     | real | IntInf.int
        DC.ss      | real | IntInf.int
        DC.ssd     | real | real
        DC.std     | real | real
        DC.sum     | real | IntInf.int
        DC.vari    | real | …

    Original URL path: http://cpntools.org/documentation/tasks/performance/data_collector_functions (2016-04-26)
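    Assuming a data collector monitor whose ML name is DC, as in the excerpt above, the calls below sketch how these functions might be used, for example when evaluated with the Evaluate ML tool; the value names on the left are illustrative.

        (* Illustrative only: DC is the ML name of a data collector monitor. *)
        val n     = DC.count();      (* number of observations (int)           *)
        val total = DC.sum();        (* IntInf.int for integer observation fns *)
        val mean  = DC.avrg();       (* always real                            *)
        val ci95  = DC.ci(95);       (* record with the 95% confidence interval
                                        for the average of the observed values *)
        (* Only for monitors that calculate timed statistics: *)
        val sofar = DC.interval();   (* model time elapsed since first update  *)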

  • Independent and identically distributed values << CPN Tools Homepage
    …waits in a queue. If packet i is in the queue when packet i+1 is added to the queue, then the waiting time for packet i+1 will depend, at least in part, on the waiting time for packet i. In this case the waiting times for packets i and i+1 are not independent. Similarly, the data values that are collected by a data collector are not necessarily identically distributed, i.e. they may not be random samples from the same probability distribution function. For example, a queue of packets waiting to be sent may be very short at the beginning of a simulation, which means that the waiting times for the first packets that pass through the queue are likely to be fairly small. However, towards the end of the simulation the queue of packets may be very long, in which case the waiting times for the last packets to be removed from the queue are likely to be large. In such a situation, the waiting times for the packets at the start of the simulation will probably not come from the same probability distribution function as the waiting times for the packets near the end of the simulation. Whether or not data values are independent and identically distributed (IID) will affect the accuracy of the confidence intervals that are calculated for a data collector at the end of a simulation. When confidence intervals are calculated for the average of a number of data values, it is assumed that the values are IID. If the values are not IID, then the confidence intervals may be inaccurate, and in particular they may be too short. Currently, no attempt is made to investigate whether the data values for a given data collector are IID during a single simulation. IID estimates…

    Original URL path: http://cpntools.org/documentation/tasks/performance/iid_data_values (2016-04-26)

  • Output management << CPN Tools Homepage
    …accessing the names of output directories can be found on the help page for Output management functions.

    Top output directory. The top output directory for a given net is determined by the Output directory option for the net. The output directory option is found under the net overview for the net in the Index. All other output directories that are created by CPN Tools will be created as subdirectories of the top output directory. By default, the top output directory will be a directory named output in the directory in which the net is saved.

    Simulation output directory. A simulation output directory is a directory in which all files from an individual simulation are saved. Single simulations are run when the Fast forward, Play, Single step, or Bind manually tools are applied. When running single simulations, the default simulation output directory will be the same as the top output directory. When Simulation replications are run, a new simulation output directory is created for each of the individual simulations. The simulation output directories are created in replication output directories (see below). In this case the simulation output directories will be named sim_n, where n is an integer that is increased each time a new simulation is run. Simulation output directories will typically contain the files generated by Write-in-file monitors, performance reports, and a simulation log files directory.

    Simulation log files directory. The data that is collected for Data collector monitors can be saved in log files. These log files are saved in a simulation log files directory. The simulation log files directory is named logfiles, and it can be found in a simulation output directory.

    Replication output directory. When Simulation replications are run, one replication output directory will be created for each set of simulation replications. Replication output directories… (a sketch of the resulting directory layout follows below)

    Original URL path: http://cpntools.org/documentation/tasks/performance/output_management (2016-04-26)
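    The listing below sketches the default layout described above when Simulation replications are run. The top directory name output, the sim_n names, and the logfiles name come from the excerpt; the name rep_n for a replication output directory is only an assumption for illustration, since the excerpt is cut off before the replication directory names are given.

        <directory in which the net is saved>/
          output/             top output directory (default name)
            rep_n/            replication output directory (name assumed here)
              sim_1/          simulation output directory for the first simulation
                logfiles/     simulation log files directory
                ...           performance report, Write-in-file monitor output
              sim_2/
                logfiles/
              ...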

  • Output management functions << CPN Tools Homepage
    …file in a directory that cannot be accessed, then a NotValidDirExn exception will be raised. Output.getModelDir() returns the path to the directory in which the model is saved. Output.getModelName() returns the name of the file in which the model is saved. Output.getTopOutputDir() returns the path to the top output directory. Output.getSimOutputDir() returns the path to the current simulation output directory. Output.getSimLogfileDir() returns the path to the current simulation log files directory. Output.getRepOutputDir() returns the path to the current replication output directory. Output.getRepLogfileDir() returns the path to the current replication log files directory.

    The following functions can be used to create the standard output directories. If an attempt to create a directory fails, then a NotValidDirExn s exception will be raised, where s is a string describing the error. Output.initTopOutputDir() attempts to create the top output directory if the directory does not already exist. Output.initSimOutputDir() calls Output.initTopOutputDir() and attempts to create a simulation output directory if the directory does not already exist. Output.initSimLogfileDir() calls Output.initSimOutputDir() and attempts to create a simulation log files directory if the directory does not already exist. Output.initRepOutputDir() calls Output.initTopOutputDir() and attempts to create a replication output directory if the directory does not already exist. Output.initRepLogfileDir() calls Output.initRepOutputDir() and attempts to create a replication log files directory if the directory does not already exist.

    The simulator will automatically invoke the functions to initialize (create) the output directories in the following situations: initSimOutputDir when a Simulation report is to be saved; initSimLogfileDir when there are active Write-in-file monitors or Data collector monitors with the Logging option selected; initRepOutputDir when multiple simulation replications are run; initRepLogfileDir when multiple simulation replications are run and there are Data collector monitors.

    Examples of use… (a hedged sketch follows below)

    Original URL path: http://cpntools.org/documentation/tasks/performance/output_management_functio (2016-04-26)
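    A hedged sketch in the spirit of the Examples of use section, using only the Output functions listed above. The file name mylog.txt, the error handling, and the assumption that the returned directory path ends with a separator are illustrative only.

        (* Create the simulation output directory (if necessary) and build a
           path for a file written by user code. *)
        val logPath =
          (Output.initSimOutputDir();
           Output.getSimOutputDir() ^ "mylog.txt")
          handle NotValidDirExn s =>
            (print ("Could not create output directory: " ^ s ^ "\n"); "mylog.txt");

        (* The directory and file name of the saved model can be inspected too: *)
        val modelPath = Output.getModelDir() ^ Output.getModelName();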

  • Performance options functions << CPN Tools Homepage
    …the list l. An error will occur if l contains values that are not in the set {90, 95, 99}. The following can be used to check or select which statistics will be saved in simulation and replication performance reports. In most cases it will not be necessary to use the following functions, because the net-specific options in the Overview of a net are used to select statistics. CPN'PerfReport.getIncludedUntimedStats() returns a record indicating which untimed statistics will be included in simulation performance reports. CPN'PerfReport.selectUntimedStats r selects the untimed statistics to be included in simulation performance reports based on the record r; the record r should have the same type as the record returned by CPN'PerfReport.getIncludedUntimedStats(). CPN'PerfReport.getIncludedTimedStats() returns a record indicating which timed statistics will be included in simulation performance reports. CPN'PerfReport.selectTimedStats r selects the timed statistics to be included in simulation performance reports based on the record r; the record r should have the same type as the record returned by CPN'PerfReport.getIncludedTimedStats(). CPN'PerfReport.getIncludedIIDStats() returns a record indicating which statistics will be included in replication performance reports. CPN'PerfReport.selectIIDStats r selects the statistics to be included in replication performance reports based on the record r; the record r should have the same type as the record returned by CPN'PerfReport.getIncludedIIDStats().

    Examples of use. The following can be used to check which timed statistics will be saved in simulation performance reports (a complementary selectTimedStats sketch follows below):

        val x = CPN'PerfReport.getIncludedTimedStats();

    If the Evaluate ML tool is applied to the code above, then the result would be the following:

        val x = {avrg = true, ci = false, count = true, first = false, interval = true, last = false, lasttime = false, max = true, min = true, ss = false, ssd = false, starttime = false, std = false, sum = false, vari = false}
          : {avrg : bool, ci : …

    Original URL path: http://cpntools.org/documentation/tasks/performance/performance_options_funct (2016-04-26)
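    A complementary sketch, assuming the CPN'PerfReport structure and the field names shown in the evaluated record above; the particular true/false choices are arbitrary, and every field of the record has to be supplied.

        (* Select a subset of the timed statistics for simulation performance
           reports (illustrative selection only). *)
        val _ = CPN'PerfReport.selectTimedStats
                  {avrg = true, ci = true, count = true, first = false,
                   interval = false, last = false, lasttime = false,
                   max = true, min = true, ss = false, ssd = false,
                   starttime = false, std = true, sum = false, vari = false};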

  • Performance output << CPN Tools Homepage
    …monitors, then some of the statistics that are calculated at the end of each simulation are used to calculate more reliable statistics that are based on data from the independent simulation replications. Suppose that a data collector named DC is defined for a net. A number of different statistics are calculated for that data collector at the end of a simulation. This means that the average of the data values at the end of the simulation is just one estimate of what the average for the data collector should be; running another simulation would most likely result in a different estimate of the average for the data collector in question. By running multiple simulations, several IID estimates of a particular value, such as average marking size or minimum list length, can be collected. The IID values for a number of different statistics are collected at the end of each simulation when running Simulation replications. These values are then saved in replication log files.

    For each data collector, the following values are saved in replication log files: count, minimum, maximum, average, and sum (only if the data collector calculates untimed statistics). The replication log files are saved in the Replication Log Files Directory. Below is an excerpt of a replication log file named Queue_Delay_avrg_iid.log. The i-th line shows the average for the Queue_Delay data collector after the i-th simulation completed when running Simulation replications. The values in a given replication log file are independent, and they are assumed to be identically distributed.

        493.770000 1
        337.810000 2
        241.250000 3

    Note that these values can also be found in the corresponding three simulation performance reports.

    Replication performance reports. Replication performance reports contain statistics that are calculated for the data values that are found in replication log files. Since it is very likely that the values in the replication log files are IID, the confidence intervals in replication performance reports are more likely to be accurate than those in the simulation performance reports. In the figure below, the row for avrg_iid in the section under Queue_Delay contains statistics for the values in the replication log file shown above. Replication performance reports are saved as HTML files. The net-specific Replication performance report options can be used to select which statistics should be included in replication performance reports; these options are found in the net overview in the index.

    Confidence interval files. The confidence intervals that can be found in replication performance reports are also saved in plain text files. Three different confidence intervals can be calculated: 90%, 95%, and 99%. All confidence intervals for a certain level, e.g. 95%, will be saved in a single file. Confidence interval files are saved in replication output directories and are named confidenceintervalsX.txt, where X is either 90, 95, or 99. Below is an excerpt from a confidence interval report named confidenceintervals95.txt. The first column indicates the data collector and… (a sketch of how such a confidence interval is computed follows below)

    Original URL path: http://cpntools.org/documentation/tasks/performance/performance_output (2016-04-26)
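    The excerpt does not show how the intervals themselves are computed. As an illustration only (not necessarily the exact method used by CPN Tools), a 95% confidence interval for the average of n IID values is usually formed from the sample mean, the sample standard deviation, and a Student-t quantile with n-1 degrees of freedom; the sketch below applies this to the three logged averages shown above.

        (* Illustrative textbook confidence interval for the mean of IID values. *)
        val obs  = [493.77, 337.81, 241.25];    (* avrg_iid values from the log *)
        val n    = real (length obs);
        val mean = foldl (op +) 0.0 obs / n;
        val ssd  = foldl (fn (x, s) => s + (x - mean) * (x - mean)) 0.0 obs;
        val std  = Math.sqrt (ssd / (n - 1.0)); (* sample standard deviation    *)
        val t    = 4.3027;                      (* t quantile, 97.5%, n-1 = 2
                                                   degrees of freedom           *)
        val half = t * std / Math.sqrt n;
        val ci95 = (mean - half, mean + half);  (* roughly (41.0, 674.2)        *)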

  • Tasks in CPN Tools << CPN Tools Homepage
    …save, Graphical layout, Editing the net structure, Exceptions, Graphical feedback, Long click, Syntax checking, Performance analysis, Monitors, Random distribution functions, Calculating statistics, Data collector functions, Independent and identically distributed values, Output management, Output management functions, Performance options functions, Performance output, Simulation, Change marking during simulation, Errors during simulation, Limitations, Manually choose bindings, Run a Simulation, Simulation feedback, Simulation replications, Simulation report, Simulation stop criteria, Simulator functions, Verification, Calculating the state…

    Original URL path: http://cpntools.org/documentation/tasks/start?do=addtobook (2016-04-26)


