builtins.object
    BinomialRandomVariable
    CauchyRandomVariable
    DiscreteRandomVariable
    ExtendedTriangularRandomVariable
    IncrementalQuantilesEstimator
    MontyHallGameSimulator
    QuasiRandomPointSet
        QuasiRandomFareyPointSet
        QuasiRandomKorobovPointSet
        QuasiRandomUniformPointSet
class BinomialRandomVariable(builtins.object)
BinomialRandomVariable(size=10, prob=0.5, seed=None, Debug=False)
Binomial random variable generator
Parameters:
- size := integer number > 0 of trials
- prob := float success probability
- seed := number for fixing the sequence generation.
Methods defined here:
- __init__(self, size=10, prob=0.5, seed=None, Debug=False)
- constructor for binomial random variables.
- random(self)
- Generates a binomial random number by
repeating *self.size* Bernoulli trials.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
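As the `random()` docstring above indicates, a binomial draw is a sum of Bernoulli trials. A minimal self-contained sketch of that technique (not the library's own code):

```python
import random

def binomial_random(size=10, prob=0.5, rng=random):
    # Sum of *size* independent Bernoulli(prob) trials, the technique
    # named in the random() docstring above.
    return sum(1 for _ in range(size) if rng.random() < prob)

random.seed(1)
sample = [binomial_random(size=10, prob=0.5) for _ in range(10000)]
mean = sum(sample) / len(sample)   # close to size * prob = 5.0
```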
class CauchyRandomVariable(builtins.object)
CauchyRandomVariable(position=0.0, scale=1.0, seed=None, Debug=False)
Cauchy random variable generator.
Parameters:
- position := median (default=0.0) of the Cauchy distribution
- scale := typical spread (default=1.0) with respect to the median
- seed := number (default=None) for fixing the sequence generation.
Cauchy quantile (inverse cdf) function:
Q(x|position,scale) = position + scale*tan[pi(x-1/2)]
.. image:: cauchyDistribution.png
    :alt: Cauchy Distribution
    :width: 500 px
    :align: center
Methods defined here:
- __init__(self, position=0.0, scale=1.0, seed=None, Debug=False)
- Constructor for Cauchy random variables.
- random(self)
- Generates a Cauchy random number.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
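The quantile function quoted above makes inverse-transform sampling immediate. A self-contained sketch of that technique (not the library's own code):

```python
import math
import random

def cauchy_random(position=0.0, scale=1.0, rng=random):
    # Inverse-transform sampling with the quantile function
    # Q(x) = position + scale * tan(pi * (x - 1/2)) quoted above.
    return position + scale * math.tan(math.pi * (rng.random() - 0.5))

random.seed(1)
sample = sorted(cauchy_random() for _ in range(10001))
median = sample[len(sample) // 2]   # near *position*; the mean does not converge
```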
class DiscreteRandomVariable(builtins.object)
DiscreteRandomVariable(discreteLaw=None, seed=None, Debug=False)
Discrete random variable generator
Parameters:
- discreteLaw := dictionary with discrete variable states
  as keys and corresponding probabilities
  as float values,
- seed := integer for fixing the sequence generation.
Example usage:
>>> from randomNumbers import DiscreteRandomVariable
>>> discreteLaw = {0:0.0478,
... 1:0.3349,
... 2:0.2392,
... 3:0.1435,
... 4:0.0957,
... 5:0.0670,
... 6:0.0478,
... 7:0.0096,
... 8:0.0096,
... 9:0.0048,}
>>> ## initialize the random generator
>>> rdv = DiscreteRandomVariable(discreteLaw,seed=1)
>>> ## sample discrete random variable and
>>> ## count frequencies of obtained values
>>> sampleSize = 1000
>>> frequencies = {}
>>> for i in range(sampleSize):
... x = rdv.random()
... try:
... frequencies[x] += 1
...     except KeyError:
... frequencies[x] = 1
>>> ## print results
>>> results = [x for x in frequencies]
>>> results.sort()
>>> counts= 0.0
>>> for x in results:
... counts += frequencies[x]
... print('%s, %d, %.3f, %.3f' % (x, frequencies[x], float(frequencies[x])/float(sampleSize), discreteLaw[x]))
0, 53, 0.053, 0.048
1, 308, 0.308, 0.335
2, 243, 0.243, 0.239
3, 143, 0.143, 0.143
4, 107, 0.107, 0.096
5, 74, 0.074, 0.067
6, 45, 0.045, 0.048
7, 12, 0.012, 0.010
8, 12, 0.012, 0.010
9, 3, 0.003, 0.005
>>> print ('# of valid samples = %d' % counts)
# of valid samples = 1000
Methods defined here:
- __init__(self, discreteLaw=None, seed=None, Debug=False)
- Constructor for discrete random variables with
- random(self)
- Generates discrete random values from a discrete random variable.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
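The frequency experiment above can be reproduced with the standard cumulative inverse-transform technique for discrete laws. A self-contained sketch (the module's internals may differ):

```python
import random
from bisect import bisect_right
from itertools import accumulate

def make_discrete_sampler(discreteLaw, rng=random):
    # Inverse-transform sampling: draw u in [0, 1) and return the first
    # state whose cumulative probability exceeds u.
    states = list(discreteLaw)
    cumulative = list(accumulate(discreteLaw[s] for s in states))
    total = cumulative[-1]
    def sample():
        return states[bisect_right(cumulative, rng.random() * total)]
    return sample

random.seed(1)
draw = make_discrete_sampler({0: 0.25, 1: 0.50, 2: 0.25})
frequencies = {0: 0, 1: 0, 2: 0}
for _ in range(10000):
    frequencies[draw()] += 1
```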
class ExtendedTriangularRandomVariable(builtins.object)
ExtendedTriangularRandomVariable(lowLimit=0.0, highLimit=1.0, mode=None, probRepart=0.5, seed=None, Debug=False)
Extended triangular random variable generator
Parameters:
- mode := most frequently observed value
- probRepart := probability mass distributed up to the mode
- seed := number for fixing the sequence generation.
.. image:: extTrDistribution.png
    :alt: Extended triangular distribution
    :width: 500 px
    :align: center
Methods defined here:
- __init__(self, lowLimit=0.0, highLimit=1.0, mode=None, probRepart=0.5, seed=None, Debug=False)
- Constructor for extended triangular random variables
- random(self)
- Generates an extended triangular random number.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
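Assuming the usual piecewise-linear triangular density generalized so that mass *probRepart* lies below the mode (an assumption about the distribution's form, not taken from the module's source), inverse-transform sampling can be sketched as:

```python
import math
import random

def ext_triangular_random(lowLimit=0.0, highLimit=1.0, mode=0.5,
                          probRepart=0.5, rng=random):
    # Hypothetical sketch (not the library's code): inverse-transform
    # sampling assuming mass *probRepart* lies in [lowLimit, mode] and
    # 1 - probRepart in [mode, highLimit], each leg with linear density.
    u = rng.random()
    if u < probRepart:
        return lowLimit + math.sqrt(u / probRepart) * (mode - lowLimit)
    return highLimit - math.sqrt((1.0 - u) / (1.0 - probRepart)) * (highLimit - mode)

random.seed(1)
sample = [ext_triangular_random() for _ in range(10000)]
belowMode = sum(1 for x in sample if x <= 0.5) / len(sample)   # ~ probRepart
```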
class IncrementalQuantilesEstimator(builtins.object)
IncrementalQuantilesEstimator(nbuf=1000, Debug=False)
*References*:
- John M. Chambers et al., Monitoring Networked Applications
  with Incremental Quantile Estimation, *Statistical Science* 2006 (4):463-475.
- William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery,
  *Numerical Recipes: The Art of Scientific Computing, Third Edition* (NR3),
  Cambridge University Press, Cambridge UK, 2007.
Python reimplementation (RB) from the C++/NR3 source code.
See Computational Statistics Course, Lecture 5: http://hdl.handle.net/10993/37870
Example usage:
>>> from randomNumbers import IncrementalQuantilesEstimator
>>> import random
>>> random.seed(1)
>>> iqAgent = IncrementalQuantilesEstimator(nbuf=100)
>>> # feeding the iqAgent with standard Gaussian random numbers
>>> for i in range(1000):
... iqAgent.add(random.gauss(mu=0,sigma=1))
>>> # reporting the estimated Gaussian quartiles
>>> print(iqAgent.report(0.0))
-2.961214270519158
>>> print(iqAgent.report(0.25))
-0.6832621550224423
>>> print(iqAgent.report(0.50))
-0.014392849958746522
>>> print(iqAgent.report(0.75))
0.7029655732010196
>>> print(iqAgent.report(1.00))
2.737259509189501
>>> # saving the iqAgent's state
>>> iqAgent.saveState('test.csv')
Methods defined here:
- __init__(self, nbuf=1000, Debug=False)
- *nbuf* := buffer size (default = 1000);
250 quantiles containing all percentiles from 0.10 to 0.90
and the heavy tails.
- add(self, datum)
- Assimilate a new value from the stream
- addList(self, listDatum, historyWeight=None)
- Assimilate a list of new values.
Parameter:
*historyWeight* takes decimal values in [0.0;1.0[ and
indicates a requested proportional weight of the history
wrt to the length of listDatum.
Is ignored when None (default).
- cdf(self, x=0)
- return proportion of data lower or equal to value x
- loadState(self, FileName='state.csv')
- Load a previously saved state of the estimator
- report(self, p=0.5)
- Return estimated *p*-quantile (default = median)
for the data seen so far
- reset(self)
- Reset the content of the estimator to the initial state.
- saveState(self, fileName='state.csv')
- Save the state of the IncrementalQuantileEstimator self instance
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
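For checking an incremental estimator on small streams, an exact but memory-hungry reference with the same `add`/`cdf`/`report` interface can be sketched (a hypothetical helper, not part of the module):

```python
import bisect
import random

class NaiveQuantiles:
    # Exact (non-incremental) reference: stores every datum in sorted
    # order. Same add/cdf/report shape as IncrementalQuantilesEstimator.
    def __init__(self):
        self.data = []

    def add(self, datum):
        bisect.insort(self.data, datum)

    def cdf(self, x=0):
        # proportion of data lower than or equal to x
        return bisect.bisect_right(self.data, x) / len(self.data)

    def report(self, p=0.5):
        # empirical p-quantile of the data seen so far
        i = min(int(p * (len(self.data) - 1) + 0.5), len(self.data) - 1)
        return self.data[i]

random.seed(1)
nq = NaiveQuantiles()
for _ in range(1000):
    nq.add(random.gauss(0, 1))
```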
class MontyHallGameSimulator(builtins.object)
MontyHallGameSimulator(numberOfDoors=3, numberOfClues=1)
Generalized Monty Hall game simulator:
https://en.wikipedia.org/wiki/Monty_Hall_problem
Suppose you're on a game show and you're given the choice of *n* doors:
behind one door is a car; behind the others, goats.
You pick a door, say No. 1, and the host, who knows what's behind the doors,
opens, as clues, *k* other doors that each hide a goat.
He then says to you, "Do you want to pick another closed door?"
Is it to your advantage to switch your choice?
! The number *n* of doors must be at least 3.
! The number *k* of clues cannot exceed the number of doors minus two,
namely all doors except the initially chosen one and the last closed door
behind which the car could be.
With *n* doors and *k* clues, the success probabilities become:
*1/n* when not switching, and *(n-1)/n(n-1-k)* when switching.
When the number of clues *k* = 0, both success probabilities coincide,
namely *1/n* = *(n-1)/n(n-1)*, so there is no reason to switch
the initial choice in this case. However, when 0 < *k* < *(n-1)*,
the complement probability of *1/n*, i.e. *(n-1)/n*, gets
divided by the number *(n-1-k) > 0* of still closed doors.
With positive clues, switching the initial choice hence becomes
the more advantageous the more clues one is given.
With the maximum of *k = n-2* clues, the success probability
when switching becomes *(n-1)/n*.
>>> from randomNumbers import MontyHallGameSimulator
>>> m = MontyHallGameSimulator(numberOfDoors=6,
... numberOfClues=4)
>>> m.simulate(numberOfTrials=1000,seed=1)
*******************************
Monty Hall game successes count
Number of doors: 6
Number of clues: 4
Sampling size : 1000
Random seed : 1
----------------------------------
Switched : 838 (0.8380); Theoretical probability: 0.8333
Not switched: 162 (0.1620); Theoretical probability: 0.1667
Methods defined here:
- __init__(self, numberOfDoors=3, numberOfClues=1)
- Initialize self. See help(type(self)) for accurate signature.
- simulate(self, numberOfTrials=10, seed=None, Comments=True)
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
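The switching probabilities above can be checked with a small self-contained simulation, independent of the module:

```python
import random

def monty_hall_trial(n=3, k=1, switch=True, rng=random):
    # One generalized game: n doors, host opens k goat doors as clues.
    car = rng.randrange(n)
    choice = rng.randrange(n)
    # The host may open any door hiding a goat, except the chosen one.
    goats = [d for d in range(n) if d != choice and d != car]
    rng.shuffle(goats)
    opened = set(goats[:k])
    if switch:
        # Pick uniformly among the other still-closed doors.
        stillClosed = [d for d in range(n) if d != choice and d not in opened]
        choice = rng.choice(stillClosed)
    return choice == car

random.seed(1)
trials = 10000
wins = sum(monty_hall_trial(n=6, k=4, switch=True) for _ in range(trials))
switchedRate = wins / trials   # theory: (n-1)/(n*(n-1-k)) = 5/6
```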
class QuasiRandomFareyPointSet(QuasiRandomPointSet)
QuasiRandomFareyPointSet(n=20, s=3, seed=None, Randomized=True, fileName='farey', Debug=False)
Constructor for rendering a Farey point set of dimension *s* and maximal denominator *n* which is *fully projection regular* in the *s*-dimensional real-valued [0,1]^s hypercube. The lattice constructor uses a randomly shuffled Farey series for the point construction. The resulting point set is stored in a *self.pointSet* attribute and saved by default in a CSV formatted file.
*Parameters*:
* *n* : (default=20) maximal denominator of the Farey series
* *s* : (default=3) dimension of the hypercube
* *seed* : (default=None) number for regenerating a fixed Farey point set
* *Randomized* : (default=True) on each dimension, the points are randomly shifted (mod 1) to avoid constant projections for equal dimension index distances
* *fileName* : (default='farey') name -without the csv suffix- of the stored result file
Sample Python session:
>>> from randomNumbers import QuasiRandomFareyPointSet
>>> qrfs = QuasiRandomFareyPointSet(n=20,s=5,Randomized=True,
... fileName='testFarey')
>>> print(qrfs.__dict__.keys())
dict_keys(['n', 's', 'Randomized', 'seed', 'fileName', 'Debug',
'fareySeries', 'seriesLength', 'shuffledFareySeries',
'pointSet', 'pointSetCardinality'])
>>> print(qrfs.fareySeries)
[0.0, 0.04, 0.04166, 0.0435, 0.04545, 0.0476, 0.05, 0.05263,
0.0555, 0.058823529411764705, ...]
>>> print(qrfs.seriesLength)
201
>>> print(qrfs.pointSet)
[(0.0, 0.0, 0.0, 0.0, 0.0), (0.5116, 0.4660, 0.6493, 0.41757, 0.3663),
(0.9776, 0.1153, 0.0669, 0.7839, 0.5926), (0.6269, 0.5329, 0.4332, 0.0102, 0.6126),
(0.0445, 0.8992, 0.6595, 0.0302, 0.6704), ...]
>>> print(qrfs.pointSetCardinality)
207
The resulting point set may be inspected in an R session::
> x = read.csv('testFarey.csv')
> x[1:5,]
x1 x2 x3 x4 x5
1 0.000000 0.000000 0.000000 0.000000 0.000000
2 0.511597 0.466016 0.649321 0.417573 0.366316
3 0.977613 0.115336 0.066893 0.783889 0.592632
4 0.626933 0.532909 0.433209 0.010205 0.612632
5 0.044506 0.899225 0.659525 0.030205 0.670410
> library('lattice')
> cloud(x$x5 ~ x$x1 + x$x3)
> plot(x$x1,x$x3)
.. image:: farey3D.png
:alt: Checking hypercube filling
:width: 500 px
:align: center
.. image:: fareyx1x3.png
:alt: Checking projection regularity
:width: 400 px
    :align: center
Method resolution order:
- QuasiRandomFareyPointSet
- QuasiRandomPointSet
- builtins.object
Methods defined here:
- __init__(self, n=20, s=3, seed=None, Randomized=True, fileName='farey', Debug=False)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from QuasiRandomPointSet:
- countHits(self, regionLimits, pointSet=None)
- Counts *hits* of a quasi random point set in given regionLimits.
- testFct(self, seq=None, buggyRegionLimits=(0.45, 0.55))
- Tiny buggy hypercube for testing a quasi random Korobov 3D sequence.
- testUniformityDiscrepancy(self, k=4, pointSet=None, fileName='testUniformity', Debug=True)
- Counts the number of points in each partial hypercube [(x-1)/k, x/k]^d
where x is an integer and 0 < x <= k.
Data descriptors inherited from QuasiRandomPointSet:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
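For reference, the textbook Farey series of order *n* collects all reduced fractions p/q with q <= n in increasing order. A sketch of that classical construction (note that the module reports 201 terms for n=20 while the classical series of order 20 has 129, so the module evidently uses its own variant):

```python
from math import gcd

def farey_series(n):
    # The classical Farey series of order n: all reduced fractions p/q
    # with 0 <= p <= q <= n and gcd(p, q) == 1, in increasing order.
    return sorted(p / q
                  for q in range(1, n + 1)
                  for p in range(0, q + 1) if gcd(p, q) == 1)

series = farey_series(20)   # 129 terms, from 0.0 to 1.0
```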
class QuasiRandomKorobovPointSet(QuasiRandomPointSet)
QuasiRandomKorobovPointSet(n=997, s=3, a=383, Randomized=False, seed=None, fileName='korobov', Debug=False)
Constructor for rendering a Korobov point set of *n* points which is *fully projection regular* in the *s*-dimensional real-valued [0,1)^s hypercube. The constructor uses an MLCG generator with potentially a full period. The point set is stored in a *self.sequence* attribute and saved in a CSV formatted file.
*Source*: Chr. Lemieux, *Monte Carlo and Quasi-Monte Carlo Sampling*, Springer 2009, Fig. 5.12, p. 176.
*Parameters*:
* *n* : (default=997) number of Korobov points and modulus of the underlying MLCG
* *s* : (default=3) dimension of the hypercube
* *Randomized* : (default=False) the sequence is randomly shifted (mod 1) to avoid cycling when *s* > *n*
* *a* : (default=383) MLCG coefficient (0 < *a* < *n*), primitive with respect to *n*. The choice of *a* and *n* is crucial for getting an MLCG with full period and hence a fully projection-regular sequence. A second good pair is *n* = 1021 (prime) and *a* = 76.
* *fileName* : (default='korobov') name -without the csv suffix- of the stored result file
* *seed* : number (default=None) for fixing the sequence generation
Sample Python session:
>>> from randomNumbers import QuasiRandomKorobovPointSet
>>> kor = QuasiRandomKorobovPointSet(Debug=True)
0 [0.0, 0.0, 0.0]
1 [0.13536725313948247, 0.23158619430934912, 0.8941657924971758]
2 [0.36595043842175035, 0.7415995294344084, 0.7035940773517395]
3 [0.8759637735468097, 0.5510278142889722, 0.714627176649633]
4 [0.6853920584013734, 0.5620609135868657, 0.9403042077429129]
5 [0.6964251576992669, 0.7877379446801456, 0.3746071164690914]
The resulting Korobov sequence may be inspected in an **R** session::
> x = read.csv('korobov.csv')
> x[1:5,]
x1 x2 x3
1 0.000000 0.000000 0.000000
2 0.135367 0.231586 0.894166
3 0.365950 0.741600 0.703594
4 0.875964 0.551028 0.714627
5 0.685392 0.562061 0.940304
> library('lattice')
> cloud(x$x3 ~ x$x1 + x$x2)
> plot(x$x1,x$x2,pch='°')
> plot(x$x1,x$x3,pch="°")
.. image:: korobov3D.png
:alt: Checking projection regularity
:width: 500 px
:align: center
.. image:: korobovProjection12.png
:alt: Checking projection regularity
:width: 400 px
:align: center
.. image:: korobovProjection13.png
:alt: Checking projection regularity
:width: 400 px
    :align: center
Method resolution order:
- QuasiRandomKorobovPointSet
- QuasiRandomPointSet
- builtins.object
Methods defined here:
- __init__(self, n=997, s=3, a=383, Randomized=False, seed=None, fileName='korobov', Debug=False)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from QuasiRandomPointSet:
- countHits(self, regionLimits, pointSet=None)
- Counts *hits* of a quasi random point set in given regionLimits.
- testFct(self, seq=None, buggyRegionLimits=(0.45, 0.55))
- Tiny buggy hypercube for testing a quasi random Korobov 3D sequence.
- testUniformityDiscrepancy(self, k=4, pointSet=None, fileName='testUniformity', Debug=True)
- Counts the number of points in each partial hypercube [(x-1)/k, x/k]^d
where x is an integer and 0 < x <= k.
Data descriptors inherited from QuasiRandomPointSet:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
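The textbook Korobov rule from the Lemieux source cited above builds point *i* as (i/n, i*a mod n / n, i*a^2 mod n / n, ...). A sketch of that classical rule; the Debug output above suggests the module applies an additional transformation of its own, so treat this only as the standard construction:

```python
def korobov_point_set(n=997, s=3, a=383):
    # Classical Korobov lattice: coordinate j of point i is
    # (i * a**j mod n) / n, computed incrementally.
    points = []
    for i in range(n):
        coordinate = i
        point = []
        for _ in range(s):
            point.append(coordinate / n)
            coordinate = (coordinate * a) % n
        points.append(tuple(point))
    return points

points = korobov_point_set()
# Since gcd(a, n) == 1, every coordinate axis is a permutation of
# {0, 1/n, ..., (n-1)/n}: the one-dimensional projections are regular.
```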
class QuasiRandomPointSet(builtins.object)
Abstract class for generic quasi random point set methods and tools.
Methods defined here:
- countHits(self, regionLimits, pointSet=None)
- Counts *hits* of a quasi random point set in given regionLimits.
- testFct(self, seq=None, buggyRegionLimits=(0.45, 0.55))
- Tiny buggy hypercube for testing a quasi random Korobov 3D sequence.
- testUniformityDiscrepancy(self, k=4, pointSet=None, fileName='testUniformity', Debug=True)
- Counts the number of points in each partial hypercube [(x-1)/k, x/k]^d
where x is an integer and 0 < x <= k.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
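The diagonal count behind `testUniformityDiscrepancy` can be sketched independently of the class (a hypothetical helper illustrating the idea, not the module's code):

```python
def countDiagonalHits(pointSet, k=4):
    # For each x in 1..k, count the points falling in the diagonal
    # partial hypercube [(x-1)/k, x/k]^d, the regions inspected by
    # testUniformityDiscrepancy.
    counts = {x: 0 for x in range(1, k + 1)}
    for point in pointSet:
        for x in range(1, k + 1):
            if all((x - 1) / k <= c <= x / k for c in point):
                counts[x] += 1
                break
    return counts

hits = countDiagonalHits([(0.1, 0.1), (0.3, 0.6), (0.9, 0.9)], k=4)
# (0.1, 0.1) lands in [0, 0.25]^2, (0.9, 0.9) in [0.75, 1.0]^2,
# and (0.3, 0.6) in no diagonal sub-hypercube.
```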
class QuasiRandomUniformPointSet(QuasiRandomPointSet)
QuasiRandomUniformPointSet(n=100, s=3, seed=None, Randomized=True, fileName='uniform', Debug=False)
Constructor for rendering a quasi random point set of dimension *s* and
maximal denominator *n* which is *fully projection regular* in the *s*-dimensional
real-valued [0,1]^s hypercube. The lattice constructor uses a randomly shuffled
*uniform* series for the point construction. The resulting point set is stored
in a *self.pointSet* attribute and saved by default in a CSV formatted file.
*Parameters*:
* *n* : (default=100) denominator of the uniform series x/n with 0 <= x <= n
* *s* : (default=3) dimension of the hypercube
* *seed* : number (default=None) for regenerating the same uniform point set
* *Randomized* : (default=True) on each dimension, the points are randomly
  shifted (mod 1) to avoid constant projections for equal dimension index distances
* *fileName* : (default='uniform') name -without the csv suffix- of the stored result file
Method resolution order:
- QuasiRandomUniformPointSet
- QuasiRandomPointSet
- builtins.object
Methods defined here:
- __init__(self, n=100, s=3, seed=None, Randomized=True, fileName='uniform', Debug=False)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from QuasiRandomPointSet:
- countHits(self, regionLimits, pointSet=None)
- Counts *hits* of a quasi random point set in given regionLimits.
- testFct(self, seq=None, buggyRegionLimits=(0.45, 0.55))
- Tiny buggy hypercube for testing a quasi random Korobov 3D sequence.
- testUniformityDiscrepancy(self, k=4, pointSet=None, fileName='testUniformity', Debug=True)
- Counts the number of points in each partial hypercube [(x-1)/k, x/k]^d
where x is an integer and 0 < x <= k.
Data descriptors inherited from QuasiRandomPointSet:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)