| | |
- builtins.object
-
- RandomPerformanceGenerator
- perfTabs.PerformanceTableau(builtins.object)
-
- Random3ObjectivesPerformanceTableau
- RandomAcademicPerformanceTableau
- RandomCBPerformanceTableau
- RandomPerformanceTableau
- RandomRankPerformanceTableau
class Random3ObjectivesPerformanceTableau(perfTabs.PerformanceTableau) |
| |
Random3ObjectivesPerformanceTableau(numberOfActions=20, shortNamePrefix='p', numberOfCriteria=13, weightDistribution='equiobjectives', weightScale=None, IntegerWeights=True, OrdinalScales=False, NegativeWeights=False, negativeWeightProbability=0.0, commonScale=None, commonThresholds=None, commonMode=None, valueDigits=2, vetoProbability=0.5, missingDataProbability=0.05, NA=-999, BigData=False, seed=None, Debug=False)
For generating random 3-objectives (*Eco*, *Soc* and *Env*) multiple-criteria performance records. Each decision action is qualified randomly as weak (-), fair (~) or good (+) on each of the three objectives.
Generator arguments:
* numberOfActions := 20 (default)
* shortNamePrefix := 'p' (default)
* numberOfCriteria := 13 (default)
* weightDistribution := 'equiobjectives' (default)
| 'equisignificant' (weights set all to 1)
| 'random' (in the range 1 to numberOfCriteria)
* weightScale := [1,numberOfCriteria] (random default)
* IntegerWeights := True (default) / False
* OrdinalScales := True / False (default), if True commonScale is set to (0,10)
* NegativeWeights := True / False (default). If False, evaluations to be minimized are negative.
* negativeWeightProbability := x in [0,1] (default 0.0), 'min' preference direction probability
* commonScale := (Min, Max)
| when commonScale is None, (Min=0.0,Max=10.0) by default if OrdinalScales == True and (Min=0.0,Max=100.0) by default otherwise
* commonThresholds := ((Ind,Ind_slope),(Pref,Pref_slope),(Veto,Veto_slope)) with
| Ind < Pref < Veto in [0.0,100.0] such that
| (Ind/100.0*span + Ind_slope*x) < (Pref/100.0*span + Pref_slope*x) < (Veto/100.0*span + Veto_slope*x)
| By default [(0.05*span,0.0),(0.10*span,0.0),(0.60*span,0.0)] if OrdinalScales=False
| By default [(0.1*span,0.0),(0.2*span,0.0),(0.8*span,0.0)] otherwise
| with span = commonScale[1] - commonScale[0].
* commonMode := ['triangular','variable',0.50] (default); a constant mode may be provided.
| ['uniform','variable',None]; a constant range may be provided.
| ['beta','variable',None] (three (alpha, beta) combinations:
| (5.8661,2.62203), (5.05556,5.05556) and (2.62203,5.8661),
| chosen by default for 'good', 'fair' and 'weak' evaluations).
| Constant parameters may be provided.
* valueDigits := 2 (default)
* vetoProbability := x in ]0.0,1.0[ (default 0.5), probability that a cardinal criterion shows a veto preference discrimination threshold.
* Debug := True / False (default)
.. warning::
The minimal required number of criteria is 3!
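The 'beta' common mode listed above can be sketched with the standard library. This is an illustrative sketch only (not the library's actual code): it draws evaluations with the (alpha, beta) pairs quoted in the argument list, on an assumed (0.0, 100.0) common scale; the function name is hypothetical.

```python
import random

# Illustrative sketch: how 'beta' commonMode evaluations could be drawn
# for the three qualification levels, using the (alpha, beta) pairs quoted
# above and an assumed common scale of (0.0, 100.0).
BETA_PARAMS = {
    'good': (5.8661, 2.62203),
    'fair': (5.05556, 5.05556),
    'weak': (2.62203, 5.8661),
}

def drawEvaluation(profile, commonScale=(0.0, 100.0), rng=random):
    """Draw one random evaluation for a given qualification level."""
    alpha, beta = BETA_PARAMS[profile]
    lo, hi = commonScale
    return lo + (hi - lo) * rng.betavariate(alpha, beta)

random.seed(1)
good = [drawEvaluation('good') for _ in range(1000)]
weak = [drawEvaluation('weak') for _ in range(1000)]
# 'good' evaluations concentrate in the upper part of the common scale
print(sum(good) / len(good) > sum(weak) / len(weak))
```

On a 0-100 scale these (alpha, beta) pairs give expected values of roughly 69, 50 and 31 for 'good', 'fair' and 'weak' evaluations respectively.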
>>> from randomPerfTabs import Random3ObjectivesPerformanceTableau
>>> t = Random3ObjectivesPerformanceTableau(numberOfActions=5,numberOfCriteria=3,seed=1)
>>> t
*------- PerformanceTableau instance description ------*
Instance class : Random3ObjectivesPerformanceTableau
Seed : 1
Instance name : random3ObjectivesPerfTab
# Actions : 5
# Objectives : 3
# Criteria : 3
Attributes : ['name', 'valueDigits', 'BigData', 'OrdinalScales',
'missingDataProbability', 'negativeWeightProbability',
'randomSeed', 'sumWeights', 'valuationPrecision',
'commonScale', 'objectiveSupportingTypes',
'actions', 'objectives', 'criteriaWeightMode',
'criteria', 'evaluation', 'weightPreorder']
>>> t.showObjectives()
*------ show objectives -------*
Eco: Economical aspect
ec1 criterion of objective Eco 1
Total weight: 1.00 (1 criteria)
Soc: Societal aspect
so2 criterion of objective Soc 1
Total weight: 1.00 (1 criteria)
Env: Environmental aspect
en3 criterion of objective Env 1
Total weight: 1.00 (1 criteria)
>>> t.showActions()
*----- show decision action --------------*
key: p1
short name: p1
name: random public policy Eco+ Soc- Env+
profile: {'Eco': 'good', 'Soc': 'weak', 'Env': 'good'}
key: p2
short name: p2
name: random public policy Eco~ Soc+ Env~
profile: {'Eco': 'fair', 'Soc': 'good', 'Env': 'fair'}
key: p3
short name: p3
name: random public policy Eco~ Soc~ Env-
profile: {'Eco': 'fair', 'Soc': 'fair', 'Env': 'weak'}
key: p4
short name: p4
name: random public policy Eco~ Soc+ Env+
profile: {'Eco': 'fair', 'Soc': 'good', 'Env': 'good'}
key: p5
short name: p5
name: random public policy Eco~ Soc+ Env~
profile: {'Eco': 'fair', 'Soc': 'good', 'Env': 'fair'}
>>> t.showPerformanceTableau()
*---- performance tableau -----*
criteria | weights | 'p1' 'p2' 'p3' 'p4' 'p5'
---------|---------------------------------------------
'ec1' | 1 | 36.29 85.17 34.49 NA 56.58
'so2' | 1 | 55.00 56.33 NA 67.36 72.22
'en3' | 1 | 66.58 48.71 21.59 NA NA
>>> |
| |
- Method resolution order:
- Random3ObjectivesPerformanceTableau
- perfTabs.PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, numberOfActions=20, shortNamePrefix='p', numberOfCriteria=13, weightDistribution='equiobjectives', weightScale=None, IntegerWeights=True, OrdinalScales=False, NegativeWeights=False, negativeWeightProbability=0.0, commonScale=None, commonThresholds=None, commonMode=None, valueDigits=2, vetoProbability=0.5, missingDataProbability=0.05, NA=-999, BigData=False, seed=None, Debug=False)
- Initialize self. See help(type(self)) for accurate signature.
- showActions(self, Alphabetic=False)
- presentation methods for decision actions or alternatives
- showObjectives(self)
Methods inherited from perfTabs.PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performance differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders a html string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
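The percentile idea behind computeDefaultDiscriminationThresholds can be sketched with the standard library. The difference data below are hypothetical; the real method works on the observed pairwise performance differences of each criterion.

```python
import statistics

# Hypothetical list of observed pairwise performance differences on one criterion
diffs = [0.5, 1.2, 2.0, 3.3, 4.1, 5.0, 6.7, 8.2, 9.5, 12.0]

# The 99 percentile cut points of the difference distribution
percentiles = statistics.quantiles(diffs, n=100, method='inclusive')

# Thresholds taken as the requested percentiles, mirroring the quantile parameter
quantile = {'ind': 10, 'pref': 20, 'veto': 80}
thresholds = {name: percentiles[p - 1] for name, p in quantile.items()}
print(thresholds['ind'] <= thresholds['pref'] <= thresholds['veto'])  # True
```

By construction the resulting thresholds respect the indifference < preference < veto ordering, since percentiles are monotone in their rank.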
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
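A minimal sketch of this proportion computation, assuming an evaluation table keyed criterion -> action and the module's NA = -999 convention (function and key names are hypothetical):

```python
NA = -999  # default missing-data symbol of the module

def missingProportion(evaluation, NA=-999, InPercents=False):
    """Proportion of NA entries in a {criterion: {action: value}} table.
    Illustrative sketch, not the library implementation."""
    values = [v for row in evaluation.values() for v in row.values()]
    p = sum(1 for v in values if v == NA) / len(values)
    return p * 100.0 if InPercents else p

evaluation = {'g1': {'a1': 12.0, 'a2': NA},
              'g2': {'a1': NA, 'a2': 15.0}}
print(missingProportion(evaluation, InPercents=True))  # 50.0
```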
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and stores in CSV (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria.
Only adequate if all criteria share the same
evaluation scale; the performance tableau is therefore normalized to 0.0-100.0 scales.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking with summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
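The weight preorder can be sketched as a grouping of the criteria into equivalence classes of equal significance weight, listed in increasing weight order. The weights below are hypothetical; the library derives them from self.criteria.

```python
from collections import defaultdict

# Hypothetical criteria significance weights
weights = {'m1': 5, 'm2': 1, 'm3': 5, 'm4': 4, 'm5': 3}

# Group criteria into equivalence classes of equal weight,
# then list the classes in increasing weight order
classes = defaultdict(list)
for g, w in weights.items():
    classes[w].append(g)
weightPreorder = [sorted(classes[w]) for w in sorted(classes)]
print(weightPreorder)  # [['m2'], ['m5'], ['m4'], ['m1', 'm3']]
```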
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
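A minimal sketch of a weighted average that ignores missing data, assuming the module's NA = -999 convention (the helper name is hypothetical):

```python
NA = -999  # missing-data symbol

def weightedAverage(evaluations, weights, NA=-999):
    """Weighted average of one action's evaluations, skipping NA entries.
    Illustrative sketch, not the library implementation."""
    num = den = 0.0
    for g, v in evaluations.items():
        if v == NA:
            continue  # missing grades contribute neither value nor weight
        num += weights[g] * v
        den += weights[g]
    return num / den

weights = {'g1': 2, 'g2': 1}
print(weightedAverage({'g1': 10.0, 'g2': NA}, weights))  # 10.0
```

Note that a missing entry drops both its value and its weight, so the remaining grades are reweighted rather than the missing one being counted as zero.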
- convert2BigData(self)
- Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including the discrimination thresholds, if given.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Convert in place a standard formatted Performance tableau into a bigData formatted instance.
- convertInsite2Standard(self)
- Convert in place a bigData formatted Performance tableau back into a standard formatted PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- save quantiles matrix criterion x action in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verify whether the given self.criteria[g]['weight'] values are odd or not.
Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
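The recoding is a plain linear rescaling; a sketch of the idea, assuming a criterion scale given as a (min, max) pair (the function name is hypothetical):

```python
def normalize(value, scale, lowValue=0.0, highValue=100.0):
    """Linearly recode an evaluation from its criterion scale
    to the [lowValue, highValue] range. Illustrative sketch."""
    smin, smax = scale
    return lowValue + (highValue - lowValue) * (value - smin) / (smax - smin)

print(normalize(14, (0, 20)))  # a 14/20 grade becomes 70.0
```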
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tile evaluation of criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of Performance Tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Store the performance Tableau self, Actions x Criteria, in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- save performance tableau object self in XMCDA 2.0 format, including decision objectives, if given.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO parameter of the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showAll(self)
- Show function for performance tableau
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* the *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing the standard outranking model by default.
* The *minimalComponentSize* parameter allows controlling the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model is
in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See the tutorial on ranking with multiple incommensurable criteria.
* when the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition are put by default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n* /10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value
not exceeding usually 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance Tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations; especially when *Correlations* = *True*.
* By default, the number of available cores will be detected. In an HPC context it may be necessary to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in html format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking with summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criteria.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from perfTabs.PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class RandomAcademicPerformanceTableau(perfTabs.PerformanceTableau) |
| |
RandomAcademicPerformanceTableau(numberOfStudents=10, numberOfCourses=5, weightDistribution='random', weightScale=(1, 5), commonScale=(0, 20), ndigits=0, WithTypes=False, commonMode=('triangular', 12, 0.25), commonThresholds=None, IntegerWeights=True, BigData=False, missingDataProbability=0.0, NA=-999, seed=None, Debug=False)
For generating a temporary academic performance tableau with random grading results
of a number of students in different academic courses (see Lecture 4: Grading
of the Algorithmic Decision Theory course, http://hdl.handle.net/10993/37933).
*Parameters*:
* number of students,
* number of courses,
* weightDistribution := equisignificant | random (default, see RandomPerformanceTableau)
* weightScale := (1, 1 | numberOfCourses (default when random))
* IntegerWeights := Boolean (True = default)
* commonScale := (0,20) (default)
* ndigits := 0
* WithTypes := Boolean (False = default)
* commonMode := ('triangular', xm=12, r=0.25) (default)
* commonThresholds (default) := {
| 'ind':(0,0),
| 'pref':(1,0),
| } (default)
When parameter *WithTypes* is set to *True*, the students are randomly allocated
to one of the four categories: *weak* (1/6), *fair* (1/3), *good* (1/3),
and *excellent* (1/6), in the bracketed proportions.
In a default 0-20 grading range, the random range of a weak student is 0-10,
of a fair student 4-16, of a good student 8-20, and of an excellent student 12-20.
The random grading generator follows a double triangular probability law
with *mode* equal to the middle of the random range and a median repartition of
probability on each side of the mode.
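The grading law described above can be approximated with the standard library: a symmetric triangular distribution with its mode at the midpoint of the student's random range puts half of the probability mass on each side of the mode. This is a rough stdlib sketch, not necessarily the library's exact double triangular generator; ranges are taken from the paragraph above and the function name is hypothetical.

```python
import random

def drawGrade(gradeRange, rng=random):
    """Draw one grade: triangular law with mode at the middle of the range.
    Illustrative approximation of the double triangular grading law."""
    lo, hi = gradeRange
    return rng.triangular(lo, hi, (lo + hi) / 2.0)

random.seed(100)
fair = [drawGrade((4, 16)) for _ in range(1000)]   # fair student range
good = [drawGrade((8, 20)) for _ in range(1000)]   # good student range
print(all(4 <= g <= 16 for g in fair))  # True
```

With the ranges of the default 0-20 grading scale, good students score on average about four points above fair students under this sketch.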
>>> from randomPerfTabs import RandomAcademicPerformanceTableau
>>> t = RandomAcademicPerformanceTableau(numberOfStudents=7,
... numberOfCourses=5, missingDataProbability=0.03,
... WithTypes=True, seed=100)
>>> t
*------- PerformanceTableau instance description ------*
Instance class : RandomAcademicPerformanceTableau
Seed : 100
Instance name : randstudPerf
Actions : 7
Criteria : 5
Attributes : ['randomSeed', 'name', 'actions',
'criteria', 'evaluation', 'weightPreorder']
>>> t.showPerformanceTableau()
*---- performance tableau -----*
Courses | 'm1' 'm2' 'm3' 'm4' 'm5'
ECTS | 5 1 5 4 3
---------|--------------------------
's1f' | 12 10 14 14 13
's2g' | 14 12 16 12 14
's3g' | 13 10 NA 12 17
's4f' | 10 13 NA 13 12
's5e' | 17 12 16 17 12
's6g' | 17 17 12 16 14
's7e' | 12 13 13 16 NA
>>> t.weightPreorder
[['m2'], ['m5'], ['m4'], ['m1', 'm3']]
The random instance generated here with seed = 100 results in a set of only
excellent (2), good (3) and fair (2) student performances. We observe 3 missing grades (NA).
We may show a statistical summary per course (performance criterion) when more than 5 grades are given.
>>> t.showStatistics()
*-------- Performance tableau summary statistics -------*
Instance name : randstudPerf
#Actions : 7
#Criteria : 5
*Statistics per Criterion*
Criterion name : g1
Criterion weight : 5
criterion scale : 0.00 - 20.00
# missing evaluations : 0
mean evaluation : 13.57
standard deviation : 2.44
maximal evaluation : 17.00
quantile Q3 (x_75) : 17.00
median evaluation : 13.50
quantile Q1 (x_25) : 12.00
minimal evaluation : 10.00
mean absolute difference : 2.69
standard difference deviation : 3.45
Criterion name : g2
Criterion weight : 1
criterion scale : 0.00 - 20.00
# missing evaluations : 0
mean evaluation : 12.43
standard deviation : 2.19
maximal evaluation : 17.00
quantile Q3 (x_75) : 14.00
median evaluation : 12.50
quantile Q1 (x_25) : 11.50
minimal evaluation : 10.00
mean absolute difference : 2.29
standard difference deviation : 3.10
Criterion name : g3
Criterion weight : 5
criterion scale : 0.00 - 20.00
# missing evaluations : 2
Criterion name : g4
Criterion weight : 4
criterion scale : 0.00 - 20.00
# missing evaluations : 0
mean evaluation : 14.29
standard deviation : 1.91
maximal evaluation : 17.00
quantile Q3 (x_75) : 16.25
median evaluation : 15.00
quantile Q1 (x_25) : 12.75
minimal evaluation : 12.00
mean absolute difference : 2.12
standard difference deviation : 2.70
Criterion name : g5
Criterion weight : 3
criterion scale : 0.00 - 20.00
# missing evaluations : 1
mean evaluation : 13.67
standard deviation : 1.70
maximal evaluation : 17.00
quantile Q3 (x_75) : 15.50
median evaluation : 14.00
quantile Q1 (x_25) : 12.50
minimal evaluation : 12.00
mean absolute difference : 1.78
standard difference deviation : 2.40 |
| |
- Method resolution order:
- RandomAcademicPerformanceTableau
- perfTabs.PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, numberOfStudents=10, numberOfCourses=5, weightDistribution='random', weightScale=(1, 5), commonScale=(0, 20), ndigits=0, WithTypes=False, commonMode=('triangular', 12, 0.25), commonThresholds=None, IntegerWeights=True, BigData=False, missingDataProbability=0.0, NA=-999, seed=None, Debug=False)
- showCourseStatistics(self, courseID, Debug=False)
- show statistics concerning the grades' distributions in the given course.
- showCourses(self, coursesSubset=None, ndigits=0, pageTitle='List of Courses')
- Print a list of the courses.
- showHTMLPerformanceTableau(self, studentsSubset=None, isSorted=True, Transposed=False, ndigits=0, ContentCentered=True, title=None, fromIndex=None, toIndex=None, htmlFileName=None)
- shows the html version of the academic performance tableau in a browser window.
- showPerformanceTableau(self, Transposed=False, studentsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=0)
- Print the performance Tableau.
- showStatistics(self)
- Obsolete
- showStudents(self, WithComments=False)
- Print a list of the students.
Methods inherited from perfTabs.PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performance differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders a html string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and stores in CSV (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria.
Only adequate if all criteria share the same
evaluation scale; the performance tableau is therefore normalized to 0.0-100.0 scales.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking with summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
- convert2BigData(self)
- Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including the discrimination thresholds, if given.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Convert in place a standard formatted Performance tableau into a bigData formatted instance.
- convertInsite2Standard(self)
- Convert in place a bigData formatted Performance tableau back into a standard formatted PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- save quantiles matrix criterion x action in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verify whether the given self.criteria[g]['weight'] values are odd or not.
Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tile evaluation of criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of Performance Tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Store the performance Tableau self, Actions x Criteria, in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- save performance tableau object self in XMCDA 2.0 format, including decision objectives, if given.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO parameter of the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showActions(self, Alphabetic=False)
- presentation methods for decision actions or alternatives
- showAll(self)
- Show function for performance tableaux
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* The *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing by default the standard outranking model.
* The *minimalComponentSize* parameter controls the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model is
in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See the tutorial on ranking with multiple incommensurable criteria.
* when the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition are put by default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n*/10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value,
usually not exceeding 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations, especially when *Correlations* = *True*.
* By default, the number of available cores will be detected. It may be necessary in an HPC context to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in html format
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking with summary.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from perfTabs.PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class RandomCBPerformanceTableau(perfTabs.PerformanceTableau) |
| |
RandomCBPerformanceTableau(numberOfActions=13, numberOfCriteria=7, name='randomCBperftab', weightDistribution='equiobjectives', weightScale=None, IntegerWeights=True, NegativeWeights=False, commonPercentiles=None, samplingSize=100000, commonMode=None, valueDigits=2, missingDataProbability=0.01, NA=-999, BigData=False, seed=None, Debug=False, Comments=False)
Full automatic generation of random multiple-criteria Cost-Benefit performance tableaux.
Parameters:
* If numberOfActions is None, a uniformly random number (between 10 and 31) of cheap, neutral or advantageous actions (each type with equal probability 1/3) is instantiated.
* If numberOfCriteria is None, a uniformly random number (between 5 and 21) of cost or benefit criteria is instantiated. Cost criteria are generated with probability 1/3 and benefit criteria with probability 2/3. However, at least one criterion of each kind is always instantiated.
* weightDistribution := {'equiobjectives' (default)|'fixed'|'random'|'equisignificant'}
By default, the sum of significance of the cost criteria is set equal to the sum of the significance of the benefit criteria.
* Default weightScale for the 'random' weightDistribution is [1, numberOfCriteria].
* NegativeWeights := True | False (default). If True, the performance evaluations of the criteria with a 'min' preference direction will be positive; otherwise they will be negative.
* Parameter commonScale is not used. The scale of cost criteria is cardinal or ordinal (0-10) with probability 1/4, respectively 3/4, whereas the scale of benefit criteria is ordinal or cardinal with probabilities 2/3, respectively 1/3.
* All cardinal criteria are evaluated with decimals between 0.0 and 100.0, whereas all ordinal criteria are evaluated with integers between 0 and 10.
* The commonThresholds parameter is not used. Preference discrimination is specified as percentiles of the concerned performance differences (see below).
* commonPercentiles = {'ind': 0.05, 'pref': 0.10, 'veto': 0.95} are expressed as percentiles of the observed performance differences and only concern the cardinal criteria.
.. note::
Minimal number required of criteria is 2, and minimal number
required of decision actions is 6!
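The 'equiobjectives' balancing of cost and benefit significance sums can be sketched as follows. This is an illustrative stdlib-only sketch with a hypothetical helper name, not the actual randomPerfTabs code:

```python
# Hypothetical sketch of 'equiobjectives' weight balancing: each cost
# criterion is weighted by the number of benefit criteria and vice
# versa, so both objectives carry the same total significance.
def equiobjective_weights(nCost, nBenefit):
    costWeight = nBenefit       # weight of each cost criterion
    benefitWeight = nCost       # weight of each benefit criterion
    # the total significance of both objectives is then equal
    assert nCost * costWeight == nBenefit * benefitWeight
    return costWeight, benefitWeight

print(equiobjective_weights(2, 1))  # -> (1, 2)
```

With 2 cost criteria and 1 benefit criterion, each cost criterion gets weight 1 and the benefit criterion weight 2, so each objective totals 2, as in the Total weight lines of the example below.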
>>> from randomPerfTabs import RandomCBPerformanceTableau
>>> pt = RandomCBPerformanceTableau(numberOfActions=6,numberOfCriteria=3,seed=2)
>>> pt
*------- PerformanceTableau instance description ------*
Instance class : RandomCBPerformanceTableau
Seed               : 2
Instance name : randomCBperftab
Actions : 6
Objectives : 2
Criteria : 3
NaN proportion (%) : 0.0
Attributes : ['randomSeed', 'name', 'digits',
'BigData', 'missingDataProbability', 'NA',
'commonPercentiles', 'samplingSize', 'Debug',
'actionsTypesList', 'sumWeights',
'valuationPrecision', 'actions', 'objectives',
'criteriaWeightMode', 'criteria', 'evaluation',
'weightPreorder']
>>> pt.showObjectives()
*------ decision objectives -------*
C: Costs
c1 random cardinal cost criterion 1
c2 random cardinal cost criterion 2
Total weight: 2.00 (2 criteria)
B: Benefits
b1 random ordinal benefit criterion 2
Total weight: 2.00 (1 criteria)
>>> pt.showCriteria()
*---- criteria -----*
c1 random cardinal cost criterion
Preference direction: min
Scale = (0.00, 100.00)
Weight = 0.250
Threshold ind : 1.98 + 0.00x ; percentile: 6.67
Threshold pref : 8.48 + 0.00x ; percentile: 13.33
Threshold veto : 60.79 + 0.00x ; percentile: 100.00
b1 random ordinal benefit criterion
Preference direction: max
Scale = (0.00, 10.00)
Weight = 0.500
c2 random cardinal cost criterion
Preference direction: min
Scale = (0.00, 100.00)
Weight = 0.250
Threshold ind : 3.34 + 0.00x ; percentile: 6.67
Threshold pref : 4.99 + 0.00x ; percentile: 13.33
Threshold veto : 63.75 + 0.00x ; percentile: 100.00
>>> pt.showActions()
*----- show decision action --------------*
key: a1
short name: a1c
name: action a1
comment: Cost-Benefit
key: a2
short name: a2c
name: action a2
comment: Cost-Benefit
key: a3
short name: a3c
name: action a3
comment: Cost-Benefit
key: a4
short name: a4n
name: action a4
comment: Cost-Benefit
key: a5
short name: a5c
name: action a5
comment: Cost-Benefit
key: a6
short name: a6a
name: action a6
comment: Cost-Benefit
>>> pt.showPerformanceTableau()
*---- performance tableau -----*
Criteria | 'b1' 'c1' 'c2'
Actions | 2 1 1
---------|-----------------------------------------
'a1c' | 9.00 -37.92 -7.03
'a2c' | 8.00 -35.94 -28.93
'a3c' | 3.00 -16.88 -23.94
'a4n' | 5.00 -46.40 -43.59
'a5c' | 2.00 -26.61 -67.44
'a6a' | 2.00 -77.67 -70.78 |
| |
- Method resolution order:
- RandomCBPerformanceTableau
- perfTabs.PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, numberOfActions=13, numberOfCriteria=7, name='randomCBperftab', weightDistribution='equiobjectives', weightScale=None, IntegerWeights=True, NegativeWeights=False, commonPercentiles=None, samplingSize=100000, commonMode=None, valueDigits=2, missingDataProbability=0.01, NA=-999, BigData=False, seed=None, Debug=False, Comments=False)
- Constructor for RandomCBPerformanceTableau instances.
- updateDiscriminationThresholds(self, Comments=False, Debug=False)
- Recomputes performance discrimination thresholds from commonPercentiles.
.. note::
Overwrites all previous criterion discrimination thresholds !
Methods inherited from perfTabs.PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performances differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders a html string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and csv stores (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria
Is only adequate if all criteria have the same
evaluation scale. Therefore the performance tableau is normalized to 0.0-100.0 scales.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10, len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
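One way to read the Borda aggregation: sum each action's rank positions over the simulated rankings and order by increasing total, best first. A hypothetical sketch, not the Digraph3 implementation:

```python
# Sketch: aggregate several rankings (best action first) by summing
# each action's rank positions; a smaller total means a better score.
def borda_order(rankings):
    scores = {}
    for ranking in rankings:
        for pos, action in enumerate(ranking):
            scores[action] = scores.get(action, 0) + pos
    # order the actions by increasing accumulated rank position
    return sorted(scores, key=lambda a: scores[a])

print(borda_order([['a1', 'a2', 'a3'],
                   ['a2', 'a1', 'a3'],
                   ['a1', 'a2', 'a3']]))  # -> ['a1', 'a2', 'a3']
```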
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10, len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking with summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
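For a constant threshold, the percentile computation amounts to the fraction of observed performance differences not exceeding it. A minimal sketch, assuming this semantics:

```python
# Sketch: percentile of a constant threshold within the ordered list
# of observed performance differences on a criterion (assumed semantics).
def threshold_percentile(differences, threshold):
    diffs = sorted(differences)
    covered = sum(1 for d in diffs if d <= threshold)
    return covered / len(diffs)

print(threshold_percentile([1.0, 2.0, 5.0, 9.0], 5.0))  # -> 0.75
```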
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
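The weight preorder described above can be sketched as follows, assuming a plain weights dictionary as hypothetical data layout:

```python
from collections import defaultdict

# Sketch: group the criteria into increasing equivalence classes of
# equal significance weight (hypothetical data layout).
def weight_preorder(weights):
    byWeight = defaultdict(list)
    for g, w in weights.items():
        byWeight[w].append(g)
    return [sorted(byWeight[w]) for w in sorted(byWeight)]

print(weight_preorder({'g1': 1, 'g2': 3, 'g3': 1}))  # -> [['g1', 'g3'], ['g2']]
```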
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
The lowValue and highValue parameters
can be provided for a specific normalisation.
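Ignoring missing data, the weighted averaging reduces to a sketch like the following, assuming an evaluation[g][action] layout and the NA convention (both hypothetical here):

```python
NA = -999  # assumed missing-data symbol

# Sketch: weighted average score of one action, skipping NA entries
# and renormalising by the weights actually used.
def weighted_average(evaluation, weights, action):
    num = den = 0.0
    for g, w in weights.items():
        v = evaluation[g][action]
        if v != NA:                # ignore missing data
            num += w * v
            den += w
    return num / den if den else None

evaluation = {'g1': {'a1': 80.0}, 'g2': {'a1': NA}}
print(weighted_average(evaluation, {'g1': 2, 'g2': 1}, 'a1'))  # -> 80.0
```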
- convert2BigData(self)
- Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including, where given, the discrimination thresholds.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Convert in situ a standard formatted performance tableau into a bigData formatted instance.
- convertInsite2Standard(self)
- Convert in situ a bigData formatted performance tableau back into a standard formatted PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of the criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- saves the quantiles matrix (criterion x action) in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verifies whether the given criteria significance weights are odd or not.
Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
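The linear recoding presumably applied per criterion scale (Min, Max) can be sketched as:

```python
# Sketch of the assumed linear recoding from a criterion scale
# (Min, Max) onto [lowValue, highValue].
def normalize(value, scale, lowValue=0.0, highValue=100.0):
    Min, Max = scale
    return lowValue + (value - Min) / (Max - Min) * (highValue - lowValue)

print(normalize(5.0, (0.0, 10.0)))  # -> 50.0
```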
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- computes the q-tiled evaluation of criterion g
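One possible reading of the q-tiling, replacing each evaluation by the midpoint of its q-tile class on the criterion scale; an assumed behaviour, not the actual implementation:

```python
# Sketch: map a value on scale (Min, Max) to the midpoint of its
# q-tile class, rounded to ndigits (assumed behaviour).
def quantize(value, scale, q, ndigits=2):
    Min, Max = scale
    width = (Max - Min) / q
    k = min(int((value - Min) / width), q - 1)  # q-tile class index
    return round(Min + (k + 0.5) * width, ndigits)

print(quantize(37.0, (0.0, 100.0), 4))  # -> 37.5
```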
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of performance tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Stores the performance tableau self (Actions x Criteria) in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- saves the performance tableau object self in XMCDA 2.0 format including, where given, the decision objectives.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO parameter of the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showActions(self, Alphabetic=False)
- presentation methods for decision actions or alternatives
- showAll(self)
- Show function for performance tableaux
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* The *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing by default the standard outranking model.
* The *minimalComponentSize* parameter controls the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model is
in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See the tutorial on ranking with multiple incommensurable criteria.
* when the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition are put by default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n*/10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value,
usually not exceeding 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations, especially when *Correlations* = *True*.
* By default, the number of available cores will be detected. It may be necessary in an HPC context to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in html format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking with summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criteria.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from perfTabs.PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class RandomPerformanceGenerator(builtins.object) |
| |
RandomPerformanceGenerator(argPerfTab, actionNamePrefix='a', instanceCounter=None, seed=None)
Generic wrapper for generating new decision actions or performance tableaux
with random evaluations generated with a given performance tableau model of type:
RandomPerformanceTableau, RandomCBPerformanceTableau,
or Random3ObjectivesPerformanceTableau.
The return format of the generated new sets of random actions is shown below.
This result may be fed directly to the PerformanceQuantiles.updateQuantiles() method.
>>> from randomPerfTabs import *
>>> t = RandomPerformanceTableau(seed=100)
>>> t
*------- PerformanceTableau instance description ------*
Instance class : RandomPerformanceTableau
Seed : 100
Instance name : randomperftab
Actions : 13
Criteria : 7
Attributes : [
'randomSeed', 'name', 'BigData', 'sumWeights', 'digits', 'commonScale',
'commonMode', 'missingDataProbability', 'actions', 'criteria',
'evaluation', 'weightPreorder']
>>> rpg = RandomPerformanceGenerator(t,seed= 100)
>>> newActions = rpg.randomActions(2)
>>> print(newActions)
{'actions': OrderedDict([
('a14', {'shortName': 'a14',
'name': 'random decision action',
'comment': 'RandomPerformanceGenerator'}),
('a15', {'shortName': 'a15',
'name': 'random decision action',
'comment': 'RandomPerformanceGenerator'})]),
'evaluation': {
'g1': {'a14': Decimal('15.17'), 'a15': Decimal('80.87')},
'g2': {'a14': Decimal('44.51'), 'a15': Decimal('62.74')},
'g3': {'a14': Decimal('57.87'), 'a15': Decimal('64.24')},
'g4': {'a14': Decimal('58.0'), 'a15': Decimal('26.99')},
'g5': {'a14': Decimal('24.22'), 'a15': Decimal('21.18')},
'g6': {'a14': Decimal('29.1'), 'a15': Decimal('73.09')},
'g7': {'a14': Decimal('96.58'), 'a15': Decimal('-999')}}}
>>> newTab = rpg.randomPerformanceTableau(2)
>>> newTab.showPerformanceTableau()
*---- performance tableau -----*
criteria | weights | 'a17' 'a18'
---------|-----------------------------------------
'g1' | 1 | 55.80 22.03
'g2' | 1 | 57.78 33.83
'g3' | 1 | 80.54 31.83
'g4' | 1 | 31.15 69.98
'g5' | 1 | 46.25 48.80
'g6' | 1 | 42.24 82.88
'g7' | 1 | 57.31 41.66 |
| |
Methods defined here:
- __init__(self, argPerfTab, actionNamePrefix='a', instanceCounter=None, seed=None)
- Initialize self. See help(type(self)) for accurate signature.
- randomActions(self, nbrOfRandomActions=1)
- Generates nbrOfRandomActions new random decision actions.
- randomPerformanceTableau(self, nbrOfRandomActions=1)
- Generates a new performance tableau with nbrOfRandomActions random decision actions.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class RandomPerformanceTableau(perfTabs.PerformanceTableau) |
| |
RandomPerformanceTableau(numberOfActions=13, actionNamePrefix='a', numberOfCriteria=7, weightDistribution='equisignificant', weightScale=None, IntegerWeights=True, commonScale=(0.0, 100.0), commonThresholds=((2.5, 0.0), (5.0, 0.0), (80.0, 0.0)), commonMode=('beta', None, (2, 2)), valueDigits=2, missingDataProbability=0.025, NA=-999, BigData=False, seed=None, Debug=False)
For generating a random standard performance tableau.
Parameters:
* numberOfActions := number of decision actions.
* numberOfCriteria := number of performance criteria.
* weightDistribution := 'equisignificant' (default) | 'random' | 'fixed'.
| If 'random', weights are uniformly selected at random
| from the given weight scale;
| If 'fixed', the weightScale must provide a corresponding weights
| distribution;
| If 'equisignificant', all criterion weights are set to unity.
* weightScale := [Min,Max] (default = [1,numberOfCriteria]).
* IntegerWeights := True (default) | False (normalized to proportions of 1.0).
* commonScale := (Min,Max); common performance measuring scale (default = (0.0,100.0)).
* commonThresholds := [(q0,q1),(p0,p1),(v0,v1)]; common indifference (q), preference (p) and considerable performance difference (veto, v) discrimination thresholds. q0, p0 and v0 are expressed in percentage of the common scale amplitude: Max - Min.
* commonMode := common random distribution of random performance measurements (default = ('beta',None,(2,2)) ):
| ('uniform',None,None), uniformly distributed between min and max values.
| ('normal',mu,sigma), truncated Gaussian distribution.
| ('triangular',mode,repartition), generalized triangular distribution.
| ('beta',mode,(alpha,beta)), with mode in ]0,1[.
* valueDigits := <integer>, precision of performance measurements
(2 decimal digits by default).
* missingDataProbability := 0 <= x <= 1.0; probability of missing performance evaluation on a criterion for an alternative (default 0.025).
* Default NA symbol == Decimal('-999')
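The percentage-based commonThresholds can be turned into absolute constant thresholds on the common scale span as sketched below; the helper name and the interpretation of q0, p0 and v0 as percentages of Max - Min are assumptions:

```python
# Sketch: convert percentage thresholds (q0, p0, v0) into absolute
# constant thresholds on the span Max - Min of the common scale.
def absolute_thresholds(commonScale, commonThresholds):
    Min, Max = commonScale
    span = Max - Min
    names = ('ind', 'pref', 'veto')
    return {name: (pct / 100.0 * span, slope)
            for name, (pct, slope) in zip(names, commonThresholds)}

print(absolute_thresholds((0.0, 100.0),
                          ((2.5, 0.0), (5.0, 0.0), (80.0, 0.0))))
```

With the default (0.0, 100.0) scale, the percentages coincide with the absolute constants, e.g. a veto threshold of 80.0.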
Code example:
>>> from randomPerfTabs import RandomPerformanceTableau
>>> t = RandomPerformanceTableau(numberOfActions=3,numberOfCriteria=1,seed=100)
>>> t.actions
{'a1': {'comment': 'RandomPerformanceTableau() generated.', 'name': 'random decision action'},
'a2': {'comment': 'RandomPerformanceTableau() generated.', 'name': 'random decision action'},
'a3': {'comment': 'RandomPerformanceTableau() generated.', 'name': 'random decision action'}}
>>> t.criteria
{'g1': {'thresholds': {'ind' : (Decimal('10.0'), Decimal('0.0')),
'veto': (Decimal('80.0'), Decimal('0.0')),
'pref': (Decimal('20.0'), Decimal('0.0'))},
'scale': [0.0, 100.0],
'weight': Decimal('1'),
'name': 'digraphs.RandomPerformanceTableau() instance',
'comment': 'Arguments: ; weightDistribution=random;
weightScale=(1, 1); commonMode=None'}}
>>> t.evaluation
{'g01': {'a01': Decimal('45.95'),
'a02': Decimal('95.17'),
'a03': Decimal('17.47')
}
} |
| |
- Method resolution order:
- RandomPerformanceTableau
- perfTabs.PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, numberOfActions=13, actionNamePrefix='a', numberOfCriteria=7, weightDistribution='equisignificant', weightScale=None, IntegerWeights=True, commonScale=(0.0, 100.0), commonThresholds=((2.5, 0.0), (5.0, 0.0), (80.0, 0.0)), commonMode=('beta', None, (2, 2)), valueDigits=2, missingDataProbability=0.025, NA=-999, BigData=False, seed=None, Debug=False)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from perfTabs.PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performances differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders a html string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and csv stores (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria
Is only adequate if all criteria have the same
evaluation scale. Therefore the performance tableau is normalized to 0.0-100.0 scales.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
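The Borda aggregation step mentioned above can be sketched as follows on hypothetical simulated rankings (this is an illustrative reduction, not the library's pre-ranked outranking machinery):

```python
def borda_order(rankings):
    """Aggregate several best-to-worst rankings by summed Borda ranks;
    a lower total rank means a better consensus position."""
    scores = {}
    for ranking in rankings:
        for pos, action in enumerate(ranking):
            scores[action] = scores.get(action, 0) + pos
    return sorted(scores, key=lambda a: scores[a])

# Three hypothetical rankings from pre-ranking simulations:
rankings = [['a1', 'a2', 'a3'], ['a1', 'a3', 'a2'], ['a2', 'a1', 'a3']]
print(borda_order(rankings))  # ['a1', 'a2', 'a3']
```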
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking with summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
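The increasing equivalence lists can be sketched by grouping criteria sharing the same weight (a minimal illustration with hypothetical names, not the library's implementation):

```python
def weight_preorder(criteria_weights):
    """Group criteria into equivalence classes of equal weight,
    listed in increasing weight order."""
    by_weight = {}
    for g, w in criteria_weights.items():
        by_weight.setdefault(w, []).append(g)
    return [sorted(by_weight[w]) for w in sorted(by_weight)]

weights = {'g1': 1, 'g2': 3, 'g3': 1, 'g4': 2}
print(weight_preorder(weights))  # [['g1', 'g3'], ['g4'], ['g2']]
```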
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
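Ignoring missing data amounts to dropping NA entries and renormalising the weights over the observed criteria, as this hedged sketch on a toy tableau illustrates (names and data are hypothetical):

```python
NA = -999  # missing-data symbol, mirroring the library's default

def weighted_average(evaluation, weights, action, NA=-999):
    """Weighted average of an action's grades, skipping NA entries and
    renormalising the significance weights over the observed criteria."""
    num = den = 0.0
    for g, w in weights.items():
        v = evaluation[g][action]
        if v != NA:
            num += w * v
            den += w
    return num / den

evaluation = {'g1': {'a1': 80.0}, 'g2': {'a1': NA}, 'g3': {'a1': 40.0}}
weights = {'g1': 3, 'g2': 2, 'g3': 1}
print(weighted_average(evaluation, weights, 'a1'))  # (3*80 + 1*40)/4 = 70.0
```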
- convert2BigData(self)
- Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including the discrimination thresholds when present.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Converts in place a standard formatted performance tableau into a bigData formatted instance.
- convertInsite2Standard(self)
- Converts in place a bigData formatted performance tableau back into a standard formatted PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of the criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- saves the quantiles matrix (criterion x action) in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verifies whether the given criteria significance weights are odd or not.
Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
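The recoding is an affine rescaling from the observed range onto [lowValue, highValue]; a minimal sketch of the idea (illustrative only, not the library's method):

```python
def normalize(values, low=0.0, high=100.0):
    """Affinely recode values from their observed [min, max] onto [low, high]."""
    vmin, vmax = min(values), max(values)
    span = vmax - vmin
    return [low + (v - vmin) / span * (high - low) for v in values]

print(normalize([2.0, 4.0, 6.0]))  # [0.0, 50.0, 100.0]
```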
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tiles the evaluations of criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of performance tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Stores the performance tableau self (Actions x Criteria) in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- save performance tableau object self in XMCDA 2.0 format, including the decision objectives when present.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO option of the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showActions(self, Alphabetic=False)
- presentation method for decision actions or alternatives
- showAll(self)
- Show function for the performance tableau
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* the *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing the standard outranking model by default.
* The *minimalComponentSize* parameter allows controlling the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model is
in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See the tutorial on ranking with multiple incommensurable criteria.
* when the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition are put by default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n* /10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value
not exceeding usually 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance Tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations; especially when *Correlations* = *True*.
* By default, the number of available cores will be detected. It may be necessary in an HPC context to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in html format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking with summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criterion.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from perfTabs.PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class RandomRankPerformanceTableau(perfTabs.PerformanceTableau) |
| |
RandomRankPerformanceTableau(numberOfActions=13, numberOfCriteria=7, weightDistribution='equisignificant', weightScale=None, commonThresholds=None, IntegerWeights=True, BigData=False, seed=None, Debug=False)
For generating multiple-criteria ranked (without ties) performances of a given number of decision actions.
On each criterion, all decision actions are hence linearly ordered.
The :py:class:`randomPerfTabs.RandomRankPerformanceTableau` class
is matching the :py:class:`votingDigraphs.RandomLinearVotingProfiles`
class (see http://hdl.handle.net/10993/37933 Lecture 2 : Voting of
the Algorithmic Decision Theory Course)
*Parameters*:
* number of actions,
* number of performance criteria,
* weightDistribution := equisignificant | random (default, see RandomPerformanceTableau)
* weightScale := (1, 1 | numberOfCriteria (default when random))
* IntegerWeights := Boolean (True = default). Weights are negative because all criteria preference directions are 'min': the rank performance is to be minimized.
* commonThresholds (default) := {
| 'ind':(0,0),
| 'pref':(1,0),
| 'veto':(numberOfActions,0)
| } (default)
>>> t = RandomRankPerformanceTableau(numberOfActions=3,numberOfCriteria=2)
>>> t.showObjectives()
The performance tableau does not contain objectives.
>>> t.showCriteria()
*---- criteria -----*
g1 'Random criteria (voter)'
Scale = (Decimal('0'), Decimal('3'))
Weight = -1 # ranks to be minimal
Threshold ind : 0.00 + 0.00x ; percentile: 0.0
Threshold pref : 1.00 + 0.00x ; percentile: 0.667
Threshold veto : 3.00 + 0.00x ; percentile: 1.0
g2 'Random criteria (voter)'
Scale = (Decimal('0'), Decimal('3'))
Weight = -1 # ranks to be minimal
Threshold ind : 0.00 + 0.00x ; percentile: 0.0
Threshold pref : 1.00 + 0.00x ; percentile: 0.667
Threshold veto : 3.00 + 0.00x ; percentile: 1.0
>>> t.showActions()
*----- show decision action --------------*
key: a1
short name: a1
name: random decision action (candidate)
comment: RandomRankPerformanceTableau() generated.
key: a2
short name: a2
name: random decision action (candidate)
comment: RandomRankPerformanceTableau() generated.
key: a3
short name: a3
name: random decision action (candidate)
comment: RandomRankPerformanceTableau() generated.
>>> t.showPerformanceTableau()
*---- performance tableau -----*
criteria | weights | 'a1' 'a2' 'a3'
---------|--------------------------
'g1' | -1 | 3 1 2
'g2' | -1 | 2 1 3 |
| |
- Method resolution order:
- RandomRankPerformanceTableau
- perfTabs.PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, numberOfActions=13, numberOfCriteria=7, weightDistribution='equisignificant', weightScale=None, commonThresholds=None, IntegerWeights=True, BigData=False, seed=None, Debug=False)
- Constructor of random ranks performance tableaux.
Methods inherited from perfTabs.PerformanceTableau:
- (identical to the inherited methods and data descriptors documented above for the Random3ObjectivesPerformanceTableau class; not repeated here)
| |