- builtins.object
    - PerformanceTableau
        - CSVPerformanceTableau
        - CircularPerformanceTableau
        - ConstantPerformanceTableau
        - EmptyPerformanceTableau
        - NormalizedPerformanceTableau
        - PartialPerformanceTableau
        - XMCDA2PerformanceTableau
class CSVPerformanceTableau(PerformanceTableau) |
| |
CSVPerformanceTableau(fileName='temp', Debug=False)
Reads stored CSV-encoded actions x criteria PerformanceTableau instances, using the built-in csv module.
Param:
fileName (without the .csv extension). |
| |
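The exact CSV layout is not spelled out above; as an illustrative sketch only (not the class's own parser), a minimal reader for an actions x criteria grid, assuming a header row of criteria names and one row per action, could use the same built-in csv module:

```python
import csv
import io

# Hypothetical CSV layout: header row with criteria names, first column = action keys.
sample = io.StringIO(
    "action,g1,g2\n"
    "a1,70.0,30.0\n"
    "a2,50.0,90.0\n"
)
reader = csv.reader(sample)
criteria = next(reader)[1:]          # ['g1', 'g2']
evaluation = {}                      # evaluation[criterion][action] = grade
for row in reader:
    action, grades = row[0], row[1:]
    for g, v in zip(criteria, grades):
        evaluation.setdefault(g, {})[action] = float(v)

print(evaluation['g1']['a2'])        # 50.0
```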
- Method resolution order:
- CSVPerformanceTableau
- PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, fileName='temp', Debug=False)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performance differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders an HTML string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
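The percentile rule above can be sketched as follows; this is an illustrative nearest-rank percentile on hypothetical grades, not the method's exact interpolation scheme:

```python
# Hypothetical grades on one criterion; thresholds become percentiles
# of the pairwise absolute performance differences.
grades = [10.0, 25.0, 40.0, 70.0]
diffs = sorted(abs(a - b) for i, a in enumerate(grades) for b in grades[i + 1:])

def percentile(data, p):
    # nearest-rank percentile on a sorted list (illustrative rule)
    k = max(0, min(len(data) - 1, round(p / 100 * (len(data) - 1))))
    return data[k]

quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}
thresholds = {key: percentile(diffs, p) for key, p in quantile.items()}
print(thresholds)
```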
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and stores in CSV format (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria.
Only adequate if all criteria share the same
evaluation scale; the performance tableau is therefore normalized to a 0.0-100.0 scale.
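A common-scale normalization of this kind is, in essence, a linear rescaling; a minimal sketch (not the library's own normalizeEvaluations code):

```python
def rescale(x, old_min, old_max, low=0.0, high=100.0):
    # map x linearly from [old_min, old_max] onto [low, high]
    return low + (x - old_min) * (high - low) / (old_max - old_min)

print(rescale(7.5, 0.0, 10.0))   # 75.0
print(rescale(3.0, 1.0, 5.0))    # 50.0
```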
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
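The decreasing-Borda-score aggregation can be illustrated with hypothetical rankings (the actual method obtains them from simulated pre-ranked outranking digraphs):

```python
# Three simulated rankings of the same actions, best first (hypothetical data).
rankings = [['a', 'b', 'c'], ['b', 'a', 'c'], ['a', 'c', 'b']]
borda = {x: 0 for x in rankings[0]}
for r in rankings:
    for pos, x in enumerate(r):
        borda[x] += len(r) - 1 - pos   # top position earns the most points
order = sorted(borda, key=borda.get, reverse=True)
print(order)   # ['a', 'b', 'c']
```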
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking, together with a summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
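The resulting structure can be sketched on hypothetical weights: criteria with equal significance weight fall into the same equivalence list, and the lists are returned in increasing weight order:

```python
weights = {'g1': 2, 'g2': 1, 'g3': 2, 'g4': 3}   # hypothetical significance weights
levels = sorted(set(weights.values()))
preorder = [sorted(g for g, w in weights.items() if w == lv) for lv in levels]
print(preorder)   # [['g2'], ['g1', 'g3'], ['g4']]
```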
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
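Ignoring missing data amounts to renormalising over the weights of the criteria actually evaluated; a sketch on hypothetical data (the NA marker mirrors the module's Decimal('-999') default, here as a plain float):

```python
NA = -999.0                                   # missing-data marker (module default value)
weights = {'g1': 2.0, 'g2': 1.0}              # hypothetical significance weights
evaluation = {'g1': {'a1': 80.0, 'a2': NA},
              'g2': {'a1': 40.0, 'a2': 60.0}}

def weighted_average(action):
    # keep only the criteria where the action is actually evaluated
    pairs = [(weights[g], evaluation[g][action])
             for g in weights if evaluation[g][action] != NA]
    total_weight = sum(w for w, _ in pairs)
    return sum(w * v for w, v in pairs) / total_weight

print(weighted_average('a1'))   # (2*80 + 1*40) / 3 = 66.66...
print(weighted_average('a2'))   # only g2 counts -> 60.0
```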
- convert2BigData(self)
- Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including the discrimination thresholds where given.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Convert in situ a standard-format PerformanceTableau into a bigData-format instance.
- convertInsite2Standard(self)
- Convert in situ a bigData-format PerformanceTableau back into a standard-format PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of the criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- save the quantiles matrix criterion x action in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verify whether the given criteria significance weights are odd or not.
Return a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tile evaluation of criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of performance tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Store the performance tableau self, Actions x Criteria, in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- save performance tableau object self in XMCDA 2.0 format, including decision objectives where given.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO in the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showActions(self, Alphabetic=False)
- presentation methods for decision actions or alternatives
- showAll(self)
- Show function for performance tableau instances
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* The *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model instead of recomputing the standard outranking model by default.
* The *minimalComponentSize* parameter allows controlling the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model will
in fact be equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See the tutorial on ranking with multiple incommensurable criteria.
* When the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition are set by default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n*/10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value,
usually not exceeding 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance Tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations; especially when *Correlations* = *True*.
* By default, the number of available cores will be detected. It may be necessary in an HPC context to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in HTML format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking, together with a summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criterion.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class CircularPerformanceTableau(PerformanceTableau) |
| |
CircularPerformanceTableau(order=5, scale=(0.0, 100.0), NoPolarisation=True)
Constructor for circular performance tableaux. |
| |
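The circular idea can be sketched as follows: give every action a circular shift of one common grade profile on the given scale, so that no action dominates the others (an illustrative construction, not necessarily the class's exact recipe):

```python
order = 5
scale = (0.0, 100.0)
# Common profile: `order` equally spaced grades on the scale.
step = (scale[1] - scale[0]) / (order - 1)
profile = [scale[0] + i * step for i in range(order)]        # 0, 25, 50, 75, 100
# Each action gets the profile rotated by one further position.
evaluation = {'a%d' % (i + 1): profile[i:] + profile[:i] for i in range(order)}
print(evaluation['a2'])   # [25.0, 50.0, 75.0, 100.0, 0.0]
```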
- Method resolution order:
- CircularPerformanceTableau
- PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, order=5, scale=(0.0, 100.0), NoPolarisation=True)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performance differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders an HTML string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and stores in CSV format (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria.
Only adequate if all criteria share the same
evaluation scale; the performance tableau is therefore normalized to a 0.0-100.0 scale.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking, together with a summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
- convert2BigData(self)
- Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including the discrimination thresholds where given.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Convert in situ a standard-format PerformanceTableau into a bigData-format instance.
- convertInsite2Standard(self)
- Convert in situ a bigData-format PerformanceTableau back into a standard-format PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of the criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- save the quantiles matrix criterion x action in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verify whether the given criteria significance weights are odd or not.
Return a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tile evaluation of criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of performance tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Store the performance tableau self, Actions x Criteria, in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- save performance tableau object self in XMCDA 2.0 format, including decision objectives where given.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO in the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showActions(self, Alphabetic=False)
- presentation methods for decision actions or alternatives
- showAll(self)
- Show function for performance tableau instances
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* The *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model instead of recomputing the standard outranking model by default.
* The *minimalComponentSize* parameter allows controlling the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model will
in fact be equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See the tutorial on ranking with multiple incommensurable criteria.
* When the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition are set by default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n*/10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value,
usually not exceeding 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance Tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations; especially when *Correlations* = *True*.
* By default, the number of available cores will be detected. It may be necessary in an HPC context to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in HTML format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking, together with a summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criterion.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class ConstantPerformanceTableau(PerformanceTableau) |
| |
ConstantPerformanceTableau(inPerfTab, actionsSubset=None, criteriaSubset=None, position=0.5)
Constructor for (partially) constant performance tableaux.
*Parameters*:
* *actionsSubset* selects the actions to be set at equal constant performances,
* *criteriaSubset* selects the concerned subset of criteria,
* The *position* parameter (default = median performance) selects the constant performance in the respective scale of each performance criterion. |
| |
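The *position* parameter can be read as a fraction of each criterion scale; a hedged sketch of that mapping (illustrative helper, not the class's internal code):

```python
def constant_grade(scale_min, scale_max, position=0.5):
    # position = 0.5 picks the scale midpoint (the median performance)
    return scale_min + position * (scale_max - scale_min)

print(constant_grade(0.0, 100.0))        # 50.0
print(constant_grade(0.0, 10.0, 0.25))   # 2.5
```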
- Method resolution order:
- ConstantPerformanceTableau
- PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, inPerfTab, actionsSubset=None, criteriaSubset=None, position=0.5)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performance differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders an HTML string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and stores in CSV format (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria.
Only adequate if all criteria share the same
evaluation scale; the performance tableau is therefore normalized to a 0.0-100.0 scale.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking with summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
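Both methods locate a threshold within the distribution of observed performance differences. A minimal sketch of the constant-threshold case (the helper name `threshold_percentile` is hypothetical):

```python
def threshold_percentile(differences, threshold):
    # Fraction (in [0, 1]) of the observed performance differences on a
    # criterion that do not exceed the given constant threshold.
    n = len(differences)
    return sum(1 for d in differences if d <= threshold) / n

p = threshold_percentile([0, 1, 2, 5, 10], 2)
```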
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
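The weight preorder described above can be sketched as a grouping of criteria into equal-weight equivalence classes (illustrative only; `weight_preorder` is a hypothetical helper, not part of the module):

```python
from collections import defaultdict

def weight_preorder(weights):
    # Group criteria into equivalence classes of equal significance weight,
    # listed in increasing weight order.
    groups = defaultdict(list)
    for g, w in weights.items():
        groups[w].append(g)
    return [sorted(groups[w]) for w in sorted(groups)]

preorder = weight_preorder({'g1': 1, 'g2': 3, 'g3': 1})
```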
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
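The missing-data handling described above amounts to renormalizing the weights over the available criteria. A minimal sketch, assuming the module's default NA symbol of -999 (the helper name `weighted_average` is hypothetical):

```python
def weighted_average(evaluations, weights, NA=-999):
    # Weighted average of one action's evaluations, ignoring missing (NA)
    # entries; the weights are renormalized over the available criteria.
    num = den = 0.0
    for g, x in evaluations.items():
        if x == NA:
            continue
        num += weights[g] * x
        den += weights[g]
    return num / den if den else None

score = weighted_average({'g1': 80.0, 'g2': -999, 'g3': 60.0},
                         {'g1': 2, 'g2': 1, 'g3': 1})
```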
- convert2BigData(self)
- Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including the discrimination thresholds when given.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Converts in place a standard formatted PerformanceTableau into a bigData formatted instance.
- convertInsite2Standard(self)
- Converts in place a bigData formatted PerformanceTableau back into a standard formatted PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- saves the quantiles matrix criterion x action in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verifies whether the given criteria significance weights are odd or not.
Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
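The recoding above is, in essence, a linear min-max rescale per criterion. A minimal sketch under that assumption (the helper name `rescale` is hypothetical):

```python
def rescale(x, xmin, xmax, lowValue=0.0, highValue=100.0):
    # Linear min-max recoding of one evaluation onto [lowValue, highValue],
    # given the criterion's observed minimum and maximum.
    return lowValue + (x - xmin) * (highValue - lowValue) / (xmax - xmin)

rescale(7.5, 0.0, 10.0)   # maps 7.5 on a 0-10 scale to 75.0 on a 0-100 scale
```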
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tile quantization of the evaluations on criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of Performance Tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Stores the performance tableau self (Actions x Criteria) in CSV format.
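The Actions x Criteria CSV layout can be sketched with the standard `csv` module (an illustrative simplification, not the module's implementation; `to_csv` is a hypothetical helper, and the `{criterion: {action: value}}` evaluation shape is assumed):

```python
import csv
import io

def to_csv(actions, criteria, evaluation, ndigits=2):
    # Render an Actions x Criteria table as CSV text; evaluation is assumed
    # to be a {criterion: {action: value}} mapping.
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(['action'] + list(criteria))
    for a in actions:
        writer.writerow([a] + [round(float(evaluation[g][a]), ndigits)
                               for g in criteria])
    return out.getvalue()

text = to_csv(['a1', 'a2'], ['g1', 'g2'],
              {'g1': {'a1': 1.234, 'a2': 2.0}, 'g2': {'a1': 3.0, 'a2': 4.0}})
```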
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- saves the performance tableau object self in XMCDA 2.0 format, including the decision objectives when given.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO in the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showActions(self, Alphabetic=False)
- presentation method for decision actions or alternatives
- showAll(self)
- Show function for the performance tableau
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* the *outrankingModel* parameter (default = 'standard') allows to switch to alternative BipolarOutrankingDigraph constructors, like 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing by default the standard outranking model.
* The *minimalComponentSize* allows to control the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model is
in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See tutorial on ranking with multiple incommensurable criteria.
* when the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition are put by default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n* /10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value
not exceeding usually 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations, especially when *Correlations* = *True*.
* By default, the number of available cores is detected. In an HPC context it may be necessary to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in html format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking with summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criterion.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class EmptyPerformanceTableau(PerformanceTableau) |
| |
Template for PerformanceTableau objects. |
| |
- Method resolution order:
- EmptyPerformanceTableau
- PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performances differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders a html string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
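The underlying computation can be sketched as the sorted list of pairwise absolute differences on one criterion (illustrative only; `performance_differences` is a hypothetical helper):

```python
def performance_differences(values):
    # Ordered list of all pairwise absolute performance differences
    # observed on a single criterion.
    vals = list(values)
    diffs = [abs(x - y)
             for i, x in enumerate(vals)
             for y in vals[i + 1:]]
    return sorted(diffs)

performance_differences([10, 30, 70])
```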
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
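The proportion computed above can be sketched as a simple count of NA entries, assuming the `{criterion: {action: value}}` evaluation shape and the default -999 NA symbol (`missing_data_proportion` is a hypothetical helper):

```python
def missing_data_proportion(evaluation, NA=-999, in_percents=False):
    # Share of missing (NA) entries among all evaluations; evaluation is
    # assumed to be a {criterion: {action: value}} mapping.
    values = [x for column in evaluation.values() for x in column.values()]
    proportion = sum(1 for x in values if x == NA) / len(values)
    return proportion * 100 if in_percents else proportion

p = missing_data_proportion({'g1': {'a1': 5, 'a2': -999},
                             'g2': {'a1': 7, 'a2': 9}})
```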
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and stores in CSV format (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria.
This is only adequate if all criteria share the same
evaluation scale; the performance tableau is therefore normalized to 0.0-100.0 scales.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking with summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
- convert2BigData(self)
- Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including the discrimination thresholds when given.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Converts in place a standard formatted PerformanceTableau into a bigData formatted instance.
- convertInsite2Standard(self)
- Converts in place a bigData formatted PerformanceTableau back into a standard formatted PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- saves the quantiles matrix criterion x action in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verifies whether the given criteria significance weights are odd or not.
Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tile quantization of the evaluations on criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of Performance Tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Stores the performance tableau self (Actions x Criteria) in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- saves the performance tableau object self in XMCDA 2.0 format, including the decision objectives when given.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO in the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showActions(self, Alphabetic=False)
- presentation method for decision actions or alternatives
- showAll(self)
- Show function for the performance tableau
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* the *outrankingModel* parameter (default = 'standard') allows to switch to alternative BipolarOutrankingDigraph constructors, like 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing by default the standard outranking model.
* The *minimalComponentSize* allows to control the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model is
in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See tutorial on ranking with multiple incommensurable criteria.
* when the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition are put by default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n* /10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value
not exceeding usually 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations, especially when *Correlations* = *True*.
* By default, the number of available cores is detected. In an HPC context it may be necessary to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in html format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking with summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criterion.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class NormalizedPerformanceTableau(PerformanceTableau) |
| |
NormalizedPerformanceTableau(argPerfTab=None, lowValue=0, highValue=100, coalition=None, Debug=False)
specialisation of the PerformanceTableau class for
constructing normalized, 0 - 100, valued PerformanceTableau
instances from a given argPerfTab instance. |
| |
- Method resolution order:
- NormalizedPerformanceTableau
- PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, argPerfTab=None, lowValue=0, highValue=100, coalition=None, Debug=False)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performances differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders a html string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
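The dictionary shape above can be sketched in plain Python, assuming the `{criterion: {action: value}}` evaluation layout and the default -999 NA symbol (`min_max_evaluations` is a hypothetical helper):

```python
def min_max_evaluations(evaluation, NA=-999):
    # Minimum and maximum observed performance per criterion, ignoring
    # missing (NA) entries; mirrors the {'g': {'minimum': x, 'maximum': x}}
    # result shape of the method documented above.
    result = {}
    for g, column in evaluation.items():
        vals = [x for x in column.values() if x != NA]
        result[g] = {'minimum': min(vals), 'maximum': max(vals)}
    return result

min_max_evaluations({'g1': {'a1': 4, 'a2': -999, 'a3': 9}})
```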
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and stores in CSV format (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria.
This is only adequate if all criteria share the same
evaluation scale; the performance tableau is therefore normalized to 0.0-100.0 scales.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking with summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
- convert2BigData(self)
- Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including the discrimination thresholds when given.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Converts in place a standard formatted PerformanceTableau into a bigData formatted instance.
- convertInsite2Standard(self)
- Converts in place a bigData formatted PerformanceTableau back into a standard formatted PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- saves the quantiles matrix criterion x action in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verifies whether the given criteria significance weights are odd or not.
Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tile quantization of the evaluations on criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of Performance Tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Stores the performance tableau self (Actions x Criteria) in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- saves the performance tableau object self in XMCDA 2.0 format, including the decision objectives when given.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO in the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showActions(self, Alphabetic=False)
- presentation method for decision actions or alternatives
- showAll(self)
- Show function for performance tableau
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* The *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing the standard outranking model.
* The *minimalComponentSize* parameter allows controlling the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model
becomes in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See tutorial on ranking with multiple incommensurable criteria.
* When the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n*/10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value,
usually not exceeding 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance Tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations; especially when *Correlations* = *True*.
* By default, the number of available cores is detected automatically. In an HPC context it may be necessary to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in html format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking with summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criteria.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class PartialPerformanceTableau(PerformanceTableau) |
| |
PartialPerformanceTableau(inPerfTab, actionsSubset=None, criteriaSubset=None, objectivesSubset=None)
Constructor for partial performance tableaux concerning a subset of actions and/or criteria and/or objectives |
| |
- Method resolution order:
- PartialPerformanceTableau
- PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, inPerfTab, actionsSubset=None, criteriaSubset=None, objectivesSubset=None)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performance differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders an HTML string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
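The documented return shape of computeMinMaxEvaluations() can be reproduced with a short pure-Python sketch (the helper name and sample data are illustrative, not the library's internals):

```python
from decimal import Decimal

NA = Decimal('-999')  # default missing-data symbol

def min_max_evaluations(evaluation, na=NA):
    # Per criterion: minimum and maximum observed grade, NA entries ignored,
    # rendered as {'g': {'minimum': x, 'maximum': x}}.
    result = {}
    for g, row in evaluation.items():
        observed = [v for v in row.values() if v != na]
        result[g] = {'minimum': min(observed), 'maximum': max(observed)}
    return result

evaluation = {'g1': {'a1': Decimal('15.17'), 'a2': Decimal('44.51'), 'a3': NA}}
mm = min_max_evaluations(evaluation)
```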
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
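The proportion computed by computeMissingDataProportion() amounts to counting NA cells over all criterion x action entries. A minimal sketch, with an assumed helper name mirroring the method's parameters:

```python
from decimal import Decimal

def missing_data_proportion(evaluation, na=Decimal('-999'), in_percents=False):
    # Count NA entries over all criterion x action cells.
    total = sum(len(row) for row in evaluation.values())
    missing = sum(1 for row in evaluation.values()
                  for v in row.values() if v == na)
    proportion = missing / total
    return proportion * 100 if in_percents else proportion

evaluation = {'g1': {'a1': Decimal('10'), 'a2': Decimal('-999')},
              'g2': {'a1': Decimal('20'), 'a2': Decimal('30')}}
```

With one NA cell out of four, the proportion is 0.25, or 25.0 with *in_percents* set.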
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and stores to CSV (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria.
Only adequate if all criteria share the same evaluation scale;
the performance tableau is therefore normalized to 0.0-100.0 scales.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10, len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
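The final aggregation step described above, ordering actions by the Borda scores of their ranking results, can be sketched in plain Python. The function name and sample rankings are illustrative; the simulation of pre-ranked outranking digraphs itself is not reproduced here:

```python
def borda_order(rankings):
    # Sum each action's positions (0 = best) over all sampled rankings
    # and order by increasing total Borda score.
    scores = {}
    for ranking in rankings:
        for pos, action in enumerate(ranking):
            scores[action] = scores.get(action, 0) + pos
    return sorted(scores, key=scores.get)

sampled = [['a1', 'a2', 'a3'], ['a2', 'a1', 'a3'], ['a1', 'a2', 'a3']]
```

Here *a1* accumulates the lowest total position (1) and comes out first.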
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10, len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking with summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
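The increasing equivalence lists described for computeWeightPreorder() can be sketched by grouping criteria keys by equal weight. An illustrative stand-in with assumed names, not the library's code:

```python
from decimal import Decimal

def weight_preorder(criteria):
    # Group criteria keys into equivalence classes of equal significance
    # weight, returned in increasing weight order.
    by_weight = {}
    for g, data in criteria.items():
        by_weight.setdefault(data['weight'], []).append(g)
    return [sorted(by_weight[w]) for w in sorted(by_weight)]

criteria = {'g1': {'weight': Decimal('3')},
            'g2': {'weight': Decimal('5')},
            'g3': {'weight': Decimal('3')}}
```

Equally weighted *g1* and *g3* land in the same (lowest) class, ahead of *g2*.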
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
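Ignoring missing data in a weighted average means dropping both the NA grade and its weight from the computation. A minimal sketch of that idea, under the assumption that missing grades are excluded from the denominator as well:

```python
from decimal import Decimal

NA = Decimal('-999')

def weighted_average_performances(criteria, evaluation, na=NA):
    # Per action: weighted mean of the available (non-NA) grades; the
    # weights of missing grades are left out of the denominator.
    actions = sorted({a for row in evaluation.values() for a in row})
    scores = {}
    for a in actions:
        num = den = Decimal('0')
        for g, data in criteria.items():
            v = evaluation[g][a]
            if v != na:
                num += data['weight'] * v
                den += data['weight']
        scores[a] = num / den if den else na
    return scores

criteria = {'g1': {'weight': Decimal('1')}, 'g2': {'weight': Decimal('3')}}
evaluation = {'g1': {'a1': Decimal('10'), 'a2': Decimal('10')},
              'g2': {'a1': NA, 'a2': Decimal('20')}}
scores = weighted_average_performances(criteria, evaluation)
```

Action *a1*, missing on *g2*, is scored from *g1* alone; *a2* gets (1·10 + 3·20)/4.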
- convert2BigData(self)
- Renders a cPerformanceTableau instance, by converting the action keys to integers and evaluations to floats, including the discrimination thresholds, if any.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Convert in situ a standard formatted performance tableau into a bigData formatted instance.
- convertInsite2Standard(self)
- Convert in situ a bigData formatted performance tableau back into a standard formatted PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- save quantiles matrix criterion x action in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verifies whether the given criteria significance weights are odd or not.
Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recodes the evaluations between lowValue and highValue on all criteria
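Recoding evaluations onto a common [lowValue, highValue] interval is a per-criterion linear rescaling. A sketch under the assumption of a simple linear map from each criterion's (minimum, maximum) scale; names and sample data are illustrative:

```python
from decimal import Decimal

NA = Decimal('-999')

def normalize_evaluations(evaluation, scales, low=Decimal('0.0'),
                          high=Decimal('100.0'), na=NA):
    # Linearly recode each non-NA grade from its criterion scale
    # (minimum, maximum) onto the [low, high] interval.
    out = {}
    for g, row in evaluation.items():
        m, M = scales[g]
        out[g] = {a: (na if v == na else
                      low + (v - m) / (M - m) * (high - low))
                  for a, v in row.items()}
    return out

scales = {'g1': (Decimal('0'), Decimal('200'))}
evaluation = {'g1': {'a1': Decimal('50'), 'a2': NA}}
normalized = normalize_evaluations(evaluation, scales)
```

A grade of 50 on a 0-200 scale becomes 25 on the default 0-100 scale; NA entries are passed through unchanged.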
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tiles evaluation of criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of Performance Tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Stores the performance tableau self (actions x criteria) in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- save performance tableau object self in XMCDA 2.0 format including decision objectives, if any.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO in the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showActions(self, Alphabetic=False)
- presentation methods for decision actions or alternatives
- showAll(self)
- Show function for performance tableau
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* The *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing the standard outranking model.
* The *minimalComponentSize* parameter allows controlling the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model
becomes in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See tutorial on ranking with multiple incommensurable criteria.
* When the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n*/10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value,
usually not exceeding 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance Tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations; especially when *Correlations* = *True*.
* By default, the number of available cores is detected automatically. In an HPC context it may be necessary to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in html format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking with summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criteria.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class PerformanceTableau(builtins.object) |
| |
PerformanceTableau(filePerfTab=None, isEmpty=False)
In this *Digraph3* module, the root :py:class:`perfTabs.PerformanceTableau` class provides a generic **performance table model**. A given object of this class consists in:
* A set of potential decision **actions** : an ordered dictionary describing the potential decision actions or alternatives with 'name' and 'comment' attributes,
* An optional set of decision **objectives**: an ordered dictionary with name, comment, weight and list of concerned criteria per objective,
* A coherent family of **criteria**: an ordered dictionary of criteria functions used for measuring the performance of each potential decision action with respect to the preference dimension captured by each criterion,
* The **evaluation**: a dictionary of performance evaluations for each decision action or alternative on each criterion function,
* The NA numerical symbol: Decimal('-999') by default representing missing evaluation data.
Structure::
actions = OrderedDict([('a1', {'name': ..., 'comment': ...}),
('a2', {'name': ..., 'comment': ...}),
...])
objectives = OrderedDict([
('obj1', {'name': ..., 'comment': ..., 'weight': ..., 'criteria': ['g1', ...]}),
('obj2', {'name': ..., 'comment': ..., 'weight': ..., 'criteria': ['g2', ...]}),
...])
criteria = OrderedDict([
('g1', {'weight':Decimal("3.00"),
'scale': (Decimal("0.00"),Decimal("100.00")),
'thresholds' : {'pref': (Decimal('20.0'), Decimal('0.0')),
'ind': (Decimal('10.0'), Decimal('0.0')),
'veto': (Decimal('80.0'), Decimal('0.0'))},
'objective': 'obj1',
}),
('g2', {'weight':Decimal("5.00"),
'scale': (Decimal("0.00"),Decimal("100.00")),
'thresholds' : {'pref': (Decimal('20.0'), Decimal('0.0')),
'ind': (Decimal('10.0'), Decimal('0.0')),
'veto': (Decimal('80.0'), Decimal('0.0'))},
'objective': 'obj2',
}),
...])
evaluation = {'g1': {'a1':Decimal("57.28"),'a2':Decimal("99.85"), ...},
'g2': {'a1':Decimal("88.12"),'a2':Decimal("-999"), ...},
...}
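A minimal concrete instance of this structure can be written down directly in Python (two actions, one criterion; the names and values here are illustrative):

```python
from collections import OrderedDict
from decimal import Decimal

actions = OrderedDict([
    ('a1', {'name': 'alternative 1', 'comment': 'example'}),
    ('a2', {'name': 'alternative 2', 'comment': 'example'})])

criteria = OrderedDict([
    ('g1', {'weight': Decimal('3.00'),
            'scale': (Decimal('0.00'), Decimal('100.00')),
            'thresholds': {'ind': (Decimal('10.0'), Decimal('0.0')),
                           'pref': (Decimal('20.0'), Decimal('0.0')),
                           'veto': (Decimal('80.0'), Decimal('0.0'))}})])

# Decimal('-999') is the default NA symbol for missing evaluations.
evaluation = {'g1': {'a1': Decimal('57.28'), 'a2': Decimal('-999')}}
```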
With the help of the :py:class:`perfTabs.RandomPerformanceTableau` class let us generate for illustration a random performance tableau concerning 7 decision actions or alternatives denoted *a01*, *a02*, ..., *a07*:
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showActions()
*----- show decision action --------------*
key: a01
short name: a01
name: random decision action
comment: RandomPerformanceTableau() generated.
key: a02
short name: a02
name: random decision action
comment: RandomPerformanceTableau() generated.
key: a03
short name: a03
name: random decision action
comment: RandomPerformanceTableau() generated.
...
...
key: a07
name: random decision action
comment: RandomPerformanceTableau() generated.
In this example we consider furthermore a family of seven equisignificant cardinal criteria functions *g01*, *g02*, ..., *g07*, measuring the performance of each alternative on a rational scale from 0.00 to 100.00. In order to capture the evaluation's uncertainty and imprecision, each criterion function *g1* to *g7* admits three performance discrimination thresholds of 10, 20 and 80 pts for warranting respectively any indifference, preference and veto situations:
>>> rt.showCriteria(IntegerWeights=True)
*---- criteria -----*
g1 RandomPerformanceTableau() instance
Preference direction: max
Scale = (0.00, 100.00)
Weight = 1
Threshold ind : 2.50 + 0.00x ; percentile: 6.06
Threshold pref : 5.00 + 0.00x ; percentile: 12.12
Threshold veto : 80.00 + 0.00x ; percentile: 100.00
g2 RandomPerformanceTableau() instance
Preference direction: max
Scale = (0.00, 100.00)
Weight = 1
Threshold ind : 2.50 + 0.00x ; percentile: 7.69
Threshold pref : 5.00 + 0.00x ; percentile: 14.10
Threshold veto : 80.00 + 0.00x ; percentile: 100.00
g3 RandomPerformanceTableau() instance
Preference direction: max
Scale = (0.00, 100.00)
Weight = 1
Threshold ind : 2.50 + 0.00x ; percentile: 6.41
Threshold pref : 5.00 + 0.00x ; percentile: 6.41
Threshold veto : 80.00 + 0.00x ; percentile: 100.00
...
...
g7 RandomPerformanceTableau() instance
Preference direction: max
Scale = (0.00, 100.00)
Weight = 1
Threshold ind : 2.50 + 0.00x ; percentile: 3.85
Threshold pref : 5.00 + 0.00x ; percentile: 11.54
Threshold veto : 80.00 + 0.00x ; percentile: 100.00
The performance evaluations of each decision alternative on each criterion are gathered in a *performance tableau*:
>>> rt.showPerformanceTableau()
*---- performance tableau -----*
Criteria | 'g1' 'g2' 'g3' 'g4' 'g5' 'g6' 'g7'
Actions | 1 1 1 1 1 1 1
---------|-------------------------------------------------------
'a01' | 15.17 62.22 39.35 31.83 38.81 56.93 64.96
'a02' | 44.51 44.23 32.06 69.98 67.45 65.57 79.38
'a03' | 57.87 19.10 47.67 48.80 38.93 83.87 75.11
'a04' | 58.00 27.73 14.81 82.88 19.26 34.99 49.30
'a05' | 24.22 41.46 79.70 41.66 94.95 49.56 43.74
'a06' | 29.10 22.41 67.48 12.82 65.63 79.43 15.31
'a07' | NA 21.52 13.97 21.92 48.00 42.37 59.94
'a08' | 82.29 56.90 90.72 75.74 7.97 42.39 31.39
'a09' | 43.90 46.37 80.16 15.45 34.86 33.75 26.80
'a10' | 38.75 16.22 69.62 6.05 71.81 38.60 59.02
'a11' | 35.84 21.53 45.49 9.96 31.66 57.38 40.85
'a12' | 29.12 51.16 22.03 60.55 41.14 62.34 49.12
'a13' | 34.79 77.01 33.83 27.88 53.58 34.95 45.20 |
| |
Methods defined here:
- __init__(self, filePerfTab=None, isEmpty=False)
- Initialize self. See help(type(self)) for accurate signature.
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performance differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
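One common way to render such a quantile under the 'average' strategy is the mid-rank: the mean of the fractions of grades strictly below and weakly below the action's grade. Whether this matches the library's exact tie handling is an assumption; the sketch below only illustrates the idea:

```python
def mid_rank_quantile(value, distribution):
    # 'average' strategy sketch: mean of P(X < value) and P(X <= value)
    # over the observed grades of the criterion.
    n = len(distribution)
    below = sum(1 for x in distribution if x < value)
    at_or_below = sum(1 for x in distribution if x <= value)
    return (below + at_or_below) / (2 * n)
```

For the grade 3 within [1, 2, 3, 4], two grades lie strictly below and three weakly below, giving (2 + 3)/8 = 0.625.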
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders a html string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and stores to CSV (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria.
Only adequate if all criteria share the same evaluation scale;
the performance tableau is therefore normalized to 0.0-100.0 scales.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10, len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10, len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking with summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
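For a constant threshold, this quantile reduces to the share of observed performance differences that do not exceed the threshold. A minimal sketch (the helper name is illustrative):

```python
def threshold_percentile(differences, threshold):
    # Share of observed performance differences on a criterion that do
    # not exceed the given constant threshold.
    return sum(1 for d in differences if d <= threshold) / len(differences)
```

With differences [1, 2, 3, 4] and threshold 2, half of the differences are discriminated away, giving 0.5.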
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
- convert2BigData(self)
- Renders a cPerformanceTableau instance, by converting the action keys to integers and evaluations to floats, including the discrimination thresholds, if any.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Convert in situ a standard formatted performance tableau into a bigData formatted instance.
- convertInsite2Standard(self)
- Convert in situ a bigData formatted performance tableau back into a standard formatted PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- save quantiles matrix criterion x action in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verifies whether the given criteria significance weights are odd or not.
Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tiles evaluation of criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of Performance Tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Stores the performance tableau self (actions x criteria) in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- save performance tableau object self in XMCDA 2.0 format including decision objectives, if any.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO flag of the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
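The documented behaviour of setObjectiveWeights can be sketched on a hypothetical data layout (the dictionary keys below are assumptions for illustration, not the module's internals): each objective's weight becomes the sum of the significance weights of its criteria.

```python
from decimal import Decimal

# Assumed layout: criteria carry an 'objective' key and a Decimal 'weight'.
criteria = {
    'g1': {'objective': 'eco', 'weight': Decimal('2')},
    'g2': {'objective': 'eco', 'weight': Decimal('3')},
    'g3': {'objective': 'soc', 'weight': Decimal('1')},
}
objectives = {'eco': {}, 'soc': {}}

for obj in objectives:
    # objective weight = sum of the weights of its attached criteria
    objectives[obj]['weight'] = sum(
        c['weight'] for c in criteria.values() if c['objective'] == obj)
```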
- showActions(self, Alphabetic=False)
- presentation method for decision actions or alternatives
- showAll(self)
- Show function for the performance tableau
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* the *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing the standard outranking model.
* The *minimalComponentSize* parameter allows to control the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions) the pre-ranked model will be
in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See tutorial on ranking with multiple incommensurable criteria.
* when the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition are put by default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n* /10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value
not exceeding usually 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance Tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations; especially when *Correlations* = *True*.
* By default, the number of available cores will be detected. It may be necessary in an HPC context to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in html format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking, together with a summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criterion.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
|
class XMCDA2PerformanceTableau(PerformanceTableau) |
| |
XMCDA2PerformanceTableau(fileName='temp', HasSeparatedWeights=False, HasSeparatedThresholds=False, stringInput=None, Debug=False)
For reading stored XMCDA 2.0 formatted instances with exact decimal numbers.
Using the inbuilt module xml.etree (for Python 2.5+).
Parameters:
* fileName is given without the extension ``.xml`` or ``.xmcda``,
* HasSeparatedWeights in XMCDA 2.0.0 encoding (default = False),
* HasSeparatedThresholds in XMCDA 2.0.0 encoding (default = False),
* stringInput: instantiates from an XMCDA 2.0 encoded string argument.
!! *Obsolete by now* !! |
| |
- Method resolution order:
- XMCDA2PerformanceTableau
- PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, fileName='temp', HasSeparatedWeights=False, HasSeparatedThresholds=False, stringInput=None, Debug=False)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from PerformanceTableau:
- __repr__(self)
- Default presentation method for PerformanceTableau instances.
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performance differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders a html string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
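The percentile strategy described above may be sketched like this; the function name and the stand-alone signature are illustrative only — the actual method updates the discrimination thresholds stored in self.criteria:

```python
def thresholds_from_differences(diffs, quantile={'ind': 10, 'pref': 20,
                                                 'weakVeto': 60, 'veto': 80}):
    # Pick each threshold as the given percentile of the ordered
    # observed performance differences (nearest-rank rule).
    ordered = sorted(diffs)
    n = len(ordered)
    result = {}
    for name, pct in quantile.items():
        idx = min(n - 1, max(0, round(pct / 100 * (n - 1))))
        result[name] = ordered[idx]
    return result
```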
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
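A minimal sketch of that computation, on an assumed criterion x action evaluation dictionary (not the library's internal layout):

```python
from decimal import Decimal

def missing_data_proportion(evaluation, NA=Decimal('-999'), InPercents=False):
    # Share of NA entries among all evaluation cells.
    cells = [v for row in evaluation.values() for v in row.values()]
    prop = sum(1 for v in cells if v == NA) / len(cells)
    return prop * 100 if InPercents else prop
```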
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
- renders and stores in CSV format (withOutput=True) the
list of normalized evaluation differences observed on the family of criteria.
Only adequate if all criteria have the same
evaluation scale; therefore the performance tableau is normalized to 0.0-100.0 scales.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking, together with a summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
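The weight preorder described above can be sketched as follows (an illustrative stand-alone function on an assumed criteria dictionary): criteria are grouped into equivalence classes of equal significance weight, listed in increasing weight order.

```python
def weight_preorder(criteria):
    # Group criteria keys by weight, then emit the equivalence
    # classes in increasing weight order.
    by_weight = {}
    for g, characteristics in criteria.items():
        by_weight.setdefault(characteristics['weight'], []).append(g)
    return [sorted(by_weight[w]) for w in sorted(by_weight)]
```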
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
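A sketch of the documented scoring rule, on an assumed data layout (not the library's internals): the weighted average of one action over all criteria, with missing (NA) evaluations left out of both the numerator and the normalizing weight sum.

```python
def weighted_average(action, criteria, evaluation, NA=-999):
    num = den = 0.0
    for g, characteristics in criteria.items():
        v = evaluation[g][action]
        if v != NA:  # ignore missing data
            num += float(characteristics['weight']) * float(v)
            den += float(characteristics['weight'])
    return num / den if den > 0 else None
```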
- convert2BigData(self)
- Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including the discrimination thresholds if given.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
- Converts in place a standard formatted performance tableau into a bigData formatted instance.
- convertInsite2Standard(self)
- Converts in place a bigData formatted performance tableau back into a standard formatted PerformanceTableau instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
- Negates the weights of criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
- save quantiles matrix criterion x action in CSV format
- hasOddWeightAlgebra(self, Debug=False)
- Verifies whether the given criteria significance weights are odd or not.
Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
- q-tiles quantization of the evaluations observed on criterion g
- replaceNA(self, newNA=None, Comments=False)
- Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If newNA is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- save(self, fileName='tempperftab', isDecimal=True, valueDigits=2, Comments=True)
- Persistent storage of performance tableaux.
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
- Store the performance tableau self (actions x criteria) in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
- save performance tableau object self in XMCDA 2.0 format, including the decision objectives if given.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO flag of the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showActions(self, Alphabetic=False)
- presentation method for decision actions or alternatives
- showAll(self)
- Show function for the performance tableau
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=False, Debug=False)
- print Criteria with thresholds and weights.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* the *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outrankingDigraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing the standard outranking model.
* The *minimalComponentSize* parameter allows to control the fill rate of the pre-ranked model.
When *minimalComponentSize* = *n* (the number of decision actions) the pre-ranked model will be
in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See tutorial on ranking with multiple incommensurable criteria.
* when the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition are put by default to *n*
(the number of decision alternatives) for *n* < 50. For larger cardinalities up to 1000, quantiles = *n* /10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value
not exceeding usually 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance Tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations; especially when *Correlations* = *True*.
* By default, the number of available cores will be detected. It may be necessary in an HPC context to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
- renders the pairwise comparison parameters on all criteria
in html format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking, together with a summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criterion.
- showWeightPreorder(self)
- Renders a preordering of the criteria significance weights.
Data descriptors inherited from PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
| |