- perfTabs.PerformanceTableau(builtins.object)
  - PerformanceQuantiles

class PerformanceQuantiles(perfTabs.PerformanceTableau)
PerformanceQuantiles(perfTab=None, numberOfBins=4, LowerClosed=True, Debug=False)
Implements the incremental performance quantiles representation of a
given performance tableau.
*Parameters*:
* *perfTab*: may be either a PerformanceTableau object or the name of a previously saved PerformanceQuantiles instance
* *numberOfBins*: may be either 'quartiles', 'deciles', ..., or *n*, an integer number of bins.
Example python session:
>>> import performanceQuantiles
>>> from randomPerfTabs import RandomCBPerformanceTableau
>>> from randomPerfTabs import RandomPerformanceGenerator as PerfTabGenerator
>>> nbrActions=1000
>>> nbrCrit = 7
>>> tp = RandomCBPerformanceTableau(numberOfActions=nbrActions,
... numberOfCriteria=nbrCrit,seed=105)
>>> pq = performanceQuantiles.PerformanceQuantiles(tp,'quartiles',
... LowerClosed=True,Debug=False)
>>> pq.showLimitingQuantiles(ByObjectives=True)
*---- performance quantiles -----*
Costs
criteria | weights | '0.0' '0.25' '0.5' '0.75' '1.0'
---------|--------------------------------------------------
'c1' | 6 | -97.12 -65.70 -46.08 -24.96 -1.85
Benefits
criteria | weights | '0.0' '0.25' '0.5' '0.75' '1.0'
---------|--------------------------------------------------
'b1' | 1 | 2.11 27.92 48.76 68.94 98.69
'b2' | 1 | 0.00 3.00 5.00 7.00 10.00
'b3' | 1 | 1.08 30.41 50.57 69.01 97.23
'b4' | 1 | 0.00 3.00 5.00 7.00 10.00
'b5' | 1 | 1.84 29.77 50.62 70.14 96.40
'b6' | 1 | 0.00 3.00 5.00 7.00 10.00
>>> tpg = PerfTabGenerator(tp,seed=105)
>>> newActions = tpg.randomActions(100)
>>> pq.updateQuantiles(newActions,historySize=None)
>>> pq.showHTMLLimitingQuantiles(Transposed=True)
.. image:: examplePerfQuantiles.png
:alt: Example limiting quantiles html show method
:width: 400 px
   :align: center
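The limiting-quantiles table above can be illustrated with a small pure-Python sketch (an illustrative assumption about the computation, not the library's code): each row lists the quantile limits of one criterion's evaluation column.

```python
# Hypothetical sketch (not the library's implementation): nearest-rank
# quantile limits of one criterion's evaluations, matching the
# '0.0' ... '1.0' columns of the table above.

def limiting_quantiles(values, probs=(0.0, 0.25, 0.5, 0.75, 1.0)):
    vs = sorted(values)
    n = len(vs)
    # nearest-rank rule: round p*(n-1) to the closest valid index
    return [vs[min(int(p * (n - 1) + 0.5), n - 1)] for p in probs]
```

For instance, `limiting_quantiles([5, 1, 3, 2, 4])` yields the five limits `[1, 2, 3, 4, 5]`.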
- Method resolution order:
- PerformanceQuantiles
- perfTabs.PerformanceTableau
- builtins.object
Methods defined here:
- __init__(self, perfTab=None, numberOfBins=4, LowerClosed=True, Debug=False)
- Initialize self. See help(type(self)) for accurate signature.
- __repr__(self)
- Default presentation method for PerformanceQuantiles instances
- computeQuantileProfile(self, p, qFreq=None, Debug=False)
- Renders the quantile *q(p)* on all the criteria.
- save(self, fileName='tempPerfQuant', valueDigits=2)
- Persistant storage of a PerformanceQuantiles instance.
- showActions(self)
- presentation methods for decision actions or alternatives
- showCriteria(self, IntegerWeights=False, Alphabetic=False, ByObjectives=True, Debug=False)
- print Criteria with thresholds and weights.
- showCriterionStatistics(self, g, Debug=False)
- show statistics concerning the evaluation distributions
on each criteria.
- showHTMLLimitingQuantiles(self, Sorted=True, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the limiting quantiles in a browser window.
- showLimitingQuantiles(self, ByObjectives=False, Sorted=False, ndigits=2)
- Prints the performance quantile limits in table format: criteria x limits.
- updateQuantiles(self, newData, historySize=None, Debug=False)
- Update the PerformanceQuantiles with a set of new random decision actions.
The parameter *historySize* controls how much the historical situation is taken into account.
For instance, *historySize=0* does not take into account at all any past observations.
Otherwise, if *historySize=None* (the default setting), the new observations become less and less
influential compared to the historical data.
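The history weighting described above can be sketched in pure Python (the function name and pooling logic are illustrative assumptions, not the library's implementation):

```python
# Hypothetical sketch of the historySize idea behind updateQuantiles():
# historySize=0 forgets the past entirely; historySize=None keeps the
# full history, so new observations weigh ever less as history grows.

def update_quantile_limit(p, old_values, new_values, historySize=None):
    """Re-estimate the p-quantile limit from old and new observations."""
    if historySize == 0:
        pool = list(new_values)                       # ignore the past
    elif historySize is None:
        pool = list(old_values) + list(new_values)    # full history
    else:
        pool = list(old_values)[-historySize:] + list(new_values)
    pool.sort()
    # simple lower-interpolation quantile on the pooled observations
    idx = min(int(p * len(pool)), len(pool) - 1)
    return pool[idx]
```

With `historySize=None`, 100 new actions pooled with 1000 historical ones shift the limits far less than the same 100 actions pooled with none.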
Methods inherited from perfTabs.PerformanceTableau:
- computeActionCriterionPerformanceDifferences(self, refAction, refCriterion, comments=False, Debug=False)
- computes the performances differences observed between the reference action and the others on the given criterion
- computeActionCriterionQuantile(self, action, criterion, strategy='average', Debug=False)
- renders the quantile of the performance of action on criterion
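Conceptually, such a quantile may be sketched as the proportion of evaluations an action's grade beats, with ties counted half to mimic an 'average' strategy (an illustrative assumption, not the library's code):

```python
# Hypothetical sketch: quantile of one action's grade within the
# distribution of all grades on a criterion; ties count one half
# (assumed 'average' tie-handling strategy).

def action_quantile(grade, all_grades):
    below = sum(1 for g in all_grades if g < grade)
    ties = sum(1 for g in all_grades if g == grade)
    return (below + 0.5 * ties) / len(all_grades)
```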
- computeActionQuantile(self, action, Debug=False)
- renders the overall performance quantile of action
- computeAllQuantiles(self, Sorted=True, Comments=False)
- renders a html string showing the table of
the quantiles matrix action x criterion
- computeCriterionPerformanceDifferences(self, c, Comments=False, Debug=False)
- Renders the ordered list of all observed performance differences on the given criterion.
- computeDefaultDiscriminationThresholds(self, criteriaList=None, quantile={'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}, Debug=False, Comments=False)
- updates the discrimination thresholds with the percentiles
from the performance differences.
Parameters: quantile = {'ind': 10, 'pref': 20, 'weakVeto': 60, 'veto': 80}.
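The percentile derivation can be sketched in pure Python (an illustrative assumption about the computation, not the library's code):

```python
# Hypothetical sketch: discrimination thresholds taken as percentiles
# of the sorted absolute pairwise performance differences observed on
# one criterion.

def default_thresholds(evals, quantile={'ind': 10, 'pref': 20,
                                        'weakVeto': 60, 'veto': 80}):
    diffs = sorted(abs(x - y) for i, x in enumerate(evals)
                   for y in evals[i + 1:])
    n = len(diffs)
    # pick the value at each requested percentile rank
    return {name: diffs[min(int(p / 100 * n), n - 1)]
            for name, p in quantile.items()}
```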
- computeMinMaxEvaluations(self, criteria=None, actions=None)
- renders minimum and maximum performances on each criterion
in dictionary form: {'g': {'minimum': x, 'maximum': x}}
- computeMissingDataProportion(self, InPercents=False, Comments=False)
- Renders the proportion of missing data,
i.e. NA == Decimal('-999') entries in self.evaluation.
- computeNormalizedDiffEvaluations(self, lowValue=0.0, highValue=100.0, withOutput=False, Debug=False)
-      renders, and stores to CSV (when *withOutput=True*), the
       list of normalized evaluation differences observed on the family of criteria.
       Only adequate if all criteria share the same evaluation scale;
       the performance tableau is therefore normalized to 0.0-100.0 scales.
- computePerformanceDifferences(self, Comments=False, Debug=False, NotPermanentDiffs=True, WithMaxMin=False)
- Adds to the criteria dictionary the ordered list of all observed performance differences.
- computeQuantileOrder(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ordering of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along a decreasing Borda score of their ranking results.
- computeQuantilePreorder(self, Comments=True, Debug=False)
- computes the preorder of the actions obtained from decreasing majority quantiles. The quantiles are recomputed with a call to the self.computeQuantileSort() method.
- computeQuantileRanking(self, q0=3, q1=0, Threading=False, nbrOfCPUs=None, startMethod=None, Comments=False)
- Renders a linear ranking of the decision actions from a simulation of pre-ranked outranking digraphs.
The pre-ranking simulations range by default from
quantiles=q0 to quantiles=min( 100, max(10,len(self.actions)/10) ).
The actions are ordered along an increasing Borda score of their ranking results.
- computeQuantileSort(self)
- shows a sorting of the actions from decreasing majority quantiles
- computeQuantiles(self, Debug=False)
- renders a quantiles matrix action x criterion with the performance quantile of action on criterion
- computeRankingConsensusQuality(self, ranking, Comments=False, Threading=False, nbrOfCPUs=1)
- Renders the marginal criteria correlations with a given ranking with summary.
- computeThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given constant threshold.
- computeVariableThresholdPercentile(self, criterion, threshold, Debug=False)
- computes for a given criterion the quantile
of the performance differences of a given threshold.
- computeWeightPreorder(self)
- renders the weight preorder following from the given
criteria weights in a list of increasing equivalence
lists of criteria.
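Such a preorder can be sketched by grouping criteria of equal weight into increasing equivalence classes (the helper and criterion names are hypothetical):

```python
# Hypothetical sketch: partition the criteria into equivalence classes
# of equal significance weight, ordered by increasing weight.

def weight_preorder(weights):
    """weights: dict criterion -> weight; returns a list of lists
    of criteria, in increasing weight order."""
    classes = {}
    for crit, w in weights.items():
        classes.setdefault(w, []).append(crit)
    return [sorted(classes[w]) for w in sorted(classes)]
```

For example, `weight_preorder({'g1': 2, 'g2': 1, 'g3': 2})` yields `[['g2'], ['g1', 'g3']]`.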
- computeWeightedAveragePerformances(self, isNormalized=False, lowValue=0.0, highValue=100.0, isListRanked=False)
- Compute normalized weighted average scores by ignoring missing data.
When *isNormalized* == True (False by default),
transforms all the scores into a common 0-100 scale.
A lowValue and highValue parameter
can be provided for a specific normalisation.
- convert2BigData(self)
-      Renders a cPerformanceTableau instance by converting the action keys to integers and the evaluations to floats, including the discrimination thresholds, if given.
- convertDiscriminationThresholds2Decimal(self)
- convertDiscriminationThresholds2Float(self)
- convertEvaluation2Decimal(self)
- Convert evaluations from obsolete float format to decimal format
- convertEvaluation2Float(self)
- Convert evaluations from decimal format to float
- convertInsite2BigData(self)
-      Converts in place a standard-formatted PerformanceTableau into a bigData-formatted instance.
- convertInsite2Standard(self)
-      Converts in place a bigData-formatted PerformanceTableau back into a standard-formatted instance.
- convertWeight2Decimal(self)
- Convert significance weights from obsolete float format
to decimal format.
- convertWeight2Integer(self)
- Convert significance weights from Decimal format
to int format.
- convertWeights2Negative(self)
-      Negates the weights of criteria to be minimized.
- convertWeights2Positive(self)
- Sets negative weights back to positive weights and negates corresponding evaluation grades.
- csvAllQuantiles(self, fileName='quantiles')
-      saves the quantiles matrix criterion x action in CSV format.
- hasOddWeightAlgebra(self, Debug=False)
-      Verifies whether the given criteria weights are odd or not.
       Returns a Boolean value.
- normalizeEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations between lowValue and highValue on all criteria
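The recoding may be sketched as a linear rescaling of each criterion's evaluations onto a common range (an illustrative assumption, with scale bounds taken here from the observed evaluations):

```python
# Hypothetical sketch: linearly map evaluations from their observed
# [min, max] range onto a common [lowValue, highValue] scale.

def normalize(values, lowValue=0.0, highValue=100.0):
    lo, hi = min(values), max(values)
    span = hi - lo
    if span == 0:                      # constant column: map to lowValue
        return [lowValue] * len(values)
    return [lowValue + (v - lo) / span * (highValue - lowValue)
            for v in values]
```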
- quantizeCriterionEvaluations(self, g, q, ndigits=2, Debug=True)
-      q-tiles the evaluations of criterion *g*.
- replaceNA(self, newNA=None, Comments=False)
-      Replaces the current self.NA symbol with the *newNA* symbol of type <Decimal>. If *newNA* is None, the default value Decimal('-999') is used.
- restoreOriginalEvaluations(self, lowValue=0.0, highValue=100.0, Debug=False)
- recode the evaluations to their original values on all criteria
- saveCSV(self, fileName='tempPerfTab', Sorted=True, criteriaList=None, actionsList=None, ndigits=2, Debug=False)
-      Stores the performance tableau self (actions x criteria) in CSV format.
- saveXMCDA2(self, fileName='temp', category='XMCDA 2.0 Extended format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=False, isStringIO=False, stringNA='NA', comment='produced by saveXMCDA2()', hasVeto=True)
-      saves the performance tableau object self in XMCDA 2.0 format, including decision objectives, if given.
- saveXMCDA2String(self, fileName='temp', category='XMCDA 2.0 format', user='digraphs Module (RB)', version='saved from Python session', title='Performance Tableau in XMCDA-2.0 format.', variant='Rubis', valuationType='bipolar', servingD3=True, comment='produced by stringIO()', stringNA='NA')
- save performance tableau object self in XMCDA 2.0 format.
!!! obsolete: replaced by the isStringIO in the saveXMCDA2 method !!!
- setObjectiveWeights(self, Debug=False)
- Set the objective weights to the sum of the corresponding criteria significance weights.
- showAll(self)
-      Show function for performance tableaux.
- showAllQuantiles(self, Sorted=True)
- prints the performance quantiles tableau in the session console.
- showEvaluationStatistics(self)
- renders the variance and standard deviation of
the values observed in the performance Tableau.
- showHTMLCriteria(self, criteriaSubset=None, Sorted=True, ndigits=2, title=None, htmlFileName=None)
- shows the criteria in the system browser view.
- showHTMLPerformanceHeatmap(self, actionsList=None, WithActionNames=False, fromIndex=None, toIndex=None, Transposed=False, criteriaList=None, colorLevels=7, pageTitle=None, ndigits=2, SparseModel=False, outrankingModel='standard', minimalComponentSize=1, rankingRule='NetFlows', StoreRanking=True, quantiles=None, strategy='average', Correlations=False, htmlFileName=None, Threading=False, startMethod=None, nbrOfCPUs=None, Debug=False)
- shows the html heatmap version of the performance tableau in a browser window
(see perfTabs.htmlPerformanceHeatMap() method ).
**Parameters**:
* *actionsList* and *criteriaList*, if provided, give the possibility to show
the decision alternatives, resp. criteria, in a given ordering.
* *WithActionNames* = True (default False) will show the action names instead of the short names or the identifiers.
* *ndigits* = 0 may be used to show integer evaluation values.
* *colorLevels* may be 3, 5, 7, or 9.
* When no *actionsList* is provided, the decision actions are ordered from the best to the worst. This
ranking is obtained by default with the Copeland rule applied on a standard *BipolarOutrankingDigraph*.
* When the *SparseModel* flag is put to *True*, a sparse *PreRankedOutrankingDigraph* construction is used instead.
* the *outrankingModel* parameter (default = 'standard') allows switching to alternative BipolarOutrankingDigraph constructors, like the 'confident' or 'robust' models. When called from a bipolar-valued outranking digraph instance, *outrankingModel* = 'this' keeps the current outranking model without recomputing the standard outranking model.
* The *minimalComponentSize* parameter controls the fill rate of the pre-ranked model.
  When *minimalComponentSize* = *n* (the number of decision actions), the pre-ranked model
  is in fact equivalent to the standard model.
* *rankingRule* = 'NetFlows' (default) | 'Copeland' | 'Kohler' | 'RankedPairs' | 'ArrowRaymond'
| 'IteratedNetFlows' | 'IteratedCopeland'. See the tutorial on ranking with multiple incommensurable criteria.
* when the *StoreRanking* flag is set to *True*, the ranking result is stored in *self*.
* Quantiles used for the pre-ranked decomposition default to *n*
  (the number of decision alternatives) when *n* < 50. For larger cardinalities up to 1000, quantiles = *n*/10.
For bigger performance tableaux the *quantiles* parameter may be set to a much lower value
not exceeding usually 10.
* The pre-ranking may be obtained with three ordering strategies for the
quantiles equivalence classes: 'average' (default), 'optimistic' or 'pessimistic'.
* With *Correlations* = *True* and *criteriaList* = *None*, the criteria will be presented from left to right in decreasing
order of the correlations between the marginal criterion based ranking and the global ranking used for
presenting the decision alternatives.
* For large performance Tableaux, *multiprocessing* techniques may be used by setting
*Threading* = *True* in order to speed up the computations; especially when *Correlations* = *True*.
* By default, the number of available cores is detected. In an HPC context it may be necessary to indicate the exact number of single-threaded cores actually allocated to the multiprocessing job.
>>> from randomPerfTabs import RandomPerformanceTableau
>>> rt = RandomPerformanceTableau(seed=100)
>>> rt.showHTMLPerformanceHeatmap(colorLevels=5,Correlations=True)
.. image:: perfTabsExample.png
:alt: HTML heat map of the performance tableau
:width: 600 px
:align: center
- showHTMLPerformanceQuantiles(self, Sorted=True, htmlFileName=None)
- shows the performance quantiles tableau in a browser window.
- showHTMLPerformanceTableau(self, actionsSubset=None, fromIndex=None, toIndex=None, isSorted=False, Transposed=False, ndigits=2, ContentCentered=True, title=None, htmlFileName=None)
- shows the html version of the performance tableau in a browser window.
- showObjectives(self)
- showPairwiseComparison(self, a, b, Debug=False, isReturningHTML=False, hasSymmetricThresholds=True)
-      renders the pairwise comparison parameters on all criteria
in html format
- showPerformanceTableau(self, Transposed=False, actionsSubset=None, fromIndex=None, toIndex=None, Sorted=True, ndigits=2)
- Print the performance Tableau.
- showQuantileSort(self, Debug=False)
- Wrapper of computeQuantilePreorder() for the obsolete showQuantileSort() method.
- showRankingConsensusQuality(self, ranking)
- shows the marginal criteria correlations with a given ranking with summary.
- showStatistics(self, Debug=False)
- show statistics concerning the evaluation distributions
on each criteria.
- showWeightPreorder(self)
-      Renders a preordering of the criteria significance weights.
Data descriptors inherited from perfTabs.PerformanceTableau:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)