
Methods

The Risk Management Toolkit

 
 

Our methods sit at the forefront of sophisticated engineering risk management.

 
 

Decision Analysis

The ultimate purpose of any risk modeling and analysis is to serve as a decision support tool for human decision makers. Hence, the overarching analytical basis for risk management must be a normative decision-analytic framework built on well-established methods such as von Neumann-Morgenstern utility theory. Decision analysis allows decision makers to evaluate multiple risk management alternatives, given key uncertainties and objective values, while also factoring in the risk attitude of the decision maker(s). As such, decision analysis provides executive leadership a mathematical certification of the optimality of their decisions, which mitigates common cognitive biases such as anchoring, recency, and availability.
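A minimal sketch of how such an evaluation can work, using hypothetical alternatives, outcomes, and a hypothetical risk tolerance encoded in an exponential utility function:

```python
# Illustrative sketch: choosing among risk mitigation alternatives by maximum
# expected utility, with an exponential utility function encoding the decision
# maker's risk attitude.  Alternatives, outcomes, and the risk tolerance are
# hypothetical.
import math

RISK_TOLERANCE = 5.0e6  # dollars; smaller values mean more risk-averse

def utility(dollars):
    """Exponential (constant risk aversion) utility of a monetary outcome."""
    return 1.0 - math.exp(-dollars / RISK_TOLERANCE)

# Each alternative is a list of (probability, monetary outcome) pairs.
alternatives = {
    "do_nothing":          [(0.95, 0.0), (0.05, -20.0e6)],   # rare but severe loss
    "add_redundancy":      [(1.00, -2.0e6)],                 # certain upgrade cost
    "enhanced_inspection": [(0.99, -0.5e6), (0.01, -20.0e6)],
}

def expected_utility(lottery):
    return sum(p * utility(x) for p, x in lottery)

best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
for name, lottery in alternatives.items():
    print(f"{name:>20s}: EU = {expected_utility(lottery):+.4f}")
print("Recommended alternative:", best)
```

The alternative with the highest expected utility is the normatively preferred choice for a decision maker with that risk attitude.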


Probabilistic Risk Analysis

Probabilistic Risk Analysis (PRA) applies probability theory and methods grounded in information about stochastic events. It assesses complex engineered systems from a distinct systems engineering perspective, decomposing systems into their components and then using Bayesian networks to infer system-level failure modes and failure probabilities. The output of a PRA consists of the system’s probability of failure, the minimum cut sets (potential root causes of failure) and their probabilities, and an absolute risk ordering of the components.
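The cut-set portion of that output can be illustrated with a small sketch that assumes independent component failures (the full Bayesian-network treatment relaxes this); the component names, failure probabilities, and cut sets below are hypothetical:

```python
# Minimal sketch of the cut-set side of a PRA, under the simplifying
# assumption of independent component failures.
from math import prod
from itertools import combinations

# Annual failure probabilities for individual components (hypothetical).
p_fail = {"pump_A": 0.02, "pump_B": 0.02, "valve": 0.005, "controller": 0.001}

# Minimum cut sets: the system fails if every component in any one set fails.
cut_sets = [{"pump_A", "pump_B"}, {"valve"}, {"controller"}]

# Probability of each cut set (independence assumption).
cut_probs = {frozenset(cs): prod(p_fail[c] for c in cs) for cs in cut_sets}

def union_probability(sets_probs):
    """System failure probability by inclusion-exclusion over the cut sets."""
    sets = list(sets_probs)
    total = 0.0
    for k in range(1, len(sets) + 1):
        for combo in combinations(sets, k):
            merged = frozenset().union(*combo)
            total += (-1) ** (k + 1) * prod(p_fail[c] for c in merged)
    return total

p_system = union_probability(cut_probs)
for cs, p in sorted(cut_probs.items(), key=lambda kv: -kv[1]):
    print(f"cut set {set(cs)}: {p:.2e}")
print(f"system failure probability: {p_system:.2e}")
```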



Value of Information

The concept of Value of Information (VoI) — answering the question of “is it worth it to obtain some more information?” — stems from the fundamentals of decision analysis. The primary premise behind VoI is that all information comes at a cost and not all information is worth acquiring when making a decision. Examples of information pertinent to ERM include results from parts inspection, additional parts testing, or expert opinions. Marginal information serves to improve a decision maker’s state of knowledge on risks, but is only worth pursuing if the entailed risk management benefit outweighs the cost of obtaining it.
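A toy calculation of the expected value of perfect information (EVPI), which is an upper bound on what any real, imperfect test is worth; the probabilities, payoffs, and test cost below are hypothetical:

```python
# Toy value-of-information calculation: EVPI about whether a part is degraded,
# compared against the cost of the test that would reveal it.

p_degraded = 0.10                      # prior probability the part is degraded

# Payoffs (negative costs, in dollars) for each action in each state.
payoff = {
    "replace_now":  {"degraded": -50_000,  "healthy": -50_000},
    "keep_running": {"degraded": -400_000, "healthy": 0},
}

states = {"degraded": p_degraded, "healthy": 1.0 - p_degraded}

def expected_payoff(action):
    return sum(p * payoff[action][s] for s, p in states.items())

# Best action without further information: maximize expected payoff.
ev_no_info = max(expected_payoff(a) for a in payoff)

# With perfect information we learn the state first, then pick the best action.
ev_perfect_info = sum(p * max(payoff[a][s] for a in payoff)
                      for s, p in states.items())

evpi = ev_perfect_info - ev_no_info
test_cost = 8_000
print(f"EVPI = ${evpi:,.0f}; test cost = ${test_cost:,.0f}")
print("Worth testing" if evpi > test_cost else "Not worth testing")
```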


Loads and Capacities

All risks can be characterized as systems with uncertain design capacity experiencing uncertain loads. In a probabilistic framework, these uncertainties can be modeled as probability distributions with overlapping probability density functions (pdfs); where the load and capacity pdfs overlap, there is a nonzero probability that the load exceeds the capacity. These pdfs can further be used to identify the components most susceptible to the risk of overloading failure. The output of this analysis informs decision makers of the probability of component and system failure, given knowledge of the stochastic behavior present in the system.
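A minimal sketch of this load-capacity interference calculation, assuming independent, normally distributed loads and capacities with hypothetical means and standard deviations:

```python
# Load-capacity interference sketch: probability that an uncertain load
# exceeds an uncertain capacity, assuming both are independent and normal.
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_overload(mu_load, sd_load, mu_cap, sd_cap):
    """P(load > capacity) for independent normal load and capacity."""
    # The margin M = capacity - load is normal; failure is M < 0.
    mu_margin = mu_cap - mu_load
    sd_margin = sqrt(sd_cap**2 + sd_load**2)
    return normal_cdf(-mu_margin / sd_margin)

# Hypothetical component parameters (kN).
components = {
    "bracket":  dict(mu_load=40.0, sd_load=6.0, mu_cap=70.0, sd_cap=5.0),
    "weld":     dict(mu_load=40.0, sd_load=6.0, mu_cap=55.0, sd_cap=8.0),
    "fastener": dict(mu_load=25.0, sd_load=4.0, mu_cap=60.0, sd_cap=4.0),
}

# Rank components by their probability of overloading failure.
for name, params in sorted(components.items(),
                           key=lambda kv: -p_overload(**kv[1])):
    print(f"{name:>9s}: P(overload) = {p_overload(**params):.2e}")
```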



Resource Optimization

With a convex optimization framework, it is possible to compute optimal policies that describe how marginal dollars should be spread across the system, in the form of reinforcement, redundancies, and enhanced inspections, in order to maximally mitigate risk given a fixed budget. The output of resource optimization analysis is a policy table that identifies how marginal risk mitigation resources should be allocated to maximally reduce the overall system’s probability of failure.
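A small sketch of the idea, assuming component failure probabilities that decay exponentially with investment (a common convex surrogate); the baseline probabilities, effectiveness coefficients, and budget are hypothetical:

```python
# Convex-allocation sketch: spread a fixed risk-mitigation budget across
# components to minimize an approximate system failure probability.
import numpy as np
from scipy.optimize import minimize

p0 = np.array([0.05, 0.02, 0.08])   # baseline component failure probabilities
k = np.array([0.8, 0.3, 1.5])       # risk reduction effectiveness per $M spent
budget = 2.0                        # total budget, $M

def system_risk(x):
    # Rare-event (union bound) approximation for a series system.
    return np.sum(p0 * np.exp(-k * x))

result = minimize(
    system_risk,
    x0=np.full(3, budget / 3),
    method="SLSQP",
    bounds=[(0.0, budget)] * 3,
    constraints=[{"type": "ineq", "fun": lambda x: budget - np.sum(x)}],
)

print("optimal spend per component ($M):", np.round(result.x, 3))
print(f"system risk: {system_risk(np.zeros(3)):.4f} -> {system_risk(result.x):.4f}")
```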


Optimal Inspection Policies

In a resource-constrained operational environment, system inspections should be prioritized so that inspection resources are directed where they yield the greatest preventive benefit against risk. By modeling physical component deterioration over time using available data, a prioritization framework can be built that directs inspection resources to the parts with the greatest failure risk. Inspection policy computations use discrete-state, finite-time Markov chains to identify the components at greatest risk of failure prior to inspection. This analysis informs decision makers on how to direct limited inspection resources for maximal efficacy.
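A minimal sketch of the ranking step, with hypothetical health states, monthly transition matrices, current beliefs, and inspection schedules:

```python
# Markov-chain deterioration sketch: propagate each component's health-state
# distribution forward to its next scheduled inspection and rank components
# by probability of having failed by then.
import numpy as np

STATES = ["good", "degraded", "failed"]

# Monthly transition matrices (rows sum to 1; "failed" is absorbing).
P = {
    "gearbox": np.array([[0.96, 0.03, 0.01],
                         [0.00, 0.90, 0.10],
                         [0.00, 0.00, 1.00]]),
    "bearing": np.array([[0.92, 0.06, 0.02],
                         [0.00, 0.85, 0.15],
                         [0.00, 0.00, 1.00]]),
}

# Current belief over states and months until the next planned inspection.
belief = {"gearbox": np.array([0.7, 0.3, 0.0]),
          "bearing": np.array([0.9, 0.1, 0.0])}
months_to_inspection = {"gearbox": 6, "bearing": 12}

def p_failed_before_inspection(name):
    dist = belief[name] @ np.linalg.matrix_power(P[name],
                                                 months_to_inspection[name])
    return dist[STATES.index("failed")]

for name in sorted(P, key=p_failed_before_inspection, reverse=True):
    print(f"{name:>8s}: P(failed before inspection) = "
          f"{p_failed_before_inspection(name):.3f}")
```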



Optimal Recapitalization Policies

In risk management of large, complex systems with many components, risk managers often face the question: "When should we replace or reinforce components that may soon fail?" From an operations management perspective, it is useful to recognize two opposing forces on cost. On one hand, the system faces the risk of unplanned downtime due to parts failure. On the other hand, excessive reinforcement or parts redundancy imposes unnecessary operating costs from overengineering. Optimal recapitalization policies should therefore be computed to minimize the combined cost (failure risk costs plus operational management costs). Computing such a policy can be achieved with Markov decision processes, a method that models a time-variant sequential decision-making process and outputs a time-invariant optimal policy table informing decision makers of the actions that yield optimal risk reduction for the system.
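A minimal sketch of how such a stationary policy can be computed with value iteration on a small Markov decision process; the states, costs, and transition probabilities are hypothetical:

```python
# MDP sketch: value iteration over component health states to obtain a
# stationary (time-invariant) recapitalization policy that balances failure
# risk cost against maintenance cost.
import numpy as np

states = ["good", "degraded", "failed"]
actions = ["do_nothing", "reinforce", "replace"]

# Per-period costs ($K): action cost plus expected downtime cost when failed.
cost = {
    "do_nothing": np.array([0.0, 0.0, 500.0]),
    "reinforce":  np.array([20.0, 20.0, 520.0]),
    "replace":    np.array([80.0, 80.0, 580.0]),
}

# Transition matrices P[a][s, s'] for each action.
P = {
    "do_nothing": np.array([[0.90, 0.08, 0.02],
                            [0.00, 0.80, 0.20],
                            [0.00, 0.00, 1.00]]),
    "reinforce":  np.array([[0.97, 0.025, 0.005],
                            [0.30, 0.65, 0.05],
                            [0.00, 0.00, 1.00]]),
    "replace":    np.array([[0.97, 0.025, 0.005],
                            [0.97, 0.025, 0.005],
                            [0.97, 0.025, 0.005]]),
}

gamma = 0.95                      # discount factor
V = np.zeros(len(states))         # expected discounted cost per state

for _ in range(500):              # value iteration to convergence
    Q = np.array([cost[a] + gamma * P[a] @ V for a in actions])
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

policy = {s: actions[int(np.argmin([cost[a][i] + gamma * (P[a] @ V)[i]
                                    for a in actions]))]
          for i, s in enumerate(states)}
print("optimal policy:", policy)
```

Because the horizon is long and discounted, the resulting policy is the same in every period, which is what makes it usable as a simple lookup table for decision makers.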


Data Analysis

The efficacy of mathematical models is limited by the quality of their input data. In particular, risk models depend on data-informed inferences about component states of health and probabilities of failure. An effective data analysis capability enables high-quality analysis of system risk. Ultimately, effective data analysis allows the decision maker to extract the insight hidden in the data and feed it into quantitative risk management models.
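One simple illustration of how raw data can feed the models above: a conjugate Beta-Binomial update of a component failure probability from test outcomes, with hypothetical prior parameters and counts:

```python
# Bayesian update sketch: refine a component failure probability from
# inspection or test outcomes using a conjugate Beta-Binomial model.
from scipy.stats import beta

# Prior belief about the per-demand failure probability (Beta distribution).
alpha_prior, beta_prior = 1.0, 49.0          # prior mean = 0.02

# Observed data: number of tests and number of failures among them.
n_tests, n_failures = 200, 2

alpha_post = alpha_prior + n_failures
beta_post = beta_prior + (n_tests - n_failures)

posterior = beta(alpha_post, beta_post)
print(f"posterior mean failure probability: {posterior.mean():.4f}")
print(f"95% credible interval: ({posterior.ppf(0.025):.4f}, "
      f"{posterior.ppf(0.975):.4f})")
```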
