Mathematics & Statistics

Recent Submissions

  • Item (Open Access)
    Multiple Risk Factors Dependence Structures With Applications to Actuarial Risk Management
    (2024-11-07) Su, Jianxi; Furman, Edward
    Actuarial and financial risk management is one of the most important innovations of the 20th century, and modelling dependent risks is one of its central issues. Traditional insurance models build on the assumption of independence of risks. Criticized as one of the main causes of the recent financial crisis, this assumption has facilitated the quantification of risks for decades, but it has often led to under-estimation of the risks and, as a result, to under-pricing. Importantly, then, one of the prime pillars of the novel concept of Enterprise Risk Management is the requirement that insurance companies have a clear understanding of the various interconnections that exist within risk portfolios. Modelling dependence is not an easy call. In fact, there is only one way to formulate independence, whereas the shapes of stochastic dependence are infinite. In this dissertation, we aim at developing practically interpretable and technically tractable probabilistic models of dependence that describe the adverse effects of multiple risk drivers on the risk portfolio of a generic insurer. To this end, we introduce a new class of Multiple Risk Factor (MRF) dependence structures. The MRF distributions are of importance to actuaries through their connections to the popular frailty models, as well as because of their capacity to describe dependent heavy-tailed risks. The new constructions are also linked to the factor models that lie at the very basis of today's financial default measurement practice. Moreover, we use doubly stochastic Poisson processes to explore the class of copula functions that underlie the MRF models. Then, motivated by the asymmetric nature of these copulas, we propose and study a new notion of the paths of maximal dependence, which is consequently employed to measure tail dependence in copulas.
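    The frailty connection can be illustrated with a small simulation sketch (an editor's illustration, not code from the dissertation): independent exponential risks divided by a common gamma-distributed factor become positively dependent and heavy-tailed. All parameter values and names below are illustrative assumptions.
    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def frailty_risks(n_sims, n_risks, shape=2.0, base_rates=None):
        """Simulate X_j = E_j / Lambda: independent exponentials E_j divided by a
        common gamma factor Lambda (a simple one-factor frailty construction)."""
        if base_rates is None:
            base_rates = np.ones(n_risks)
        lam = rng.gamma(shape, 1.0 / shape, size=(n_sims, 1))      # common factor with mean 1
        e = rng.exponential(1.0 / base_rates, size=(n_sims, n_risks))
        return e / lam                                             # shared factor couples the risks

    X = frailty_risks(100_000, 3)
    print(np.corrcoef(X, rowvar=False))   # positive pairwise correlations induced by the shared factor
    ```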
  • Item (Open Access)
    Results about Proximal and Semi-proximal Spaces
    (2024-07-18) Almontashery, Khulod Ali M.; Szeptycki, Paul J.
    Proximal spaces were defined by J. Bell as those topological spaces $X$ with a compatible uniformity ${\mathfrak U}$ on which Player I has a winning strategy in the so-called proximal game on $(X,{\mathfrak U})$. Nyikos defined the class of semi-proximal spaces as those where Player II has no winning strategy on $(X,{\mathfrak U})$ with respect to some compatible uniformity. The primary focus of this thesis is to study the relationship between the classes of semi-proximal spaces and normal spaces. Nyikos asked whether semi-proximal spaces are always normal. The main result of this thesis is the construction of two counterexamples to this question. We also examine the characterization of normality in subspaces of products of ordinals, relating it to the class of semi-proximal spaces in finite powers of $\omega_1$. In addition, we introduce a strengthening of these classes by restricting the proximal game to totally bounded uniformities. We study connections between the proximal game, the Galvin game, and the Gruenhage game. Further, we explore the relationship between semi-proximality and other convergence properties.
  • Item (Open Access)
    Polarization Operators in Superspace
    (2024-07-18) Chan, Kelvin Tian Yi; Bergeron, Nantel
    The classical coinvariant rings and their variants are quotient rings with rich connections to combinatorics, symmetric function theory and geometry. Studies of a generalization of the classical coinvariant rings known as the diagonal harmonics have fruitfully produced many interesting discoveries in combinatorics, including the q, t-Catalan numbers and the Shuffle Theorem. The super coinvariant rings are a direct generalization of the classical coinvariant rings to one set of commuting variables and one set of anticommuting variables. N. Bergeron, Li, Machacek, Sulzgruber, and Zabrocki conjectured in 2018 that the super coinvariant rings are representation-theoretic models for the Delta Conjecture at t = 0. In this dissertation, we explore the super coinvariant rings using algebraic and combinatorial methods. In particular, we study the alternating component of the super harmonics and discover a novel basis using polarization operators. We use polarization equivalence to establish a triangularity relation between the new basis and a known basis due to two groups of researchers: Bergeron, Li, Machacek, Sulzgruber, and Zabrocki, and Swanson and Wallach. Furthermore, we prove a folklore result on the cocharge statistics of standard Young tableaux and propose a basis for every irreducible representation appearing in the super harmonics.
  • Item (Open Access)
    One-Parameter Semigroups Generated by Strongly M-Elliptic Pseudo-Differential Operators on Euclidean Spaces
    (2024-07-18) Gao, Yaodong; Wong, Man Wah
    We begin by recalling the definitions and basic properties of the standard Hörmander classes of pseudo-differential operators on Rn. Then we introduce a new class of pseudo-differential operators that can be traced back to Taylor, generalized by Garello and Morando and further developed by M. W. Wong. A related class of pseudo-differential operators depending on a complex parameter on an open subset of the complex plane is constructed. We tease out from this related class the strongly M-elliptic pseudo-differential operators and prove that they are infinitesimal generators of holomorphic, and hence strongly continuous, one-parameter semigroups of bounded linear operators on Lp(Rn), 1
  • Item (Open Access)
    A High-Order Navier-Stokes Solver for Viscous Toroidal Flows
    (2024-03-16) Siewnarine, Vishal; Haslam, Michael C.
    This thesis details our work in the development and testing of highly efficient solvers for the Navier-Stokes problem in simple toroidal coordinates in three spatial dimensions. In particular, the domain of interest in this work is the region occupied by a fluid between two concentric toroidal shells. The study of this problem was motivated in part by extensions of the study of Taylor-Couette instabilities between rotating cylindrical and spherical shells to toroidal geometries. We note that at higher Reynolds numbers, Taylor-Couette instabilities in cylindrical and spherical coordinates are essentially three-dimensional in nature, which motivated us to design fully three-dimensional solvers with an OpenMP parallel numerical implementation suitable for a multi-processor workstation. We approach this problem using two different time-stepping procedures applied to the so-called Pressure Poisson formulation of the Navier-Stokes equations. In the first case, we develop an ADI-type method based on a finite difference formulation applicable for low Reynolds number flows. This solver was more of a pilot study of the problem formulated in simple toroidal coordinates. In the second case - the main focus of our thesis - our main goal was to develop a spectral solver using explicit fourth-order Runge-Kutta time stepping, which is appropriate to the higher Reynolds number flows associated with Taylor-Couette instabilities. Our spectral solver was developed using a high order Fourier representation in the angular variables of the problem and a high order Chebyshev representation in the radial coordinate between the shells. The solver exhibits super-algebraic convergence in the number of unknowns retained in the problem. Applied to the Taylor-Couette problem, our solver has allowed us to identify (for the first time, in this thesis) highly-resolved Taylor-Couette instabilities between toroidal shells. As we document in this work, these instabilities take on different configurations, depending on the Reynolds number of the flow and the gap width between shells, but, as of now, all of these instabilities are essentially two-dimensional. Our work on this subject continues, and we are confident that we will uncover three-dimensional instabilities that have well-known analogues in the cases of cylindrical and spherical shells. Lastly, a separate physical problem we examine is the flow between oscillating toroidal shells. Again, our spectral solver is able to resolve these flows to spectral accuracy for various Reynolds numbers and gap widths, showing surprisingly rich physical behaviour. Our code also allows us to document the torque required for the oscillation of the shells, a key metric in engineering applications. This problem was investigated since this configuration was recently proposed as a mechanical damping system.
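    As a hedged illustration of the Chebyshev ingredient of such a spectral solver (a standard textbook construction, not the thesis code), the sketch below builds the Chebyshev differentiation matrix on [-1, 1] and shows the rapid, super-algebraic decay of the differentiation error for a smooth function.
    ```python
    import numpy as np

    def cheb(n):
        """Chebyshev differentiation matrix D and Chebyshev-Gauss-Lobatto points x
        on [-1, 1] (standard construction, cf. Trefethen, Spectral Methods in MATLAB)."""
        if n == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(n + 1) / n)
        c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
        X = np.tile(x, (n + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
        D -= np.diag(D.sum(axis=1))                        # diagonal from negative row sums
        return D, x

    # Differentiating a smooth function: the error decays super-algebraically in n.
    for n in (8, 16, 24):
        D, x = cheb(n)
        print(n, np.max(np.abs(D @ np.exp(x) - np.exp(x))))
    ```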
  • Item (Open Access)
    Second-order finite free probability
    (2024-03-16) McConnell, Curran; Bergeron, Nantel
    Finite free probability is a new field lying at the intersection of random matrix theory and non-commutative probability. It is called “finite” because unlike traditional free probability, which takes the perspective of operators on infinite-dimensional vector spaces, finite free probability focuses on the study of d × d matrices. Both fields study the behaviour of the eigenvalues of random linear transformations under addition. Finite free probability seeks in particular to characterize random matrices in terms of their (random) characteristic polynomials. I studied the covariance between the coefficients of these polynomials, in order to deepen our knowledge of how random characteristic polynomials fluctuate about their expected values. Focusing on a special case related to random unitary matrices, I applied the representation theory of the unitary group to derive a combinatorial summation expression for the covariance.
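    A hedged Monte Carlo sketch of the kind of quantity studied (an editor's illustration only; the matrices A and B and the sample size are arbitrary assumptions): the empirical mean and covariance of the characteristic-polynomial coefficients of A + UBU*, with U a Haar-distributed unitary matrix.
    ```python
    import numpy as np
    from scipy.stats import unitary_group

    rng = np.random.default_rng(0)
    d = 4
    A = np.diag([1.0, 2.0, 3.0, 4.0])    # fixed Hermitian summands (arbitrary illustrative spectra)
    B = np.diag([0.5, 1.5, 2.5, 3.5])

    def charpoly_coeffs(M):
        """Coefficients of det(xI - M), highest degree first (real parts for Hermitian M)."""
        return np.real(np.poly(M))

    samples = []
    for _ in range(5000):
        U = unitary_group.rvs(d, random_state=rng)
        samples.append(charpoly_coeffs(A + U @ B @ U.conj().T))
    samples = np.array(samples)

    print(np.mean(samples, axis=0))       # expected coefficients (finite free additive convolution)
    print(np.cov(samples, rowvar=False))  # empirical covariance between the coefficients
    ```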
  • Item (Open Access)
    Markov Chains, Clustering, and Reinforcement Learning: Applications in Credit Risk Assessment and Systemic Risk Reduction
    (2023-12-08) Le, Richard; Ku, Hyejin
    In this dissertation we demonstrate how credit risk assessment using credit rating transition matrices can be improved, and we present a novel reinforcement learning (RL) model capable of determining a multi-layer financial network configuration with reduced levels of systemic risk. While in this dissertation we treat credit risk and systemic risk independently, credit risk and systemic risk are two sides of the same coin. Financial systems are highly interconnected by their very nature. When a member of this system experiences distress such as default, a credit risk event, this distress is often not felt in isolation. Due to the highly interconnected nature of financial systems, these shocks can spread throughout the system, resulting in catastrophic failure, a systemic risk event. The treatment of credit risk begins with the introduction of our first-order Markov model augmented with sequence-based clustering (SBC). Once we established this model, we explored its ability to predict future credit rating transitions, the transition direction of the credit ratings, and the default behaviour of firms using historical credit rating data. Once validated, we then extend this model using higher-order Markov chains, this time focusing more on the absorbing behaviour of the chains and, hence, on the default behaviour under the new model. Using higher-order Markov chains, we also enjoy the benefit of capturing a phenomenon known as rating momentum, characteristic of credit rating transition behaviour. Other than the credit rating data set, this model was also applied to a Web-usage mining data set, highlighting its generalizability. Finally, we shift our focus to the treatment of systemic risk. While methods exist to determine optimal interbank lending configurations, they only treat single-layer networks. This is due to technical optimization challenges that arise when one considers additional layers and the interactions between them. These layers can represent lending products of different maturities. To consider the interaction between layers, we extend the DebtRank (DR) measure to track distress across layers. Next, we develop a constrained deep deterministic policy gradient (DDPG) model capable of reorganizing the interbank lending network structure such that the spread of distress is better mitigated.
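    A minimal sketch of the first ingredient, estimating a rating transition matrix from observed rating histories (cohort-style counts; the rating scale and toy histories below are invented for illustration, and the thesis model additionally uses sequence-based clustering and higher-order chains).
    ```python
    import numpy as np

    # Illustrative rating scale and toy rating histories (not the data used in the thesis).
    ratings = ["A", "B", "C", "D"]             # "D" = default, treated as absorbing
    idx = {r: i for i, r in enumerate(ratings)}
    histories = [
        ["A", "A", "B", "B", "A"],
        ["B", "C", "C", "D"],
        ["A", "B", "C", "C", "C", "D"],
        ["B", "B", "B", "A", "A"],
    ]

    def transition_matrix(histories, k):
        """Row-normalized counts of observed one-step transitions (cohort-style estimator)."""
        counts = np.zeros((k, k))
        for h in histories:
            for a, b in zip(h[:-1], h[1:]):
                counts[idx[a], idx[b]] += 1
        counts[idx["D"], idx["D"]] = 1.0       # keep default absorbing even if unobserved
        rows = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

    P = transition_matrix(histories, len(ratings))
    print(P)
    print(np.linalg.matrix_power(P, 5)[:, idx["D"]])   # 5-step default probabilities by current rating
    ```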
  • Item (Open Access)
    Invisible Frontiers: Robust and Risk-Sensitive Financial Decision-Making within Hidden Regimes
    (2023-12-08) Wang, Mingfu; Ku, Hyejin
    In this dissertation, we delve into the exploration of robust and risk-sensitive strategies for financial decision-making within hidden regimes, focusing on the effective portfolio management of financial market risks under uncertain market conditions. The study is structured around three pivotal topics: Risk-sensitive Policies for Portfolio Management, Robust Optimal Life Insurance Purchase and Investment-consumption with Regime-switching Alpha-ambiguity Maxmin Utility, and Robust and Risk-sensitive Markov Decision Process with Hidden Regime Rules. In Risk-sensitive Policies for Portfolio Management, we propose two novel Reinforcement Learning (RL) models. Tailored specifically for portfolio management, these models align with investors' risk preferences, ensuring that the strategies balance risk and return. In Robust Optimal Life Insurance Purchase and Investment-consumption with Regime-switching Alpha-ambiguity Maxmin Utility, we introduce a pre-commitment strategy that robustly navigates insurance purchasing and investment-consumption decisions. This strategy adeptly accounts for model ambiguity and individual ambiguity aversion within a regime-switching market context. In Robust and Risk-sensitive Markov Decision Process with Hidden Regime Rules, we integrate hidden regimes into the Markov Decision Process (MDP) framework, enhancing its capacity to address both market regime shifts and market fluctuations. In addition, we adopt a risk-sensitive objective and construct a risk envelope to portray the worst-case scenario from an RL perspective. Overall, this research strives to provide investors with the tools and insights for an optimal balance between reward and risk, effective risk management, and informed investment choices. The strategies are designed to guide investors in the face of market uncertainties and risk, further underscoring the criticality of robust and risk-sensitive financial decision-making.
  • Item (Open Access)
    On Laplace transforms, generalized gamma convolutions, and their applications in risk aggregation
    (2023-12-08) Miles, Justin Christopher; Kuznetsov, Alexey
    This dissertation begins with two introductory chapters to provide some relevant background information: an introduction to the Laplace transform and an introduction to Generalized Gamma Convolutions (GGCs). The heart of this dissertation is the final three chapters, comprising three contributions to the literature. In Chapter 3, we study the analytical properties of the Laplace transform of the log-normal distribution. Two integral expressions for the analytic continuation of the Laplace transform of the log-normal distribution are provided, one of which takes the form of a Mellin-Barnes integral. As a corollary, we obtain an integral expression for the characteristic function; we show that the integral expression derived by Leipnik in \cite{Leipnik1991} is incorrect. We present two approximations for the Laplace transform of the log-normal distribution, both valid in $\mathbb{C} \setminus (-\infty,0]$. In the last section, we discuss how one may use our results to compute the density of a sum of independent log-normal random variables. In Chapter 4, we explore the topic of risk aggregation with moment matching approximations. We put forward a refined moment matching approximation (MMA) method for approximating the distributions of sums of insurance risks. Our method approximates the distributions of interest to any desired precision, works equally well for light and heavy-tailed distributions, and is reasonably fast irrespective of the number of the involved summands. In Chapter 5, we study the convergence of the Gaver-Stehfest algorithm. The Gaver-Stehfest algorithm is widely used for numerical inversion of Laplace transforms. In this chapter we provide the first rigorous study of the rate of convergence of the Gaver-Stehfest algorithm. We prove that the Gaver-Stehfest approximations of order $n$ converge exponentially fast if the target function is analytic in a neighbourhood of a point and that they converge at a rate $o(n^{-k})$ if the target function is $(2k+3)$-times differentiable at a point.
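    For concreteness, a minimal sketch of the Gaver-Stehfest approximation studied in Chapter 5, in its standard textbook form (not code from the dissertation); the order N and the test transform are illustrative choices.
    ```python
    import numpy as np
    from math import factorial, log

    def stehfest_coefficients(N):
        """Stehfest weights a_k for even N (the standard Gaver-Stehfest construction)."""
        M = N // 2
        a = np.zeros(N)
        for k in range(1, N + 1):
            s = 0.0
            for j in range((k + 1) // 2, min(k, M) + 1):
                s += (j ** M * factorial(2 * j)
                      / (factorial(M - j) * factorial(j) * factorial(j - 1)
                         * factorial(k - j) * factorial(2 * j - k)))
            a[k - 1] = (-1) ** (M + k) * s
        return a

    def gaver_stehfest(F, t, N=14):
        """Approximate f(t) from its Laplace transform F(s) by the order-N Gaver-Stehfest sum."""
        a = stehfest_coefficients(N)
        nodes = np.arange(1, N + 1) * log(2.0) / t
        return log(2.0) / t * float(np.dot(a, [F(s) for s in nodes]))

    # Sanity check on a known pair: F(s) = 1/(s + 1) has inverse f(t) = exp(-t).
    print(gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0), np.exp(-1.0))
    ```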
  • Item (Open Access)
    Mathematical and Statistical Analysis of Non-stationary Time Series Data
    (2023-12-08) Hang, Du; Wang, Steven
    Non-stationary time series, with intrinsic properties constantly changing over time, present significant challenges for analysis in various scientific fields, particularly in biomedical signal analysis. This dissertation presents novel methodologies for analyzing and classifying highly noisy and non-stationary signals with applications to electroencephalograms (EEGs) and electrocardiograms (ECGs). The first part of the dissertation focuses on a framework integrating pseudo-differential operators with convolutional neural networks (CNNs). We present their synergistic potential for signal classification from an innovative perspective. Building on the fundamental concept of pseudo-differential operators, the dissertation further proposes a novel methodology that addresses the challenges of applying time-variant filters or transforms to non-stationary signals. This approach enables the neural network to learn a convolution kernel that changes over time or location, providing a refined strategy to effectively handle these dynamic signals. This dissertation also introduces a hybrid convolutional neural network that integrates both complex-valued and real-valued components with the discrete Fourier transform (DFT) for EEG signal classification. This fusion of techniques significantly enhances the neural network's ability to utilize the phase information contained in the DFT, resulting in substantial accuracy improvements for EEG signal classification. In the final part of this dissertation, we apply a conventional machine learning approach for the detection and localization of myocardial infarctions (MIs) in electrocardiograms (ECGs) and vectorcardiograms (VCGs), using the innovative features extracted from the geometrical and kinematic properties within VCGs. This boosts the accuracy and sensitivity of traditional MI detection.
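    A small sketch of one simple way to expose DFT magnitude and phase information of windowed signals to a downstream classifier (an editor's illustration under assumed window length and sampling rate, not the hybrid network of the dissertation).
    ```python
    import numpy as np

    def dft_features(signal, fs, win_len):
        """Split a 1-D signal into non-overlapping windows and return per-window
        DFT magnitude and phase features side by side."""
        n_win = len(signal) // win_len
        windows = signal[: n_win * win_len].reshape(n_win, win_len)
        spectrum = np.fft.rfft(windows, axis=1)
        feats = np.hstack([np.abs(spectrum), np.angle(spectrum)])
        freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
        return feats, freqs

    # Toy example: a 10 Hz sinusoid sampled at 256 Hz, plus noise.
    fs = 256
    t = np.arange(0, 4, 1.0 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
    feats, freqs = dft_features(x, fs, win_len=fs)
    print(feats.shape, freqs[np.argmax(feats[0, : freqs.size])])   # magnitude peak near 10 Hz
    ```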
  • Item (Open Access)
    Retirement Annuities: Optimization, Analysis and Machine Learning
    (2023-12-08) Nikolic, Branislav; Salisbury, Tom
    Over the last few decades, we have seen a steady shift away from Defined Benefit (DB) pension plans to Defined Contribution (DC) pension plans in the United States. Even though a deferred income annuity (DIA) purchased while saving for retirement can pay a guaranteed stream of income for life, practically serving as a pension substitute, several questions arise. Our main contribution is answering the question of purchasing DIAs under interest rate uncertainty. We pose the question as an optimal control problem, solve its centerpiece Hamilton-Jacobi-Bellman equation numerically, and provide a verification theorem. The result is an optimal DIA purchasing map. With Cash Refund Income Annuities (CRIA) gaining traction quickly over the past few years, the literature is growing in the area of price sensitivity and its viability when viewed through the lens of key pricing parameters, particularly insurance loading. To that end, we explored the effect of reserving requirements on pricing and have analytically proven that, if accounted for properly at the beginning, reserving requirements would be satisfied at any time during the lifetime of the annuity. Lower interest rates in the last decade prompted the explosion of fixed indexed annuities (FIAs) in the United States. These popular insurance policies offer a growth component with the addition of lifetime income provisions. In FIAs, accumulation is achieved through exposure to a variety of indices while offering principal protection guarantees. The vast array of new products and features has created the need for a means of consistent comparisons between FIA products available to consumers. We illustrate that statistical issues in the temporal and cross-sectional return correlations of indices used in FIAs necessitate more sophisticated modelling than is currently employed. We outline a few novel approaches to handle these two issues. We model the risk control mechanisms of a class of FIA indices using machine learning. This is done using a small set of core macroeconomic variables as modelling features. This makes for more robust cross-sectional comparisons. Then we outline the properties of a sufficient model for said features, namely ‘rough’ stochastic volatility.
  • Item (Open Access)
    Adolescent Vaping Behaviors: Exploring the Dynamics of a Social Contagion Model
    (2023-12-08) Machado-Marques, Sarah Isabella; Moyles, Iain
    Vaping, or the use of electronic cigarettes (e-cigarettes), is an ongoing issue for public health. The rapid increase in e-cigarette usage, particularly among adolescents, has often been referred to as an epidemic. Drawing upon this epidemiological analogy between vaping and infectious diseases as a theoretical framework, we aim to study this issue through mathematical modeling to better understand the underlying dynamics. In this thesis, we present a deterministic compartmental model of adolescent e-cigarette smoking which accounts for social influences on initiation, relapse, and cessation behaviors. We use results from a sensitivity analysis of the model’s parameters on various response variables to identify key influences on system dynamics and simplify the model into one that can be analyzed more thoroughly. Through steady state and stability analyses and simulations of the model, we conclude that (1) social influences from and on temporary quitters are not important in overall model dynamics and (2) social influences from permanent quitters can have a significant impact on long-term system dynamics, including the reduction of the smokers' equilibrium and emergence of multiple smoking waves.
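    A hedged sketch of a compartmental social-contagion model of this flavour (the compartments, rate names and values below are illustrative assumptions, not the exact model of the thesis), integrated with scipy.
    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative compartments: N = never-vaped, V = vapers, Qt = temporary quitters,
    # Qp = permanent quitters; the rates below are invented for demonstration only.
    beta, gamma, delta, rho, sigma = 0.4, 0.2, 0.1, 0.15, 0.05

    def rhs(t, y):
        n, v, qt, qp = y
        total = n + v + qt + qp
        initiation = beta * n * v / total      # socially driven initiation
        relapse = rho * qt * v / total         # social influence on temporary quitters
        return [-initiation,
                initiation + relapse - (gamma + delta) * v,
                gamma * v - relapse - sigma * qt,
                delta * v + sigma * qt]

    sol = solve_ivp(rhs, (0.0, 200.0), [0.95, 0.05, 0.0, 0.0])
    print(sol.y[:, -1])   # long-run compartment fractions under these illustrative rates
    ```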
  • Item (Open Access)
    Tracial simplex of every unital C*-algebra is a Choquet simplex
    (2023-12-08) Wang, Jiyu; Farah, Ilijas
    C*-algebras are norm-closed self-adjoint subalgebras of the bounded linear operators on a complex Hilbert space. A Choquet simplex is a special type of compact convex set with a unique representation property. The goal of this thesis is to find a self-contained and easily accessible proof of the classical fact that the set of tracial states of a unital C*-algebra is a Choquet simplex.
  • Item (Open Access)
    High-Dimensional Data Integration with Multiple Heterogeneous and Outlier Contaminated Tasks
    (2023-02) Zhong, Yuan; Xu, Wei; Gao, Xin
    Data integration is the process of extracting information from multiple sources and analyzing different related data sets simultaneously. The aggregated information can reduce the sample biases caused by low-quality data, boost the statistical power for joint inference, and enhance model prediction. Therefore, this dissertation focuses on the development and implementation of statistical methods for data integration. In clinical research, the study outcomes usually consist of various patients' information corresponding to the treatment. Since joint inference across related data sets can provide more efficient estimates compared with marginal approaches, analyzing multiple clinical endpoints simultaneously leads to a better understanding of treatment effects. Meanwhile, the data from different studies are usually heterogeneous, with continuous and discrete endpoints. To alleviate computational difficulties, we apply the pairwise composite likelihood method to analyze the data. We show that the estimators are consistent and asymptotically normally distributed based on the Godambe information. Under high dimensionality, the joint model needs to select the important features to analyze the intrinsic relatedness among all data sets. Multi-task feature learning is widely used to recover this union support through the penalized M-estimation framework. However, the heterogeneity among different data sets may cause difficulties in formulating the joint model. Thus, we propose the mixed $\ell_{2,1}$ regularized composite quasi-likelihood function to perform multi-task feature learning. In our framework, we relax the distributional assumption on responses, and our result establishes the sign recovery consistency and estimation error bounds of the penalized estimates. When data from multiple sources are contaminated by large outliers, multi-task learning methods suffer an efficiency loss. Next, we propose robust multi-task feature learning by combining adaptive Huber regression tasks with mixed regularization. The robustification parameters can be chosen to adapt to the sample size, model dimension, and error moments while striking a balance between unbiasedness and robustness. We consider heavy-tailed distributions for multiple data sets that have bounded $(1+\omega)$th moment for any $\omega>0$. Our method is shown to achieve estimation consistency and sign recovery consistency. In addition, the robust information criterion can conduct joint inference on related tasks for consistent model selection.
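    As a hedged illustration of the mixed $\ell_{2,1}$ penalty at the core of multi-task feature learning (an editor's sketch, not the proposed estimator), the following applies row-wise group soft-thresholding inside a toy proximal-gradient loop; the data and tuning constants are invented.
    ```python
    import numpy as np

    def prox_l21(B, lam):
        """Proximal operator of lam * sum_j ||B[j, :]||_2 (the mixed l_{2,1} penalty):
        each row of B (one row per feature, one column per task) is shrunk as a group,
        so features are selected or discarded jointly across tasks."""
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        return np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12)) * B

    # Toy proximal-gradient loop for multi-task least squares.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 10))
    B_true = np.zeros((10, 3))
    B_true[:3] = rng.standard_normal((3, 3))          # only the first three features are active
    Y = X @ B_true + 0.1 * rng.standard_normal((50, 3))

    B, step, lam = np.zeros((10, 3)), 1e-2, 0.5
    for _ in range(500):
        B = prox_l21(B - step * X.T @ (X @ B - Y), step * lam)
    print(np.linalg.norm(B, axis=1).round(2))         # trailing rows are driven to (near) zero
    ```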
  • Item (Open Access)
    A Dependence Analysis Within the Context of Risk Allocations: Distributions on the Simplex and the Notion of Counter-Monotonicity
    (2023-08-04) Mohammed, Nawaf Mahmood Abdullah; Furman, Ed; Su, Jianxi
    The remarkable development of today's financial and insurance products demands sound methodologies for the accumulation and characterization of intertwined risks. As a result, modern risk management emerges as a by-product resting on two key foundations. The first is concerned with the aggregation of said risks into one randomness, which is consequently easily measured by a convenient risk measure and thereafter reported. The pooling is done from the different business units (BUs) composing the financial entity. The second pillar pertains to the opposite direction, which concerns itself with the allocation of the total risk. It seeks to accurately and concretely attribute the riskiness of each individual BU with respect to the whole. The aggregation process, on the one hand, has been fairly well studied in the literature, implemented in the industry and even embedded into the different accords. Risk capital allocation, on the other, is generally much more involved even when a specific risk measure inducing the allocation rule is assumed, let alone the case when a class of risk measures is considered. And unlike the aggregation exercise, which is moderately determined by the collection function, attributing capital is often more heavily influenced by the dependencies among the different BUs. In the literature, nonetheless, allocating capital can be categorized into two main camps. One is built upon the premise that the distribution of risk should satisfy certain regulatory requirements. This leads to an axiomatic approach which is quite often mathematically tractable yet ignores the economic incentives of the market. The other school of thought is economically driven, allocating risk based on a profit-maximizing paradigm. It argues that capital allocation should reflect the risk perception of the institution and not be imposed by any arbitrary measure, whose selection is dubious at best. However, the economic approach suffers from complex relations that lack clear definitive forms. At first glance the two perspectives may seem distant, as they arise naturally in their own contexts and are justified accordingly. Nonetheless, they can coincide for particular losses that enjoy certain peculiar model settings, which are described thoroughly in the chapters thereafter. Surprisingly, the reconciliation comes in connection with the concept of trivial allocations. Triviality, in itself, attracts practitioners as it requires no discernible dependencies, leading to a convenient yet faulty method of attributing risk. Regardless, when used in the right context it unveils surprising connections and conveys useful conclusions. The intersection of the regulatory and profit-maximizing principles, for example, mainly utilizes a milder version of triviality (proportional), which allows for distinct, albeit few, probabilistic laws that accommodate both theories. Furthermore, when a stronger triviality (absolute) condition is imposed, it yields another intriguing corollary, specifically that of restrictive extreme laws commonly known for antithetic or counter-monotonic variates. To address the framework hitherto introduced, in the first chapter of this dissertation we present a general class of weighted pricing functionals. This wide class covers most of the risk measures and allocations found in the literature today and adequately represents their various properties. We begin by investigating the order characteristics of the functionals under certain sufficient conditions.
The results reveal interactive relationships between the weight and the aggregation make-up of the measures, which, consequently, allow for effective comparison between the different risks. Then, upon imposing restrictions on the allocation constituents, we establish equivalent statements for trivial allocations that uncover a novel general concept of counter-monotonicity. More significantly, similar equivalences are obtained for a weaker triviality notion that pave the path to answering the aforementioned question of allocation reconciliation. The class of weighted functionals, though constructive, is too general to apply effectively to the allocation theories. Thus, in the second chapter, we consider the special case of the conditional tail expectation (CTE), defining its risk measure and the allocation it induces. These represent the regulatory approach to allocation, as the CTE is arguably one of the most prominent and widely studied measures used today. On the other side, we consider the allocation arising from the economic context that aims to maximize profit subject to other market forces as well as individual perceptions. Both allocations are taken as proportions, as they are formed from compositional maps which relate to the standard simplex in either a stochastic or non-stochastic manner. Then we equate the two allocations and derive a general description of the laws that satisfy the two functionals. The Laplace transform of the multivariate size bias is used as the prime identifier delineating the general distributions and detailing subsequent corollaries and examples. While studying the triviality nature of allocations, we focused on the central element of stochastic dependence. We showed how certain models, extremal dependence for instance, enormously influence the attribution outcome. Thus far, nonetheless, our query started from the point of allocation relations, be they proportional or absolute, and ended in law characterizations that satisfy those relations. Equally important, on the other hand, is deriving allocation expressions based on a priori assumed models. This task requires apt choices of general structures which convey the desired probabilistic nature of losses. Since constructing joint laws can be quite challenging, the compendium of probabilistic models relies heavily on leveraging the stochastic representations of known distributions. This feat allows not only for simpler computations but also for useful interpretations. Basic mathematical operations are usually deployed to derive different joint distributions with certain desirable properties. For example, taking the minimum yields the Marshall-Olkin distribution, addition gives the additive background model, and multiplication/division naturally leads to the multiplicative background model. Simultaneously, univariate manipulation through location, scale and power transforms adds to the flexibility of the margins while preserving the overall copula. In the last chapter of this dissertation, we introduce a composite of the Marshall-Olkin, additive and multiplicative models to obtain a novel multivariate Pareto-Dirichlet law possessing a profound composition capable of modelling the heavy-tailed events descriptive of many extremal scenarios in insurance and finance. We study its survival function and the corresponding moments and mixed moments. Then we focus on the bivariate case, detailing the intricacies of its inherent expressions. Finally, we conclude with a thorough application to the risk and allocation functionals.
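    A minimal simulation sketch of the CTE risk measure and the proportional unit-level allocation it induces (an editor's illustration; the portfolio, dependence structure and confidence level are assumptions, not the models of the dissertation).
    ```python
    import numpy as np

    def cte_allocation(losses, level=0.95):
        """CTE of the aggregate loss and the induced unit-level allocation
        E[X_i | S > VaR_level(S)]; the allocations add up to the aggregate CTE."""
        S = losses.sum(axis=1)
        tail = S > np.quantile(S, level)
        return S[tail].mean(), losses[tail].mean(axis=0)

    # Toy portfolio: three business-unit losses made dependent through a common shock.
    rng = np.random.default_rng(7)
    common = rng.exponential(1.0, size=(200_000, 1))
    losses = rng.exponential([1.0, 2.0, 0.5], size=(200_000, 3)) + common
    cte, alloc = cte_allocation(losses)
    print(cte, alloc, alloc.sum())   # the unit allocations sum to the portfolio CTE
    ```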
  • Item (Open Access)
    Assessing Control Strategies and Timelines for Mycobacterium Tuberculosis Elimination
    (2023-08-04) Abdollahi, Elaheh; Moghadas, Seyed
    Tuberculosis (TB) continues to have a disproportionate impact on Inuit communities in Canada, with reported rates of active TB that are over 300 times higher than those of Canadian-born, non-Indigenous populations. The Inuit Tuberculosis Elimination Framework aims to reduce the incidence of active TB by at least 50% by 2025, with the ultimate goal of eliminating it (i.e., reducing the incidence of active TB below 1 case per 1,000,000 population) by 2030. However, whether these goals can be achieved with the resources and interventions currently available has not been investigated. This dissertation formulates an agent-based model (ABM) of TB transmission dynamics and control to assess the feasibility of achieving the goals of the elimination framework in Nunavut, Canada. I applied the model to project the annual incidence of active TB from 2025 to 2040, taking into account factors such as time to case identification after developing active TB, contact tracing and testing, patient isolation and compliance, household size, and the potential impact of a therapeutic vaccine. In order to determine the potential reduction in TB incidence, various scenarios of treatment regimens were evaluated within the action plans for TB elimination. The scenario analyses demonstrate that the time-to-identification of active TB cases is a crucial factor in the attainability of the goals, highlighting the importance of investment in early case detection. The findings also indicate that the goal of a 50% reduction in the annual incidence of TB by 2025 is only achievable under best-case scenarios of combined interventions. However, TB elimination will likely exceed the timelines indicated in the action plans.
  • Item (Open Access)
    Bayesian Model Selection for Discrete Graphical Models
    (2023-08-04) Roach, Lyndsay; Gao, Xin
    Graphical models allow for easy interpretation and representation of complex distributions. There is an expanding interest in model selection problems for high-dimensional graphical models, particularly when the number of variables increases with the sample size. A popular model selection tool is the Bayes factor, which compares the posterior probabilities of two competing models. Consider data given in the form of a contingency table where N objects are classified according to q random variables, and where the conditional independence structure of these random variables is represented by a discrete graphical model G. We assume the cell counts follow a multinomial distribution with a hyper Dirichlet prior distribution imposed on the cell probability parameters. Then we can write the Bayes factor as a product of gamma functions indexed by the cliques and separators of G. In this thesis, we study the behaviour of the Bayes factor when the dimension of a true discrete graphical model is fixed and when the dimension increases to infinity with the sample size. We prove that the Bayes factor achieves strong model selection consistency for both decomposable and non-decomposable discrete graphical models. When the true graph is non-decomposable, we prove that the Bayes factor selects a minimal triangulation of the true graph. We support our theoretical results with various simulations. In addition, we introduce a variation of the genetic algorithm, called the graphical local genetic algorithm, which can be implemented on large data sets. We use a local search operator and a normalizing constant proportionate to the posterior probability of the candidate models to determine optimal submodels, then reconstruct the full graph from the resulting subgraphs. We demonstrate the graphical local genetic algorithm's capabilities on both simulated data sets with known true graphs and on a real-world data set.
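    A hedged sketch of the building block behind such Bayes factors: the Dirichlet-multinomial log marginal likelihood of a clique or separator table, combined over a decomposable graph. The toy tables and the flat prior below are illustrative; in an exact hyper Dirichlet prior the clique and separator hyperparameters must be chosen consistently.
    ```python
    import numpy as np
    from scipy.special import gammaln

    def log_dirichlet_multinomial(counts, alpha=1.0):
        """Log marginal likelihood of a table of cell counts under a Dirichlet(alpha)
        prior on the cell probabilities: the factor attached to a single clique or
        separator in the hyper Dirichlet marginal likelihood."""
        counts = np.asarray(counts, dtype=float).ravel()
        alpha = np.full_like(counts, alpha)
        return (gammaln(alpha.sum()) - gammaln(alpha.sum() + counts.sum())
                + np.sum(gammaln(alpha + counts) - gammaln(alpha)))

    def log_marginal_decomposable(clique_tables, separator_tables, alpha=1.0):
        """Log marginal likelihood of a decomposable graphical model: clique factors
        divided by separator factors; a log Bayes factor between two graphs is the
        difference of two such quantities."""
        return (sum(log_dirichlet_multinomial(t, alpha) for t in clique_tables)
                - sum(log_dirichlet_multinomial(t, alpha) for t in separator_tables))

    # Toy example: three binary variables, graph 1-2-3 with cliques {1,2}, {2,3}, separator {2}.
    table_12 = [[20, 5], [7, 18]]     # counts over (X1, X2)
    table_23 = [[15, 12], [10, 13]]   # counts over (X2, X3), consistent with table_12 on X2
    table_2 = [27, 23]                # marginal counts of X2
    print(log_marginal_decomposable([table_12, table_23], [table_2]))
    ```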
  • Item (Open Access)
    Multivariate One-Sided Tests for Nonlinear Mixed-Effects Models with Incomplete Data
    (2023-08-04) Zhang, Yi-Xin; Liu, Wei
    Nonlinear mixed-effects (NLME) models are widely used in the analysis of longitudinal studies. The parameters in an NLME model typically have meaningful scientific interpretations, and these parameters may have some natural order restrictions such as being strictly positive. The problems of testing parameters with order restrictions are known as multivariate one-sided hypothesis testing. However, multivariate one-sided testing problems in NLME models have not been discussed thoroughly. In many longitudinal studies, the inter-individual variation can be partially explained by time-varying covariates which, however, may be measured with substantial errors. Moreover, censoring and non-ignorable missingness in the response are very common in practice. Standard testing procedures ignoring covariate measurement errors and/or response censoring/missingness may lead to biased results. We propose multiple imputation methods to address the foregoing data complications. The multiple imputation methods allow us to use existing "complete-data" hypothesis testing procedures for parameters with order restrictions. In this thesis, we propose test statistics for the multivariate one-sided testing problems in NLME models with: (i) mis-measured covariates, (ii) both mis-measured covariates and left-censored responses, and (iii) both mis-measured covariates and non-ignorable missing responses, which are discussed in Chapters 2-4, respectively. Some asymptotic null distributions of the proposed test statistics are derived. The proposed methods are illustrated by HIV data examples and evaluated by simulation studies under different scenarios. Simulation results show the power advantage of the proposed test statistics over commonly used ones.
  • Item (Open Access)
    A Proposed Numerical Method for the 3D Wave Equation with Constant Speed
    (2023-08-04) Cayley, Omar; Gibson, Peter
    This thesis implements a new numerical scheme to solve the (classical) constant-speed wave equation in three-dimensional space. Currently existing methods (even if more restrictive) rely on time iterations and accumulate error at each time-step iteration; the new method is iteration-free, making it a good choice for applications requiring accurate results at large times. Numerical experiments and error analysis reveal the accuracy of the scheme. The principal conclusion is that the method, based on the Radon transform, must be considered, and we propose that it be developed and counted among the standard methods implemented in computational software for engineering and industrial applications.
  • Item (Open Access)
    Linear Spectral Unmixing Algorithms for Abundance Fraction Estimation in Spectroscopy
    (2023-03-28) Oh, Changin; Moyles, Iain
    Fluorescence spectroscopy is commonly used in modern biological and chemical studies, especially for cellular and molecular analysis. Since the measured fluorescence spectrum is the sum of the spectrum of each fluorophore in a sample, a reliable separation of fluorescent labels is the key to the successful analysis of the sample. A technique known as linear spectral unmixing is often used to linearly decompose the measured fluorescence spectrum into a set of constituent fluorescence spectra with abundance fractions. Various algorithms have been developed for linear spectral unmixing. In this work, we implement the existing linear unmixing algorithms and compare their results to discuss their strengths and drawbacks. Furthermore, we apply optimization methods to the linear unmixing problem and evaluate their performance to demonstrate their capabilities of solving the linear unmixing problem. Finally, we denoise noisy fluorescence emission spectra and examine how noise may affect the performance of the algorithms.
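    A small sketch of linear spectral unmixing by non-negative least squares (one standard algorithm in this area; the Gaussian endmember spectra and noise level below are illustrative assumptions, not data from the thesis).
    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Illustrative endmember spectra (columns of E); real spectra would come from
    # reference measurements of each fluorophore.
    wavelengths = np.linspace(400.0, 700.0, 150)

    def peak(center, width):
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    E = np.column_stack([peak(450, 20), peak(520, 25), peak(600, 30)])
    true_fractions = np.array([0.2, 0.5, 0.3])
    y = E @ true_fractions + 0.01 * np.random.default_rng(3).standard_normal(wavelengths.size)

    # Non-negative least squares enforces physically meaningful abundances;
    # normalizing the solution gives abundance fractions.
    a, residual = nnls(E, y)
    print(a / a.sum(), residual)
    ```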