Mathematics & Statistics
Browsing Mathematics & Statistics by Title
Now showing 1 - 20 of 106
Item Open Access A Dependence Analysis Within the Context of Risk Allocations: Distributions on the Simplex and the Notion of Counter-Monotonicity (2023-08-04) Mohammed, Nawaf Mahmood Abdullah; Furman, Ed; Su, Jianxi

The remarkable development of today's financial and insurance products demands sound methodologies for the accumulation and characterization of intertwined risks. As a result, modern risk management rests on two key pillars. The first is concerned with the aggregation of said risks into one random quantity, which is then easily measured by a convenient risk measure and thereafter reported; the pooling is done across the different business units (BUs) composing the financial entity. The second pillar pertains to the opposite direction and concerns itself with the allocation of the total risk: it seeks to accurately and concretely attribute the riskiness of each individual BU with respect to the whole. The aggregation process, on the one hand, has been fairly well studied in the literature, implemented in the industry and even embedded into the different accords. Risk capital allocation, on the other hand, is generally much more involved, even when a specific risk measure inducing the allocation rule is assumed, let alone when a class of risk measures is considered. And unlike the aggregation exercise, which is moderately determined by the collection function, attributing capital is often more heavily influenced by the dependencies among the different BUs. In the literature, nonetheless, approaches to allocating capital can be categorized into two main camps. One is built upon the premise that the distribution of risk should satisfy certain regulatory requirements. This leads to an axiomatic approach which is quite often mathematically tractable yet ignores the economic incentives of the market. The other school of thought is economically driven, allocating risk based on a profit-maximizing paradigm. It argues that capital allocation should reflect the risk perception of the institution and not be imposed by an arbitrary measure whose selection is dubious at best. However, the economic approach suffers from complex relations that lack clear definitive forms. At first glance the two perspectives may seem distant, as they arise naturally in their own contexts and are justified accordingly. Nonetheless, they can coincide for particular losses that enjoy certain peculiar model settings, which are described thoroughly in the chapters thereafter. Surprisingly, the reconciliation comes in connection with the concept of trivial allocations. Triviality, in itself, attracts practitioners as it requires no discernible dependencies, leading to a convenient yet faulty method of attributing risk. Regardless, when used in the right context it unveils surprising connections and conveys useful conclusions. The intersection of the regulatory and profit-maximizing principles, for example, mainly utilizes a milder version of triviality (proportional), which allows for distinct, albeit few, probabilistic laws that accommodate both theories. Furthermore, when a stronger triviality (absolute) condition is imposed, it yields another intriguing corollary, specifically that of restrictive extreme laws commonly known for antithetic or counter-monotonic variates. To address the framework hitherto introduced, in the first chapter of this dissertation we present a general class of weighted pricing functionals.
This wide class covers most of the risk measures and allocations found in the literature today and adequately represents their various properties. We begin by investigating the order characteristics of the functionals under certain sufficient conditions. The results reveal interactive relationships between the weight and the aggregation make-up of the measures, which, consequently, allow for effective comparison between the different risks. Then, upon imposing restrictions on the allocation constituents, we establish equivalent statements for trivial allocations that uncover a novel general concept of counter-monotonicity. More significantly, similar equivalences are obtained for a weaker triviality notion, which paves the way to answering the aforementioned question of allocation reconciliation. The class of weighted functionals, though constructive, is too general to apply effectively to the allocation theories. Thus, in the second chapter, we consider the special case of the conditional tail expectation (CTE), defining its risk measure and the allocation it induces. These represent the regulatory approach to allocation, as the CTE is arguably one of the most prominent and front-running measures used and studied today. On the other side, we consider the allocation arising from the economic context, which aims to maximize profit subject to other market forces as well as individual perceptions. Both allocations are taken as proportions, as they are formed from compositional maps which relate to the standard simplex in either a stochastic or non-stochastic manner. Then we equate the two allocations and derive a general description for the laws that satisfy the two functionals. The Laplace transform of the multivariate size bias is used as the prime identifier delineating the general distributions and detailing subsequent corollaries and examples. While studying the triviality of allocations, we focused on the central element of stochastic dependence. We showed how certain models, extremal dependence for instance, enormously influence the attribution outcome. Thus far, nonetheless, our query started from the point of allocation relations, be they proportional or absolute, and ended in characterizations of the laws that satisfy those relations. Equally important, on the other hand, is deriving allocation expressions based on a priori assumed models. This task requires apt choices of general structures which convey the desired probabilistic nature of losses. Since constructing joint laws can be quite challenging, the compendium of probabilistic models relies heavily on leveraging the stochastic representations of known distributions. This feat allows not only for simpler computations but also for useful interpretations. Basic mathematical operations are usually deployed to derive different joint distributions with certain desirable properties. For example, taking the minimum yields the Marshall-Olkin distribution, addition gives the additive background model, and multiplication/division naturally leads to the multiplicative background model. Simultaneously, univariate manipulation through location, scale and power transforms adds to the flexibility of the margins while preserving the overall copula. In the last chapter of this dissertation, we introduce a composite of the Marshall-Olkin, additive and multiplicative models to obtain a novel multivariate Pareto-Dirichlet law possessing a profound composition capable of modelling heavy-tailed events descriptive of many extremal scenarios in insurance and finance. We study its survival function and the corresponding moments and mixed moments. Then we focus on the bivariate case, detailing the intricacies of its inherent expressions. Finally, we conclude with a thorough application to the risk and allocation functionals, respectively.
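As a point of reference for the regulatory side discussed in this abstract, the following hedged sketch records the standard conditional tail expectation and the proportional allocation it induces; the notation (the aggregate loss S, the components X_i, VaR, and the simplex) is assumed here for illustration rather than taken from the thesis.

```latex
% Illustrative sketch only: the standard CTE risk measure and the
% CTE-induced proportional (compositional) allocation; notation assumed,
% not taken from the thesis's general weighted pricing functionals.
\[
  \mathrm{CTE}_{\alpha}(S) \;=\; \mathbb{E}\!\left[\,S \mid S > \mathrm{VaR}_{\alpha}(S)\,\right],
  \qquad S = X_1 + \dots + X_n ,
\]
\[
  A_i \;=\; \mathbb{E}\!\left[\,X_i \mid S > \mathrm{VaR}_{\alpha}(S)\,\right],
  \qquad \sum_{i=1}^{n} A_i = \mathrm{CTE}_{\alpha}(S),
  \qquad a_i \;=\; \frac{A_i}{\mathrm{CTE}_{\alpha}(S)} \in \Delta^{n-1}.
\]
```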
Item Open Access A High-Order Navier-Stokes Solver for Viscous Toroidal Flows (2024-03-16) Siewnarine, Vishal; Haslam, Michael C.

This thesis details our work in the development and testing of highly efficient solvers for the Navier-Stokes problem in simple toroidal coordinates in three spatial dimensions. In particular, the domain of interest in this work is the region occupied by a fluid between two concentric toroidal shells. The study of this problem was motivated in part by extensions of the study of Taylor-Couette instabilities between rotating cylindrical and spherical shells to toroidal geometries. We note that at higher Reynolds numbers, Taylor-Couette instabilities in cylindrical and spherical coordinates are essentially three-dimensional in nature, which motivated us to design fully three-dimensional solvers with an OpenMP parallel numerical implementation suitable for a multi-processor workstation. We approach this problem using two different time-stepping procedures applied to the so-called Pressure Poisson formulation of the Navier-Stokes equations. In the first case, we develop an ADI-type method based on a finite difference formulation applicable to low Reynolds number flows. This solver was more of a pilot study of the problem formulated in simple toroidal coordinates. In the second case - the main focus of our thesis - our main goal was to develop a spectral solver using explicit fourth-order Runge-Kutta time stepping, which is appropriate to the higher Reynolds number flows associated with Taylor-Couette instabilities. Our spectral solver was developed using a high-order Fourier representation in the angular variables of the problem and a high-order Chebyshev representation in the radial coordinate between the shells. The solver exhibits super-algebraic convergence in the number of unknowns retained in the problem. Applied to the Taylor-Couette problem, our solver has allowed us to identify (for the first time, in this thesis) highly resolved Taylor-Couette instabilities between toroidal shells. As we document in this work, these instabilities take on different configurations, depending on the Reynolds number of the flow and the gap width between shells, but as of now, all of these instabilities are essentially two-dimensional. Our work on this subject continues, and we are confident that we will uncover three-dimensional instabilities that have well-known analogues in the cases of cylindrical and spherical shells. Lastly, a separate physical problem we examine is the flow between oscillating toroidal shells. Again, our spectral solver is able to resolve these flows to spectral accuracy for various Reynolds numbers and gap widths, showing surprisingly rich physical behaviour. Our code also allows us to document the torque required for the oscillation of the shells, a key metric in engineering applications. This problem was investigated because this configuration was recently proposed as a mechanical damping system.
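A minimal sketch of the explicit fourth-order Runge-Kutta time stepping mentioned above, assuming a generic semi-discretized right-hand side; the function names and the toy linear operator are placeholders, not the thesis's actual Fourier-Chebyshev discretization.

```python
import numpy as np

def rk4_step(rhs, u, t, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = rhs(t, u).

    Illustrative only: in a Fourier-Chebyshev spectral solver, `u` would hold
    the spectral coefficients of the velocity field between the toroidal
    shells and `rhs` would evaluate the discretized Navier-Stokes operator.
    """
    k1 = rhs(t, u)
    k2 = rhs(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = rhs(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = rhs(t + dt, u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy usage with a damped linear operator standing in for the discretized PDE.
rng = np.random.default_rng(0)
A = -np.eye(8) + 0.01 * rng.standard_normal((8, 8))
u = rng.standard_normal(8)
for n in range(100):
    u = rk4_step(lambda t, v: A @ v, u, n * 1e-2, 1e-2)
```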
Item Open Access A Proposed Numerical Method for the 3D Wave Equation with Constant Speed (2023-08-04) Cayley, Omar; Gibson, Peter

This thesis implements a new numerical scheme to solve the (classical) constant-speed wave equation in three-dimensional space. Currently existing methods (even if more restrictive) rely on time iterations and suffer an accumulation of error at each time-step iteration; the new method is iteration-free, making it a good choice for applications requiring accurate results at large times. Numerical experiments and error analysis reveal the accuracy of the scheme. The principal conclusion is that the method, based on the Radon transform, must be considered, and we propose that it be developed and counted among the standard methods implemented in computational software for engineering and industrial applications.
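For background, a hedged sketch of why a Radon-transform approach can avoid time stepping altogether (standard theory, with notation assumed; the thesis's specific scheme is not reproduced here):

```latex
% Standard background sketch (not the thesis's specific algorithm): the
% spatial Radon transform turns the 3D constant-speed wave equation into a
% family of 1D wave equations, one per direction omega, each solvable in
% closed form by d'Alembert's formula, so no time-stepping error accumulates.
\[
  u_{tt} = c^{2}\,\Delta u, \qquad
  (\mathcal{R}u)(s,\omega,t) = \int_{x\cdot\omega = s} u(x,t)\,dx ,
\]
\[
  \partial_{tt}(\mathcal{R}u)(s,\omega,t)
  \;=\; c^{2}\,\partial_{ss}(\mathcal{R}u)(s,\omega,t),
  \qquad \omega \in S^{2},
\]
% u(.,t) is then recovered from the projections by the inverse Radon transform.
```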
Item Open Access A Study of L-Functions: At The Edge of the Critical Strip and Within (2020-05-11) Lumley, Allysa; Lamzouri, Youness

In analytic number theory, and increasingly in other surprising places, L-functions arise naturally when describing algebraic and geometric phenomena. For example, when attempting to prove the Prime Number Theorem, the values of L-functions on the one-line played a crucial role. In this thesis we discuss the theory of L-functions in two different settings. In the classical context we provide results which give estimates for the size of a general L-function on the right edge of the critical strip, that is, at complex numbers with real part one. We also provide a bound for the number of zeros of the classical Riemann zeta function inside the critical strip, commonly referred to as a zero density estimate. In the second setting we study L-functions over the polynomial ring A, which consists of all polynomials with coefficients in a finite field of size q. As A and the ring of integers have similar structure, A is a natural candidate for analyzing classical number-theoretic questions. Additionally, the truth of the Riemann Hypothesis (RH) in A yields deeper unconditional results currently unattainable over the integers. We will focus on the distribution of values of specific L-functions in two different places: on the right edge of the critical strip, that is, complex numbers with real part one, and inside the critical strip, meaning the complex numbers will have real part between one half and one.

Item Open Access Adjusted Empirical Likelihood Method and Parametric Higher Order Asymptotic Method with Applications to Finance (2019-07-02) Wang, Hang Jing; Fu, Yuejiao; Wong, Augustine

In recent years, applying higher-order likelihood-based methods to obtain inference for a scalar parameter of interest has become more popular in statistics because of the extreme accuracy they can achieve. In this dissertation, we applied the higher-order likelihood-based method to obtain inference for the correlation coefficient of a bivariate normal distribution with known variances, and for the mean parameter of a normal distribution with a known coefficient of variation. Simulation results show that the higher-order method has remarkable accuracy even when the sample size is small. The empirical likelihood (EL) method extends the traditional parametric likelihood-based inference method to a nonparametric setting. The EL method has several nice properties; however, it is subject to the convex hull problem, especially when the sample size is small. In order to overcome this difficulty, Chen et al. (2008) proposed the adjusted empirical likelihood (AEL) method, which adjusts the EL function by adding one "artificial" point created from the observed sample. In this dissertation, we extended the AEL inference to the situation with nuisance parameters. In particular, we applied the AEL method to obtain inference for the correlation coefficient. Simulation results show that the AEL method is more robust than its competitors. For the application to finance, we apply both the higher-order parametric method and the AEL method to obtain inference for the Sharpe ratio. The Sharpe ratio is the prominent risk-adjusted performance measure used by practitioners. Simulation results show that the higher-order parametric method performs well for data from a normal distribution, but it is very sensitive to model specifications. On the other hand, the AEL method has the most robust performance under a variety of model specifications.
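A hedged sketch of the adjustment referred to above, in generic estimating-equation notation assumed for illustration (see Chen et al. 2008 for the precise formulation):

```latex
% Hedged sketch of the AEL adjustment (generic notation, not necessarily the
% thesis's): one pseudo-observation is appended so that the origin always lies
% in the convex hull of the augmented estimating-function values.
\[
  g_i(\theta) = g(x_i,\theta), \quad i=1,\dots,n, \qquad
  \bar{g}_n(\theta) = \frac{1}{n}\sum_{i=1}^{n} g_i(\theta),
\]
\[
  g_{n+1}(\theta) \;=\; -\,\frac{a_n}{n}\sum_{i=1}^{n} g_i(\theta)
                 \;=\; -\,a_n\,\bar{g}_n(\theta),
  \qquad a_n = \max\!\bigl(1,\tfrac{1}{2}\log n\bigr),
\]
% and the AEL is the ordinary EL computed from g_1, ..., g_{n+1}.
```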
Item Open Access Adolescent Vaping Behaviors: Exploring the Dynamics of a Social Contagion Model (2023-12-08) Machado-Marques, Sarah Isabella; Moyles, Iain

Vaping, or the use of electronic cigarettes (e-cigarettes), is an ongoing issue for public health. The rapid increase in e-cigarette usage, particularly among adolescents, has often been referred to as an epidemic. Drawing upon this epidemiological analogy between vaping and infectious diseases as a theoretical framework, we aim to study this issue through mathematical modeling to better understand the underlying dynamics. In this thesis, we present a deterministic compartmental model of adolescent e-cigarette smoking which accounts for social influences on initiation, relapse, and cessation behaviors. We use results from a sensitivity analysis of the model's parameters on various response variables to identify key influences on system dynamics and simplify the model into one that can be analyzed more thoroughly. Through steady state and stability analyses and simulations of the model, we conclude that (1) social influences from and on temporary quitters are not important in overall model dynamics and (2) social influences from permanent quitters can have a significant impact on long-term system dynamics, including the reduction of the smokers' equilibrium and the emergence of multiple smoking waves.
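As a hedged illustration of what a deterministic compartmental model with social-influence terms can look like in code (a toy three-compartment system, not the thesis's model; all names and parameter values are placeholders):

```python
import numpy as np
from scipy.integrate import solve_ivp

def contagion_rhs(t, y, beta, gamma, delta):
    """Toy social-contagion compartments (illustrative only).

    S: susceptible never-users, V: current vapers, Q: quitters.
    Initiation and relapse are driven by contact with current vapers.
    """
    S, V, Q = y
    N = S + V + Q
    dS = -beta * S * V / N                        # social initiation
    dV = beta * S * V / N + delta * Q * V / N - gamma * V
    dQ = gamma * V - delta * Q * V / N            # cessation vs. social relapse
    return [dS, dV, dQ]

sol = solve_ivp(contagion_rhs, (0.0, 200.0), [0.95, 0.05, 0.0],
                args=(0.4, 0.1, 0.05), dense_output=True)
print(sol.y[:, -1])   # long-run compartment sizes for this toy parameter set
```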
Item Open Access Algebraic-Delay Differential Systems: Co-Extendable Banach Manifolds and Linearization (2015-01-26) Kosovalic, Nemanja; Wu, Jianhong

Consider a population of individuals occupying some habitat, and assume that the population is structured by age. Suppose that there are two distinct life stages, the immature stage and the mature stage, and that the mature and immature populations are not competing, in the sense that they consume different resources. A natural question is "What determines the age of maturity?" A subsequent natural question is "How does the answer to the latter question affect the population dynamics?" In many biological contexts, including those from plant and insect populations, the age of maturity is not merely constant but is more accurately determined by whether or not the food concentration reaches a prescribed threshold. We consider a model for such a population in terms of a nonlinear transport equation with nonlocal boundary conditions. The variable age of maturity gives rise to an implicit state-dependent delay in the system of first-order partial differential equations. We explain the relevance of this problem and provide a mechanistic derivation of the model equations. We address the existence, positivity, and continuity of the solution semiflow arising from the model equations, and then we discuss the differentiability of the semiflow with respect to initial data, in a suitable weak sense. The differentiability of the solution semiflow arising from even ordinary differential equations containing state-dependent delays was a long-standing open problem. Prior to this work, there were no results which addressed the linearization of the solution semiflow corresponding to a partial differential equation having a state-dependent delay.

Item Open Access Analytical Methods For Levy Processes With Applications To Finance (2015-08-28) Hackmann, Daniel; Kuznetsov, Alexey

This dissertation is divided into two parts: the first part is a literature review and the second describes three new contributions to the literature. The literature review aims to provide a self-contained introduction to some popular Lévy models and to two key objects from the theory of Lévy processes: the Wiener-Hopf factors and the exponential functional. We pay special attention to techniques and results associated with two "analytically tractable" families of processes known as the meromorphic and hyper-exponential families. We also demonstrate some important numerical techniques for working with these families and for solving numerical integration and rational approximation problems. In the second part of the dissertation we prove that the exponential functional of a meromorphic Lévy process is distributed like an infinite product of independent Beta random variables. We also identify the Mellin transform of the exponential functional, and then, under the assumption that the log-stock price follows a meromorphic process, we use this to develop a fast and accurate algorithm for pricing continuously monitored, fixed-strike Asian call options. Next, we answer an open question about the density of the supremum of an alpha-stable process. We find that the density has a conditionally convergent double series representation when alpha is an irrational number. Lastly, we develop an effective and simple algorithm for approximating any process in the class of completely monotone processes – some members of this class include the popular variance gamma, CGMY, and normal inverse Gaussian processes – by a hyper-exponential process. Under the assumption that the log-stock price follows a variance gamma or CGMY process we use this approximation to price several exotic options such as Asian and barrier options. Our algorithms are easy to implement and produce accurate prices.
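For orientation, a hedged note on the central object of the second abstract above, using standard definitions with assumed notation:

```latex
% Standard background (notation assumed): the exponential functional of a
% Levy process xi drifting to +infinity, and its Mellin transform, which is
% the identifying tool behind the Beta-product representation described above.
\[
  I(\xi) \;=\; \int_{0}^{\infty} e^{-\xi_t}\,dt ,
  \qquad
  \mathcal{M}(s) \;=\; \mathbb{E}\bigl[\,I(\xi)^{\,s-1}\,\bigr].
\]
```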
Item Open Access Assessing Control Strategies and Timelines for Mycobacterium Tuberculosis Elimination (2023-08-04) Abdollahi, Elaheh; Moghadas, Seyed

Tuberculosis (TB) continues to inflict a disproportionate impact on Inuit communities in Canada, with reported rates of active TB that are over 300 times higher than those of Canadian-born, non-Indigenous populations. The Inuit Tuberculosis Elimination Framework aims to reduce the incidence of active TB by at least 50% by 2025, with the ultimate goal of eliminating it (i.e., reducing the incidence of active TB below 1 case per 1,000,000 population) by 2030. However, whether these goals can be achieved with the resources and interventions currently available has not been investigated. This dissertation formulates an agent-based model (ABM) of TB transmission dynamics and control to assess the feasibility of achieving the goals of the elimination framework in Nunavut, Canada. I applied the model to project the annual incidence of active TB from 2025 to 2040, taking into account factors such as time to case identification after developing active TB, contact tracing and testing, patient isolation and compliance, household size, and the potential impact of a therapeutic vaccine. In order to determine the potential reduction in TB incidence, various scenarios of treatment regimens were evaluated within the action plans for TB elimination. The scenario analyses demonstrate that the time to identification of active TB cases is a crucial factor in the attainability of the goals, highlighting the importance of investment in early case detection. The findings also indicate that the goal of a 50% reduction in the annual incidence of TB by 2025 is only achievable under best-case scenarios of combined interventions. However, TB elimination will likely exceed the timelines indicated in the action plans.

Item Open Access Bandwidth Selection for Level Set Estimation in the Context of Regression and a Simulation Study for Non-Parametric Level Set Estimation When the Density Is Log-Concave (2022-08-08) Gonzalez Martinez, Gabriela; Jankowski, Hanna

Bandwidth selection is critical for kernel estimation because it controls the amount of smoothing for a function's estimator. Traditional methods for bandwidth selection involve optimizing a global loss function (e.g. least-squares cross-validation, asymptotic mean integrated squared error). Nevertheless, a global loss function becomes suboptimal for the level set estimation problem, which is local in nature. For a function $g$, the level set is the set $LS_\lambda = \{x : g(x) \geq \lambda\}$. In the first part of this thesis we study optimal bandwidth selection for the Nadaraya-Watson kernel estimator in one dimension. We present a local loss function as an alternative to the $L_2$ metric and derive an asymptotic approximation of its corresponding risk. The level set optimal bandwidth $h_{opt}$ is the argument that minimizes the asymptotic approximation. We show that the rate of $h_{opt}$ coincides with the rate from traditional global bandwidth selectors. We then derive an algorithm to obtain the practical bandwidth and study its performance through simulations. Our simulation results show that, in general, for small samples and small levels, the level set optimal bandwidth shows improvement in estimating the level set when compared to the cross-validation bandwidth selection or the local polynomial kernel estimator. We illustrate this new bandwidth selector on a decompression sickness study of the effects of duration and pressure on mortality during a dive. In the second part, motivated by our simulation findings and the relationship of level set estimation to the highest density region (HDR) problem, we study via simulations the properties of a plug-in estimator where the density is estimated with a log-concave mixture model. We focus in particular on univariate densities and compare this method against a kernel plug-in estimator. The bandwidth for the kernel plug-in estimator is chosen optimally for the HDR problem. We observe through simulations that when the number of components in the model is correctly specified, the log-concave plug-in estimator performs better than the kernel estimator for lower levels and similarly for the remaining levels considered. We conclude with an analysis of the daily maximum temperatures in Melbourne, Australia.
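A minimal sketch of the kernel plug-in idea underlying the second abstract above, assuming a Gaussian kernel and a user-chosen bandwidth; the thesis's level-set-optimal bandwidth selector itself is not reproduced here:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson regression estimate on x_grid with a Gaussian kernel
    and bandwidth h (illustrative sketch; not the thesis's selector)."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def plug_in_level_set(x_grid, g_hat, lam):
    """Plug-in estimate of {x : g(x) >= lam}: grid points where the
    estimated regression function exceeds the level lam."""
    return x_grid[g_hat >= lam]

# Toy usage: recover a super-level set of a smooth regression function.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.2, size=x.size)
grid = np.linspace(0.0, 1.0, 400)
g_hat = nadaraya_watson(grid, x, y, h=0.05)   # h would come from the selector
print(plug_in_level_set(grid, g_hat, lam=0.5)[:5])
```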
Item Open Access Bayesian Estimation of Graphical Gaussian Models with Edges and Vertices Symmetries (2017-07-27) Li, Qiong; Massam, Helene; Gao, Xin

We consider the Bayesian analysis of undirected graphical Gaussian models with edge and vertex symmetries. Graphical Gaussian models with equality constraints on the precision matrix (that is, the inverse covariance matrix) were introduced by Hojsgaard and Lauritzen as RCON models. The models can be represented by colored graphs, where edges or vertices have the same coloring if the corresponding elements of the precision matrix are equal. In this thesis, we define a conjugate prior distribution for RCON models; we will, therefore, call this conjugate prior the colored G-Wishart. We first develop a sampling scheme for the colored G-Wishart distribution. This sampling method is based on the Metropolis-Hastings algorithm and the Cholesky decomposition of matrices. In order to assess the accuracy of the Metropolis-Hastings sampling method, we compute the normalizing constants of the colored G-Wishart distribution for some special graphs: general trees, star graphs, a complete graph on 3 vertices, and a simple decomposable model on 4 vertices with various symmetry constraints. By differentiating the analytic expression of the normalizing constants, we can obtain the true mean of the colored G-Wishart distribution for these particular graphs. Moreover, we conduct a number of simulations to compare the true mean of the colored G-Wishart distribution with the sample mean obtained from a number of iterations of our Metropolis-Hastings algorithm. Then, we give three methods for estimating the normalizing constant of the colored G-Wishart distribution: the Monte Carlo method, importance sampling, and the Laplace approximation. We furthermore apply these methods to model search for a real dataset using Bayes factors. Lastly, we propose a distributed Bayesian estimate of the precision matrix in colored graphical Gaussian models. We also study the asymptotic behaviour of our proposed estimate under the regular asymptotic regime, when the number of variables p is fixed, and under the double asymptotic regime, when both p and the sample size n grow to infinity.

Item Open Access Bayesian Model Selection for Discrete Graphical Models (2023-08-04) Roach, Lyndsay; Gao, Xin

Graphical models allow for easy interpretation and representation of complex distributions. There is an expanding interest in model selection problems for high-dimensional graphical models, particularly when the number of variables increases with the sample size. A popular model selection tool is the Bayes factor, which compares the posterior probabilities of two competing models. Consider data given in the form of a contingency table where N objects are classified according to q random variables, and where the conditional independence structure of these random variables is represented by a discrete graphical model G. We assume the cell counts follow a multinomial distribution with a hyper Dirichlet prior distribution imposed on the cell probability parameters. Then we can write the Bayes factor as a product of gamma functions indexed by the cliques and separators of G. In this thesis, we study the behaviour of the Bayes factor when the dimension of a true discrete graphical model is fixed and when the dimension increases to infinity with the sample size. We prove strong model selection consistency of the Bayes factor for both decomposable and non-decomposable discrete graphical models. When the true graph is non-decomposable, we prove that the Bayes factor selects a minimal triangulation of the true graph. We support our theoretical results with various simulations. In addition, we introduce a variation of the genetic algorithm, called the graphical local genetic algorithm, which can be implemented on large data sets. We use a local search operator and a normalizing constant proportionate to the posterior probability of the candidate models to determine optimal submodels, then reconstruct the full graph from the resulting subgraphs. We demonstrate the graphical local genetic algorithm's capabilities on both simulated data sets with known true graphs and on a real-world data set.
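A hedged sketch of the clique-separator factorization alluded to in the second abstract above, written for a decomposable graph with a hyper Dirichlet prior; the notation is assumed for illustration:

```latex
% Hedged sketch (standard hyper-Dirichlet/decomposable-graph background,
% notation assumed): the marginal likelihood, and hence the Bayes factor,
% factorizes over the cliques C and separators S of G, and each factor is a
% Dirichlet-multinomial ratio of gamma functions.
\[
  p(\mathbf{n}\mid G)
  \;=\;
  \frac{\prod_{C\in\mathcal{C}(G)} p\bigl(\mathbf{n}_C\bigr)}
       {\prod_{S\in\mathcal{S}(G)} p\bigl(\mathbf{n}_S\bigr)},
  \qquad
  p(\mathbf{n}_A)
  \;=\;
  \frac{\Gamma(\alpha_A)}{\Gamma(\alpha_A+N)}
  \prod_{i_A}
  \frac{\Gamma(\alpha_{i_A}+n_{i_A})}{\Gamma(\alpha_{i_A})},
\]
% where n_A are the marginal cell counts on A, alpha_{i_A} the corresponding
% Dirichlet hyperparameters and alpha_A their sum; the Bayes factor of two
% graphs is the ratio of two such products.
```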
Item Open Access Block Systems of Ranks 3 and 4 Toroidal Hypertopes (2018-11-21) Ens, Eric James Loepp; Weiss, Asia Ivic

This dissertation deals with the abstract combinatorial structure of toroidal polytopes and toroidal hypertopes. Abstract polytopes are objects satisfying the main combinatorial properties of a classical (geometric) polytope. A regular toroidal polytope is an abstract polytope which can be constructed from the string affine Coxeter groups. A hypertope is a generalization of an abstract polytope, and a regular toroidal hypertope is a hypertope which can be constructed from any affine Coxeter group. In this thesis we classify the rank 4 regular toroidal hypertopes. We also seek to find all block systems on a set of (hyper)faces of toroidal polytopes and hypertopes of ranks 3 and 4, as well as of the regular and chiral toroidal polytopes of rank 3. A block system of a set X under the action of a group G is a partition of X which is invariant under the action of G.

Item Open Access C*-Algebras and the Uncountable: A Systematic Study of the Combinatorics of the Uncountable in the Noncommutative Framework (2019-11-22) Vaccaro, Andrea; Farah, Ilijas

In this dissertation we investigate nonseparable C*-algebras using methods coming from logic, specifically from set theory. The material is divided into three main parts. In the first part we study algebras known as counterexamples to Naimark's problem, namely C*-algebras that are not isomorphic to the algebra of compact operators on some Hilbert space, yet still have only one irreducible representation up to unitary equivalence. Such algebras have to be simple, nonseparable and non-type I, and they are known to exist if the diamond principle (a strengthening of the continuum hypothesis) is assumed. With the motivation of finding further characterizations for these counterexamples, we undertake the study of their trace spaces, led by some elementary observations about the unitary action on the state space of these algebras, which seem to suggest that a counterexample to Naimark's problem could have at most one trace. We show that this is not the case and, assuming diamond, we prove that every Choquet simplex with countably many extreme points occurs as the trace space of a counterexample to Naimark's problem and that, moreover, there exists a counterexample whose tracial simplex is nonseparable. The second part of this dissertation revolves around the Calkin algebra $Q(H)$ and the general problem of which nonseparable C*-algebras embed into it. We prove that, under Martin's axiom, all C*-algebras of density character less than $2^{\aleph_0}$ embed into the Calkin algebra. Moving to larger C*-algebras, we show that (within ZFC alone) $C^*_{\mathrm{red}}(F_{2^{\aleph_0}})$ and $C^*_{\max}(F_{2^{\aleph_0}})$, where $F_{2^{\aleph_0}}$ is the free group on $2^{\aleph_0}$ generators, and every nonseparable UHF algebra with density character at most $2^{\aleph_0}$, embed into the Calkin algebra. On the other hand, we prove that it is consistent with ZFC + $2^{\aleph_0} \geq \aleph_\alpha$, for every ordinal $\alpha \geq 2$, that the abelian C*-algebra generated by an increasing chain of $\aleph_2$ projections does not embed into $Q(H)$. Hence, the statement "Every C*-algebra of density character strictly less than $2^{\aleph_0}$ embeds into the Calkin algebra" is independent from ZFC + $2^{\aleph_0} \geq \aleph_\alpha$, for every ordinal $\alpha > 2$. Finally, we show that the proof of Voiculescu's noncommutative version of the Weyl-von Neumann theorem consists, when looked at from the right perspective, of a sequence of applications of the Baire category theorem to certain ccc posets. This allows us, assuming Martin's axiom, to generalize Voiculescu's results to nonseparable C*-algebras of density character less than $2^{\aleph_0}$. The last part of this manuscript concerns liftings of abelian subalgebras of coronas of non-unital C*-algebras. Given a subset of commuting elements in a corona algebra, we study what could prevent the existence of a commutative lifting of such a subset to the multiplier algebra. While for finite and countable families the only issues arising are of K-theoretic nature, for larger families the size itself becomes an obstruction. We prove, in fact, for a primitive, non-unital, $\sigma$-unital C*-algebra A, that there exists a set of $\aleph_1$ orthogonal positive elements in the corona of A which cannot be lifted to a collection of commuting elements in the multiplier algebra of A.

Item Open Access Can You Take Akemann-Weaver's Diamond Away (2019-11-22) Wilches, Daniel Calderon; Farah, Ilijas

In 2004 Akemann and Weaver showed that if Diamond holds, there is a C*-algebra with a unique irreducible representation up to spatial equivalence that is not isomorphic to any algebra of compact operators. This answered, under some additional set-theoretic assumptions, an old question due to Naimark. All known counterexamples to Naimark's Problem have been constructed using a modification of the Akemann-Weaver technique, and it was not known whether there exists an algebra of this kind in the absence of Diamond. We show that it is relatively consistent with ZFC that there is a counterexample to Naimark's Problem while Diamond fails.
Item Open Access Combining Test Statistics and Information Criteria for High Dimensional Data Integration (2015-08-28) Xu, Yawen; Gao, Xin; Wang, Xiaogang

This research is focused on high-dimensional data integration by combining test statistics or information criteria. Our research contains four projects. Firstly, an integration method is developed to perform hypothesis testing and biomarker selection based on multi-platform data sets observed from normal and diseased populations. Secondly, a non-parametric method is developed to cluster continuous data mixed with categorical data, where modified Chi-squared tests are used to detect cluster patterns on the product space. Thirdly, a weighted integrative AIC criterion is developed to be used for model selection across multiple data sets. Finally, Linhart's and Shimodaira's test statistics are extended to the composite likelihood function to perform model comparison tests for correlated data.

Item Open Access Complex Powers of a Fourth-Order Operator: Heat Kernels, Green Functions and Lp - Lp' Estimates (2016-09-20) Duan, Xiaoxi; Wong, Man Wah

We first construct the minimal and maximal operators of the Hermite operator. Then, for 4/3 < p < 4, we apply a classical result by Askey and Wainger; this implies that the Hermite operator is essentially self-adjoint, which means that its minimal and maximal operators coincide. Using the asymptotic behaviour of the $L^p$-norms of the Hermite functions and essentially the same method as in the case 4/3 < p < 4, the same results are shown to hold for $1 \leq p \leq \infty$. We also compute the spectrum of the minimal and the maximal operators for 4/3 < p < 4. Then we construct a fourth-order operator, called the twisted bi-Laplacian, from the Laplacian on the Heisenberg group, namely, the twisted Laplacian. Using spectral analysis, we obtain explicit formulas for the heat kernel and Green function of the twisted bi-Laplacian, and we give results on the spectral theory and number theory associated with it. We then consider all complex powers of the twisted bi-Laplacian and compute their heat kernels and Green functions; moreover, we obtain $L^p$ - $L^{p'}$ estimates for the solutions of the initial value problem for the heat equation and the Poisson equation governed by complex powers of the twisted bi-Laplacian.

Item Open Access Composite Likelihood: Multiple Comparisons and Non-Standard Conditions in Hypothesis Testing (2018-05-28) Azadbakhsh, Mahdis; Jankowski, Hanna; Gao, Xin

The computational intensity of full likelihood estimation for multivariate and correlated data is a valid motivation to employ composite likelihood as an alternative that eases the process by using marginal or conditional densities and reducing the dimension. We study the problem of multiple hypothesis testing for multidimensional clustered data. The problem of multiple comparisons is common in many applications. We propose to construct multiple comparison procedures based on composite likelihood statistics. The simultaneous multivariate normal quantile is chosen as the threshold that controls the multiplicity. We focus on data arising in four cases: multivariate Gaussian, probit, quadratic exponential and gamma models. To assess the quality of our proposed methods, we evaluate their empirical performance via Monte Carlo simulations. It is shown that composite likelihood-based procedures maintain good control of the familywise type I error rate in the presence of intra-cluster correlation, whereas ignoring the correlation leads to invalid performance. Using data arising from a depression study and also a kidney study, we show how our composite likelihood approach makes an otherwise intractable analysis possible. Moreover, we study the distribution of the composite likelihood ratio test when the true parameter is not an interior point of the parameter space. We approach the problem by looking at the geometry of the parameter space and approximating it at the true parameter by a cone, under Chernoff's regularity. First, we establish the asymptotic properties of the test statistic for testing continuously differentiable linear and non-linear combinations of parameters, and then we provide algorithms to compute the distribution of both full and composite likelihood ratio tests for different cases and dimensions. The proposed approach is evaluated by running simulations.
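A hedged sketch of a pairwise composite likelihood and the Wald-type simultaneous intervals described in the last abstract above, in generic notation assumed for illustration:

```latex
% Hedged sketch (generic notation, not necessarily the thesis's): a pairwise
% composite likelihood replaces the joint density of a cluster by a product of
% bivariate margins, and simultaneous intervals for the contrasts use a
% multivariate-normal quantile to control the familywise error rate.
\[
  \mathrm{c}\ell(\theta)
  \;=\; \sum_{i=1}^{n}\;\sum_{j<k} \log f\bigl(y_{ij}, y_{ik};\theta\bigr),
  \qquad
  \hat{\theta}_{CL} = \arg\max_{\theta} \mathrm{c}\ell(\theta),
\]
\[
  \hat{\theta}_{CL} \;\approx\;
  \mathcal{N}\!\bigl(\theta,\; G(\theta)^{-1}\bigr),
  \qquad G = H\,J^{-1}H \ \ \text{(Godambe information)},
\]
% with simultaneous intervals theta_hat_m +/- q_{1-alpha} * se_hat_m, where
% q_{1-alpha} is the equicoordinate quantile of the limiting multivariate normal.
```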
Item Open Access Computational Methods for One-Dimensional Scattering in Non-Smooth Media (2022-08-08) Chien-Cheng Chiu; Gibson, Peter

This thesis implements various numerical algorithms used for acoustic imaging of layered media: the scattering-based algorithm, the classical Born approximation, the refined impedance transform and the echoes-to-impedance transform. The last three are inverse scattering algorithms that numerically convert data in the time domain to an impedance in the spatial domain; however, their simplicity, speed and accuracy differ greatly. In place of physical recordings, the scattering-based algorithm is used to generate accurate synthetic data. Numerical experiments and error analyses reveal significant differences among the three. The principal conclusion is that the method based on the most sophisticated mathematical ideas, namely the echoes-to-impedance transform, is far superior.

Item Open Access Convergence Rate Analysis of Markov Chains (2015-01-26) Jovanovski, Oliver; Madras, Neal

We consider a number of Markov chains and derive bounds for the rate at which convergence to equilibrium occurs. For our main problem, we establish results for the rate of convergence in total variation of a Gibbs sampler to its equilibrium distribution. This sampler is motivated by a hierarchical Bayesian inference construction for a gamma random variable. The Bayesian hierarchical method involves statistical models that incorporate prior beliefs about the likelihood of observed data to arrive at posterior interpretations, and appears in applications in information technology, statistical genetics, market research and others. Our results apply to a wide range of parameter values in the case that the hierarchical depth is 3 or 4, and are more restrictive for depth greater than 4. Our method involves showing a relationship between the total variation distance of two ordered copies of our chain and the maximum of the ratios of their respective coordinates. We construct auxiliary stochastic processes to show that this ratio converges to 1 at a geometric rate. In addition, we consider a stochastic image restoration model proposed by A. Gibbs, and give an upper bound on the time it takes for a Markov chain defined by this model to be arbitrarily close in total variation to equilibrium. We use Gibbs' result for convergence in the Wasserstein metric to arrive at our result. Our bound for the time to equilibrium is of similar order to that of Gibbs.
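As a hedged, purely illustrative companion to the last abstract: a toy coupling of two ordered copies of a simple monotone chain, showing the kind of coordinate-ratio contraction used there to bound total variation (this is not the thesis's hierarchical Gibbs sampler).

```python
import numpy as np

# Toy illustration (not the thesis's sampler): two copies of the monotone
# chain X_{t+1} = B_t * X_t + E_t, driven by the SAME random inputs (a
# coupling). The maximum coordinate ratio max(X/Y, Y/X) shrinks towards 1,
# the kind of geometric contraction used to bound total variation distance.
rng = np.random.default_rng(42)
x, y = 100.0, 0.01          # two extreme starting points
ratios = []
for t in range(60):
    b = rng.uniform(0.2, 0.9)     # shared multiplicative noise
    e = rng.gamma(2.0)            # shared additive noise, strictly positive
    x, y = b * x + e, b * y + e
    ratios.append(max(x / y, y / x))

print(ratios[0], ratios[10], ratios[30], ratios[-1])   # decreasing towards 1
```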