
Mathematics & Statistics


Recent Submissions

Now showing 1 - 20 of 101
  • Item (Open Access)
    A High-Order Navier-Stokes Solver for Viscous Toroidal Flows
    (2024-03-16) Siewnarine, Vishal; Haslam, Michael C.
    This thesis details our work in the development and testing of highly efficient solvers for the Navier-Stokes problem in simple toroidal coordinates in three spatial dimensions. In particular, the domain of interest in this work is the region occupied by a fluid between two concentric toroidal shells. The study of this problem was motivated in part by extensions of the study of Taylor-Couette instabilities between rotating cylindrical and spherical shells to toroidal geometries. We note that at higher Reynolds numbers, Taylor-Couette instabilities in cylindrical and spherical coordinates are essentially three-dimensional in nature, which motivated us to design fully three-dimensional solvers with an OpenMP parallel numerical implementation suitable for a multi-processor workstation. We approach this problem using two different time-stepping procedures applied to the so-called Pressure Poisson formulation of the Navier-Stokes equations. In the first case, we develop an ADI-type method based on a finite difference formulation applicable to low Reynolds number flows. This solver was more of a pilot study of the problem formulated in simple toroidal coordinates. In the second case - the main focus of our thesis - our goal was to develop a spectral solver using explicit fourth-order Runge-Kutta time stepping, which is appropriate to the higher Reynolds number flows associated with Taylor-Couette instabilities. Our spectral solver was developed using a high-order Fourier representation in the angular variables of the problem and a high-order Chebyshev representation in the radial coordinate between the shells. The solver exhibits super-algebraic convergence in the number of unknowns retained in the problem. Applied to the Taylor-Couette problem, our solver has allowed us to identify (for the first time, in this thesis) highly resolved Taylor-Couette instabilities between toroidal shells. As we document in this work, these instabilities take on different configurations, depending on the Reynolds number of the flow and the gap width between shells, but as of now, all of these instabilities are essentially two-dimensional. Our work on this subject continues, and we are confident that we will uncover three-dimensional instabilities that have well-known analogues in the cases of cylindrical and spherical shells. Lastly, a separate physical problem we examine is the flow between oscillating toroidal shells. Again, our spectral solver is able to resolve these flows to spectral accuracy for various Reynolds numbers and gap widths, showing surprisingly rich physical behaviour. Our code also allows us to document the torque required for the oscillation of the shells, a key metric in engineering applications. This problem was investigated since this configuration was recently proposed as a mechanical damping system.
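    As an illustration of the high-order radial discretization mentioned above (a generic textbook construction, not code from the thesis), the sketch below builds the standard Chebyshev differentiation matrix on Gauss-Lobatto points and differentiates a smooth test function; the rapid decay of the error with the number of points is the super-algebraic (spectral) convergence the abstract refers to. The test function and grid sizes are illustrative choices.

        import numpy as np

        def cheb(n):
            """Chebyshev differentiation matrix on n+1 Gauss-Lobatto points (Trefethen's construction)."""
            if n == 0:
                return np.zeros((1, 1)), np.array([1.0])
            x = np.cos(np.pi * np.arange(n + 1) / n)             # Gauss-Lobatto points on [-1, 1]
            c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
            X = np.tile(x, (n + 1, 1)).T
            dX = X - X.T
            D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))      # off-diagonal entries
            D -= np.diag(D.sum(axis=1))                          # diagonal via negative row sums
            return D, x

        # Spectral-accuracy check on a smooth function: d/dx exp(x) = exp(x).
        for n in (8, 16, 32):
            D, x = cheb(n)
            print(n, np.max(np.abs(D @ np.exp(x) - np.exp(x))))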
  • Item (Open Access)
    Markov Chains, Clustering, and Reinforcement Learning: Applications in Credit Risk Assessment and Systemic Risk Reduction
    (2023-12-08) Le, Richard; Ku, Hyejin
    In this dissertation we demonstrate how credit risk assessment using credit rating transition matrices can be improved, and we present a novel reinforcement learning (RL) model capable of determining a multi-layer financial network configuration with reduced levels of systemic risk. While in this dissertation we treat credit risk and systemic risk independently, they are two sides of the same coin. Financial systems are highly interconnected by their very nature. When a member of this system experiences distress such as default, a credit risk event, this distress is often not felt in isolation. Due to this interconnectedness, shocks can spread throughout the system, resulting in catastrophic failure, a systemic risk event. The treatment of credit risk begins with the introduction of our first-order Markov model augmented with sequence-based clustering (SBC). Once we established this model, we explored its ability to predict future credit rating transitions, the transition direction of the credit ratings, and the default behaviour of firms using historical credit rating data. Once validated, we extended this model using higher-order Markov chains, this time focusing more on the absorbing behaviour of Markov chains and hence on the default behaviour under the new model. Using higher-order Markov chains, we also enjoy the benefit of capturing a phenomenon known as rating momentum, characteristic of credit rating transition behaviour. In addition to the credit rating data set, this model was applied to a Web-usage mining data set, highlighting its generalizability. Finally, we shift our focus to the treatment of systemic risk. While methods exist to determine optimal interbank lending configurations, they only treat single-layer networks. This is due to technical optimization challenges that arise when one considers additional layers and the interactions between them. These layers can represent lending products of different maturities. To consider the interaction between layers, we extend the DebtRank (DR) measure to track distress across layers. Next, we develop a constrained deep deterministic policy gradient (DDPG) model capable of reorganizing the interbank lending network structure such that the spread of distress is better mitigated.
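    A first-order rating transition matrix of the kind referred to above can be estimated from rating histories by count-and-normalize maximum likelihood. The sketch below is a generic illustration on made-up rating paths; it is not the thesis model, which adds sequence-based clustering and higher-order chains.

        import numpy as np

        def estimate_transition_matrix(sequences, states):
            """Maximum-likelihood first-order Markov transition matrix from rating paths."""
            idx = {s: i for i, s in enumerate(states)}
            counts = np.zeros((len(states), len(states)))
            for seq in sequences:
                for a, b in zip(seq[:-1], seq[1:]):
                    counts[idx[a], idx[b]] += 1
            rows = counts.sum(axis=1, keepdims=True)
            rows[rows == 0] = 1.0                     # leave rows of unvisited states as zeros
            return counts / rows

        # Toy rating histories (hypothetical data; 'D' is an absorbing default state).
        states = ["A", "B", "C", "D"]
        histories = [["A", "A", "B", "B", "C"], ["B", "C", "D"], ["A", "B", "A", "A"]]
        print(estimate_transition_matrix(histories, states))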
  • Item (Open Access)
    Invisible Frontiers: Robust and Risk-Sensitive Financial Decision-Making within Hidden Regimes
    (2023-12-08) Wang, Mingfu; Ku, Hyejin
    In this dissertation, we explore robust and risk-sensitive strategies for financial decision-making within hidden regimes, focusing on the effective portfolio management of financial market risks under uncertain market conditions. The study is structured around three pivotal topics: Risk-sensitive Policies for Portfolio Management; Robust Optimal Life Insurance Purchase and Investment-consumption with Regime-switching Alpha-ambiguity Maxmin Utility; and Robust and Risk-sensitive Markov Decision Process with Hidden Regime Rules. In the first topic, we propose two novel Reinforcement Learning (RL) models. Tailored specifically for portfolio management, these models align with investors' risk preferences, ensuring that the strategies balance risk and return. In the second topic, we introduce a pre-commitment strategy that robustly navigates insurance purchasing and investment-consumption decisions. This strategy adeptly accounts for model ambiguity and individual ambiguity aversion within a regime-switching market context. In the third topic, we integrate hidden regimes into the Markov Decision Process (MDP) framework, enhancing its capacity to address both market regime shifts and market fluctuations. In addition, we adopt a risk-sensitive objective and construct a risk envelope to portray the worst-case scenario from an RL perspective. Overall, this research strives to provide investors with tools and insights for an optimal balance between reward and risk, effective risk management and informed investment choices. The strategies are designed to guide investors in the face of market uncertainties and risk, further underscoring the criticality of robust and risk-sensitive financial decision-making.
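    As background only (the dissertation defines its own objectives and constraints), one common way to make an RL objective risk-sensitive is the entropic risk criterion
    \[
    J_\beta(\pi) = \frac{1}{\beta} \log \mathbb{E}_\pi\big[ e^{\beta R} \big],
    \qquad R = \sum_{t=0}^{T} \gamma^{t} r_t,
    \]
    where $\beta < 0$ penalizes variability of the cumulative return $R$ (a second-order expansion gives $\mathbb{E}_\pi[R] + \tfrac{\beta}{2}\operatorname{Var}_\pi[R]$) and $\beta \to 0$ recovers the risk-neutral expectation.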
  • Item (Open Access)
    On Laplace transforms, generalized gamma convolutions, and their applications in risk aggregation
    (2023-12-08) Miles, Justin Christopher; Kuznetsov, Alexey
    This dissertation begins with two introductory chapters providing relevant background: an introduction to the Laplace transform and an introduction to Generalized Gamma Convolutions (GGCs). The heart of this dissertation is the final three chapters, comprising three contributions to the literature. In Chapter 3, we study the analytical properties of the Laplace transform of the log-normal distribution. Two integral expressions for the analytic continuation of the Laplace transform of the log-normal distribution are provided, one of which takes the form of a Mellin-Barnes integral. As a corollary, we obtain an integral expression for the characteristic function; we show that the integral expression derived by Leipnik (1991) is incorrect. We present two approximations for the Laplace transform of the log-normal distribution, both valid in $\mathbb{C} \setminus (-\infty,0]$. In the last section, we discuss how one may use our results to compute the density of a sum of independent log-normal random variables. In Chapter 4, we explore the topic of risk aggregation with moment matching approximations. We put forward a refined moment matching approximation (MMA) method for approximating the distributions of sums of insurance risks. Our method approximates the distributions of interest to any desired precision, works equally well for light- and heavy-tailed distributions, and is reasonably fast irrespective of the number of summands involved. In Chapter 5, we study the convergence of the Gaver-Stehfest algorithm, which is widely used for numerical inversion of the Laplace transform. We provide the first rigorous study of the rate of convergence of the Gaver-Stehfest algorithm: we prove that Gaver-Stehfest approximations of order $n$ converge exponentially fast if the target function is analytic in a neighbourhood of a point, and that they converge at a rate $o(n^{-k})$ if the target function is $(2k+3)$-times differentiable at a point.
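    The Gaver-Stehfest method studied in Chapter 5 has a simple concrete form; the sketch below is a textbook double-precision implementation (even order n), included only to fix notation and checked against the known pair F(s) = 1/(s+1), f(t) = exp(-t). It is not the thesis's analysis.

        import math

        def stehfest_weights(n):
            """Stehfest coefficients V_k for even order n."""
            m = n // 2
            V = []
            for k in range(1, n + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, m) + 1):
                    s += (j ** m * math.factorial(2 * j)
                          / (math.factorial(m - j) * math.factorial(j)
                             * math.factorial(j - 1) * math.factorial(k - j)
                             * math.factorial(2 * j - k)))
                V.append((-1) ** (k + m) * s)
            return V

        def gaver_stehfest(F, t, n=14):
            """Approximate f(t) from its Laplace transform F(s)."""
            V = stehfest_weights(n)
            a = math.log(2.0) / t
            return a * sum(V[k - 1] * F(k * a) for k in range(1, n + 1))

        # Check against a known transform pair.
        print(gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0), math.exp(-1.0))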
  • Item (Open Access)
    Mathematical and Statistical Analysis of Non-stationary Time Series Data
    (2023-12-08) Hang, Du; Wang, Steven
    Non-stationary time series, with intrinsic properties constantly changing over time, present significant challenges for analysis in various scientific fields, particularly in biomedical signal analysis. This dissertation presents novel methodologies for analyzing and classifying highly noisy and non-stationary signals with applications to electroencephalograms (EEGs) and electrocardiograms (ECGs). The first part of the dissertation focuses on a framework integrating pseudo-differential operators with convolutional neural networks (CNNs). We present their synergistic potential for signal classification from an innovative perspective. Building on the fundamental concept of pseudo-differential operators, the dissertation further proposes a novel methodology that addresses the challenges of applying time-variant filters or transforms to non-stationary signals. This approach enables the neural network to learn a convolution kernel that changes over time or location, providing a refined strategy to effectively handle these dynamic signals. This dissertation also introduces a hybrid convolutional neural network that integrates both complex-valued and real-valued components with the discrete Fourier transform (DFT) for EEG signal classification. This fusion of techniques significantly enhances the neural network's ability to utilize the phase information contained in the DFT, resulting in substantial accuracy improvements for EEG signal classification. In the final part of this dissertation, we apply a conventional machine learning approach for the detection and localization of myocardial infarctions (MIs) in electrocardiograms (ECGs) and vectorcardiograms (VCGs), using the innovative features extracted from the geometrical and kinematic properties within VCGs. This boosts the accuracy and sensitivity of traditional MI detection.
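    As a small, self-contained illustration of the kind of phase information the hybrid network is designed to exploit (feature extraction only; the network architecture is described in the dissertation), the DFT of a windowed signal yields complex coefficients whose magnitude and phase can both be fed to a classifier. The sampling rate and test signal below are assumptions.

        import numpy as np

        fs = 256                                                 # assumed sampling rate (Hz)
        t = np.arange(0, 2.0, 1.0 / fs)
        rng = np.random.default_rng(2)
        signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

        coeffs = np.fft.rfft(signal * np.hanning(signal.size))   # windowed DFT
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        magnitude, phase = np.abs(coeffs), np.angle(coeffs)      # both usable as network inputs
        print(freqs[1:][np.argmax(magnitude[1:])])               # dominant frequency (about 10 Hz)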
  • Item (Open Access)
    Retirement Annuities: Optimization, Analysis and Machine Learning
    (2023-12-08) Nikolic, Branislav; Salisbury, Tom
    Over the last few decades, we have seen a steady shift away from Defined Benefit (DB) pension plans to Defined Contribution (DC) pension plans in the United States. Even though a deferred income annuity (DIA) purchased while saving for retirement can pay a guaranteed stream of income for life, practically serving as a pension substitute, several questions arise. Our main contribution is answering the question of purchasing DIAs under interest rate uncertainty. We pose the question as an optimal control problem, solve its centerpiece Hamilton-Jacobi-Bellman equation numerically, and provide a verification theorem. The result is an optimal DIA purchasing map. With Cash Refund Income Annuities (CRIA) gaining traction quickly over the past few years, the literature is growing in the area of price sensitivity and its viability when viewed through the lens of key pricing parameters, particularly insurance loading. To that end, we explore the effect of reserving requirements on pricing and prove analytically that, if accounted for properly at the outset, reserving requirements will be satisfied at any time during the lifetime of the annuity. Lower interest rates in the last decade prompted the explosion of fixed indexed annuities (FIAs) in the United States. These popular insurance policies offer a growth component with the addition of lifetime income provisions. In FIAs, accumulation is achieved through exposure to a variety of indices while offering principal protection guarantees. The vast array of new products and features has created the need for a means of consistent comparison between FIA products available to consumers. We illustrate that statistical issues in the temporal and cross-sectional return correlations of indices used in FIAs necessitate more sophisticated modelling than is currently employed. We outline a few novel approaches to handle these two issues. We model the risk control mechanisms of a class of FIA indices using machine learning, with a small set of core macroeconomic variables as modelling features. This makes for more robust cross-sectional comparisons. Then we outline the properties of a sufficient model for said features, namely 'rough' stochastic volatility.
  • Item (Open Access)
    Adolescent Vaping Behaviors: Exploring the Dynamics of a Social Contagion Model
    (2023-12-08) Machado-Marques, Sarah Isabella; Moyles, Iain
    Vaping, or the use of electronic cigarettes (e-cigarettes), is an ongoing issue for public health. The rapid increase in e-cigarette usage, particularly among adolescents, has often been referred to as an epidemic. Drawing upon this epidemiological analogy between vaping and infectious diseases as a theoretical framework, we aim to study this issue through mathematical modeling to better understand the underlying dynamics. In this thesis, we present a deterministic compartmental model of adolescent e-cigarette smoking which accounts for social influences on initiation, relapse, and cessation behaviors. We use results from a sensitivity analysis of the model’s parameters on various response variables to identify key influences on system dynamics and simplify the model into one that can be analyzed more thoroughly. Through steady state and stability analyses and simulations of the model, we conclude that (1) social influences from and on temporary quitters are not important in overall model dynamics and (2) social influences from permanent quitters can have a significant impact on long-term system dynamics, including the reduction of the smokers' equilibrium and emergence of multiple smoking waves.
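    To make the compartmental setup concrete, here is a minimal, purely illustrative social-contagion model with susceptibles S, users V, temporary quitters Qt and permanent quitters Qp, in which initiation and relapse are contact-driven. The structure and every parameter value are assumptions for demonstration, not the calibrated model of the thesis.

        from scipy.integrate import solve_ivp

        def rhs(t, y, beta, alpha, omega, rho, sigma):
            """Illustrative compartmental model (assumed structure)."""
            S, V, Qt, Qp = y
            N = S + V + Qt + Qp
            dS = -beta * S * V / N                        # socially driven initiation
            dV = (beta * S * V / N + rho * Qt * V / N     # initiation plus socially driven relapse
                  - alpha * V)                            # cessation attempts
            dQt = alpha * (1 - sigma) * V - rho * Qt * V / N - omega * Qt
            dQp = alpha * sigma * V + omega * Qt          # permanent quitting
            return [dS, dV, dQt, dQp]

        params = (0.6, 0.2, 0.05, 0.3, 0.4)               # beta, alpha, omega, rho, sigma (assumed)
        sol = solve_ivp(rhs, (0, 200), [0.95, 0.05, 0.0, 0.0], args=params)
        print(sol.y[:, -1])                               # long-run compartment fractions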
  • Item (Open Access)
    Tracial simplex of every unital C*-algebra is a Choquet simplex
    (2023-12-08) Wang, Jiyu; Farah, Ilijas
    C*-algebras are norm-closed self-adjoint subalgebras of the bounded linear operators on a complex Hilbert space. A Choquet simplex is a special type of compact convex set with a unique representation property. The goal of this thesis is to give a self-contained and easily accessible proof of the classical fact that the set of tracial states of a unital C*-algebra is a Choquet simplex.
  • Item (Open Access)
    High-Dimensional Data Integration with Multiple Heterogeneous and Outlier Contaminated Tasks
    (2023-02) Zhong, Yuan; Xu, Wei; Gao, Xin
    Data integration is the process of extracting information from multiple sources and analyzing different related data sets simultaneously. The aggregated information can reduce the sample biases caused by low-quality data, boost the statistical power for joint inference, and enhance model prediction. Therefore, this dissertation focuses on the development and implementation of statistical methods for data integration. In clinical research, the study outcomes usually consist of various patients' information corresponding to the treatment. Since joint inference across related data sets can provide more efficient estimates than marginal approaches, analyzing multiple clinical endpoints simultaneously can lead to a better understanding of treatment effects. Meanwhile, the data from different studies are usually heterogeneous, with continuous and discrete endpoints. To alleviate computational difficulties, we apply the pairwise composite likelihood method to analyze the data. We show that the estimators are consistent and asymptotically normally distributed based on the Godambe information. Under high dimensionality, the joint model needs to select the important features to analyze the intrinsic relatedness among all data sets. Multi-task feature learning is widely used to recover this union support through the penalized M-estimation framework. However, the heterogeneity among different data sets may cause difficulties in formulating the joint model. Thus, we propose the mixed $\ell_{2,1}$ regularized composite quasi-likelihood function to perform multi-task feature learning. In our framework, we relax the distributional assumption on responses, and our results establish the sign recovery consistency and estimation error bounds of the penalized estimates. When data from multiple sources are contaminated by large outliers, multi-task learning methods suffer efficiency loss. We therefore propose robust multi-task feature learning by combining adaptive Huber regression tasks with mixed regularization. The robustification parameters can be chosen to adapt to the sample size, model dimension, and error moments while striking a balance between unbiasedness and robustness. We consider heavy-tailed distributions for multiple data sets that have bounded $(1+\omega)$th moment for some $\omega>0$. Our method is shown to achieve estimation consistency and sign recovery consistency. In addition, the robust information criterion can conduct joint inference on related tasks for consistent model selection.
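    The two building blocks named above have simple closed forms. As a notational aid only (the estimation framework itself is developed in the dissertation), the sketch below evaluates the mixed $\ell_{2,1}$ penalty of a coefficient matrix whose rows are shared across tasks, and the Huber loss with robustification parameter tau.

        import numpy as np

        def l21_norm(B):
            """Mixed l_{2,1} penalty: sum over features of the Euclidean norm across tasks."""
            return np.sum(np.sqrt(np.sum(B ** 2, axis=1)))

        def huber_loss(r, tau):
            """Huber loss: quadratic for |r| <= tau, linear beyond (robust to large outliers)."""
            r = np.abs(r)
            return np.where(r <= tau, 0.5 * r ** 2, tau * r - 0.5 * tau ** 2)

        B = np.array([[1.0, 2.0], [0.0, 0.0], [3.0, -1.0]])   # rows = features, columns = tasks
        print(l21_norm(B))                                    # the penalty zeroes out entire rows
        print(huber_loss(np.array([-0.5, 4.0]), tau=1.0))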
  • Item (Open Access)
    A Dependence Analysis Within the Context of Risk Allocations: Distributions on the Simplex and the Notion of Counter-Monotonicity
    (2023-08-04) Mohammed, Nawaf Mahmood Abdullah; Furman, Ed; Su, Jianxi
    The remarkable development of today's financial and insurance products demands sound methodologies for the accumulation and characterization of intertwined risks. As a result, modern risk management emerges as a by-product of querying two key foundations. The first is concerned with the aggregation of said risks into one randomness, which is consequently easily measured by a convenient risk measure and thereafter reported. The pooling is done from the different business units (BUs) composing the financial entity. The second pillar pertains to the opposite direction, which concerns itself with the allocation of the total risk. It seeks to accurately and concretely attribute the riskiness of each individual BU with respect to the whole. The aggregation process, on the one hand, has been fairly well studied in the literature, implemented in the industry and even embedded into the different accords. Risk capital allocation, on the other, is generally much more involved even when a specific risk measure inducing the allocation rule is assumed, let alone the case when a class of risk measures is considered. And unlike the aggregation exercise, which is moderately determined by the collection function, attributing capital is often more heavily influenced by the dependencies among the different BUs. In the literature, nonetheless, allocating capital can be categorized into two main camps. One is built upon the premise that the distribution of risk should satisfy certain regulatory requirements. This leads to an axiomatic approach which is quite often mathematically tractable yet ignores the economic incentives of the market. The other school of thought is economically driven, allocating risk based on a profit-maximizing paradigm. It argues that capital allocation should reflect the risk perception of the institution and not be imposed by any arbitrary measure, whose selection is dubious at best. However, the economic approach suffers from complex relations that lack clear definitive forms. At first glance the two perspectives may seem distant, as they arise naturally in their own contexts and are justified accordingly. Nonetheless, they can coincide for particular losses that enjoy certain peculiar model settings, which are described thoroughly in the chapters that follow. Surprisingly, the reconciliation comes in connection with the concept of trivial allocations. Triviality, in itself, attracts practitioners as it requires no discernible dependencies, leading to a convenient yet faulty method of attributing risk. Regardless, when used in the right context it unveils surprising connections and conveys useful conclusions. The intersection of the regulatory and profit-maximizing principles, for example, mainly utilizes a milder version of triviality (proportional) which allows for distinct, albeit few, probabilistic laws that accommodate both theories. Furthermore, when a stronger triviality (absolute) condition is imposed, it yields another intriguing corollary, specifically that of restrictive extreme laws commonly known for antithetic or counter-monotonic variates. To address the framework introduced hitherto, in the first chapter of this dissertation we present a general class of weighted pricing functionals. This wide class covers most of the risk measures and allocations found in the literature today and adequately represents their various properties. We begin by investigating the order characteristics of the functionals under certain sufficient conditions. The results reveal interactive relationships between the weight and the aggregation make-up of the measures, which consequently allow for effective comparison between the different risks. Then, upon imposing restrictions on the allocation constituents, we establish equivalent statements for trivial allocations that uncover a novel general concept of counter-monotonicity. More significantly, similar equivalences are obtained for a weaker triviality notion, paving the path to answer the aforementioned question of allocation reconciliation. The class of weighted functionals, though constructive, is too general to apply effectively to the allocation theories. Thus, in the second chapter, we consider the special case of the conditional tail expectation (CTE), defining its risk measure and the allocation it induces. These represent the regulatory approach to allocation, as CTE is arguably one of the most prominent and front-running measures used and studied today. On the other side, we consider the allocation arising from the economic context that aims to maximize profit subject to other market forces as well as individual perceptions. Both allocations are taken as proportions, as they are formed from compositional maps which relate to the standard simplex in either a stochastic or non-stochastic manner. Then we equate the two allocations and derive a general description of the laws that satisfy the two functionals. The Laplace transform of the multivariate size bias is used as the prime identifier delineating the general distributions and detailing subsequent corollaries and examples. While studying the triviality nature of allocations, we focused on the central element of stochastic dependence. We showed how certain models, extremal dependence for instance, enormously influence the attribution outcome. Thus far, nonetheless, our query started from the point of allocation relations, be they proportional or absolute, and ended in characterizations of the laws that satisfy those relations. Equally important, on the other hand, is deriving allocation expressions based on a priori assumed models. This task requires apt choices of general structures which convey the desired probabilistic nature of losses. Since constructing joint laws can be quite challenging, the compendium of probabilistic models relies heavily on leveraging the stochastic representations of known distributions. This feat allows not only for simpler computations but also for useful interpretations. Basic mathematical operations are usually deployed to derive different joint distributions with certain desirable properties. For example, taking the minimum yields the Marshall-Olkin distribution, addition gives the additive background model, and multiplication/division naturally leads to the multiplicative background model. Simultaneously, univariate manipulation through location, scale and power transforms adds to the flexibility of the margins while preserving the overall copula. In the last chapter of this dissertation, we introduce a composite of the Marshall-Olkin, additive and multiplicative models to obtain a novel multivariate Pareto-Dirichlet law possessing a profound composition capable of modelling heavy-tailed events descriptive of many extremal scenarios in insurance and finance. We study its survival function and the corresponding moments and mixed moments. We then focus on the bivariate case, detailing the intricacies of its inherent expressions. Finally, we conclude with a thorough application to the risk and allocation functionals.
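    To fix ideas on the CTE-induced allocation discussed in the second chapter, the following Monte Carlo sketch (an illustration with an assumed bivariate lognormal portfolio, not the thesis's analysis) computes CTE_alpha(S) = E[S | S > VaR_alpha(S)] and the allocation E[X_i | S > VaR_alpha(S)], whose proportions sum to one across business units.

        import numpy as np

        rng = np.random.default_rng(0)
        alpha = 0.95

        # Assumed toy portfolio: two correlated lognormal business-unit losses.
        z = rng.multivariate_normal([0.0, 0.3], [[1.0, 0.5], [0.5, 1.0]], size=200_000)
        X = np.exp(z)
        S = X.sum(axis=1)

        var_alpha = np.quantile(S, alpha)        # Value-at-Risk of the aggregate loss
        tail = S > var_alpha
        cte = S[tail].mean()                     # CTE_alpha(S) = E[S | S > VaR_alpha]
        contrib = X[tail].mean(axis=0)           # E[X_i | S > VaR_alpha], the CTE allocation
        print(cte, contrib, contrib / cte)       # the proportions sum to 1 by construction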
  • Item (Open Access)
    Assessing Control Strategies and Timelines for Mycobacterium Tuberculosis Elimination
    (2023-08-04) Abdollahi, Elaheh; Moghadas, Seyed
    Tuberculosis (TB) continues to inflict a disproportionate impact on Inuit communities in Canada, with reported rates of active TB that are over 300 times higher than those of Canadian-born, non-Indigenous populations. The Inuit Tuberculosis Elimination Framework aims to reduce the incidence of active TB by at least 50% by 2025, with the ultimate goal of eliminating it (i.e., reducing the incidence of active TB below 1 case per 1,000,000 population) by 2030. However, whether these goals can be achieved with the resources and interventions currently available has not been investigated. This dissertation formulates an agent-based model (ABM) of TB transmission dynamics and control to assess the feasibility of achieving the goals of the elimination framework in Nunavut, Canada. I applied the model to project the annual incidence of active TB from 2025 to 2040, taking into account factors such as time to case identification after developing active TB, contact tracing and testing, patient isolation and compliance, household size, and the potential impact of a therapeutic vaccine. In order to determine the potential reduction in TB incidence, various scenarios of treatment regimens were evaluated within the action plans for TB elimination. The scenario analyses demonstrate that the time to identification of active TB cases is a crucial factor in the attainability of the goals, highlighting the importance of investment in early case detection. The findings also indicate that the goal of a 50% reduction in the annual incidence of TB by 2025 is only achievable under best-case scenarios of combined interventions. However, TB elimination will likely exceed the timelines indicated in the action plans.
  • Item (Open Access)
    Bayesian Model Selection for Discrete Graphical Models
    (2023-08-04) Roach, Lyndsay; Gao, Xin
    Graphical models allow for easy interpretation and representation of complex distributions. There is an expanding interest in model selection problems for high-dimensional graphical models, particularly when the number of variables increases with the sample size. A popular model selection tool is the Bayes factor, which compares the posterior probabilities of two competing models. Consider data given in the form of a contingency table in which N objects are classified according to q random variables, where the conditional independence structure of these random variables is represented by a discrete graphical model G. We assume the cell counts follow a multinomial distribution with a hyper Dirichlet prior distribution imposed on the cell probability parameters. Then we can write the Bayes factor as a product of gamma functions indexed by the cliques and separators of G. In this thesis, we study the behaviour of the Bayes factor when the dimension of a true discrete graphical model is fixed and when the dimension increases to infinity with the sample size. We prove that the Bayes factor is strong model selection consistent for both decomposable and non-decomposable discrete graphical models. When the true graph is non-decomposable, we prove that the Bayes factor selects a minimal triangulation of the true graph. We support our theoretical results with various simulations. In addition, we introduce a variation of the genetic algorithm, called the graphical local genetic algorithm, which can be implemented on large data sets. We use a local search operator and a normalizing constant proportional to the posterior probability of the candidate models to determine optimal submodels, then reconstruct the full graph from the resulting subgraphs. We demonstrate the graphical local genetic algorithm's capabilities on both simulated data sets with known true graphs and a real-world data set.
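    For orientation only (a standard identity rather than a result specific to this thesis): for a decomposable graph G with hyper Dirichlet prior pseudo-counts $\alpha(\cdot)$, cell counts $n(\cdot)$ and N observations, the marginal likelihood factorizes over the cliques $\mathcal{C}(G)$ and separators $\mathcal{S}(G)$ as
    \[
    p(n \mid G) = \frac{\prod_{C \in \mathcal{C}(G)} h_C(n)}{\prod_{S \in \mathcal{S}(G)} h_S(n)},
    \qquad
    h_A(n) = \frac{\Gamma(\alpha_A)}{\Gamma(\alpha_A + N)} \prod_{i_A} \frac{\Gamma(\alpha(i_A) + n(i_A))}{\Gamma(\alpha(i_A))},
    \]
    where $i_A$ runs over the cells of the marginal table on $A$ and $\alpha_A = \sum_{i_A} \alpha(i_A)$; the Bayes factor between two graphs is then a ratio of two such products of gamma functions.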
  • Item (Open Access)
    Multivariate One-Sided Tests for Nonlinear Mixed-Effects Models with Incomplete Data
    (2023-08-04) Zhang, Yi-Xin; Liu, Wei
    Nonlinear mixed-effects (NLME) models are widely used in the analysis of longitudinal studies. The parameters in an NLME model typically have meaningful scientific interpretations, and these parameters may have some natural order restrictions such as being strictly positive. The problems of testing parameters with order restrictions are known as multivariate one-sided hypothesis testing. However, multivariate one-sided testing problems in NLME models have not been discussed thoroughly. In many longitudinal studies, the inter-individual variation can be partially explained by the time-varying covariates which, however, may be measured with substantial errors. Moreover, censoring and non-ignorable missingness in response are very common in practice. Standard testing procedures ignoring covariate measurement errors and/or response censoring/missingness may lead to biased results. We propose multiple imputation methods to address the foregoing data complication. The multiple imputation methods allow us to use existing "complete-data" hypothesis testing procedures for parameters with order restrictions. In this thesis, we propose testing statistics for the multivariate one-sided testing problems in NLME models with: (i) mis-measured covariates, (ii) both mis-measured covariates and left-censored response, and (iii) both mis-measured covariates and non-ignorable missing response, which are discussed in Chapters 2-4, respectively. Some asymptotic null distributions of the proposed test statistics are derived. The proposed methods are illustrated by HIV data examples and evaluated by simulation studies under different scenarios. Simulation results have shown the power advantage of the proposed testing statistics over the commonly used ones.
  • Item (Open Access)
    A Proposed Numerical Method for the 3D Wave Equation with Constant Speed
    (2023-08-04) Cayley, Omar; Gibson, Peter
    This thesis implements a new numerical scheme to solve the (classical) constant-speed wave equation in three-dimensional space. Currently existing methods (even if more restrictive) rely on time iteration and accumulate error at each time step; the new method is iteration-free, making it a good choice for applications requiring accurate results at large times. Numerical experiments and error analysis reveal the accuracy of the scheme. The principal conclusion is that the method, based on the Radon transform, merits consideration, and we propose that it be developed and counted among the standard methods implemented in computational software for engineering and industrial applications.
  • Item (Open Access)
    Linear Spectral Unmixing Algorithms for Abundance Fraction Estimation in Spectroscopy
    (2023-03-28) Oh, Changin; Moyles, Iain
    Fluorescence spectroscopy is commonly used in modern biological and chemical studies, especially for cellular and molecular analysis. Since the measured fluorescence spectrum is the sum of the spectrum of each fluorophore in a sample, a reliable separation of fluorescent labels is the key to the successful analysis of the sample. A technique known as linear spectral unmixing is often used to linearly decompose the measured fluorescence spectrum into a set of constituent fluorescence spectra with abundance fractions. Various algorithms have been developed for linear spectral unmixing. In this work, we implement the existing linear unmixing algorithms and compare their results to discuss their strengths and drawbacks. Furthermore, we apply optimization methods to the linear unmixing problem and evaluate their performance to demonstrate their capabilities of solving the linear unmixing problem. Finally, we denoise noisy fluorescence emission spectra and examine how noise may affect the performance of the algorithms.
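    Linear spectral unmixing in its simplest constrained form is a non-negative least-squares problem: the measured spectrum y is modelled as E a, where the columns of E hold the reference (endmember) spectra and a >= 0 are the abundance fractions. The sketch below uses synthetic Gaussian-shaped endmembers and scipy's NNLS solver purely as an illustration; it is not one of the specific algorithms compared in the thesis.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        wavelengths = np.linspace(400, 700, 120)

        def peak(center, width):
            """Synthetic Gaussian-shaped emission spectrum (illustrative endmember)."""
            return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

        E = np.column_stack([peak(480, 25), peak(530, 30), peak(610, 35)])  # endmember matrix
        true_a = np.array([0.2, 0.5, 0.3])
        y = E @ true_a + 0.01 * rng.standard_normal(len(wavelengths))       # noisy measurement

        a_hat, _ = nnls(E, y)          # non-negative least squares: min ||E a - y||, a >= 0
        a_hat /= a_hat.sum()           # renormalize to abundance fractions summing to one
        print(true_a, a_hat)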
  • Item (Open Access)
    Retirement Spending under a Habit Formation Model
    (2023-03-28) Kirusheva, Snezhana; Salisbury, Thomas S.; Huang, Huaxiong
    In this thesis we consider the problem of optimizing lifetime consumption under a habit formation model. Our work differs from previous results by incorporating mortality and pension income, using a fixed rather than a variable asset allocation, and adopting habit into the utility multiplicatively rather than additively. Lifetime utility of consumption makes the problem time-inhomogeneous because of the effect of ageing. Considering habit formation means increasing the dimension of the stochastic control problem, because one must track smoothed consumption using an additional variable, habit. Including exogenous pension income means that we cannot rely on a scaling transformation to reduce the dimension of the problem as in earlier work; we therefore solve it numerically, using a finite difference scheme and then a static programming approach. In the first part of the thesis we explore how consumption changes over time based on habit if the retiree follows the optimal strategy, and in the second part if the retiree follows a greedy strategy. We also explore how the optimal consumption and asset allocation change when pension income varies. Finally, we answer the question of whether it is reasonable to annuitize wealth at the time of retirement by varying parameters such as the asset allocation and the smoothing factor.
  • Item (Open Access)
    Results on R-Diagonal Operators in Bi-Free Probability Theory and Applications of Set Theory to Operator Algebras
    (2023-03-28) Katsimpas, Georgios; Skoufranis, Paul
    The contents of this dissertation lie in the branch of pure mathematics known as functional analysis and are focused on the theory of bi-free probability and on the interplay between set theory and the field of operator algebras. The material is comprised of two main parts. The first part of this dissertation investigates applications of set theory to operator algebras and is further divided into two chapters. The first chapter is focused on the Calkin algebra Q(H) and explores the class of C*-algebras which embed into it. We prove that under Martin's axiom every C*-algebra of density character less than the cardinality of the continuum embeds into the Calkin algebra and, moreover, we show that the assertion "every C*-algebra of density character less than the continuum embeds into Q(H)" is independent of ZFC. In the second chapter we investigate separably representable AF operator algebras from a descriptive set-theoretic viewpoint. Contrary to the case of separable AF C*-algebras, which are classified up to isomorphism by their K-theory, we show that the canonical isomorphism relations for separable, non-self-adjoint AF operator algebras are not classifiable by countable structures. The second part of this dissertation focuses on the theory of bi-free probability. This part is further divided into four chapters, the first of which concerns the development of the theory of R-diagonal operators in the setting of bi-free probability theory. We define bi-R-diagonal pairs based on certain alternating cumulant conditions and give a complete description of their joint distributions in terms of their invariance under multiplication by bi-Haar unitary pairs. The final three chapters of this manuscript concern the development of non-microstates bi-free Fisher information and entropy with respect to completely positive maps. By extending the operator-valued bi-free structures and allowing the implementation of completely positive maps into bi-free conjugate variable expressions, we define notions of Fisher information and entropy which generalize the corresponding notions of entropy in the bi-free setting. As an application we show that minimal values of the bi-free Fisher information and maximal values of the non-microstates bi-free entropy are attained at bi-R-diagonal pairs of operators.
  • Item (Open Access)
    Modified BIC for Model Selection in Linear Mixed Models
    (2022-12-14) Lai, Thi Hang Thi; Gao, Xin
    Linear mixed effects models are widely used to analyze clustered and longitudinal data. Model selection in linear mixed models is more challenging than in linear models, as the parameter vector in a linear mixed model includes both fixed effects and variance component parameters. When selecting the variance components of the random effects, the variances of the random effects must be non-negative, and therefore parameters may lie on the boundary of the parameter space. In this dissertation, we propose a modified BIC for model selection in linear mixed effects models that can handle the case in which variance components lie on the boundary of the parameter space. We first derive a modified BIC to choose the random effects assuming that the random effects are independent. Then, we propose a modified BIC to choose the random effects when they are assumed to be correlated. Lastly, we propose a modified BIC to choose both fixed effects and random effects simultaneously. Through simulations, we found that the modified BIC performs well and outperforms the regular BIC in most cases. The modified BIC is also applied to a real data set to choose the most appropriate linear mixed model.
  • Item (Open Access)
    Galois Representation on Elliptic Curve
    (2022-12-14) Min, Hyeck Ki; Bergeron, Nantel
    This thesis explores the orders of Galois representations associated with torsion subgroups of elliptic curves. We review the literature on elliptic curves and Serre's theorem. We describe a field formed by adjoining a torsion subgroup of an elliptic curve and show that the extension is finite and algebraic. Next, we construct a Galois group from the extension and use its relationship with a general linear group to find the possible values of the order of the Galois group. The order depends on the field over which the elliptic curve is defined, the reducibility of f(x), and the structure of the torsion subgroup. This approach provides the same insight as Serre's theorem, which gives an upper bound on the order of the Galois representation of an extended field obtained by adjoining a subgroup of points of an elliptic curve.
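    For context, the order bound referred to above comes from the injectivity of the mod-$n$ representation: for an elliptic curve $E$ over a field $K$ whose characteristic does not divide $n$, the action of the Galois group on the $n$-torsion gives an embedding
    \[
    \rho_{E,n}\colon \operatorname{Gal}\big(K(E[n])/K\big) \hookrightarrow \operatorname{GL}_2(\mathbb{Z}/n\mathbb{Z}),
    \]
    so the order of the Galois group divides $\lvert \operatorname{GL}_2(\mathbb{Z}/n\mathbb{Z}) \rvert$; for a prime $\ell$ this bound is $\lvert \operatorname{GL}_2(\mathbb{F}_\ell) \rvert = (\ell^2 - 1)(\ell^2 - \ell)$, e.g. $48$ for $\ell = 3$.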
  • Item (Open Access)
    Modeling the Impact Of Environmental Factors, Diapause and Control Strategies on Tick and Tick-Borne Disease Dynamics
    (2022-12-14) Tosato, Marco; Wu, Jianhong
    In this thesis, we focus on mathematical formulation and analysis for a specific vector responsible for a wide variety of diseases: ticks. Due to climate change, various tick species are rapidly spreading northward from the United States and have increasingly affected the Canadian population through a variety of tick-borne diseases, including Lyme disease. In order to address this problem, the Public Health Agency of Canada has dedicated an entire section of its website to discussing the risks and the possibility of preventing, recognising and treating tick bites. It is therefore important to analyse tick and tick-borne disease dynamics in order to better understand, study and prevent possible new outbreaks. We aim to achieve this by using mathematical and epidemiological tools including dynamical systems, ordinary and delay differential equations, basic reproduction numbers and Hopf bifurcation theory. For this purpose, we produce three different models to study the effect of physiological features such as diapause, control strategies and environmental conditions on tick and tick-borne disease persistence and periodicity. The first model we propose is a two-patch tick population model in which we show how the tick reproduction number $R_{T,c}$ affects the long-term behaviour of the tick population and how it might lead to extinction, convergence to a coexistence equilibrium, or a periodic solution. The second model is a single tick population model with switching delays, in which we prove how oscillations of different frequencies caused by two delays might produce multi-cycle periodic solutions. The last model we analyse is a tick-host model including host control strategies. In this work, we find that there are situations in which the improper application of repellent and acaricide may lead to unexpected results and increase disease spread instead of reducing it.
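    Purely as an illustration of the two-patch setup (the thesis models are stage-structured and include delays, which this sketch omits), a minimal two-patch population model couples local logistic recruitment and mortality through dispersal between patches; the model structure and all parameter values below are assumptions.

        from scipy.integrate import solve_ivp

        def two_patch(t, T, b, mu, K, d12, d21):
            """Assumed minimal two-patch model: logistic recruitment, mortality, dispersal."""
            T1, T2 = T
            dT1 = b * T1 * (1 - T1 / K) - mu * T1 - d12 * T1 + d21 * T2
            dT2 = b * T2 * (1 - T2 / K) - mu * T2 - d21 * T2 + d12 * T1
            return [dT1, dT2]

        # Assumed parameters: birth rate, mortality, carrying capacity, dispersal rates.
        sol = solve_ivp(two_patch, (0, 500), [10.0, 0.0],
                        args=(0.4, 0.1, 1000.0, 0.02, 0.01))
        print(sol.y[:, -1])            # long-term densities in the two patches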