Viswanath (Sri) Srikanth, Senior Advanced Analytics Manager & Senior Data Scientist, Cisco
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
Large, complex purchases at organizations are often made with input from multiple individuals, each of whom evaluates the solution from a different perspective. Identifying all of these influencers within a given organization is critical for ensuring marketing and messaging consistency and for providing relevant answers to their questions. BayesiaLab allowed the Marketing Analytics team at Cisco to identify these buying groups among its customer organizations and subsequently target them as part of its marketing campaigns to serve them and help accelerate their purchase journeys.
Viswanath (Sri) Srikanth is the Advanced Analytics Manager for Cisco’s Global Marketing Analytics team, Cisco Marketing. During his tenure at Cisco, he led data initiatives to understand and define customer engagement, marketing attributes, customer journeys, and more. His work has received multiple industry recognitions, including the ANNY Award in 2017, the ANA Award in 2018, and the Highly Commended DRUM Citation in 2018. Prior to joining Cisco, Sri worked at IBM and, among other things, chaired the creation of an industry standard for customer data collection at the W3C standards organization.
Mohsen Hosseini, Ph.D., Assistant Professor, Industrial Engineering Technology, University of Southern Mississippi
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
The ripple effect can occur when a supplier base disruption cannot be localized and consequently propagates downstream through the supply chain (SC), adversely affecting performance. While stress-testing of SC designs and assessment of their vulnerability to disruption in a single-echelon, single-event setting is desirable and indeed critical for some firms, modeling the ripple effect's impact in multi-echelon, correlated-events systems is becoming increasingly important. Notably, assessing the ripple effect in multi-stage SCs is particularly challenged by the need to consider both vulnerability and recoverability capabilities at individual firms in the network. We construct a new model based on the integration of a Discrete-Time Markov Chain (DTMC) and a Dynamic Bayesian Network (DBN) to quantify the ripple effect. We use the DTMC to model the recovery and vulnerability of the supplier.
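As a rough illustration of the DTMC component described above, the sketch below models a single supplier cycling through three hypothetical states. The states, transition probabilities, and the power-iteration approach are all illustrative assumptions, not details from the talk.

```python
# Hypothetical three-state DTMC for one supplier's disruption dynamics.
# All states and probabilities below are invented for illustration.
STATES = ["operational", "disrupted", "recovering"]

# P[i][j] = probability of moving from state i to state j in one period.
P = [
    [0.95, 0.05, 0.00],  # operational: mostly stays operational
    [0.00, 0.60, 0.40],  # disrupted: begins recovery with prob 0.4
    [0.50, 0.10, 0.40],  # recovering: returns to operation with prob 0.5
]

def step(dist, P):
    """Advance a state distribution one period: dist' = dist @ P."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def stationary(P, iters=500):
    """Approximate the stationary distribution by power iteration."""
    dist = [1.0, 0.0, 0.0]  # start fully operational
    for _ in range(iters):
        dist = step(dist, P)
    return dist

pi = stationary(P)
print({s: round(p, 3) for s, p in zip(STATES, pi)})
```

The stationary distribution gives the long-run fraction of time the supplier spends in each state, which is one simple way such a chain can feed vulnerability and recovery parameters into a larger DBN.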
Yong Zhang, Ph.D., Senior Scientist, Procter & Gamble
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
Successful product innovation relies heavily on multiple types of product tests. These tests include a virtual online concept test at an earlier stage, a blind and identified usage test of prototype products in the middle, and a household panel survey once the product is in the market. Currently, most data analytics are conducted as separate analyses of sporadic and piecemeal data from different tests, and product development decisions are often made in team meetings that qualitatively summarize these isolated analyses. We developed a Bayesian framework based on a Bayesian Belief Network (BBN) model to systematically aggregate data from different tests and quantitatively fuse information from different sources for better product innovation and consumer understanding. The developed methods can be used either to identify a “Body of Evidence” from all available data sources or to conduct cross-inference from one data source to another.
Yong Zhang, Ph.D., Senior Scientist, Procter & Gamble
Dr. Yong Zhang leverages Bayesian data and modeling science to develop a strategy for product design, manufacturing, storage, and transportation across P&G to improve consumers’ quality of life and drive positive influence on the environment and society under different climate change scenarios. He develops modeling and simulation methods and tools through Front End Innovation projects to enable and promote the capability across P&G for breakthrough consumer understanding and product innovation. The methods and tools can be used to extract and integrate information from a variety of data sources to find a “Body of Evidence” for consumer and product research based on Nonparametric Bayesian statistics and deep learning algorithms.
Jacqueline MacDonald Gibson, Ph.D., Chair, Department of Environmental and Occupational Health, School of Public Health, Indiana University
Nationwide, more than 42.5 million Americans obtain their drinking water from private wells that are not regulated by the U.S. Safe Drinking Water Act. Recent research has shown that in some areas, the risk of lead in drinking water in houses relying on private wells is comparable to that in Flint, Michigan, during the highly publicized water crisis in 2015. Lead can cause irreversible neurological damage in children, leading to decreases in IQ, poor performance in school, and increased risk of juvenile delinquency. Yet, research has shown few private well owners are aware of the contamination risk, and few get their water tested for lead or other contaminants. This presentation will describe the development of a Bayesian network model to predict households where children are at the greatest risk of exposure to lead from drinking private well water. The model is based on a data set of 182,821 children’s blood lead test results obtained from the North Carolina Childhood Blood Lead Poisoning Prevention Program. These records were matched with data on drinking water sources at each household obtained from county tax records, data on characteristics of each house (also from tax records), and neighborhood demographic information (from the U.S. Census). The model can be used to predict the probability that a child in a specific house has elevated blood lead as a result of exposure to lead in private well water, conditional on characteristics of the house and neighborhood. We plan to develop a web-based model version and train local health departments on the use of the model so that they can use it to prioritize outreach programs, encouraging those relying on private wells to test their water for lead and to install filters when lead is detected.
Jackie MacDonald Gibson has a multi-disciplinary background in mathematics and engineering that she applies to risk assessment and policy problems. Before her appointment as Chair of the Department of Environmental and Occupational Health at Indiana University, she was a professor in the Department of Environmental Sciences and Engineering at the University of North Carolina, Chapel Hill. Her prior experience also includes positions as Associate Director of the Water Science and Technology Board, U.S. National Research Council. She was also a Senior Engineer at the RAND Corp. She holds Ph.D. degrees in Engineering and Public Policy and Civil and Environmental Engineering from Carnegie Mellon University, an M.S. in Civil and Environmental Engineering from the University of Illinois at Urbana–Champaign, and a B.A. in mathematics from Bryn Mawr College.
Steven F. Wilson, Ph.D., Standpoint Decision Support Inc.
Bayesian networks are commonly used to address "big data" problems and can also model expert knowledge in the absence of any data. Between these extremes lies a broad class of small data problems, which I define as those where causal explanations are sought from observational datasets with small sample sizes relative to the number of dimensions. Many of these problems are central to ongoing, important policy debates, but machine learning techniques and standard statistical analyses are generally unhelpful. Using examples from endangered species policy development, I present an analysis workflow based on causal identification, model instantiation with informed priors, and Bayesian updating to generate models that blend existing knowledge and available data. Such models can serve an important role in decision-making where policy alternatives cannot be tested experimentally and/or where datasets are constrained.
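The informed-prior updating step in the workflow above can be illustrated with a minimal conjugate example. The Beta-Binomial model and all numbers below are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of Bayesian updating with an informed prior, assuming a
# Beta-Binomial model for a single probability (e.g., an annual survival rate).

def update_beta(alpha, beta, successes, failures):
    """Conjugate update: Beta(alpha, beta) prior + binomial data -> Beta posterior."""
    return alpha + successes, beta + failures

# Informed prior: expert belief that survival is around 0.8, encoded as Beta(8, 2).
a, b = 8.0, 2.0

# Small observational dataset: 12 survivals out of 15 tracked individuals.
a, b = update_beta(a, b, successes=12, failures=3)

posterior_mean = a / (a + b)
print(round(posterior_mean, 3))  # (8 + 12) / (8 + 2 + 15) = 20/25 = 0.8
```

This is the essence of blending existing knowledge with limited data: the prior carries the expert's belief, and the small sample shifts it only as far as the evidence warrants.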
Steve Wilson has over 25 years of experience working at technical and professional levels in strategic and operational planning for public and private-sector clients. He specializes in quantitative approaches to decision support and policy analysis. Steve holds a Ph.D. in wildlife ecology from the University of British Columbia in Vancouver.
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
Javad Roostaei, Ph.D., University of North Carolina at Chapel Hill
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
During the past year, per- and poly-fluoroalkyl substances (PFAS), including GenX, have been detected in more than 75% of 769 private water supply wells located near the Chemours Company Fayetteville Works in North Carolina. GenX concentrations exceeded the North Carolina provisional public health goal of 140 ng/L in nearly 25% of the wells. High geographic variation in PFAS occurrence has been observed in multiple areas; properties with highly contaminated wells neighbor properties where no PFAS have been detected. The causes of this variation are not understood. A wide variety of factors—from fine-scale geologic heterogeneity to well depth and age to wind direction relative to the Chemours facility—could influence contamination risk. However, the relative importance of such factors and how they interact to influence whether a specific drinking water well will be contaminated are not understood. This presentation will describe a detailed spatial data set and a machine-learned Bayesian network model for risk assessment of GenX—one type of PFAS—in private drinking water wells in North Carolina. The accuracy of the model has been verified by 10-fold cross-validation. The Bayesian network model will be useful for predicting which unsampled wells may be at risk, not only in North Carolina but also potentially in other locations struggling with PFAS contamination of groundwater.
Javad Roostaei, Ph.D., Postdoctoral Research Associate at the University of North Carolina at Chapel Hill
Dr. Roostaei received his Ph.D. in Civil and Environmental Engineering, along with a master’s degree in Computer Science, from Wayne State University in Detroit, Michigan. Currently, he is working as a machine learning postdoctoral research associate at the University of North Carolina at Chapel Hill. His research involves developing Bayesian network models for environmental risk evaluation in private water wells. He is applying machine learning methods to a variety of public health and civil and environmental engineering problems, ranging from emerging contaminants and lead in drinking water to the development of harmful algal blooms in surface waters.
V Anne Smith, Ph.D., & Edwin Hui, University of St Andrews
Biological systems consist of interlinked interacting elements, and unravelling these interactions to understand system behaviour, such as neuronal activity during behaviour, gene regulation in response to cancer treatment, and ecological shifts in a changing climate, is of great interest to biologists. Bayesian networks hold promise for revealing interactions in these complex biological systems due to their ability to simultaneously model multiple types of interactions from the heterogeneous and noisy data common in biological data collection. Here, we first present an overview of research in the Smith lab advancing Bayesian network algorithms for structure discovery from observational data in three types of biological systems: neuronal systems, genetic systems, and ecological systems. Each system presents its own particulars of data kind and availability, as well as differing goals for interpretation: what the biological researcher cares to learn from the model. We briefly discuss algorithm developments for handling features such as small data volumes, differing data distributions, and making use of spatially explicit observations, then concentrate on the types of biological discovery Bayesian networks support in each system. In neuronal systems, the perspectives of the network scientist and the biologist are most congruent, as entire networks are of interest: their structure changing during behaviour and informing features of neural control of behaviour. In contrast, genetic researchers are more often interested in identifying only a small set of genes or pathways that can direct future experimental research, such as into mechanisms of drug resistance. Ecologists can make use of both detailed features, such as identification of 'keystone' interacting species, and networks as inputs to further analysis of species constellations, such as identifying groups that respond similarly to environmental gradients.
We finish with a case study of applying Bayesian networks to rocky shore ecosystems, looking at networks of interactions in areas of differing species composition.
V Anne Smith is on the Biology faculty at the University of St Andrews in Scotland, where she runs an integrative computational biology research programme. She traces her dual interest in biology and computation back to her undergraduate days, with a degree in Biology with a Mathematics minor from the College of William and Mary. Her Ph.D. work at Indiana University examined animal behaviour from a complex systems perspective. She has since researched areas as diverse as neuroscience, genetics, cancer, and ecology. She is active in decision-making bodies for several Scottish and UK organisations in both biology and computer science.
Edwin Hui is a Master's student from the University of St Andrews, where he is currently focusing on applying machine learning algorithms to study community ecology. Through the use of different types of machine learning techniques, he hopes to bring a new perspective to the study of ecological dynamics.
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
Kurt S. Schulzke, JD, CPA, CFE, University of North Georgia
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
In 2010, Robert M. Lloyd wrote, “In an ideal world, a court would be able to hear the evidence, estimate the plaintiff’s damages, and quantify its own confidence that the estimate was accurate.” This presentation argues that Bayesian networks can move the legal world very close to Lloyd’s ideal. Using an actual court case, this presentation demonstrates how expert witnesses can use Bayesian belief networks (BBNs) to estimate economic damages with "reasonable certainty" as required by case law, challenges the mythology that point estimates offer higher certainty than value ranges, and illustrates how courts, arbitrators, and negotiators can use BBNs to "quantify their own confidence" in damages estimates.
Kurt S. Schulzke, JD, CPA, CFE, Associate Professor of Accounting & Law, University of North Georgia. Email: kurt.schulzke@ung.edu
Kurt Schulzke, JD, CPA, CFE, teaches forensic accounting and audit analytics at the University of North Georgia. He has published on revenue recognition, materiality, expert witnessing, economic damages, and business valuation through a Bayesian networks lens in a variety of outlets, including the Columbia Journal of Transnational Law, Vanderbilt Journal of Transnational Law, Journal of Forensic Accounting Research, Tennessee Journal of Business Law, and The Value Examiner. With an M.S. in Applied Statistics from Kennesaw State University, he is equally adept as counsel, expert witness, or neutral in valuation-related matters.
Zabi Ulla S, Sr. Director Advanced Analytics, Course5 Intelligence
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
The discussion focuses on knowledge elicitation in causal modeling, especially transfer learning, and how Course5 uses these concepts to solve complex problems in marketing measurement and optimization. Course5 marketing applications are primarily built around Bayesian science, leveraging BayesiaLab at various levels in their development. Using the specific application of BayesiaLab in its solution ‘Integrated Marketing Measurement,’ I demonstrate how Course5 infers knowledge from one model and transfers it into other related models to handle various challenges in marketing measurement today. The solution is built exclusively on BayesiaLab’s Bayesian inference engine, and I will discuss the flexibility and agility of the technology along with the robustness of Bayesian inference modeling.
Zabi Ulla S, Sr. Director Advanced Analytics, Course5 Intelligence
Zabi has 15 years of experience in data analytics, machine learning, and applied artificial intelligence, primarily in the business consulting domain. He has worked for marquee clients such as Lenovo, Intel, Microsoft, YouTube, Del Monte, Wrigley, and T-Mobile, solving complex business problems related to customer monetization and marketing optimization.
In his current role, Zabi leads the advanced analytics and data science practice at Course5 Intelligence. In previous roles at other companies, he gained experience designing and executing machine learning models and built teams to develop niche solutions.
Zabi comes from an applied statistics background. He has a master's degree in statistics and was recently named a Top-40 data scientist in India by Analytics India Magazine. He has a keen interest in machine reasoning, causal inference, and experimental design, along with machine learning and data science.
John F. Carriger, U.S. Environmental Protection Agency
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
Bayesian networks are useful for generating insights from survey data on workforce satisfaction and beyond. Employee viewpoint survey interpretations may be supported by data-driven probabilistic graphical support tools. The capabilities of a Bayesian network survey analysis are demonstrated and explored through an initial analysis of the 2018 Federal Employee Viewpoint Survey (FEVS) response data from personnel at the U.S. Environmental Protection Agency (EPA). The FEVS is a voluntary survey that has been administered annually to federal employees across the U.S. since 2002. A focused analysis of EPA data was conducted to examine the insights gained from applying Bayesian networks. First, EPA data were isolated from the rest of the federal employee responses. The EPA survey response data were then partitioned three ways for separate analyses: all data from the EPA, data from only the Office of Research and Development personnel, and data from all personnel except the Office of Research and Development. The analysis used the survey's core questions, which cover viewpoints on workplace experiences, supervision, and employee satisfaction; demographics and work/life balance questions were not included. Each of the three partitions of responses was separately analyzed with Bayesian networks and then compared. An exploratory analysis was first conducted to examine the importance of each variable based on its contribution to the joint probability of a tree-based network. Node force statistics provided quantitative measures of each response question's centrality in the model, and the visual relationships and arc force measures were used to examine associations. Next, supervised learning was conducted to examine the relationships between the core questions and responses to a target question.
The resulting model was used with dynamic profile and target optimization tree methods to develop a priority order and pathway proposals for maximizing a positive response to the target question. Additional approaches for generating insights with the survey data, including clustering of survey questions, were also examined but not fully implemented in this exploratory analysis. Advances in Bayesian network methods for handling large and complex data sets from surveys can allow for clear insights from multivariate survey data and clarification of potential pathways for optimization under uncertainty.
EPA Disclaimer: The views expressed in this presentation are those of the authors and do not necessarily represent the views or policies of the U.S. Environmental Protection Agency.
U.S. Environmental Protection Agency, Office of Research and Development, National Risk Management Research Laboratory, Land and Materials Management Division, Life Cycle and Decision Support Branch, Cincinnati, OH USA
U.S. Environmental Protection Agency, Office of Research and Development, National Risk Management Research Laboratory, Land and Materials Management Division, Remediation and Technology Evaluation Branch, Cincinnati, OH USA
U.S. Environmental Protection Agency, Office of Research and Development, National Risk Management Research Laboratory, Water Systems Division, Water Resources Recovery Branch, Cincinnati, OH USA
John Carriger is a research scientist at the U.S. Environmental Protection Agency’s Office of Research and Development in Cincinnati, Ohio. John has a marine science Ph.D. from the College of William and Mary. John’s research interests include applying risk assessment, decision analysis, and weight of evidence tools to environmental problems.
Annie M. Lasway, MPH, PMP, CPC, MITRE Corporation, McLean, VA, USA
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
Feature selection is a crucial and challenging task in statistical and probabilistic modeling, and many studies have tried to optimize and standardize this process for various types of data. Building and interpreting a model that takes all variables into consideration is difficult. Historically, feature selection has been based on provider knowledge and experience. As Judea Pearl noted in his book on causality, ‘leaving causality in the hands of intuition and judgment is an impediment in research’. Advances in Bayesian network research are tied closely to causality and the seminal work of Judea Pearl. Methods such as Directed Acyclic Graphs and Bayesian networks provide techniques that enable the establishment of causation from association when working with non-experimental data. Variable selection is even more important in high-dimensional datasets, where it is often difficult to determine which variables are relevant. In high-dimensional observational data, the causal impact of treatment can be isolated by blocking back-door paths in order to identify the Markov blanket. By holding comorbidities ceteris paribus, we can isolate the treatment effect; the observed differences are then identified as the treatment effect. The Markov blanket of treatment is a group of covariates that blocks the effect of other covariates on treatment. Markov blankets include direct causes, which are the parents and co-parents, as well as the effects, which are the children. Parents in the Markov blanket (pMB) can be determined by analyzing independent variables that occur before treatment; this removes covariates on the causal path from treatment to outcome, which tend to be the complications associated with treatment. The pMB in this study is determined using likelihood ratios (LRs). An LR is the ratio of two conditional probabilities, a probability concept that can be used to develop predictive data algorithms.
In probability theory, LRs indicate how useful one element is in predicting the occurrence of an event; they measure the association between each predictor and an outcome variable. To demonstrate the efficiency of LASSOSql, the model will be applied to real data. This study uses causal methods to predict the optimal Revisit Interval (RVI) for patients with diabetes using claims data from the Centers for Medicare and Medicaid Services (CMS). Currently, revisits are scheduled by providers based on heuristics and experience, with large variability and little empirical evidence. Yet evidence suggests that the RVI can be safely lengthened for many patients without decrements in quality or outcomes. Conversely, a long RVI for diabetic patients who need to be seen sooner can lead to complications of high blood sugar, which can affect various cells and organs in the body; these may include kidney and eye damage, which could result in blindness, or an increased risk of heart disease or stroke. The proposed methodology uses LRs to assess the chances that a patient will have kidney disease given an RVI and the patient's comorbidities.
The dataset contains data from the CMS Limited Dataset from 2016.
The average RVI is calculated across the entire sample. A long RVI is defined as an RVI more than 1 standard deviation above the average; a short RVI is one more than 1 standard deviation below it. Using standard deviations instead of fixed cut-off values increases the sensitivity of the model. LRs are calculated as the prevalence of the risk factor in the presence of a positive outcome over the prevalence of the same risk factor in the absence of the outcome. A sensitivity analysis is then conducted. Conditional probabilities are used to reduce the number of comorbidities needed in the predictive analysis.
LASSOSql is an effective methodology for predicting optimal RVIs for chronic conditions.
We assume that the data points are independent of each other between and within groups.
This study focuses on determining the pMB for the impact of RVI and every comorbidity on kidney disease in order to optimize routine RVIs for primary care. The results could help maximize access to care for diabetic patients and thereby inform practice management and policy standards related to RVIs.
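The likelihood-ratio calculation described in this abstract can be sketched concretely. The counts below are hypothetical, invented purely to show the arithmetic; they are not from the CMS data.

```python
# Illustrative likelihood-ratio (LR) computation for a single risk factor,
# using made-up counts (not CMS data).
# LR = P(factor | outcome present) / P(factor | outcome absent).

def likelihood_ratio(factor_and_outcome, outcome_total,
                     factor_and_no_outcome, no_outcome_total):
    """Prevalence of the risk factor among positives over its prevalence among negatives."""
    p_given_outcome = factor_and_outcome / outcome_total
    p_given_no_outcome = factor_and_no_outcome / no_outcome_total
    return p_given_outcome / p_given_no_outcome

# Hypothetical cohort: a long RVI was observed in 60 of 200 patients who
# developed kidney disease, and in 30 of 300 patients who did not.
lr_long_rvi = likelihood_ratio(60, 200, 30, 300)
print(lr_long_rvi)  # ~= 3.0, i.e., 0.30 / 0.10
```

An LR above 1 means the factor (here, a long RVI) is more prevalent among patients with the outcome, so observing it raises the odds of the outcome by that factor.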
Annie M. Lasway, MPH, PMP, CPC, MITRE Corporation, McLean, VA, USA
Farrokh Alemi, PhD, George Mason University (GMU), Fairfax, VA, USA
Annie Lasway is a Senior Health Systems Analyst at the MITRE Corporation supporting Public Sector Healthcare Research. Before joining MITRE, Ms. Lasway worked at Altarum Institute and the National Institutes of Health. Annie holds a bachelor’s degree in Community Health from the University of Maryland College Park and a Master of Public Health with a concentration in Global and Community Health from George Mason University (GMU). She is a certified Project Management Professional (PMP) and a Certified Professional Coder (CPC). She is currently pursuing her Doctoral Degree in Health Services Research with a concentration in Informatics at GMU.
Michael Thompson, Ph.D., The Procter & Gamble Company
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
North Carolina Biotechnology Center, October 10–11, 2019
The 7th Annual BayesiaLab Conference was a great success. Participants from across North America, Europe, and Asia joined us for this event at the North Carolina Biotechnology Center in Durham. For those who missed the event, recordings of all presentations and the corresponding slides are available in this archive.
Emmanuel Keita
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
At the heart of companies' information systems, close to decision-makers and operational staff, my professional experience led me to observe that Business Intelligence (BI) did not provide a satisfactory answer to their real problems. What had changed in 20 years? Rainbow colors on the screens, the democratization of OLAP cubes, calculation speed… exciting — and also a nightmare — for those in the IT department. But what is the real added value for decision-makers and workers? Curious about the potential of AI, I had just been trained in Python, but as a statistician, I saw nothing intellectually satisfying in my discovery of machine learning. Then, a guy talked to me about "Bayesian networks."
That will be the starting point for this talk. An escapade into "augmented intelligence": Bayesian networks and BayesiaLab, encounters, change of relationship to the data, to beliefs, to the world, and to myself… But, who am I, really?
Emmanuel Keita has a multi-disciplinary background in mathematics and information systems management:
"Augmented Intelligence Evangelist" — he popularizes decision-making discernment to different audiences: from high school to Google Zurich events, he focuses on human discernment in decision-making: not all is data!
National defense auditor (France), Emmanuel was a contributor to the "Villani report" — France's AI strategy — (03/2018) and the author of the article "Le potentiel des réseaux bayésiens" in Défense & Stratégie Internationale (04/2019).
Data manager and statistician consultant in the pharmaceutical industry (GSK), in charge of statistics at Fujifilm, France.
Head of the Business Intelligence division of Groupe Soufflet (agri-food industry).
Emmanuel recently wrapped up his Sundiata venture to pursue new challenges.
Dr. Lionel Jouffe, Bayesia S.A.S.
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
Dr. Lionel Jouffe explains key innovations in BayesiaLab 9, including Structural Priors Learning, Data Perturbation for Structural Learning, and Most Relevant Explanation.
Dr. Lionel Jouffe is co-founder and CEO of France-based Bayesia S.A.S. Lionel holds a Ph.D. in Computer Science from the University of Rennes and has worked in Artificial Intelligence since the early 1990s. While working as a Professor/Researcher at ESIEA, Lionel started exploring the potential of Bayesian networks.
After co-founding Bayesia in 2001, he and his team have been working full-time on the development of BayesiaLab. Since then, BayesiaLab has emerged as the leading software package for knowledge discovery, data mining, and knowledge modeling using Bayesian networks. It enjoys broad acceptance in academic communities, business, and industry.
Nicholas V. Scott, Ph.D., Riverside Research
Presented at the 7th Annual BayesiaLab Conference at the North Carolina Biotechnology Center.
Environmental engineering remote sensing platforms using hyperspectral imagery are often responsible for monitoring coastal regions in order to safeguard national waters. This objective requires determining subsurface turbulent structure from surface water spatial measurements for flow state assessment and decision-making. The inability of remote sensing platforms to penetrate the water column at depth because of turbulence-induced sediment-concentration modulation necessitates using models that dynamically link surface and subsurface structures. A hidden Markov model is applied to large-eddy simulated three-dimensional turbulent flow for the purpose of exploring the feasibility of constructing a system model possessing turbulent state evolution diagnostic/prognostic statistical power. Parameters for a temporal Bayesian network model are estimated from data based on the Markov assumption utilizing data statistical covariance structure. Initial results suggest a strong nonlinear coupling between the mean flow-directed vorticity, cross-mean flow velocity, and sediment concentration. In addition, a Bayesian-based state-action estimation algorithm is employed that demonstrates which turbulent feature variables should be focused on at specific times, given the desire to reach a known goal state and given only a limited number of observations. Such a model gives experimentalists time- and resource-saving guidance for determining what turbulent variables to measure at different times in order to reach a known turbulent goal state. Overall, preliminary model analysis results set the stage for implementing and exploiting algorithms using high-level industrial Bayesian belief network software such as BayesiaLab.
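The hidden-state estimation idea in this abstract can be sketched with a toy hidden Markov model, where an unobserved subsurface state must be inferred from discretized surface observations. All states, probabilities, and the observation sequence below are illustrative assumptions, not parameters from the talk.

```python
# Toy HMM sketch: hidden subsurface turbulence states inferred from
# discretized surface-signal levels. All parameters are illustrative.
HIDDEN = ["calm", "turbulent"]   # hidden subsurface states
OBS = ["low", "high"]            # discretized surface-signal levels

start = [0.8, 0.2]               # P(initial hidden state)
trans = [[0.9, 0.1],             # P(next state | current state)
         [0.3, 0.7]]
emit = [[0.7, 0.3],              # P(observation | hidden state)
        [0.2, 0.8]]

def forward(obs_seq):
    """Forward algorithm: likelihood of an observation sequence under the model."""
    idx = [OBS.index(o) for o in obs_seq]
    # Initialize with the start distribution weighted by the first observation.
    alpha = [start[s] * emit[s][idx[0]] for s in range(2)]
    # Propagate: sum over predecessor states, then weight by the new observation.
    for o in idx[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(2)) * emit[j][o]
                 for j in range(2)]
    return sum(alpha)

p = forward(["low", "high", "high"])
```

Comparing such sequence likelihoods across candidate models is one basic way a temporal Bayesian network can be used diagnostically, before moving to richer state-action estimation of the kind the talk describes.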
Dr. Nicholas Scott is a modeling scientist and physical oceanographer and has been a member of the professional staff at Riverside Research in Dayton, OH, since October 2012. He investigates the applicability of traditional and non-traditional signal and image processing techniques to the extraction of information from remotely sensed imagery. This includes hyperspectral and multispectral imagery. His present work includes cognitive modeling of geo-intelligence information, sensor array time series analysis of environmental data, and the application of pattern recognition techniques to turbulent flow imagery and numerically simulated data. He is also involved in the application of probabilistic graphical modeling algorithms for information fusion and statistical inference.