Schedule for: 17w5076 - Synthesis of Statistics, Data Mining and Environmental Sciences in Pursuit of Knowledge Discovery
Beginning on Sunday, October 29, and ending on Friday, November 3, 2017
All times in Oaxaca, Mexico time, CDT (UTC-5).
Sunday, October 29 | |
---|---|
14:00 - 23:59 | Check-in begins (Front desk at your assigned hotel) |
19:30 - 22:00 | Dinner (Restaurant Hotel Hacienda Los Laureles) |
20:30 - 21:30 | Informal gathering (Hotel Hacienda Los Laureles) |
Monday, October 30 | |
---|---|
07:30 - 09:00 | Breakfast (Restaurant at your assigned hotel) |
09:00 - 09:15 | Introduction and Welcome (Conference Room San Felipe) |
09:15 - 09:50 |
Katherine Ensor: Furthering Our Understanding of the Link between Health and Environment in an Urban Setting ↓ There is an increasing focus on geo-spatial linking of the large data sets related to both health and the environment at the urban level. For the last several years, I have been leading a team to build the Kinder Institute for Urban Research Urban Data Platform (UDP) to study the greater Houston area. Similar to many open government data platforms, the UDP hosts a large amount of data across many categories for the Houston area. The objective is to build a system that helps us understand how residents live, work, learn, and play in the area. In this discussion, I will demonstrate the use of the UDP resources coupled with advanced space-time statistical modeling to further our understanding of the link between air quality and health. This knowledge has led to improved management of childhood asthma faced by over 6000 students in the Houston Independent School District. I will close with how the UDP system is facilitating environmental sampling in the post-Hurricane Harvey recovery period and offer insights on developing a space-time cumulative environmental map for the area. (Conference Room San Felipe) |
09:50 - 10:25 |
Lizzy Warner: Big Data, Downscaling, and Interdisciplinary Approaches to Understanding Extreme Events ↓ In a world with rapid climatic change and intensifying extremes, it becomes increasingly important to improve predictive modeling. The Sustainability and Data Sciences Laboratory (SDS Lab) at Northeastern University in Boston, MA has taken an interdisciplinary approach to understanding interconnected complex systems using a combination of mathematical, scientific, engineering, and computational tools. Through the use of machine learning, statistics, physics, and nonlinear dynamical methods—such as chaos and complex networks—we have developed enhanced quantitative understandings of extremes and change in a way that can be translated so as to inform policy and create more resilient social systems. Our research focuses on risk and adaptation, resilience of critical infrastructure and lifeline networks, and sustainability of ecosystems and resources. This presentation provides an overview of the research done at the SDS Lab, including methodologies, the use of interdisciplinary approaches, and important trends and outputs being observed in our results. (Conference Room San Felipe) |
10:25 - 10:55 | Coffee Break (Conference Room San Felipe) |
10:55 - 11:30 |
Kyo Lee: Multi-objective optimization for generating a weighted multi-model ensemble by applying the Virtual Information Fabric Infrastructure (VIFI) distributed analytics framework ↓ We present an approach to assess the accuracy of climate models based on multi-objective optimization, and an infrastructure to support analyzing massive amounts of model data. Many previous studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on how to assign weighting factors based on a single evaluation metric: the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, it faces a major challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, no best method for generating weighted multi-model ensembles based on multiple performance metrics has emerged. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions over a range of evaluation metrics, to combining multiple performance metrics for the global chemistry climate models and generating a weighted multi-model ensemble. In general, both the observational and model data required for this optimization effort are scattered across the network. As a result, the optimization can be hampered by the increasing costs of computation and communication between the data servers where NASA satellite data and climate model simulations are archived. To address this Big Data challenge, we outline a plan to apply the Virtual Information Fabric Infrastructure (VIFI) to the multi-objective optimization of climate model simulations with large ensembles. VIFI enables executing scalable analytics optimized for distributed data systems. Our proof-of-concept implementation shows considerable variability across the climate simulations. We conclude that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble average and may provide reliable future projections. The VIFI architecture, including resource management and scheduling, is critical for processing massive Earth Science datasets from observations and climate models. (Conference Room San Felipe) |
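As a toy illustration of the multi-objective step described above, the sketch below filters a set of models down to its Pareto-optimal members under two conflicting error metrics and then assigns ensemble weights inversely proportional to a summed error. All scores are synthetic and the scalarization is one simple choice among many; this is not the VIFI implementation.

```python
# A minimal sketch (synthetic scores): Pareto-optimal model selection under
# two conflicting error metrics, followed by a simple inverse-error weighting.
import numpy as np

rng = np.random.default_rng(0)
errors = rng.uniform(0.1, 1.0, size=(10, 2))  # 10 models x 2 error metrics

def pareto_front(costs):
    """Indices of points not dominated in every metric by any other point."""
    keep = []
    for i, c in enumerate(costs):
        dominated = np.any(np.all(costs <= c, axis=1) & np.any(costs < c, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(errors)
# One simple scalarization: weight Pareto members inversely to their summed error.
w = 1.0 / errors[front].sum(axis=1)
w /= w.sum()
print("Pareto-optimal models:", front)
print("Ensemble weights:", np.round(w, 3))
```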
11:30 - 12:05 |
Snigdhansu B. Chatterjee: Data-geometry and resampling-based inference for selecting predictors for monsoon precipitation ↓ We will present a technique for studying the geometry of data-clouds, using multivariate quantiles and extremes. Using multivariate quantiles one can construct data-depth functions, which are rank-like functions that may be used for center-to-tails ordering of observations. Several robust inferential procedures can be based on data-depth functions and multivariate quantiles, and we will first discuss a few such techniques. Then, we present a method that couples resampling with data-depth functions and can be used for consistently estimating the joint distribution of all parameter estimators under all candidate models, while simultaneously assigning a score to each candidate model. The model score may be used for model evaluation and selection. The candidate models do not need to be nested within each other, and the number of parameters in each model as well as in the data generating process can grow with sample size. An illustrative example of predicting Indian summer monsoon rainfall and identifying the true physical forces driving it will be presented. This talk includes joint work with Lindsey Dietz, Megan Heyman, and Subho Majumdar. (Conference Room San Felipe) |
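To make the center-to-tails ordering concrete, here is a minimal sketch using Mahalanobis depth, one of the simplest data-depth functions (the talk's depth functions are constructed from multivariate quantiles; this example only illustrates the ordering idea on simulated data).

```python
# A minimal sketch of center-to-tails ordering via Mahalanobis depth.
import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 2]], size=500)

mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', X - mu, S_inv, X - mu)  # squared Mahalanobis distance
depth = 1.0 / (1.0 + d2)  # larger depth = closer to the center of the data cloud

order = np.argsort(-depth)       # center-outward ranking of observations
deepest_point = X[order[0]]      # a multivariate median candidate
print("Deepest observation:", np.round(deepest_point, 3))
```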
12:20 - 12:30 | Group Photo (Hotel Hacienda Los Laureles) |
12:30 - 14:00 | Lunch (Restaurant Hotel Hacienda Los Laureles) |
14:00 - 14:35 |
Geoffrey Fairchild: Real-time Social Internet Data to Guide Disease Forecasting Models ↓ Globalization has created complex problems that can no longer be adequately forecasted and mitigated using traditional data analysis techniques and data sources. Disease spread is a major health concern around the world and is compounded by the increasing globalization of our society. As such, epidemiological modeling approaches need to account for rapid changes in human behavior and community perceptions. Social media has recently played a crucial role in informing and changing the response of people to the spread of infectious diseases. Recent events, such as the 2014-2015 Ebola epidemic and the 2015-2016 Zika virus epidemic, have highlighted the importance of reliable disease forecasting for decision support. This talk will discuss: 1) an operational analytic that provides global context during an unfolding outbreak and 2) a framework that combines clinical surveillance data with social Internet data and mathematical models to provide probabilistic forecasts of disease incidence and will demonstrate the value of Internet data and the real-time utility of our approach. (Conference Room San Felipe) |
14:35 - 15:10 |
Georgiy Bobashev: Agent-Based (and other) Modeling with Synthetic Populations ↓ In this presentation we will define synthetic populations and illustrate the value they provide to modelers and policy makers. The accuracy of models that optimize responses to disease, natural disasters, or the distribution of various resources among people depends on the accuracy of the knowledge about the population. This knowledge is not limited to demographics, but has to consider geography, ethnography, social connectivity, and many other factors. Synthetic populations are computational representations of every person in a country. They provide an opportunity to probabilistically link multiple datasets into an accurate database that can then be used by modelers to simulate outcomes of interest, such as evacuation routes, distributions of vaccines, optimal locations of first responders, etc. With the large amounts of information that are publicly available, linking multiple databases poses a threat to privacy. Synthetic populations are a natural means to provide census-like accuracy without violating anyone’s privacy. Finally, synthetic populations can be projected into the future, so that forecasts of 2020 epidemics would be based on a 2020 projection and not on 2010 data. This is critical for forecasting the consequences of climate change, population aging, and the depletion of natural resources. A publicly available tool can be found at http://synthpopviewer.rti.org (Conference Room San Felipe) |
15:10 - 15:40 | Coffee Break (Conference Room San Felipe) |
15:40 - 16:15 |
Leticia Ramirez Ramirez: Combining traditional and online-media information for forecasting emerging climate-sensitive mosquito-borne diseases ↓ In December 2013 and April 2015 the first cases of chikungunya and zika were reported in the Caribbean and Brazil, respectively. Since then these viruses have rapidly spread across the continent, attracting a lot of attention from governments and health care professionals. Since data on new diseases in a region are scarce, we exploit different sources of information, such as those originating from surveillance systems and non-traditional sources (like online and social media), to propose a forecasting model. In this work we present a forecasting model for chikungunya. This model incorporates information on the number of cases at the beginning of an outbreak and the activity reported by Google Dengue Trends. Since chikungunya is transmitted by the same type of mosquito as dengue, we include Google Dengue Trends as a proxy for the mosquito population and the mosquito-human interaction in the neighboring countries. The two information sources are incorporated as exogenous covariates of a time series model to predict the epidemic curve. (Conference Room San Felipe) |
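As a hedged illustration of the final sentence, the sketch below fits an ARMA-type model with an exogenous covariate using statsmodels' SARIMAX. The case series and the search-activity proxy are simulated stand-ins, not the chikungunya or Google Dengue Trends data.

```python
# A minimal sketch: a time series model for weekly cases with an exogenous
# online-activity covariate. All series are simulated.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
n = 104                                              # two years of weekly data
search = np.cumsum(rng.normal(0, 1, n)) + 20         # simulated online activity
cases = 5 + 0.8 * search + rng.normal(0, 2, n)       # simulated weekly cases

model = SARIMAX(cases, exog=search, order=(1, 0, 0))
fit = model.fit(disp=False)

# Forecast four weeks ahead; future values of the covariate must be supplied.
future_search = np.full((4, 1), search[-1])
print(fit.forecast(steps=4, exog=future_search))
```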
16:15 - 16:50 |
Matthew Dixon: Uncertainty Quantification of Spatio-Temporal Flows with Deep Learning ↓ Modeling spatio-temporal flows is a challenging problem, as dynamic spatio-temporal data possess underlying complex interactions and nonlinearities. Traditional statistical modeling approaches use a data generating process, generally motivated by physical laws or constraints. Deep learning (DL) is a form of machine learning for nonlinear high dimensional data reduction and prediction. It applies layers of hierarchical hidden variables to capture these interactions and nonlinearities without using a data generating process.
This talk uses a Bayesian perspective of DL to explain its application to the prediction and uncertainty quantification of spatio-temporal flows from big data. Using examples in traffic flow and high-frequency trading, we demonstrate why DL is able to predict sharp discontinuities in spatio-temporal flows. We then discuss the far-reaching practical implications of embedding deep spatio-temporal flow predictors into novel actuarial climate models. (Conference Room San Felipe) |
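One standard way to extract predictive uncertainty from a deep network under a Bayesian reading is Monte Carlo dropout: keep dropout active at prediction time and treat repeated forward passes as approximate posterior draws. The sketch below (synthetic data, assumed architecture) illustrates the mechanics; it is not the authors' model.

```python
# A minimal sketch of Monte Carlo dropout for predictive uncertainty.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 8)                        # synthetic spatio-temporal features
y = X[:, :1].sin() + 0.1 * torch.randn(256, 1)

net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Dropout(0.2),
                    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()

net.train()  # keep dropout stochastic at prediction time
with torch.no_grad():
    draws = torch.stack([net(X[:5]) for _ in range(100)])  # 100 forward passes
print("predictive mean:", draws.mean(0).squeeze())
print("predictive std :", draws.std(0).squeeze())          # uncertainty estimate
```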
19:00 - 21:00 | Dinner (Restaurant Hotel Hacienda Los Laureles) |
Tuesday, October 31 | |
---|---|
07:30 - 09:00 | Breakfast (Restaurant at your assigned hotel) |
09:00 - 09:35 |
Alan Gelfand: Stochastic Modeling for Climate Change Velocities ↓ The ranges of plants and animals are moving in response to change in climate. In particular, if temperatures rise, some species will have to move their range. On a fine spatial scale, this may mean moving up in elevation; on a larger spatial scale, this may result in a latitudinal change. In this regard, the notion of velocity of climate change has been introduced to reflect change in location corresponding to change in temperature. If location is viewed as one dimensional, say $x$, and time is denoted by $t$, the velocity becomes $dx/dt$. In the crudest form, given a relationship between temperature (Temp) and time as well as a relationship between Temp and location, we would have $\frac{dx}{dt} = \frac{dTemp}{dt} \Big/ \frac{dTemp}{dx}$.
The contribution here is to extend this simple definition to more realistic models: models incorporating more sophisticated explanations of temperature, models introducing spatial locations, and, most importantly, models that are stochastic over space and time. With such model components, we can learn about directional velocities, with uncertainty. We can capture spatial structure in velocities. We can assess whether velocities tend to be positive or negative and, in fact, whether and where they tend to be significantly different from 0. Extension of the model development can be envisioned to the species level, i.e., to species-specific velocities. Here, we replace a temperature model as the driver with presence-only or presence/absence models. We can make attractive connections to customary advection and diffusion specifications through partial differential equations.
We illustrate with 118 years of data at 10 km resolution (resulting in more than 21,000 cells) for the eastern United States. We adopt a Bayesian framework and can obtain posterior distributions of directional velocities at arbitrary spatial locations and times. This is joint work with Erin Schliep. (Conference Room San Felipe) |
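The crude definition above can be computed directly with finite differences. The sketch below does so on a synthetic one-dimensional transect; the temperature field and its gradients are made up for illustration.

```python
# A minimal sketch of the crude velocity definition: temporal gradient of
# temperature divided by its spatial gradient, on a synthetic 1-D transect.
import numpy as np

x = np.linspace(0.0, 500.0, 51)          # location (km)
t = np.arange(30.0)                      # years
# Synthetic temperature: cools with x, warms with t (assumed, for illustration).
temp = 25.0 - 0.02 * x[None, :] + 0.03 * t[:, None]

dT_dt = np.gradient(temp, t, axis=0)     # degrees per year
dT_dx = np.gradient(temp, x, axis=1)     # degrees per km
velocity = dT_dt / dT_dx                 # km per year: dx/dt = (dT/dt)/(dT/dx)
print("mean climate velocity (km/yr):", velocity.mean().round(2))
```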
09:35 - 10:10 |
Robert Lund: Bayesian Multiple Breakpoint Detection: Mixing Documented and Undocumented Changepoints ↓ This talk presents methods to estimate the number of changepoints and their location times in time-ordered data sequences when prior information is known about some of the changepoint times. A Bayesian version of a penalized likelihood objective function is developed from minimum description length (MDL) information theory principles. Optimizing the objective function yields estimates of the number of changepoints and their location times. Our MDL penalty depends on where the changepoints lie, not solely on the total number of changepoints (as classical AIC and BIC penalties do). Specifically, configurations with changepoints that occur relatively close to one another are penalized more heavily than sparsely arranged changepoints. The techniques allow for autocorrelation in the observations and mean shifts at each changepoint time. This scenario arises in climate time series, where a "metadata" record exists documenting some, but not necessarily all, of the station move times and instrumentation changes. Applications to climate time series are presented throughout. (Conference Room San Felipe) |
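A minimal sketch of the penalized-likelihood idea for a single mean shift appears below. The location-dependent penalty term is an illustrative stand-in for the talk's MDL criterion, which handles multiple changepoints, autocorrelation, and metadata.

```python
# A minimal sketch of penalized-likelihood changepoint detection for a single
# mean shift. The log(tau) + log(n - tau) term mimics a penalty that depends
# on where the changepoint lies (illustrative, not the paper's exact MDL).
import numpy as np

rng = np.random.default_rng(3)
n = 200
y = np.concatenate([rng.normal(0, 1, 120), rng.normal(1.5, 1, 80)])

def cost(segment):
    return np.sum((segment - segment.mean()) ** 2)

best = (np.inf, None)
for tau in range(10, n - 10):
    penalty = 0.5 * (np.log(tau) + np.log(n - tau))   # location-dependent
    obj = cost(y[:tau]) + cost(y[tau:]) + penalty
    best = min(best, (obj, tau))
print("estimated changepoint:", best[1])   # true shift is at 120
```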
10:10 - 10:40 | Coffee Break (Conference Room San Felipe) |
10:40 - 11:15 |
Vadim Sokolov: Deep Learning: A Bayesian Perspective ↓ Deep learning is a form of machine learning for nonlinear high dimensional pattern matching and prediction. We present a Bayesian probabilistic perspective and provide a number of insights, for example, more efficient algorithms for optimization and hyper-parameter tuning, and an explanation of why good predictors can be found. Traditional high-dimensional data reduction techniques, such as principal component analysis (PCA), partial least squares (PLS), reduced rank regression (RRR), and projection pursuit regression (PPR), are all shown to be shallow learners. Their deep learning counterparts exploit multiple deep layers of data reduction, which provide performance gains. We discuss stochastic gradient descent (SGD) training optimization and dropout (DO), which provide estimation and variable selection, as well as Bayesian regularization, which is central to finding the weights and connections in networks that optimize the bias-variance trade-off. To illustrate our methodology, we provide an analysis of spatio-temporal data. Finally, we conclude with directions for future research. (Conference Room San Felipe) |
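The claim that PCA is a shallow learner can be made concrete: a one-hidden-layer linear autoencoder with squared loss recovers the principal subspace, so PCA corresponds to a depth-one architecture. A minimal numpy sketch on synthetic data:

```python
# A minimal sketch of "PCA as a shallow learner": the top-k principal
# directions act as the weights of a one-layer linear encode/decode network.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10)) @ rng.normal(size=(10, 10))  # correlated data
Xc = X - X.mean(axis=0)

# PCA via SVD: the top-k right singular vectors span the optimal k-dim subspace.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:3].T                      # encoder/decoder weights of the linear "net"
Z = Xc @ W                        # encode (one linear layer, no nonlinearity)
X_hat = Z @ W.T                   # decode

err = np.mean((Xc - X_hat) ** 2)
print("reconstruction MSE with a 3-unit linear bottleneck:", round(err, 4))
```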
11:50 - 12:30 | Open Discussion (Conference Room San Felipe) |
12:30 - 14:00 | Lunch (Restaurant Hotel Hacienda Los Laureles) |
14:00 - 14:35 |
Sloan Coats: Paleoclimate constraints on the spatio-temporal character of past and future drought in climate models ↓ Drought is a spatio-temporal phenomenon; however, due to limitations of traditional statistical techniques it is often analyzed solely temporally—for instance, by taking the hydroclimate average over a spatial area to produce a time series. Herein, we use machine-learning-based Markov random field methods that identify drought in three-dimensional space-time. Critically, the joint space-time character of this technique allows both the temporal and spatial characteristics of drought to be analyzed. We apply these methods to climate model output from the Coupled Model Intercomparison Project phase 5 and tree-ring based reconstructions of hydroclimate over the full Northern Hemisphere for the past 1000 years. Analyzing reconstructed and simulated drought in this context provides a paleoclimate constraint on the spatio-temporal character of past and future droughts, with some surprising and important insights into future drought projections. Climate models, for instance, suggest large increases in the severity and length of future droughts but little change in their width (latitudinal and longitudinal extent). These models, however, exhibit biases in the mean width of drought over large parts of the Northern Hemisphere, which may undermine their usefulness for future projections. Despite these limitations, and in contrast to previous high-profile claims, there are no fundamental differences in the spatio-temporal character of simulated and reconstructed drought during the historical interval (1850-present), with critical implications for our confidence in future projections derived from climate models. (Conference Room San Felipe) |
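As a simplified stand-in for the space-time identification step, the sketch below thresholds a synthetic (time, lat, lon) moisture cube and labels contiguous three-dimensional regions as drought events; the Markov random field machinery of the talk adds spatial-coherence priors that plain connected-component labeling lacks.

```python
# A minimal sketch of identifying drought as connected objects in space-time.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
moisture = rng.normal(size=(48, 30, 40))       # months x lat x lon (synthetic)
dry = moisture < -1.0                          # drought indicator

labels, n_events = ndimage.label(dry)          # contiguous space-time events
sizes = ndimage.sum(dry, labels, index=range(1, n_events + 1))
# Duration and spatial extent of the largest event:
tt, yy, xx = np.where(labels == (np.argmax(sizes) + 1))
print(n_events, "events; largest lasts", tt.ptp() + 1, "months,",
      "spans", yy.ptp() + 1, "lat cells x", xx.ptp() + 1, "lon cells")
```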
14:35 - 15:10 |
Adam Sykulski: Spatiotemporal modelling of ocean surface drifter trajectories ↓ The oceans play a pivotal role in the global climate system, and to develop our understanding there is a need to connect physical models of ocean dynamics with vast arrays of data collected from modern sensors. In this talk, I will present a stochastic spatiotemporal model that describes the motion of freely drifting satellite-tracked instruments, commonly known as “drifters”. The trajectories of drifters provide useful measurements about currents, turbulence and dispersion across our oceans. The challenge is that the data move in both time and space, sometimes referred to as a “Lagrangian” perspective, and these types of data require new data science methodology. Our spatiotemporal model captures effects that are oscillatory, spatially anisotropic, and have varying degrees of small-scale roughness or fractal dimension. We use our model to analyse the entire Global Drifter Program database of observations since 1979, constituting over 70 million data points from over 20,000 drifters. Our findings uncover interesting spatial patterns and develop general understanding of ocean circulation and ocean surface dynamics. This is joint work with Sofia Olhede (UCL) and Jonathan Lilly and Jeffrey Early (NWRA, Seattle). (Conference Room San Felipe) |
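One ingredient of such models can be sketched as a complex-valued autoregression whose velocity rotates at a fixed frequency while decaying, a common idealization of inertial oscillations in drifter velocities. The parameters below are made up; this is not the talk's fitted model.

```python
# A minimal sketch of an oscillatory, damped complex AR(1) velocity process
# and the trajectory obtained by integrating it (all parameters assumed).
import numpy as np

rng = np.random.default_rng(6)
n, dt = 2000, 1.0                     # hourly steps
omega, lam, sigma = 0.3, 0.02, 0.1    # rotation rate, damping, noise scale

phi = np.exp((1j * omega - lam) * dt)        # complex AR(1) coefficient
v = np.zeros(n, dtype=complex)               # complex velocity u + i*w
for t in range(1, n):
    v[t] = phi * v[t - 1] + sigma * (rng.normal() + 1j * rng.normal())

traj = np.cumsum(v) * dt                     # integrate velocity to a trajectory
print("trajectory endpoint:", np.round(traj[-1], 2))
```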
15:10 - 15:40 | Coffee Break (Conference Room San Felipe) |
19:00 - 21:00 | Dinner (Restaurant Hotel Hacienda Los Laureles) |
Wednesday, November 1 | |
---|---|
07:30 - 09:00 | Breakfast (Restaurant at your assigned hotel) |
09:00 - 09:35 |
Alexander Brenning: Statistical challenges in the analysis of high-dimensional spatial and spectral data in environmental science ↓ In environmental monitoring and modelling, an increasingly common challenge is the need to identify patterns in series of tests or estimators that are replicated either spatially or in a high-dimensional feature space. Spatially replicated tests or estimators occur especially in spatiotemporal trend detection in (historical or projected) climate or hydrological data, in environmental monitoring using remote sensing, and in ecological modelling. Moreover, with the increasing availability of hyperspectral remote-sensing sensors with hundreds of spectral bands and thousands of derived features, high-dimensional knowledge discovery and prediction problems are becoming more and more prevalent in remote-sensing data analysis. How can meaningful patterns be derived in order to discover relationships in such data? How can we go from individual grid cells or features to larger spatial or spectral regions that show a homogeneous response? This talk presents case studies from environmental remote sensing and environmental science that face these challenges, and explores current research directions that promise to provide solutions. (Conference Room San Felipe) |
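A minimal sketch of the spatially replicated testing problem: fit a trend test in every grid cell of a synthetic data cube, then apply a Benjamini-Hochberg false-discovery-rate correction so the map of significant cells accounts for the hundreds of tests performed.

```python
# A minimal sketch: per-cell trend tests followed by an FDR correction.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
years = np.arange(30)
cube = rng.normal(size=(30, 20, 20))             # years x lat x lon (synthetic)
cube[:, :10, :] += 0.05 * years[:, None, None]   # true trend in half the domain

pvals = np.empty((20, 20))
for i in range(20):
    for j in range(20):
        pvals[i, j] = stats.linregress(years, cube[:, i, j]).pvalue

reject, _, _, _ = multipletests(pvals.ravel(), alpha=0.05, method='fdr_bh')
print("significant cells after FDR:", reject.sum(), "of", reject.size)
```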
09:35 - 10:10 |
Murali Haran: A projection-based approach for spatial generalized linear mixed models ↓ Non-Gaussian spatial data arise in a number of disciplines. Examples include spatial data on disease incidences (counts), and satellite images of ice sheets (presence-absence). Spatial generalized linear mixed models (SGLMMs), which build on latent Gaussian processes or Gaussian Markov random fields, are convenient and flexible models for such data and are used widely in mainstream statistics and other disciplines. For high-dimensional data, SGLMMs present significant computational challenges due to the large number of dependent spatial random effects. Furthermore, spatial confounding makes the regression coefficients challenging to interpret. I will discuss projection-based approaches that reparameterize and reduce the number of random effects in SGLMMs, thereby improving the efficiency of Markov chain Monte Carlo (MCMC) algorithms. Our approach also addresses spatial confounding issues. This talk is based on joint work with Yawen Guan (SAMSI) and John Hughes (U of Colorado-Denver). (Conference Room San Felipe) |
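The reduced-rank intuition behind projection approaches can be sketched in a few lines: keep only the leading eigenvectors of a spatial covariance as a basis for the random effects, shrinking hundreds of dependent effects to a handful of coefficients (the talk's method additionally handles confounding with the fixed effects, which is omitted here).

```python
# A minimal sketch of reduced-rank spatial random effects via eigenvectors
# of an assumed exponential covariance (illustrative, not the talk's method).
import numpy as np

rng = np.random.default_rng(8)
coords = rng.uniform(size=(400, 2))               # 400 spatial locations
d = np.linalg.norm(coords[:, None] - coords[None], axis=-1)
K = np.exp(-d / 0.2)                              # exponential covariance

vals, vecs = np.linalg.eigh(K)
basis = vecs[:, -25:]                             # top 25 eigenvectors
delta = rng.normal(size=25) * np.sqrt(vals[-25:]) # low-dimensional coefficients
w = basis @ delta                                 # approximate spatial effect
print("400 random effects represented by", basis.shape[1], "coefficients")
```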
10:10 - 10:40 | Coffee Break (Conference Room San Felipe) |
10:40 - 11:15 |
Juan Martin Barrios Vargas: Two approaches to species distribution modeling to consider climate change ↓ One of the main purposes of the National Commission for the Knowledge and Use of Biodiversity (CONABIO) is to help optimise the protection of the habitat of different species. To achieve this task it is important to study and model a species' potential habitat. Historically, species distribution models consider the climate and topographic features of the environment together with the spatial information collected on species presence. In this talk we introduce two approaches used at CONABIO to model species distributions. One of them provides an exploratory tool that also shows how to incorporate the effects of climate change into the model; this tool has been jointly developed with the Complexity Sciences Center (CCC) at UNAM. We also discuss some of the challenges of working with real (species) data: most of these data were not collected in a structured way, and there are some misidentification issues. (Conference Room San Felipe) |
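As a generic baseline for the kind of model discussed (not CONABIO's tools), the sketch below fits a logistic regression separating simulated presence records from random background points using two climate covariates; shifting the covariate values on the prediction grid gives a crude climate-change scenario.

```python
# A minimal sketch of a presence/background species distribution model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
# Two standardized climate covariates (e.g., temperature, precipitation).
pres = rng.normal(loc=[0.5, 0.8], scale=0.5, size=(200, 2))   # presence records
back = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(1000, 2))  # background points
X = np.vstack([pres, back])
y = np.r_[np.ones(len(pres)), np.zeros(len(back))]

sdm = LogisticRegression().fit(X, y)
# Habitat suitability on a climate grid; shifting the grid's climate values
# (e.g., warming every cell) gives a crude climate-change scenario.
grid = np.column_stack([g.ravel() for g in
                        np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))])
suitability = sdm.predict_proba(grid)[:, 1].reshape(50, 50)
print("max suitability on grid:", suitability.max().round(2))
```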
11:15 - 11:50 |
Robert Beach: Modeling Climate Change Impacts on Agricultural Production and Implications for Risk Management ↓ Agriculture is one of the sectors most likely to be impacted by climate change. Agricultural producers have always operated under high levels of production and price risk, but there are concerns that climate change will further exacerbate these risks while making recent historical experience less predictive of future conditions. The impacts are generally expected to increase over time as temperatures become more likely to exceed thresholds that negatively impact crop growth and the distribution of precipitation is increasingly altered. However, there is considerable variation in future climate projections both temporally and spatially as well as differences in responsiveness to climate change across different crops and production practices. Agriculture is a very heterogeneous sector, making it important to incorporate disaggregated biophysical data within economic models used to assess the potential impacts of alternative climate and policy scenarios. To assess potential long-term implications of climate change on landowner decisions regarding land use, crop mix, and production practices, we combine the outputs of global circulation models (GCMs) with the Environmental Policy Integrated Climate (EPIC) crop process model and the Forest and Agricultural Sector Optimization Model (FASOM) economic model. GCMs use assumptions regarding future emissions and atmospheric concentrations of GHGs as model inputs to simulate impacts on the future spatial distribution of temperature and precipitation across the globe. The outputs of the GCMs were then incorporated into EPIC to simulate the impacts of alternative climate scenarios on crop yields over time. Crop growth is simulated by calculating the potential daily photosynthetic production of biomass. Daily potential growth is decreased by stresses caused by shortages of solar radiation, water, and nutrients, by temperature extremes and by inadequate soil aeration. Thus, EPIC can account for the effects of climate-induced changes in temperature, precipitation, and other variables, including episodic events affecting agriculture, on potential yields. The model also includes a nonlinear equation accounting for plant response to CO2 concentration and has been applied in several previous studies of climate change impacts. In this application, we simulated yields for barley, corn, cotton, hay, potatoes, rice, sorghum, soybeans, and wheat under each climate scenario considered. These crop yields were then used as inputs into a stochastic version of FASOM to assess market outcomes given climate-induced shifts in yields that vary by crop and region. The stochastic version of the model is used to model crop allocation decisions by crop and management categories based on the relative returns and risk associated with alternative cropping patterns under each of the modeled scenarios. This enables exploration of potential shifts in cropping patterns within and across regions in response to changing yield distributions as well as the associated price effects. In addition to implications for landowner decisions regarding land use, crop mix, and production practices, changing agricultural risks could potentially affect the performance of risk management strategies such as crop insurance programs. Thus, we also explore the potential implications of changes in yield and price distributions for these insurance markets. (Conference Room San Felipe) |
11:50 - 12:25 |
Alicia Mastretta-Yanes: Genetic diversity in space and time, an insurance against climate change ↓ Genetic diversity is the engine of evolution. Thanks to it, species can adapt to different environmental conditions, and we humans are able to domesticate wild species, modifying them to fit our needs. When climate changes, species either move with it, become extinct, or adapt to the new conditions. The domesticated species upon which our food systems are based are also affected by environmental conditions, so adapting them to the current human-induced climate change is of special concern. Mexico is a mega-diverse country where the domestication of important cultivars occurred, such as maize, beans, and pumpkins. As a consequence, there are crop wild relatives here that have been evolving for millions of years, and traditional crop varieties that have been grown in a wide range of environmental conditions for thousands of years and that continue evolving today. The genetic diversity enclosed within these crop wild relatives and traditional varieties is enormous, and likely holds the diversity needed to adapt our cultivars to climate change. Here, I will discuss the need to appreciate the genetic diversity of Mexican crops in terms of the evolutionary service it provides; I will then discuss the outcomes and challenges of characterizing, modeling, conserving, and using such genetic diversity at a national scale. (Conference Room San Felipe) |
12:30 - 14:00 | Lunch (Restaurant Hotel Hacienda Los Laureles) |
14:00 - 14:35 |
Vyacheslav Lyubchich: Modeling agricultural insurance risks using modern deep machine learning algorithms ↓ Agriculture is probably the sector of the economy most vulnerable to climate variability and climate change. National and global concerns about the ability of agricultural producers to sustain financial losses (due to price fluctuations and, primarily, due to weather-induced damage to crops) and to meet the growing demand for food and energy bring to the forefront the development of agricultural risk management strategies. However, actuarial and statistical methodology for agricultural insurance applications is still relatively limited. Even less is known about uncertainty quantification and uncertainty propagation in the context of agricultural risk management. Weather-based index insurance is a relatively new and promising instrument for managing weather-related risks in agriculture. An index should provide a good estimate of losses for individual clients, while involving lower costs owing to the omission of the loss verification step, a quicker claim settlement process, and the elimination of fraud. A natural and popular choice is to use yield indices that can depend on a number of weather variables. Challenges with modeling the complex weather and climate dynamics include analyzing massive multi-resolution, multi-source data with a non-stationary space-time structure, a nonlinear relationship between weather events and crop yields, and the respective actuarial implications due to imprecise estimation of risk. Conventional parametric statistical and actuarial approaches are constrained in their ability to address these problems. In this project, we investigate the utility of novel deep learning methods for evaluation of basis risk in agricultural index-based insurance. This study aims to provide a better understanding of the nonlinear relationship between crop yields and weather events, identify optimal indicators that reliably track future risks of climate and weather to crop production, and improve the identification, quantification, and propagation of uncertainty in basis risk estimation for crop production. This is joint work with Azar Ghahari (University of Texas at Dallas), Y.R. Gel (University of Texas at Dallas), and Nathaniel Newlands (Agriculture and Agri-Food Canada). (Conference Room San Felipe) |
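The notion of basis risk can be illustrated directly: simulate losses driven nonlinearly by rainfall, define a simple rainfall-triggered index payout, and measure how much of the loss the index fails to track. All numbers below are synthetic.

```python
# A minimal sketch of basis risk in weather-index insurance (synthetic data).
import numpy as np

rng = np.random.default_rng(10)
n = 1000
rainfall = rng.gamma(shape=4.0, scale=25.0, size=n)          # seasonal rainfall
loss = np.maximum(0, 80 - rainfall) + rng.normal(0, 5, n)    # nonlinear + noise
loss = np.clip(loss, 0, None)

trigger = 70.0                                               # index threshold
payout = np.maximum(0, trigger - rainfall)                   # index payout

residual = loss - payout                                     # untracked loss
print("corr(index payout, loss):", np.corrcoef(payout, loss)[0, 1].round(2))
print("basis-risk RMSE:", np.sqrt(np.mean(residual ** 2)).round(2))
```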
14:35 - 15:00 |
Ola Haug: Spatial trend analysis of gridded temperature data sets at varying spatial scales ↓ In general, reliable trend estimates for temperature data may be challenging to obtain, mainly due to data scarcity. Short data series represent an intrinsic problem, whereas spatial sparsity may, in the case of spatially correlated data, be managed by adding appropriate spatial structure to the model. In this study, we analyse European temperature data over a period of 65 years. We search for trends in seasonal means and investigate the effect of varying the data grid resolution on the significance of the trend estimates obtained. We consider a set of models with different temporal and spatial structures and compare the resulting spatial trends along axes of model complexity and data grid resolution. This is ongoing work and the presentation will sketch the idea and give some preliminary results. (Conference Room San Felipe) |
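A minimal sketch of the resolution experiment: fit a per-cell linear trend on a synthetic 65-year temperature grid, block-average to a coarser grid, refit, and compare the share of significant cells (averaging reduces noise, so significance typically increases at coarser resolution). The data and model here are illustrative, not the study's.

```python
# A minimal sketch of trend significance at two grid resolutions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
years = np.arange(65)                              # 65 years, as in the study
grid = 0.02 * years[:, None, None] + rng.normal(0, 1.0, size=(65, 16, 16))

def share_significant(cube, alpha=0.05):
    _, ny, nx = cube.shape
    p = [stats.linregress(years, cube[:, i, j]).pvalue
         for i in range(ny) for j in range(nx)]
    return np.mean(np.array(p) < alpha)

coarse = grid.reshape(65, 4, 4, 4, 4).mean(axis=(2, 4))   # 4x4 block average
print("significant at 16x16:", share_significant(grid))
print("significant at  4x4 :", share_significant(coarse))
```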
15:00 - 15:30 | Coffee Break (Conference Room San Felipe) |
19:00 - 21:00 | Dinner (Restaurant Hotel Hacienda Los Laureles) |
Thursday, November 2 | |
---|---|
07:30 - 09:00 | Breakfast (Restaurant at your assigned hotel) |
09:00 - 10:10 | Poster Session (Conference Room San Felipe) |
10:10 - 10:40 | Coffee Break (Conference Room San Felipe) |
10:40 - 12:30 | Poster Session (Conference Room San Felipe) |
12:30 - 13:30 | Lunch (Restaurant Hotel Hacienda Los Laureles) |
13:30 - 18:00 | Tour to Monte Alban (optional, cost is $300 MXN) (Monte Alban) |
19:00 - 21:00 | Dinner (Restaurant Hotel Hacienda Los Laureles) |
Friday, November 3 | |
---|---|
07:30 - 09:00 | Breakfast (Restaurant at your assigned hotel) |
09:00 - 10:30 | Informal Discussion (Conference Room San Felipe) |
10:30 - 11:00 | Coffee Break (Conference Room San Felipe) |
11:00 - 12:30 | Informal Discussion (Conference Room San Felipe) |
12:30 - 14:00 | Lunch (Restaurant Hotel Hacienda Los Laureles) |