Journal articles, books, book chapters, and select letters and reports.
The transportation system is rapidly evolving to integrate with cybertechnology and emerging cyberphysical ecosystems that do not recognize legacy boundaries of technology and governance. The integration of microprocessors into vehicles and infrastructure, the development of low-cost sensors, and the availability of real-time data, communications, and computation have resulted in emerging cognitive capabilities and are changing the nature of transportation systems. We discuss two key changes: the diffusion of traditional boundaries between transportation and other infrastructure systems, and increasingly distributed control. We argue that the changing nature of transportation systems requires a new approach to sustainability - one that reflects their evolving cognitive capabilities, their increasingly diffuse boundaries, and Anthropocene complexity. As new actors enter the transportation space, we suggest that public agencies take on new roles as consensus builders, identifying shared goals among distinct actors to navigate increasingly complex environments.
There appears to be a growing decoupling between the conditions that infrastructures were designed for and today’s rapidly changing environments. Infrastructures today are largely predicated on the technologies, goals, and governance structures of a century ago. While infrastructures continue to deliver untold value, there is growing evidence that these critical, basic, and lifeline systems are ill-equipped to confront the volatility, uncertainty, accelerating conditions, and complexity that define them and their changing environments. Innovative and disruptive first principles are needed to guide infrastructures in the Anthropocene. Drawing from emerging infrastructure research and from disciplines that appear better able to confront disruption and change, a novel set of first principles is identified: 1) Plan for complex conditions and surprise; 2) Recouple with agility and flexibility; 3) Govern for exploration and instability; 4) Build consensus as control decentralizes; 5) Restructure to engage with porous boundaries; and 6) Cyberthreat planning is now mission critical. These principles should guide infrastructure planning, recognizing the changing nature and increasingly obsolete boundaries that have defined engineered systems in the modern era.
With increasing frequency and severity, coastal cities are facing the effects of extreme weather events, such as sea-level rise, storm surges, hurricanes, and various types of flooding. Recent urban resilience scholarship suggests that responding to the cascading complexities of climate change requires an understanding of cities as social-ecological-technological systems, or SETS. Advances in data visualization, sensors, and analytics are making it possible for urban planners to gain more comprehensive views of cities. Yet, addressing climate complexity requires more than deploying the latest technologies; it requires transforming the institutional knowledge systems upon which cities rely for preparation and response in a climate-changed future. While debates in the theory and practice of knowledge co-production offer a rich contextual starting point, there are few practical examples of what it means to co-produce new knowledge systems capable of steering urban resilience planning in fundamentally new directions. This paper helps address this gap by offering a case study approach to co-producing new knowledge systems for SETS data visualization in three US coastal cities. Through a series of innovation spaces – dialogues, labs, and webinars – with residents, data experts, and other city stakeholders from multiple sectors, we show how to apply a knowledge systems approach to better understand, represent, and support cities as SETS. To illustrate what a redesigned knowledge system for urban resilience planning entails, we document the key steps and activities that led to a new prototype SETS platform that works with a wider range of ways of knowing – including community-based expertise, interdisciplinary research contributions, and various municipal actors’ know-how – to build anticipatory capacity for visualizing and navigating the complex dynamics of a climate-changed future. Our findings point to new roles for activity-based learning, conflict, and SETS visualization technologies in connecting, amplifying, and reorganizing the knowledge assets of community perspectives previously ignored. We conclude with a new understanding of how innovation towards coastal city resilience resides within the co-production process for (re)designing knowledge systems to make them more robust and responsive to cross-sector and cross-city learning.
There is growing interest in understanding the interaction between weather and transportation and the ability of communities and the nation’s infrastructure to withstand extreme conditions and events. This study aims to provide detailed insights into how people adjust their activity-travel and time use behaviors in the face of extreme heat conditions. By leveraging time use records integrated with weather data, the study compares activity-mobility patterns between extreme heat days and non-extreme days. A series of models is estimated to understand the impact of extreme heat even after controlling for other variables. The findings reveal that heat significantly impacts time use and activity-mobility patterns, with some groups exhibiting potentially greater vulnerability arising from an inability to adapt sufficiently to extreme heat. Designing dense, shaded urban environments, declaring heat days to facilitate indoor stays, and providing transportation vouchers for vulnerable populations can help mitigate the ill effects of extreme heat.
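As a purely illustrative aside, the kind of controlled comparison described above might look like the following minimal sketch, which regresses a synthetic daily out-of-home activity duration on a hypothetical extreme-heat-day indicator with a few controls; the variable names, data, and specification are assumptions, not the study's actual model.

```python
# Illustrative only: a toy regression testing whether an extreme-heat-day
# indicator shifts daily out-of-home activity time, controlling for covariates.
# Variable names and data are hypothetical, not the study's actual specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "extreme_heat_day": rng.integers(0, 2, n),   # 1 = max temp above a local threshold
    "age": rng.integers(18, 85, n),
    "worker": rng.integers(0, 2, n),
    "has_vehicle": rng.integers(0, 2, n),
    "weekend": rng.integers(0, 2, n),
})
# Synthetic outcome: minutes spent out of home per day.
df["out_of_home_min"] = (
    300 - 35 * df["extreme_heat_day"] + 60 * df["worker"]
    - 40 * df["weekend"] + rng.normal(0, 45, n)
)

# OLS with controls; the coefficient on extreme_heat_day is the quantity of interest.
model = smf.ols(
    "out_of_home_min ~ extreme_heat_day + age + worker + has_vehicle + weekend",
    data=df,
).fit()
print(model.params["extreme_heat_day"], model.pvalues["extreme_heat_day"])
```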
Extreme weather-related events are showing how infrastructure disruptions in hinterlands can affect cities. This paper explores the risks that hinterland hazards of fire, precipitation, post-fire debris flow (PFDF), smoke, and flooding pose to city infrastructure services, including transportation, electricity, communication, fuel supply, water distribution, stormwater drainage, and food supply. There is a large and growing body of research that describes the vulnerabilities of infrastructures to climate hazards, yet this work has not systematically acknowledged the relationships and cross-governance challenges of protecting cities from remote disruptions. An evidence base is developed through a structured literature review that identifies city infrastructure vulnerabilities to hinterland hazards. Findings highlight diverse pathways from the initial hazard to the final impact on an infrastructure, demonstrating that impacts to hinterland infrastructure assets from hazards can cascade to city infrastructure. Beyond describing the impact of hinterland hazards on urban infrastructure, the identified pathways can help inform cross-governance mitigation strategies. It may be the case that, to protect cities, local governments should invest in mitigating hazards in their hinterlands and supply chains.
The resilience of Ukraine’s infrastructure in the face of both conventional and cyber warfare, as well as attacks on the knowledge systems that underpin its operations, is no doubt rooted in the country’s history. Ukraine has been living with the prospect of warfare and chaos for over a century. This “normal” appears to have produced an agile and flexible infrastructure system that every day shows impressive capacity to adapt.
This forum article discusses the 2023 Issues in Science & Technology article "What Ukraine Can Teach the World About Resilience and Civil Engineering" by Armanios, Skovrup, Christensen, and Tymoshenko.
Infrastructure systems have legacies that continue to define their priorities, goals, flexibility, and ability to make sense of their environments. These legacies may or may not align with future needs, but regardless of alignment they may restrict viable pathways forward. Infrastructure ‘lock-in’ has not been sufficiently confronted. Lock-in can loosely be interpreted as internal and external pressures that constrain a system, encouraging self-reinforcing feedback where the system becomes resistant to change. By acknowledging and recognizing that lock-in exists at small and large scales, perpetuated by individuals, organizations, and institutions, infrastructure managers can critically reflect upon biases, assumptions, and decision-making approaches. In this article, six distinct domains of lock-in are described: technological, social, economic, individual, institutional, and epistemic. Following this description, strategies for unlocking lock-in, broadly and by domain, are explored before being contextualized to infrastructure systems. Ultimately, infrastructure managers must choose between a locked-in and faltering but familiar system and a changing and responsive but unfamiliar system, both of which inevitably accept higher levels of risk than managers are typically accustomed to. An explicit reframing of perspectives around lock-in mechanisms is proposed to help guide infrastructure managers toward a mindset of transformation and promote management of lock-in.
Chapter in Governing for Sustainability, edited by John Dernbach and Scott Schang.
This chapter summarizes the United States’ progress toward meeting various targets associated with UN Sustainable Development Goal 9 and proposes various legal, policy, and nonlegal actions that can be implemented to accelerate the development and maintenance of sustainable and resilient infrastructure within the United States. Overall, we posit that three of the most critical actions for accelerating progress toward Goal 9 are: (1) enhance the adaptability of infrastructure systems to growing complexity and uncertainty in the context of challenges like climate change and cybersecurity; (2) accelerate the widespread and equitable adoption of electric vehicles (EVs) and active mobility (e.g., walking, biking, public transit); and (3) improve access to reliable and affordable high-speed Internet connection—especially for marginalized and underserved communities.
Disruption of legacy infrastructure systems by novel digital and connected technologies represents not simply the rise of cyberphysical systems as hybrid physical and digital assets but, ultimately, the integration of legacy systems into a new cognitive ecosystem. This cognitive ecosystem, an ecology of massive data flows, artificial intelligence, institutional and intellectual structures, and connected technologies, is poised to alter how humans and artificial intelligence understand and control our world. Infrastructure managers need to be ready for this paradigm shift, recognizing that their systems are increasingly being absorbed into an emerging suite of data, analytical tools, and decisionmaking technologies that will fundamentally restructure how legacy systems behave and are controlled, how decisions are made, and, most importantly, how workers interact with the systems. Infrastructure managers must restructure their organizations and engage in cross-organizational sensemaking if they are to be capable of navigating the complexity of the cognitive ecosystem. The cognitive ecosystem is fundamentally poised to change what infrastructures are, requiring managers to take a close look at the functions and actions of their own systems. The continuing evolution of the Anthropocene and the cognitive ecosystem has profound implications for infrastructure education. A sustained commitment to change is necessary that restructures and reorients infrastructure organizations within the cognitive ecosystem, where knowledge is generated and control of services is wielded by myriad stakeholders.
The 2022 Southwest Airlines scheduling crisis, resulting in approximately 15,000 flight cancellations, demonstrates the challenges of structuring infrastructure systems and their knowledge-making processes for increasingly disruptive conditions. While the point-to-point configuration was the focus of immediate assessments of the failure, it rapidly became evident that the crew-assignment software was unable to operate effectively at the scale of the disruption. Southwest Airlines failed to recognize environmental shifts associated with internal and external complexity, leaving operations vulnerable to a known potential risk: computer and telecommunications failures during an extreme weather event, resulting in knowledge systems failures. The cascading failures of the crisis emphasize the necessity of investing in adaptive capacity prior to catastrophic events and provide a lesson to other infrastructure managers pursuing resilience in the face of increasingly uncertain environments.
Irrigation activities emit greenhouse gases (GHGs) directly from soils or indirectly through the use of energy or the construction of dams and irrigation infrastructure, while climate change affects irrigation demand, water availability, and the GHG intensity of irrigation energy. Here, we present a scoping review to elaborate on these irrigation-climate linkages by synthesizing knowledge across different fields, emphasizing the growing role climate change may play in driving future irrigation expansion and reinforcing some of the positive feedbacks. This review underscores the urgent need to promote and adopt sustainable irrigation, especially in regions dominated by strong, positive feedbacks.
Complex adaptive systems – such as critical infrastructures (CI) – are defined by their vast, multi-level interactions and emergent behaviors, but this elaborate web of interactions often conceals relationships. For instance, CI is often reduced to technological components, ignoring that social and ecological components are also embedded, leading to unintended consequences from disturbance events. Analysis of CI as social-ecological-technological systems (SETS) can support integrated decision-making and increase infrastructure’s capacity for resilience to climate change. We assess the impacts of an extreme precipitation event in Phoenix, AZ, to identify pathways of disruption and feedback loops across SETS, as presented in an illustrative causal loop diagram developed through semi-structured interviews with researchers and practitioners and cross-validated with a literature review. The causal loop diagram consists of 19 components resulting in hundreds of feedback loops and cascading failures, with surface runoff, infiltration, and water bodies, as well as power, water, and transportation infrastructures, appearing to have critical roles in maintaining services. We found that pathways of disruption highlight potential weak spots within the system that could benefit from climate adaptation, and feedback loops may serve as potential tools to divert failure at the root cause. This method of convergence research shows potential as a useful tool to illustrate a broader perspective of urban systems and address the increasing complexity and uncertainty of the Anthropocene.
Our urban systems, and the infrastructure underlying them, are designed to deliver only a narrow set of human-centered services, with little or no accounting or understanding of how actions undercut the resilience of social-ecological-technological systems (SETS). Embracing a SETS resilience perspective creates opportunities for novel approaches to adaptation and transformation in complex environments. We: i) frame urban systems as in need of a perspective shift from control to entanglement, ii) position SETS thinking as novel sensemaking to create repertoires of responses commensurate with environmental complexity (i.e., requisite complexity), and iii) describe modes of SETS sensemaking for urban system structures and functions as basic tenets to build requisite complexity. SETS sensemaking is an undertaking to reflexively bring sustained adaptation, anticipatory futures, loose-fit design, and co-governance into organizational decision-making and to help reimagine institutional structures and processes as entangled SETS.
Faced with destabilizing conditions in the Anthropocene, infrastructure resilience modeling remains challenged to confront increasingly complex conditions toward quickly and meaningfully advancing adaptation. Data gaps, increasingly interconnected systems, and accurate behavior estimation (across scales, and as both gradual and cascading failure) remain challenges for infrastructure modelers. Yet novel approaches are emerging – largely independently – that, if brought together, offer significant opportunities for rapidly advancing how we understand vulnerabilities and surgically invest in resilience. Of particular promise are interdependency modeling, cascading failure modeling, and synthetic network generation. We describe an approach for integrating these three domains into a single modeling framework that estimates infrastructure networks where no data exist, connects infrastructures to establish interdependencies, assesses the vulnerabilities of these interconnected infrastructures to hazards, and simulates how failures may propagate across systems. We draw from the literature as an evidence base, provide a conceptual structure for implementation, and conclude by discussing the significance of such a framework and the critical tools it may provide to infrastructure researchers and managers.
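As a rough illustration of the cascading-failure component of such a framework, the sketch below propagates failures across two toy interdependent networks; the topologies, dependency mapping, and connectivity-based failure rule are assumptions for demonstration, not the framework proposed in the paper.

```python
# Illustrative sketch: failure propagation across two interdependent toy networks.
# Topologies, dependencies, and the "fail if no path to source" rule are assumptions
# for demonstration, not the paper's model.
import networkx as nx

power = nx.Graph([("P_src", "P1"), ("P1", "P2"), ("P2", "P3"), ("P1", "P3")])
water = nx.Graph([("W_src", "W1"), ("W1", "W2"), ("W2", "W3")])
# Water pumps depend on specific power nodes.
depends_on = {"W1": "P1", "W2": "P3"}

def cascade(initial_power_failures):
    failed_p, failed_w = set(initial_power_failures), set()
    changed = True
    while changed:
        changed = False
        p_live = power.subgraph(n for n in power if n not in failed_p)
        # Power nodes fail if disconnected from the power source.
        for n in list(p_live.nodes):
            if n != "P_src" and not nx.has_path(p_live, "P_src", n):
                failed_p.add(n); changed = True
        # Water nodes fail if their supporting power node has failed...
        for w, p in depends_on.items():
            if p in failed_p and w not in failed_w:
                failed_w.add(w); changed = True
        # ...or if they lose connectivity to the water source.
        w_live = water.subgraph(n for n in water if n not in failed_w)
        for n in list(w_live.nodes):
            if n != "W_src" and not nx.has_path(w_live, "W_src", n):
                failed_w.add(n); changed = True
    return failed_p, failed_w

print(cascade({"P1"}))
```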
As infrastructure systems confront rapidly changing environments, there is an immediate need for flexibility in how resources are deployed and how infrastructures are prioritized. Yet infrastructures are often categorized based on static criticality framings. We describe dynamic criticality as the flexible reprioritization of infrastructure and resources during disturbances. We find that the most important prerequisite for dynamic criticality is organizational adaptive capacity, realized through resilience in goals, structures, sensemaking, and strategies. Dynamic capabilities are increasingly important in the Anthropocene, where accelerating conditions, uncertainty, and growing complexity are challenging infrastructures. We review sectors that have deployed dynamic management approaches amidst changing disturbances: leadership and organizational change, defense, medicine, manufacturing, and disaster response. We use an inductive thematic analysis to identify key themes and competencies and analyze the capabilities that describe dynamic criticality. These competencies drive adaptive capacity and open up the flexibility to pivot what is deemed critical, depending on the particulars of the hazard. We map these competencies to infrastructure systems and describe how infrastructure organizations may build adaptive capacity toward flexible priorities.
Urban heat exposure is an increasing health risk for urban dwellers. Many cities are considering accommodating active mobility, especially walking and biking, to reduce urban anthropogenic greenhouse gas (GHG) emissions. However, promoting active mobility without proper planning and transportation infrastructure to combat extreme heat exposure may cause more heat-related morbidity and mortality, particularly under projected future climate change. This study estimated the effectiveness of mitigating heat exposure on active trips through built environment and travel behavior change. Simulations of the Phoenix metro region's 624,987 active trips on June 27, 2012 were conducted using an activity-based travel model (ABM), simulated mean radiant temperature (TMRT), the transportation network, Local Climate Zones, and supplemental data. Two cooling scenarios were designed to identify “cool corridors” with the lowest temperatures. Travelers experienced TMRT heat exposure ranging from 29°C to 76°C (84°F to 168°F) on the simulation day. Within the same cooling scenario, behavioral changes cooled up to ten times more trips than changes to the built environment. When the built environment was changed by fully converting the network to cool corridors, active trips experienced an average TMRT reduction of 1.2°C to 3.7°C, depending on the scenario. The marginal benefit of cooling decreased from over 1,000 trips/km when less than 10 km of corridors were converted to less than 1 trip/km when all corridors were transformed. The results reveal that heavily traveled corridors should be prioritized when resources are limited, and that the best cooling results come from combining built environment and travel behavior change. This study can help inform changes to urban design and planning by measuring the cooling benefits for active trips.
Post-wildfire debris flows represent a significant hazard for transportation infrastructure. The location and intensity of post-fire debris movements are difficult to predict, and threats can persist for several years until a watershed is restored to pre-fire conditions. This situation may worsen as climate change forecasts predict increases in wildfire burned area and extreme precipitation intensity. New insights are needed to improve understanding of how roadways are vulnerable to post-fire flows and how to prioritize protective efforts. Using California as a case study, the vulnerability of transportation infrastructure to post-fire debris flow was assessed considering geologic conditions, vegetation conditions, precipitation, fire risk, and roadway importance under current and future climate scenarios. The results show significant but uneven statewide increases in the number of vulnerable roadways from present to future emission scenarios. Under current climate conditions, 0.97% of roadways are highly vulnerable. In the future, the share of vulnerable roadways is expected to increase 1.9-2.3 times under the Representative Concentration Pathway (RCP) 4.5 emission scenarios and 3.5-4.2 times under the RCP 8.5 emission scenarios. The threat of post-fire debris flow varies across the state, as precipitation changes are uneven. The vulnerability assessment is positioned to 1) identify, reinforce, and fortify highly vulnerable roadways, 2) prioritize watershed fire mitigation, and 3) guide future infrastructure site selection.
This publication advances the methods and results of our UCLA Institute of Transportation Studies report No. UC-ITS-2020-38, doi: 10.17610/T60W35.
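A minimal sketch of the kind of multi-factor vulnerability scoring described in the study above is shown below; the factors, weights, and min-max normalization are illustrative placeholders rather than the assessment's actual method.

```python
# Toy vulnerability index for roadway segments exposed to post-fire debris flow.
# Factors, weights, and values are illustrative placeholders only.
import pandas as pd

segments = pd.DataFrame({
    "segment_id": ["A", "B", "C"],
    "slope_deg": [28, 12, 35],             # geologic/terrain proxy
    "burn_probability": [0.6, 0.2, 0.8],   # fire risk
    "storm_intensity_mm_hr": [40, 25, 55], # design precipitation under a climate scenario
    "daily_traffic": [12000, 3000, 800],   # roadway importance proxy
})

def minmax(s):
    """Rescale a column to the 0-1 range."""
    return (s - s.min()) / (s.max() - s.min())

weights = {"slope_deg": 0.3, "burn_probability": 0.3,
           "storm_intensity_mm_hr": 0.2, "daily_traffic": 0.2}

segments["vulnerability"] = sum(w * minmax(segments[c]) for c, w in weights.items())
print(segments.sort_values("vulnerability", ascending=False)[["segment_id", "vulnerability"]])
```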
Climate change is poised to significantly increase people’s heat exposure, yet there remain limited insights into how individuals experience heat at the intersection of behavior and infrastructure. We developed a simulation platform - Icarus - to estimate travelers’ heat exposure at both personal and population scales at the interface of travel behavior, microclimate, and the built environment. Icarus is applied to the Phoenix metropolitan region as a case study using three different temperature measures: air temperature (Tair), mean radiant temperature (TMRT), and wet bulb globe temperature (TWBGT). The case study analysis shows that travel patterns (such as trip duration and trip start time) for different demographic groups affect personal and population heat exposure. Different temperature measures also resulted in widely varying estimates of personal heat exposure.
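To make the exposure accounting concrete, the sketch below computes a duration-weighted exposure for a single hypothetical trip under the three temperature measures; the segment values and the simple weighting are assumptions, not Icarus's implementation.

```python
# Illustrative duration-weighted heat exposure for a single traveler's trip,
# computed under three different temperature measures. Values are made up.
from dataclasses import dataclass

@dataclass
class Segment:
    minutes: float
    t_air: float    # air temperature, deg C
    t_mrt: float    # mean radiant temperature, deg C
    t_wbgt: float   # wet bulb globe temperature, deg C

trip = [
    Segment(5,  38, 55, 30),   # walk to bus stop in the sun
    Segment(8,  38, 62, 31),   # wait at an unshaded stop
    Segment(20, 27, 27, 22),   # air-conditioned bus ride
    Segment(4,  38, 48, 29),   # shaded walk to destination
]

def exposure(segments, attr):
    """Time-weighted mean temperature over the trip (deg C)."""
    total_min = sum(s.minutes for s in segments)
    return sum(getattr(s, attr) * s.minutes for s in segments) / total_min

for measure in ("t_air", "t_mrt", "t_wbgt"):
    print(measure, round(exposure(trip, measure), 1))
```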
Over the last decade, the life cycle assessment (LCA) methodology has significantly advanced to enable more realistic impact simulations and predictions with spatial and temporal considerations. Nevertheless, knowledge created through LCA efforts is still largely used as an information source, rather than as a process to engage stakeholders with the implementation of recommendations and to foster prompt and adaptive decision-making and changes towards sustainability.
Increasingly frequent, intense, and consequential disasters necessitate building greater resilience into infrastructure systems. Ensuring adaptive capacity in resilience planning is critical to enabling systems to reorganize or adapt to changing future conditions. This paper presents an approach to evaluate the long-term benefits of adaptive resilience investments in infrastructure systems under future uncertainty. The methodology builds on existing work on resilience assessment and combines long-timeframe evaluation based on net present value (NPV), approaches for quantifying different levels of uncertainty, and multi-criteria assessment methods. The application of the proposed methodology is demonstrated using three case studies, where investments have focused on different aspects of adaptive resilience enhancement in various infrastructure systems. The results from all three case studies demonstrate the increasing benefits of adaptive resilience strategies over extended time periods, driven by ongoing learning and the evolving nature of the resilience strategies. The approach can be used by public and private agencies in multiple infrastructure sectors such as transportation, power, water, and communication. As a flexible approach to evaluating the long-term benefits of building adaptive capacity, this methodology can be a useful tool for practitioners and policymakers.
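A hedged sketch of the long-timeframe comparison the abstract refers to is given below, using a toy Monte Carlo NPV of an adaptive investment versus a baseline; all cash flows, probabilities, and the discount rate are invented for illustration and are not the paper's case-study values.

```python
# Toy Monte Carlo NPV comparison of an adaptive resilience investment versus doing nothing.
# All monetary values, probabilities, and the discount rate are placeholders.
import numpy as np

rng = np.random.default_rng(42)
YEARS, RATE, N_RUNS = 50, 0.03, 10_000
CAPEX = 20e6                    # upfront adaptive investment
ANNUAL_OM = 0.5e6               # yearly operations and maintenance
P_DISASTER = 0.05               # annual probability of a damaging event
LOSS_BASE, LOSS_ADAPTED = 60e6, 15e6   # losses without / with the investment

def npv_of_run(adapted: bool) -> float:
    cash = -CAPEX if adapted else 0.0
    for year in range(1, YEARS + 1):
        flow = -(ANNUAL_OM if adapted else 0.0)
        if rng.random() < P_DISASTER:
            flow -= LOSS_ADAPTED if adapted else LOSS_BASE
        cash += flow / (1 + RATE) ** year
    return cash

npv_adapt = np.mean([npv_of_run(True) for _ in range(N_RUNS)])
npv_none = np.mean([npv_of_run(False) for _ in range(N_RUNS)])
print(f"Expected NPV (adaptive): {npv_adapt / 1e6:,.1f} M$")
print(f"Expected NPV (baseline): {npv_none / 1e6:,.1f} M$")
print(f"Expected net benefit:    {(npv_adapt - npv_none) / 1e6:,.1f} M$")
```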
If we were to try to define the molecules of infrastructure, how those molecules interact, and how their structures and functions result in the systems we rely on today, we’d have to unpack the myriad people, tasks, bureaucracies, and environments that define infrastructures. Infrastructures continue to function and reliably deliver services due to a plethora of mundane tasks performed every day by engineers, planners, inspectors, managers, and maintenance workers. This army of infrastructure workers goes largely unnoticed while navigating impossible constraints, bureaucracies, and increasingly challenging conditions (including underfunded systems increasingly in disrepair, changing environments, new players exerting influence in their systems, and hyperpolarized stakeholders). While attention and excitement are generally given to moonshot projects, these grind challenges define the success of infrastructures. Collectively these grind challenges define what infrastructures are, and affect what they are capable of, today and into the future.
Faced with the increasing severity and frequency of extreme climate events, coastal cities are seeking to invest in solutions for a resilient future. Investments in smart-city solutions have garnered both praise and criticism. In this paper, we develop a people-centered approach that re-conceptualizes smart cities as smart-city knowledge systems to understand, represent, and support cities’ social-ecological-technological systems (SETS). We operationalize this approach through a process of knowledge co-production that engages residents, data experts, and city stakeholders from multiple sectors in three US coastal cities, convening innovation spaces where they create a prototype for a visualization platform that integrates smart technologies with specific community knowledge, ideas, and networks. We present a prototype of a SETS visualization platform that resulted from this process. Our findings suggest that a people-centered approach can connect the knowledge assets of different communities with a robust and responsive smart system for supporting cross-sector and cross-city learning. We conclude with a new understanding of how smart-city innovation resides within the design process for co-produced smart-city knowledge systems and suggest future research directions to advance the field.
Heat and air pollution persist as major public health hazards in urban environments. Yet there are gaps in the quality of information about these hazards, as conditions tend to be characterized by a limited number of stationary sensors providing information at large geographic scales. Here we present the results of a study in Phoenix, Arizona, assessing the efficacy of low-cost mobile sensors mounted on public transportation vehicles for monitoring fine-scale on-road heat and PM10 concentrations. The goal of the study is to uncover the spatial and temporal variations of excessive heat and air pollution experienced by transit commuters, bicyclists, and pedestrians. The results show that the sensors on the buses complement the readings from stationary sensors and that low-cost mobile sensors are effective for gaining fine-grained heat and air quality readings at different locations, thereby creating new insights into pockets of heat and air pollution that should be targeted for intervention.
With projected temperature increases and extreme events due to climate change for many regions of the world, characterizing the impacts of these emerging hazards on water distribution systems is necessary to identify and prioritize adaptation strategies for ensuring reliability. To aid decision-making, new insights are needed into how the reliability of water distribution systems under climate-driven heat will change, and into the proactive maintenance strategies available to combat failures. To this end, we present Perses, a framework that joins a water distribution network hydraulic solver with reliability models of physical assets or components to estimate temperature-driven failures and resulting service outages over the long term. A theoretical case study is developed using temperature profiles from Phoenix, Arizona, a city with extreme temperatures and rapidly expanding infrastructure. By end of century under hotter futures, there are projected to be 1-5% more pump failures, 2-5% more PVC pipe failures, and 3-7% more iron pipe failures (RCP 4.5-8.5) than under a baseline historical temperature profile. Service outages, which constitute inadequate pressure for domestic and commercial use, are projected to increase by 16-26% above the baseline under maximum temperature conditions. The exceedance of baseline failures, when compounded across a large metro region, reveals potential challenges for budgeting, management, and maintenance. An exploration of the mitigation potential of adaptation strategies shows that expedited repair times are capable of offsetting the additional outages from climate change, but will come at a cost.
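The coupling idea can be sketched, very loosely, as scaling component failure rates with temperature and sampling failures over many assets; the baseline rates, sensitivity coefficients, and exponential form below are placeholder assumptions, not Perses's formulation.

```python
# Illustrative only: sample yearly component failures with a failure rate that
# rises with mean summer temperature. Baseline rates, sensitivity coefficients,
# and the exponential scaling are placeholder assumptions, not Perses's model.
import numpy as np

rng = np.random.default_rng(7)

COMPONENTS = {                 # baseline failures per asset per year (invented)
    "pump":      0.010,
    "pvc_pipe":  0.004,
    "iron_pipe": 0.006,
}
TEMP_SENSITIVITY = {"pump": 0.04, "pvc_pipe": 0.03, "iron_pipe": 0.05}  # per deg C
BASELINE_SUMMER_T = 34.0       # deg C, historical mean (assumed)
N_ASSETS = 10_000

def sampled_failures(mean_summer_t: float) -> dict:
    """Total sampled annual failures per component type under a given climate."""
    out = {}
    for comp, base_rate in COMPONENTS.items():
        rate = base_rate * np.exp(TEMP_SENSITIVITY[comp] * (mean_summer_t - BASELINE_SUMMER_T))
        out[comp] = int(rng.poisson(rate, N_ASSETS).sum())
    return out

for scenario_t in (34.0, 36.0, 38.0):   # baseline vs. warmer futures
    print(scenario_t, sampled_failures(scenario_t))
```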
As heat waves become longer and more common, communities have to prepare—and prepare to fail.
The San Francisco Bay Area is one of the most progressive transportation regions in the deployment of high-capacity transit and the use of policies to encourage active transportation. Yet, there remains a dearth of knowledge on the abundance and location of parking infrastructure. The extent and location of parking supply, including on-street and off-street spaces, are estimated for the nine-county Bay Area by creating a federated database that joins land use, transportation, parcel, building, and parking code layers to estimate the number and characteristics of parking spaces at the census block scale. This bottom-up parking space inventory results in an estimated 15 million parking spaces in the region: 8.6 million on-street, and 6.4 million off-street. Residential parking dominates the share of supply at 70%, followed by commercial at 9.4%. Space density is greatest in downtown San Francisco, Oakland, and San Jose—largely attributed to high-rise structures. On-street parking is dominant in the North Bay, commanding 78% of total parking in Napa, 75% in Solano, 68% in Sonoma, and 67% in Marin County. Parking area constitutes 7.9% of the total incorporated area. Notably, when compared to other southwest cities (Phoenix Metropolitan Area and Los Angeles County), the Bay Area parking supply appears better utilized considering spaces per person, per car, and per job. The density and quantity of parking spaces in the Bay Area are critical insights towards developing targeted policies that encourage active mobility and support affordable housing.
This publication advances the report by the authors for the San Jose State University Mineta Transportation Institute.
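A toy version of the bottom-up counting logic used in the parking inventory above might look like the following; the land-use parking ratios, curb-length rule, and field names are placeholders rather than the study's federated database schema.

```python
# Toy bottom-up parking inventory for a few census blocks. Parking ratios,
# curb-length rules, and field names are illustrative assumptions only.
import pandas as pd

parcels = pd.DataFrame({
    "block_id":   ["B1", "B1", "B2", "B3"],
    "land_use":   ["residential", "commercial", "residential", "industrial"],
    "units":      [40, 0, 12, 0],          # dwelling units (residential)
    "floor_sqft": [0, 25_000, 0, 60_000],  # building area (non-residential)
})
streets = pd.DataFrame({
    "block_id": ["B1", "B2", "B3"],
    "curb_ft":  [2_400, 1_100, 900],       # curb length eligible for parking
})

OFFSTREET_RATIOS = {          # spaces per unit or per 1,000 sqft (placeholders)
    "residential": ("units", 1.5),
    "commercial":  ("floor_sqft", 3.0 / 1000),
    "industrial":  ("floor_sqft", 1.0 / 1000),
}
STALL_LENGTH_FT = 22          # assumed on-street stall length

def offstreet_spaces(row):
    col, ratio = OFFSTREET_RATIOS[row["land_use"]]
    return row[col] * ratio

parcels["offstreet"] = parcels.apply(offstreet_spaces, axis=1)
inventory = (
    parcels.groupby("block_id")["offstreet"].sum().to_frame()
    .join(streets.set_index("block_id"))
)
inventory["onstreet"] = inventory["curb_ft"] // STALL_LENGTH_FT
inventory["total_spaces"] = inventory["offstreet"] + inventory["onstreet"]
print(inventory[["offstreet", "onstreet", "total_spaces"]])
```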
As rates of urbanization and climatic changes soar around the globe, urban decision-makers are pressed for innovative solutions that simultaneously address climate change and ensure equity and quality of life. Cities have turned to nature-based solutions to help address these challenges. Nature-based solutions are actions to protect, sustainably manage, and restore ecosystems to improve urban sustainability and resilience. Nature-based solutions through ecosystem services can yield multiple benefits for people and address challenges simultaneously. Here we provide an interdisciplinary systems framework applied to nature-based solutions that builds on decades of urban ecosystem services research that explicitly integrates the many social, ecological, and technological factors that affect ecosystem services. We highlight the dynamic interactions of social-ecological-technological systems (SETS) dimensions, including differential contributions of people and institutions, climate and ecosystems, and technologies and infrastructure to ecosystem services, and offer testable hypotheses to accelerate future research.
Efficiency (i.e., the optimized use of resources) and resilience principles (e.g., redundancy, diversity) are often at odds with one another. Despite being particularly acute within infrastructure systems, this tension appears to be under-explored. However, recent advances in the ecological and social sciences provide novel insights into navigating efficiency-resilience trade-offs. Overall, efficiency and resilience are both vital for a system’s longevity, and striking a dynamic balance between the two appears to be crucial. Striking this balance in infrastructure systems can be catalyzed by treating resilience as a public good, as well as by incorporating exploratory models and stakeholder co-production in the design and implementation process. Ultimately, the dynamic balance between efficiency and resilience can play a central role in our infrastructure’s ability to successfully operate in environments that increasingly fluctuate between stable and unstable conditions.
Infrastructure systems must change to match the growing complexity of the environments in which they operate. Yet the models of governance and the core technologies they rely on are structured around assumptions of relative long-term stability that appear increasingly insufficient and even problematic. As the environments in which infrastructure function become more complex, infrastructure systems must adapt to develop a repertoire of responses sufficient for the increasing variety of conditions and challenges. Whereas in the past infrastructure leadership and system design have emphasized organizational strategies that primarily focus on exploitation (e.g., efficiency and production, amenable to conditions of stability), in the future they must create space for exploration: the innovation of what the organization is and does. They will need to build the ability to maintain themselves in the face of growing complexity by creating the knowledge, processes, and technologies necessary to engage environmental complexity. We refer to this capacity as infrastructure autopoiesis. In doing so, infrastructure organizations should focus on four key tenets. First, a shift to sustained adaptation – perpetual change in the face of destabilizing conditions often marked by uncertainty – and away from rigid processes and technologies is necessary. Second, infrastructure organizations should restructure their bureaucracies to distribute more resources and decisionmaking capacity horizontally, across the organization’s hierarchy. Third, they should build capacity for horizon scanning, the process of systematically searching the environment for opportunities and threats. Fourth, they should emphasize loose-fit design, the flexibility of assets to pivot function as the environment changes. The inability to engage with complexity can be expected to result in a decoupling between what our infrastructure systems can do and what we need them to do, and autopoietic capabilities may help close this gap by creating the conditions for a sufficient repertoire to emerge.
The San Francisco Bay Area is one of the most progressive transportation regions in the deployment of high-capacity transit and the use of policies to encourage active transportation. Yet, as in many other metro regions, there remains a dearth of knowledge on the abundance and location of parking infrastructure supply. Parking remains one of the least catalogued infrastructures but is perhaps the most spatially dominant set of assets. The extent and location of parking supply, including on-street and off-street spaces, are estimated for the nine-county Bay Area. This parking space inventory is the most detailed assessment of parking infrastructure produced for the Bay Area and represents an important starting point for addressing the impacts of parking and crafting policy for future transportation goals. Key findings from the parking census include: (1) the nine-county Bay Area has 15 million parking spaces, enough parking to wrap around the planet 2.3 times; (2) almost half of the developable land in the region is devoted to storing vehicles; and (3) there are approximately 2.4 spaces for every car and approximately 1.9 parking spaces for every person in the Bay Area.
The work is documented as a San Jose State University Mineta Transportation Institute report (doi: 10.31979/mti.2022.2123) with the dataset (doi: 10.31979/mti.2022.2123.ds). The dataset is also available through ASU's Research Data Repository (doi: 10.48349/ASU/EV2GTF).
As the rehabilitation of infrastructure is outpaced by changes in the profile, frequency, and intensity of extreme weather events, infrastructure service disruptions and failures become increasingly likely. Safe-to-fail approaches for infrastructure planning and design improve the capacity of cities to adapt to uncertain climate futures by identifying social, ecological, and technological systems (SETS) capabilities to prepare for potential failure scenarios. In this paper, we argue for transforming infrastructure planning and governance to effectively utilize safe-to-fail approaches by navigating the opportunities and trade-offs of SETS resilience capabilities. From a technological vantage point, traditional infrastructure planning approaches account for social and ecological domains as external design conditions rather than embedded system characteristics. Safe-to-fail approaches directly challenge the isolation of the technological domain by necessitating a recognition that SETS domains are interconnected and interdependent in infrastructure systems; as such, risks and system capabilities for resilience must be managed cohesively.
Leadership is a critical component in approaching infrastructure resilience. Leadership, the formal and informal governance within an organization, drives an infrastructure system’s ability to respond to changing circumstances. Given the instability of the Anthropocene, infrastructure managers (the individuals who design, build, maintain, and decommission infrastructure) can no longer rely on assumptions of stationarity; instead, they must recognize that shifts are occurring at a faster rate than institutions and infrastructure organizations are adapting. The leadership and organizational change literature provides considerable insight into the ability of organizations to navigate uncertainty and complexity, and infrastructure organizations may be able to learn from this knowledge to avoid obsolescence. Therefore, this paper asks: what leadership capabilities do infrastructure organizations need to readily respond to stability and instability? An integrative leadership framework is proposed, exploring capabilities of collaboration, perception and exploration toward learning, and flexible informal and formal governance leveraged by leadership. These capabilities are driven by underlying tensions (e.g., climate change, emerging technologies) and managed through enabling leadership, a set of processes for pivoting between stability and instability. The framework is then applied to infrastructure organizations. A lack of market competition may make infrastructure organizations more open to collaboration and, therefore, learning. However, the need to provide specific services may cause risk aversion and an avoidance of failure, restricting flexibility and innovation. It is critical for infrastructure organizations to identify their strengths and weaknesses so they may develop an approach to change that keeps pace with their external environments.
Extreme heat events, induced by climate change, present a growing risk to transit passenger comfort and health. To reduce exposure, agencies may consider changes to schedules that reduce headways on heavily trafficked bus routes serving vulnerable populations. This paper develops a schedule optimization model to minimize heat exposure and applies it to local bus services in Phoenix, Arizona, using agent-based simulation to inform travel demand and rider characteristics. Rerouting as little as 10% of a fleet is found to reduce network-wide exposure by as much as 35%, when operating at maximum fleet capacity. Outcome improvements are notably characterized by diminishing returns, owing to skewed ridership and the inverse relationship between fleet size and passenger wait time. Access to spare vehicles can also ensure significant reductions in exposure, especially under the most extreme temperatures. Rerouting, therefore, presents a low-cost, adaptable resilience strategy to protect riders from extreme heat exposure.
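One simple way to think about the allocation problem described above is a greedy assignment of spare buses to the routes where an added bus most reduces expected waiting (and hence exposure); the ridership figures, wait-time model, and fleet size below are invented, and this greedy heuristic stands in for the paper's optimization model only as an illustration.

```python
# Illustrative greedy allocation of spare buses to routes to cut waiting exposure.
# Assumes riders arrive uniformly, so expected wait = headway / 2; all numbers invented.
import heapq

routes = {   # route: (hourly riders, buses currently assigned, cycle time in minutes)
    "R1": (900, 6, 60),
    "R2": (400, 4, 60),
    "R3": (150, 3, 60),
}
SPARE_BUSES = 4

def wait_exposure(riders, buses, cycle_min):
    """Total rider-minutes of waiting per hour (headway/2 per rider)."""
    headway = cycle_min / buses
    return riders * headway / 2

assigned = {r: v[1] for r, v in routes.items()}

def marginal_gain(r):
    """Reduction in waiting exposure from adding one bus to route r."""
    riders, cyc = routes[r][0], routes[r][2]
    buses = assigned[r]
    return wait_exposure(riders, buses, cyc) - wait_exposure(riders, buses + 1, cyc)

# Max-heap (negated gains) of the marginal benefit of one extra bus per route.
heap = [(-marginal_gain(r), r) for r in routes]
heapq.heapify(heap)
for _ in range(SPARE_BUSES):
    _, r = heapq.heappop(heap)
    assigned[r] += 1                                  # give the bus to the best route...
    heapq.heappush(heap, (-marginal_gain(r), r))      # ...then refresh its marginal gain

print(assigned)
```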
Water distribution networks (WDNs) are among the most critical infrastructures, providing water for potable consumption, industry, agriculture, and firefighting. Yet it is often the case that limited to no information is available about urban WDNs – their spatial layout, pipe sizes, water flow characteristics, and asset ages – due to a number of challenges including weak historical records, limited willingness to share data, and security concerns. This dearth of information limits researchers, particularly in resilience science, from understanding the criticality, adaptability, vulnerability, and interdependencies of water systems. To address this challenge, we develop a model and analysis tool entitled SyNF (Synthetic Infrastructure) for synthetic WDN generation. SyNF considers supply and demand information at neighborhood scales to estimate WDNs up to metro-region scales. SyNF uses a roadway network, modeled water demand, and the location of water sources as input to synthesize the topology of the pipe network, pipe diameters, service year of pipes, location of pumps, and power requirements for pumps, where hydraulic pressure and pipe size are maintained for fire flow. A case study of the Phoenix metro area is developed to show SyNF’s capabilities. We start with a single municipality (the City of Tempe) and scale the model to the Phoenix metro’s seven major cities. We validate SyNF’s accuracy and find an average dissimilarity of 6% in pipe size distribution between the original and synthesized networks. We discuss the value of SyNF in helping advance our understanding of the criticality, vulnerability, and resilience of water infrastructure and operations.
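An extremely simplified sketch of the generation idea (routing pipes along a street graph and sizing each pipe from the demand it must carry downstream) is shown below; the street graph, demands, design velocity, and diameter classes are assumptions, not SyNF's algorithm.

```python
# Toy synthetic water network: pipes follow a street graph rooted at the source,
# and each pipe is sized from the demand it must carry downstream.
# The graph, demands, velocity limit, and diameter classes are illustrative assumptions.
import math
import networkx as nx

streets = nx.Graph()
streets.add_edges_from([
    ("source", "A"), ("A", "B"), ("A", "C"), ("C", "D"), ("C", "E"),
])
demand_lps = {"A": 2.0, "B": 1.5, "C": 3.0, "D": 1.0, "E": 2.5}   # liters/second per node

# Route pipes along a breadth-first tree from the source over the street graph.
tree = nx.bfs_tree(streets, "source")

def downstream_demand(node):
    """Demand served at a node plus everything fed through it."""
    total = demand_lps.get(node, 0.0)
    return total + sum(downstream_demand(c) for c in tree.successors(node))

DIAMETERS_MM = [100, 150, 200, 250, 300, 400]
MAX_VELOCITY = 1.0   # m/s design velocity (assumed)

def size_pipe(flow_lps):
    """Smallest standard diameter keeping velocity under the design limit."""
    flow_m3s = flow_lps / 1000.0
    for d in DIAMETERS_MM:
        area = math.pi * (d / 1000.0) ** 2 / 4
        if flow_m3s / area <= MAX_VELOCITY:
            return d
    return DIAMETERS_MM[-1]

for u, v in tree.edges:
    flow = downstream_demand(v)
    print(f"pipe {u}->{v}: carries {flow:.1f} L/s, diameter {size_pipe(flow)} mm")
```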
Urban heat is a growing concern in cities as a consequence of persistent urbanization exacerbated by climate change. To help understand how city planning and the transportation sector can influence and mitigate heat in cities, this research quantifies contributions to urban heat from pavement infrastructure and vehicle travel. A case study of metropolitan Phoenix, Arizona is chosen for its rapid growth, hot climate, and high automobile dependence. A one-dimensional model based on fundamental heat transfer is developed and validated using remotely sensed land surface temperatures for scenarios of local weather and pavement design. Simulated sensible heat emissions from pavements are applied to a regional roadway and parking pavement inventory and combined with road-level vehicle travel densities to quantify spatiotemporal sensible heat flux magnitudes. In metro Phoenix, total sensible heat from pavements and vehicles comprises 67% from roadway pavements, 29% from parking pavements, and 3.9% from vehicles. Under typical Phoenix conditions, concrete and asphalt pavements emit an average of 15% and 37% more sensible heat than bare ground, respectively. This added sensible heat from pavement infrastructure peaks during summer afternoons, when heat emissions relative to the ground can increase to 26% for concrete and 46% for asphalt. These results indicate pavement infrastructure contributes significantly to Phoenix’s urban heat balance, and areas surrounding high-capacity vehicle corridors may be undesirable for outdoor travel or activities during summer rush hours. Future research quantifying urban heat fluxes should consider added heat from pavement infrastructure in addition to traditional anthropogenic sources.
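For intuition, a schematic one-dimensional explicit finite-difference model of pavement temperature with a simplified surface energy balance is sketched below; the material properties, weather forcing, and convection coefficient are placeholder values, not the validated model from the study.

```python
# Schematic 1-D explicit finite-difference model of pavement temperature.
# Surface forcing (solar, convection) and material properties are placeholders.
import numpy as np

# Asphalt-like properties (assumed)
K, RHO, CP = 1.2, 2300.0, 920.0          # W/m-K, kg/m3, J/kg-K
ALPHA = K / (RHO * CP)                   # thermal diffusivity, m2/s
DEPTH, NZ = 0.5, 51                      # 0.5 m slab, 51 nodes
DZ = DEPTH / (NZ - 1)
DT = 30.0                                # s; satisfies ALPHA*DT/DZ^2 < 0.5 (stable)
H_CONV = 15.0                            # W/m2-K convection coefficient (assumed)
ABSORPTIVITY = 0.9

T = np.full(NZ, 30.0)                    # initial temperature, deg C
hours = np.arange(0.0, 24.0, DT / 3600.0)

for t in hours:
    t_air = 30.0 + 10.0 * np.sin(2 * np.pi * (t - 9) / 24)     # diurnal air temperature
    solar = max(0.0, 900.0 * np.sin(np.pi * (t - 6) / 12))     # W/m2, roughly 6am-6pm
    T_new = T.copy()
    # Interior nodes: explicit conduction update.
    T_new[1:-1] = T[1:-1] + ALPHA * DT / DZ**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Surface node: absorbed solar + convection balanced against conduction into the slab.
    q_surface = ABSORPTIVITY * solar + H_CONV * (t_air - T[0])  # W/m2 into the surface
    T_new[0] = T[0] + DT / (RHO * CP * DZ) * (q_surface - K * (T[0] - T[1]) / DZ)
    T_new[-1] = T[-1]                    # deep boundary held constant
    T = T_new

print(f"Surface temperature at end of day: {T[0]:.1f} C")
```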
As climate change emerges as a major challenge for man-made systems in the coming century, there has been significant effort to understand how to position infrastructure to adapt and deliver services reliably. In particular, the climate is changing on timescales shorter than the expected lifetimes of critical infrastructure, resulting in conditions well beyond the intended design conditions of a stationary climate. This study assesses how well existing infrastructure design approaches—traditional fail-safe, armoring, low regret, safe-to-fail, and adaptive management—account for climate-related complexity and uncertainty through an application of the Cynefin and Deep Uncertainty frameworks. The results indicate that existing infrastructure design approaches have varying levels of validity for addressing climate change across spatial and temporal scales. The most common infrastructure design approaches address lower levels of complexity and uncertainty than climate change demands, indicating that the potential of approaches that address complexity and deep uncertainty has not been fully realized.
Wildfires have grown in number, size, and intensity in the American West over the last few decades, and forecasts predict that these worsening trends will continue. The prevalent approach so far to understanding infrastructure vulnerability to wildfires is spatial coincidence analysis, the overlaying of a fire hazard map onto infrastructure. But evidence mounts that post-fire debris flows pose a major hazard to infrastructure, particularly roadways, and are the result of many variables. Assessing the vulnerability of assets to post-fire flows requires consideration of the geologic, vegetative, and hydrologic conditions that contribute to an asset’s vulnerability. Toward a more comprehensive analysis of infrastructure vulnerability to wildfires and post-fire flows, a multi-domain model that considers environmental conditions, post-fire effects, and transportation asset use is developed. The model is developed around a case study of a fire-prone region in Arizona. The results show that there are approximately 1,700 roadway points of conflict of concern for debris flows in the case study region. Seventeen percent of watersheds have a greater than 20% chance of post-fire debris movements and flooding under a 10-year, 10-minute precipitation event. Additionally, there is a greater than 50% probability of post-fire debris flows and flooding where recent fires have occurred, validating the underlying model. In general, the model highlights the sensitivity of infrastructure vulnerability to environmental and technological variables, drawing attention to the need to manage the risk as a broader system.
The capacities of our infrastructure systems to respond to volatile, uncertain, and increasingly complex environments are increasingly recognized as vital for resilience. Pervasive across infrastructure literature and discourse are the concepts of centralized, decentralized, and distributed systems, and there appears to be growing interest in how these configurations support or hinder adaptive and transformative capacities towards resilience. Yet there does not appear to be a concerted effort to align how these concepts are used or what different configurations mean for infrastructure systems. This is problematic because how infrastructures are structured and governed directly affects their capabilities to respond to increasing complexity. We review framings of centralized, decentralized, and distributed configurations (referred to collectively as de/centralization) across infrastructure sectors, revealing incommensurate usage leading to polysemous framings. De/centralized networks are often characterized by proximity to resources, capacity of distribution, volume of product, and number of connections. De/centralization of governance within infrastructure sectors is characterized by the number of actors who hold decision-making power. Notably, governance structures are often overlooked in the infrastructure de/centralization literature. Next, we describe how de/centralization concepts are applied to emerging resilient infrastructure theory, identifying conditions under which they support resilience principles. While centralized systems are dominant in practice and decentralized systems are promoted in the resilience literature, all three configurations—centralized, decentralized, and distributed—were found to align with resilience capacities in various contexts of stability and instability. Going forward, we recommend a multi-dimensional framing of de/centralization through a network-governance perspective in which capabilities to shift between stability and instability are paramount and information is a critical mediator.
Flooding is the most common natural hazard, leading to property damage, injuries, and death. Despite the potential for major consequences, urban flooding remains difficult to forecast, largely due to a lack of data availability at fine spatial scales and associated predictive capabilities. Crowdsourcing of public webcams, social media, and citizen science represent potentially important data sources for obtaining fine-scale hydrological data, but they also raise novel challenges related to data reliability and consistency. We provide a review of the literature and an analysis of existing databases regarding the availability and quality of these unconventional sources, which then drives a discussion of their potential to support fine-grained urban flood modeling and prediction. Our review and analysis suggest that crowdsourced data are increasingly available in urban contexts and have considerable potential. Integration of crowdsourced data could help ameliorate quality and completeness issues in any one source. Yet substantial weaknesses and challenges remain to be addressed.
In many disciplines, the resilience concept has been applied to managing perturbations, challenges, or shocks in a system and to designing adaptive systems. In particular, resilient infrastructure systems have been recognized as an alternative to traditional infrastructure, in which systems are managed to be more reliable against unforeseen and unknown threats in urban areas. Perhaps owing to the malleable and multidisciplinary nature of the resilience concept, there is no clear-cut standard for measuring and characterizing infrastructure resilience, nor for how to implement the concept in practice when developing urban infrastructure systems. As a result, unavoidable subjective interpretation of the concept by practitioners and decision-makers occurs in the real world. We demonstrate these subjective perspectives on infrastructure resilience by asking practitioners working in governmental institutions within the metropolitan Phoenix area about their interpretations of resilience, using Q-methodology. We asked practitioners to prioritize 19 key strategies for infrastructure resilience found in the literature across three different decision contexts and recognized six discourses by analyzing the shared and discrete views of the practitioners. We conclude that, given the diverse perspectives on infrastructure resilience observed in this study, practitioners’ interpretation of resilience adds value to theoretical resilience concepts found in the literature by revealing why and how different resilience strategies are preferred and applied in practice.
We are still designing, managing, and governing transportation systems that came out of a bygone era. Our principles, technologies, and governing institutions, as well as the decisions we make, reflect modes of thinking rooted in transportation goals from the industrial age, when many of our now aging highways, railways, and ports were first developed.
New transportation systems must emphasize agility and flexibility, because today’s impossibilities may be tomorrow’s reality.
Humans are at the dawn of major shifts in the relationships among society, the environment, and technology. This transformation has profound implications for the design and management of the critical infrastructure that serves as the backbone for virtually every activity and service. Policymakers and the public have been largely able to ignore these systems, assuming that they’ll continue to function as they have in the past. This is no longer a reasonable assumption. It’s time to come to grips with the reality that the complexity of infrastructure is exploding, emerging and disruptive technologies are accelerating, history is no longer a reliable guide to the future—and education on these issues is insufficient. Infrastructure in the Anthropocene is a “timely and critical” (Chris Hendrickson, National Academy of Engineering) guide by two of the country’s leading scholars of sustainable engineering, adaptation, and innovation. This indispensable book provides “practical and implementable” (Emanuel Liban, American Society of Civil Engineers Committee on Sustainability Chair) insight into what modern infrastructure can and should do, and how it should function on a planet now dominated by humans.
Transparent methods of estimating CO2 emissions from transportation sources are necessary to evaluate mitigation strategies. This study proposes a framework to assess the regional impact of roadway designs on CO2 emissions. First, three roadway infrastructure configurations were designed to improve traffic flow at intersections and interchanges. Second, economic and environmental life-cycle cost assessments of constructing and maintaining the new infrastructure were developed. Then, the effects of infrastructure on regional vehicle CO2 emissions were modeled using a simulation-based dynamic traffic assignment model coupled with the U.S. Environmental Protection Agency's Motor Vehicle Emission Simulator (MOVES) model. The case study estimated that converting 72 stop signs to roundabouts within El Paso, TX, reduced daily vehicular CO2 emissions by more than 50 tonnes, paying back the CO2 from construction and maintenance within 2.5 to 2.9 years. The roundabout modifications' cost-effectiveness ranged from $30 to $130 per tonne of CO2 over a 30-year assessment.
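A back-of-the-envelope check of the reported payback is sketched below, taking the "more than 50 tonnes" of daily savings as exactly 50 t/day for illustration.

```python
# Back-of-the-envelope payback check using the abstract's headline figures.
# The "more than 50 tonnes" of daily savings is taken as exactly 50 t/day for illustration.
DAILY_SAVING_T = 50.0          # tonnes CO2 avoided per day (assumed lower bound)

annual_saving_t = DAILY_SAVING_T * 365
for payback_years in (2.5, 2.9):
    implied_embodied_t = annual_saving_t * payback_years
    print(f"A {payback_years}-year payback implies roughly "
          f"{implied_embodied_t:,.0f} t of construction and maintenance CO2")
```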
Pervasive and accelerating climatic, technological, social, economic, and institutional change dictate that the challenges of the future will likely be vastly different and more complex than they are today. As our infrastructure systems (and their surrounding environment) become increasingly complex and beyond the cognitive understanding of any group of individuals or institutions, artificial intelligence (AI) may offer critical cognitive insights to ensure that systems adapt, services continue to be provided, and needs continue to be met. This paper conceptually links AI to various tasks and leadership capabilities in order to critically examine potential roles that AI can play in the management and implementation of infrastructure systems under growing complexity and uncertainty. Ultimately, various AI techniques appear to be increasingly well-suited to make sense of and operate under both stable (predictable) and chaotic (unpredictable) conditions. The ability to dynamically and continuously shift between stable and chaotic conditions is critical for effectively navigating our complex world. Thus, moving forward, a key adaptation for engineers will be to place increasing emphasis on creating the structural, financial, and knowledge conditions for enabling this type of flexibility in our integrated human-AI-infrastructure systems. Ultimately, as AI systems continue to evolve and become further embedded in our infrastructure systems, we may be implicitly or explicitly releasing control to algorithms. The potential benefits of this arrangement may outweigh the drawbacks. However, it is important to have open and candid discussions about the potential implications of this shift and whether or not those implications are desirable.
Design storm criteria (i.e., the specific intensity and/or frequency to which infrastructure systems are designed to withstand) are a critical part of resilience efforts within urban and infrastructure systems. However, factors like climate change and increasing complexity within our urban systems call into question the viability of current approaches to and implementation of design storm criteria moving forward. This paper seeks to identify design practices and strategies that are well-suited for the increasingly complex and rapidly changing contexts in which our cities and infrastructure are operating. We posit that the advancement of a multi-scalar perspective on resilience will be increasingly necessary in response to the growing challenges our cities and infrastructure face. At the scale of single components/sub-systems, return periods (or similar criteria) will likely remain a necessary element of the design process. At the scale of the entire system(s), approaches like safe-to-fail, robust decision making, and enhanced sensing and simulation appear well suited for complementing existing approaches by more explicitly considering failure consequences in the design and management processes. Ultimately, this paper seeks to spur continual research and advancement of these topics in order to facilitate the evolution of the design storm process for an increasingly complex and non-stationary world.
Infrastructure are increasingly being recognized as too rigid to quickly adapt to a changing climate and a non-stationary future. This rigidness poses risks to infrastructure service delivery and public welfare. Adaptivity in infrastructure is critical for managing uncertainties to continue providing services, yet little is known about how infrastructure can be made more agile and flexible for improved adaptive capacity. A literature review identified approximately fifty examples of novel infrastructure and technologies which support adaptivity through one or more of ten theoretical characteristics of adaptive infrastructure. From these examples, several infrastructure forms and possible strategies for adaptivity emerged, including smart technologies, combined centralized/decentralized organizational structures, and renewable electricity generation. With institutional and cultural support, such novel structures and systems have the potential to transform infrastructure provision and management.
Infrastructure systems deliver basic and critical services. They are the pillars of civilization. In the twenty-first century, infrastructure will need to change to fit the needs of a new world. What shape will they take? What function will they provide? Who will they serve and why? In this book, forty experts from around the world share their reflections on infrastructure in 2100. The book is a series of science fiction short stories, essays, and poems. Climate change, sustainability, resilience, and technology are recurring themes in the reflections. Written in 2020, the book does not attempt to predict what infrastructure will look like in 2100. Its goal is not to make accurate descriptions of the future, but to provide a dialogue and visions of what we could hope for or fear. Only time will tell on which side of the balance we end up leaning.
Most cities have little to no idea how many parking spaces they have. Nevertheless, we know that most cities have a lot of parking, and that most of it is free. This free parking comes with significant economic, environmental, and social costs. A city full of parking is a city designed for cars. When cities are designed for cars, car use becomes necessary, which makes drivers call for more car-oriented design, even though such design leads to more driving and pollution, and creates landscapes that hinder walking, biking and transit use.
COVID-19 is a classic engineering problem that the United States has botched. Here’s how to reform the system in order to build in efficient resilience against both the current public health crisis and future ones.
The rapid progression of COVID-19 is revealing challenges for infrastructure as the institutions that manage and deliver critical and basic services are having to respond to changes in demand and new operating conditions.
Humanity’s social and economic development has been challenged by a range of adversities over the millennia that have caused widespread and unimaginable suffering. At the same time, these challenges have forced humans to evolve more wisely, overcoming adversity through creativity and leading to advancements in science and technology, medicine, ethics and legal systems, and socio-political systems. The dynamics of risks and opportunities caused by COVID-19, in the built, cyber, social and economic environments, present opportunities for deepening our understanding of resilient and sustainable development and infrastructure. This article reflects on five lessons that COVID-19 is teaching us about what it means to develop sustainably through the lens of transportation: (1) sustainable development planning and analytical frameworks must be comprehensive, for long-term sustainability; (2) multi-modal transportation is a superior vision for sustainable development than any one particular mode; (3) tele-activities are part of an effective infrastructure sustainability strategy; (4) economic capital is critically important to sustainable development even when it is not a critical existential threat, and, (5) effective social capital is essential in global disaster resistance and recovery, and can and must be leveraged between fast-moving and slow-moving disasters. Resilient and sustainable infrastructure will continue to be critical to addressing evolving natural and man-made hazards in the 21st Century.
Infrastructure must be resilient to both known and unknown disturbances. In the past, resilient infrastructure design efforts have tended to focus on principles of robustness and recovery against projected failures. This framing has developed independently from resilience principles in biological and ecological systems. As such, there are open questions as to whether the approaches of natural systems that lead to adaptation and transformation are relevant to engineered systems. To improve engineered system resilience, infrastructure managers may benefit from considering and applying a set of ‘Life’s Principles’ – design principles and patterns drawn from the field of biomimicry. Nature has long withstood disturbances within and beyond previous experience. Infrastructure resilience theory and practice are assessed against Life's Principles, identifying alignments, contradictions, contentions, and gaps. Resilient infrastructure theory, which emphasizes a need for flexible and agile infrastructure, aligns well with Life’s Principles, addressing each principle and most sub-principles (excluding ‘breakdown products into benign components’ and ‘do chemistry in water’). Meanwhile, resilient infrastructure practice only occasionally aligns with Life’s Principles and contradicts five out of six principles. As resilience theory advances, Life’s Principles offer support in broadening how infrastructure managers approach resilience, and by using biomimicry, infrastructure managers can be better equipped to deploy resilience for complexity and uncertainty.
Modern infrastructure have been a relatively stable force for decades, ensuring that basic and critical services are met, without significantly changing their core designs or management principles. At the dawn of the Anthropocene it appears that accelerating and increasingly uncertain conditions are likely to result in a paradigm shift for infrastructure, where the environments in which they operate are changing faster than the systems themselves. New approaches are needed in the education, management/governance, and physical structures that constitute infrastructure systems that can respond in pace. Principles of agility and flexibility appear well-suited to help guide how we transform the management and design of infrastructure. These principles are relevant across centralized and decentralized, and short or long design life configurations. In changing how we approach infrastructure, we will need to respond to the increasingly wicked challenges that infrastructure are often in the middle of. We will need to approach infrastructure as complex adaptive systems where knowledge generation and a commitment to an iterative process that constantly reassesses the landscape and the responsiveness of the institutions and technologies that make up infrastructure are primary. As such, infrastructure systems must become a Fifth Discipline, focused on learning about the rapidly changing environments and demands in which they operate, and on agility and flexibility in both governance and technology reconfiguration, so as to avoid dysfunction.
Transitioning infrastructure governance for accelerating, increasingly uncertain, and increasingly complex environments is paramount for ensuring that critical and basic services are met during times of stability and instability. Yet the bureaucratic structures that dominate infrastructure organizations and their capacity to respond to increasing complexity remain poorly understood. To change infrastructure governance, it is critical to understand current conditions, the barriers to change, and the strategies needed to shift priorities and leadership strategy. The emergence of modern infrastructure bureaucratic and organizational structure is first explored. The need to rethink infrastructure as knowledge enterprises capable of making sense of changing conditions, and not simply as basic service providers, is discussed. Next, transformation of infrastructure governance is presented as both a challenge of organizational change as identity and power, and leadership capacity to shift between stable and unstable conditions. Infrastructure bureaucracies should create capabilities to shift between periods of stability and instability, emphasizing flexibility where ad hoc teams are given power to make sense of changing conditions and steer the organization appropriately. Additionally, several critical factors must be addressed within organizational power structures, identities, and processes to facilitate change. Allowing infrastructure governance to persist in its current form is likely to become increasingly problematic, and may result in an increasing inability to maintain relevance.
The COVID-19 pandemic has shocked infrastructure systems in unanticipated ways. Seemingly in the course of weeks, our demands for many basic and critical services have radically shifted. With expected long-term effects (i.e., years), COVID-19 is going to have profound impacts on every facet of infrastructure systems, and will shock these systems very differently than the hazards that we often focus on, such as extreme events, disrepair, and terrorist attacks. At the beginning of this pandemic, infrastructure managers are scrambling to respond to changes in demand, and to understand what the long-term effects are for how they operate and maintain their systems. We contend that COVID-19 is revealing several important limitations in how we approach and manage our infrastructure that must be acknowledged and addressed as the pandemic persists, and in a future increasingly characterized by accelerating and increasingly uncertain conditions. These limitations are i) how we prepare for concurrent hazards, ii) how we frame criticality based on traditional infrastructure sectors rather than human capabilities, iii) how we emphasize efficiency at a cost to resilience, and iv) how leadership is largely focused on stable conditions. Each of these challenges represents a call for major rethinking of how we approach infrastructure, and COVID-19 presents a window of opportunity for change.
The loss of infrastructure services under climate change hazards emerges from complex interactions between the social, environmental, and technological system variables which drive the behavior of infrastructure systems. The complexity of interactions causes failures to cascade in unpredictable ways, often between different infrastructure systems. A common approach to managing this unpredictability is to attempt to characterize the cause-and-effect relationships of infrastructure interdependencies, whether related to resource flows, geographic proximity, logical connections, or the common use of cyber infrastructure. We posit that though a reductive approach toward characterization of interdependencies produces useful insights, it is an insufficient strategy by itself due to the complexity and unpredictability involved in the occurrence and magnitude of cascades of failure across systems. We present historical case studies which demonstrate that cascades from interdependencies display essential tenets of complexity—namely non-linearities, path dependence, and emergence. The Cynefin decision-making framework suggests that management of systems in the complex domain should include strategies such as Decision Making Under Uncertainty and Safe-to-Fail, which address uncertainty by probing, testing, collecting and analyzing data, and lastly deploying solutions with a commitment to reassessing the systems as conditions change. We therefore recommend that, in order to mitigate surprise from cascades of failure across systems under climate hazards, infrastructure managers supplement their planning efforts with these types of strategies.
Hurricane evacuation has long been a difficult problem perplexing local governments. Hurricane Irma (2017) prompted the most extensive evacuation in Florida’s history, involving about 6.5 million people under a mandatory evacuation order and 4 million evacuation vehicles. Traffic jams emerged in mid-Florida and rapidly spread to involve the entire state. To understand the hurricane evacuation process, the spatial and temporal evolution of the traffic flow is a critical piece of information, but it is usually not fully observed. Based on game theory, this paper employs the available traffic observations from main highways to reconstruct the traffic flow on all highways in Florida during Irma. Our reconstructed data show that the evacuation rates for five representative Florida cities -- Key West, Miami, Tampa, Orlando, and Jacksonville -- were about 90.1%, 38.7%, 52.6%, 22.1%, and 7%, respectively. The peak evacuation traffic flow from Tampa and Miami arrived in the Orlando region at almost the same time, triggering the catastrophic congestion through the entire state. Also, the evacuation for Hurricane Irma was greater than predicted by an evacuation demand model developed from previous events and survey data. The detailed evacuation traffic flow reanalysis accomplished in this article lays a foundation for evacuation demand studies as well as for developing evacuation management policies.
Designing infrastructure for a changing climate remains a major challenge for engineers. In popular discourse a narrative has emerged that infrastructures are likely underdesigned for the future. Weather-related hazards are directly embedded in the infrastructure design process. Yet the codes and standards that engineers use for this risk analysis have been changing for decades, sometimes increasing and other times decreasing design values. Further complicating the issue is that climate projections show increasing or decreasing intensities depending on the hazard and region. Thus, it is not clear that infrastructure is universally underdesigned. Here, analyses are developed at both regional and national scales using precipitation and roadway drainage systems to answer this question. First, it is shown that modeling uncertainty can pose challenges for using future projections to update region-specific standards. Second, the results show that depending on the historical design conditions and the direction of projections, roadway drainage infrastructure may be designed appropriately in some regions while in others it is possibly underdesigned. Given these uncertainties, the authors believe that there is a need for alternative design paradigms, and these needs are discussed.
As climate change research and efforts grow, and resilience takes hold as a way to make sense of the growing complexity of the Anthropocene, there is an opportunity for the industrial ecology community to support adaptation and transformation of human systems. To do so, industrial ecology will need to embrace the growing complexity of human systems and their relationships with natural systems by creating innovative approaches that leverage the existing tools and frameworks that have dominated since the field’s creation. Towards catalyzing this response, this article describes the new challenge that is climate adaptation in a complex world, and synthesizes articles from a special issue focused on climate change adaptation. The special issue includes perspective and application articles, which together lay out new theory and techniques for industrial ecology to address adaptation challenges. The special issue articles cover a diverse array of topics relevant to climate change adaptation, including financing, migration, material use, and developing region considerations. Taken together, the special issue is positioned to support first steps for the industrial ecology community to transition its tools and thinking toward the complex challenge that is climate change adaptation and its concurrent considerations.
Characterizing infrastructure vulnerability to climate change is essential given the long asset lives, criticality of services delivered, and high costs of upgrading and maintaining these systems. Reconciling uncertainty from past infrastructure design decisions with future uncertainty of climate change will help prioritize limited resources to high risk assets.
The accelerating integration of cyber technologies into physical infrastructure systems has radical implications for the operation, management, and vulnerabilities of our critical systems. Viewing the embedding of smart technologies in infrastructure as simply an interconnectedness of systems is insufficient. The acceleration of the coupling may represent a profound shift in the relationships between humans and their services. It lays the groundwork for explosions of artificial intelligence, new capacities for services, radical changes in efficiency, and new vulnerabilities. Yet we continue to approach infrastructure design and management with principles that don’t reflect this new paradigm. To frame the challenges associated with modernizing infrastructure for accelerating cyberphysical relationships, we describe the new capabilities and vulnerabilities, and the changes in approaches and thinking that are needed for the emerging complexity. We conclude by describing how infrastructure education and training will need to fundamentally shift from a focus on managing complicated physical systems to working within complex cyberphysical systems that are likely to be governed by software.
This article is an announcement and overview of a special collection on infrastructure resilience to climate change in ASCE's Journal of Infrastructure Systems.
In many US cities, indoor exposure to heat continues to be the underlying cause of a considerable fraction (up to 80% during extreme events) of heat-related mortality and morbidity, even in locations where most citizens have air conditioning (AC). Nevertheless, the existing literature on indoor exposure to heat often regards AC as a binary variable and assumes that its presence inevitably results in a safe thermal environment. This is also reflected in heat vulnerability assessments that assign a binary attribute to AC. In this study, we used thermal simulation of buildings to investigate overheating in residential buildings in three US cities (Houston, Phoenix, and Los Angeles) and focused on scenarios where an AC system is present yet not fully functional. Moreover, we identified the role of key building characteristics and investigated the sensitivity of the indoor environment to ambient temperature. Our results show that energy poverty and/or faulty systems can expose a considerable fraction of AC-owning elderly in Phoenix and Houston to excess heat for more than 50% of the summer. This highlights the need to reevaluate AC as the primary protective factor against heat and introduces several implications that need to be considered in heat vulnerability assessments.
A common complaint against changing parking requirements is that parking is critical for businesses to survive. Such statements are generally taken as fact by planners and local officials, yet there is little empirical work in support of this claim. This research examines how online business reviews reflect customer sentiment toward parking, and how this sentiment is associated with the supply of parking. The Phoenix, Arizona region is used for this analysis. The parking supply at the parcel level is combined with data from user-generated Yelp business reviews to assess satisfaction or frustration with parking at different types of businesses in commercial districts across the region. Results suggest that parking is mentioned in about 5% of overall reviews, and when mentioned in reviews it is most often as a negative characteristic of the establishment. Reviews that mention parking also give significantly lower ratings to businesses. The analysis shows that parking sentiment may be associated in some cases with parking supply, e.g., districts with more parking spaces per business tend to have more positive parking sentiment. Additionally, in areas with shared parking facilities, parking was generally viewed more positively or mentioned less frequently. These findings suggest that parking supply is part of a customer's overall perception of a business, though not a major component, and that shared parking facilities are not associated with negative reviews. Implications for policy are that shared parking can be part of an overall package of parking reforms that satisfy businesses and customers alike.
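The two descriptive statistics discussed here, the share of reviews that mention parking and the rating gap for reviews that do, can be computed with a simple text match. The sketch below is illustrative only; the column names and toy reviews are assumptions, not the study's Yelp data pipeline.

```python
# Illustrative sketch (not the study's pipeline): share of reviews mentioning
# parking, and mean star rating with vs. without a parking mention.
import pandas as pd

reviews = pd.DataFrame({
    "business_id": ["a", "a", "b", "b", "c"],
    "stars":       [4,   2,   5,   3,   1],
    "text": [
        "Great tacos, quick service.",
        "Good food but parking is a nightmare.",
        "Friendly staff and easy parking out front.",
        "Average coffee.",
        "Couldn't find parking anywhere, gave up.",
    ],
})

reviews["mentions_parking"] = reviews["text"].str.contains("parking", case=False)

share = reviews["mentions_parking"].mean()
rating_by_mention = reviews.groupby("mentions_parking")["stars"].mean()

print(f"Share of reviews mentioning parking: {share:.1%}")
print(rating_by_mention)  # mean stars for reviews with vs. without a parking mention
```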
Changing complexity in the increasingly integrated human, natural, and built systems within which our infrastructures are designed and operated makes it necessary to examine how the role of engineering is changing and the new competencies for satisficing it requires. Several long-term trends appear to be shifting our infrastructures further away from the complicated domain, where optimization and efficiency were the core approaches, to the domain of complexity, where rapidly changing environments and fragmentation of goals require fundamentally new approaches. While complexity in infrastructure has always existed in some form, making infrastructures agile and flexible for the Anthropocene will require us to acknowledge and work with the fact that infrastructure change now appears to be a wicked and complex process. Wicked complexity is the result of three competing forces that are inimical to rapid and sustained change of infrastructures in a future marked by acceleration and uncertainty: wicked problems, technical complexity including lock-in, and social complexity. The combination of these factors raises serious questions about whether rapidly changing demands, technologies, and perturbations (such as climate change or cyber attacks) will affect our infrastructure’s capacity to provide services. What infrastructure managers need to do today is very different than in the past. Greater presence and polarization of viewpoints is becoming more common, where solutions are dictated not by technical performance measures but instead by what is “acceptable enough” to all parties. Adaptive management practices and associated competencies that have proven successful in managing complex socio-ecological systems may provide some guidance for how to manage infrastructure change. These competencies are i) promoting a shared understanding of what infrastructures can do, ii) managing infrastructures as systems with changing demands, iii) emphasizing experimentation over conventional approaches, and, iv) restructuring education and training for a complexity mindset that emphasizes what can be over what is, and relies on satisficing, not optimization.
Motivated by the need for cities to prepare for and adapt to climate change, we advance the paradigm of safe-to-fail by focusing on the decision dilemmas and the consideration of infrastructure failure consequences in developing infrastructure. Infrastructure are largely designed as fail-safe, i.e., they are not intended to fail, and when failure happens the consequences are severe. Safe-to-fail has been recently presented as the antithesis of fail-safe, without any specific guidance of what the paradigm is or how to apply it. There is an emerging need for stakeholders, including policy makers, planners, engineers, utilities, and communities to understand infrastructure failures, bring their knowledge into the infrastructure development process, and help adapt cities to unpredictable and changing climate risks. We frame safe-to-fail as an infrastructure development paradigm that internalizes the consequences of infrastructure failure in the development process. This framing of safe-to-fail further reveals an emerging “infrastructure trolley problem” where the adaptive capacity of some regions is improved at the expense of others. We demonstrate practical dilemmas in developing infrastructure under non-stationary climate and guide managing trade-offs in the prioritization of different consequences of infrastructure failure.
Anthropogenic climate change poses risks to transport infrastructure that include disrupted operations, reduced lifespan and increased reconstruction and maintenance costs. Efforts to decrease the vulnerability of transport networks have been largely limited to understanding projected risks through governance and administrative efforts. Where physical adaptation measures have been implemented, these have typically aligned with a traditional ‘engineering resilience’ approach of increasing the strength and rigidity of assets to withstand the impacts of climate change and maintain a stable operating state. Such systems have limited agility and are susceptible to failure from ‘surprise events’. Addressing these limitations, this paper considers an alternate approach to resilience, inspired by natural ecosystems that sense conditions in real-time, embrace multi-functionality and evolve in response to changing environmental conditions. Such systems embrace and thrive on unpredictability and instability. This paper synthesises key literature in climate adaptation and socio-ecological resilience theory to propose a shift in paradigm for transport infrastructure design, construction and operation, towards engineered systems that can transform, evolve and internally manage vulnerability. The authors discuss the opportunity for biomimicry (innovation inspired by nature) as an enabling discipline for supporting resilient and regenerative infrastructure, introducing three potential tools and frameworks. The authors conclude by emphasising the importance of leveraging socio-ecological resilience theory while building on the achievements in engineering resilience over the past century. These findings have immediate practical applications in redefining resilience approaches for new transport infrastructure projects and transport infrastructure renewal.
For centuries man-made infrastructure has been viewed as separate from natural systems. Yet in the past few centuries, as the scale and scope of human activities have dramatically increased, there is accumulating evidence that natural systems are becoming increasingly, and in some cases entirely, managed by humans. The dichotomy between infrastructure and the environment is narrowing, and natural systems are increasingly becoming human design spaces. This is already apparent with the management of hydrologic systems for urban water supply, wildlife, agriculture, forests and even the atmosphere, and we can expect management of the environment to become more so as human activities grow. Yet our infrastructure largely remain obdurate. They are designed to last for long periods even as changes in the environment and technology accelerate. As such, our current infrastructure paradigms fail at the level of the complex, integrated systems and behaviors that characterize the anthropogenic Earth. Infrastructure in the future will need to be designed for adaptive capacity and the complexities associated with techno-environmental systems.
Hot, semi-arid cities in the American Southwest are on the front line of stressed water resources, urban heat islands, and population growth, and are projected to be increasingly burdened by anthropogenic climate change. Some have portrayed these challenges as insurmountable. We propose an alternative hypothesis: southwestern cities are testbeds for developing adaptation and mitigation strategies that cities with less extreme climates may need before the turn of the next century. Here we highlight some initiatives, plans, and needs of local governments, universities, and businesses to inform a more complete narrative of how southwestern cities are addressing livability and sustainability challenges.
As human enterprises accelerate through the twenty-first century, the infrastructures that support all kinds of activities, and are the backbone of development, are becoming ever more tightly linked to our progress. The term infrastructure is often used loosely and can have different meanings, but in general refers to the physical structures and corresponding institutional arrangements that enable human activity. More broadly, they are the designed and built set of technological systems that mediate between humans, their communities, and their broader environment, enabling human capabilities. In policy and practice, infrastructures are often discussed in terms of the physical assets that provide and enable services: power generation facilities, water treatment plants, roads, schools, hospitals, among others. The significance of reliable infrastructures has become apparent. Investments in infrastructures feed through to economic growth, and access to infrastructure services is positively related to a number of well-being measures. Developing countries often look to countries with mature infrastructures as models of how they should be deploying infrastructure into the future. But hard questions remain as to the impacts of infrastructure investments on progress.
This article is in response to: Scott Thacker, Daniel Adshead, Marianne Fay, Stéphane Hallegatte, Mark Harvey, Hendrik Meller, Nicholas O’Regan, Julie Rozenberg, Graham Watkins and Jim Hall, 2019, "Infrastructure for Sustainable Development", Nature Sustainability, 2, pp. 324-331, doi: 10.1038/s41893-019-0256-8.
There is little knowledge of how much parking infrastructure exists in cities despite mounting evidence that abundant and underpriced parking creates economic, environmental, and social problems. Urban parking requirements are very precise and routinely enforced despite the fact that most cities have little to no knowledge about their own parking supply. To further explore these issues, a parking inventory for metropolitan Phoenix, Arizona, USA is developed by cross-referencing geospatial cadastral and roadway data with minimum parking requirements. Metropolitan Phoenix is chosen because it is relatively young, rapidly growing, highly sprawled, and car dependent. Historical growth of parking is also estimated by linking year of property development to required off-street and nearby on-street parking spaces. As of 2017, we estimate that there are 12.2 million parking spaces in the metropolitan region with 4.04 million inhabitants, 2.81 million registered personal vehicles, and 1.84 million jobs. Growth of parking in metro Phoenix has also been significant; since 1960, 10.9 million spaces have been added to the region compared to a population growth of 3.41 million, vehicle fleet growth of 2.63 million, and employment growth of 1.56 million. Since the 2008 recession, parking growth in metro Phoenix has significantly slowed, but continued urban growth combined with substantial minimum parking requirements may promote more parking infrastructure than is needed. Planners and policy makers should value quantifying the growth and supply of parking in urban areas and consider reforming parking standards to promote consistent and unambiguous pathways to sustainable urban growth.
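The core inventory logic, applying a minimum-parking-requirement schedule to parcel-level land-use records, can be sketched as follows. The land-use codes, requirement rates, and parcel values below are illustrative assumptions, not the Phoenix data or ordinance values used in the study.

```python
# Hedged sketch of the parking inventory logic: estimate required off-street
# spaces by applying assumed minimum-parking requirements to parcel records.
import pandas as pd

parcels = pd.DataFrame({
    "parcel_id": [101, 102, 103],
    "land_use":  ["retail", "office", "multifamily"],
    "floor_area_sqft": [20_000, 50_000, None],
    "dwelling_units":  [None, None, 120],
})

# Assumed requirement schedule: spaces per 1,000 sq ft, or spaces per dwelling unit.
req_per_ksqft = {"retail": 4.0, "office": 3.0}
req_per_unit = {"multifamily": 1.5}

def required_spaces(row):
    if row["land_use"] in req_per_ksqft:
        return row["floor_area_sqft"] / 1_000 * req_per_ksqft[row["land_use"]]
    return row["dwelling_units"] * req_per_unit[row["land_use"]]

parcels["off_street_spaces"] = parcels.apply(required_spaces, axis=1)
print(parcels[["parcel_id", "land_use", "off_street_spaces"]])
print("Estimated off-street total:", parcels["off_street_spaces"].sum())
```

A full inventory would also add nearby on-street spaces from roadway geometry and tie each parcel's year of development to the requirement in force at that time.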
Heat waves have posed serious challenges to electricity infrastructure, including a major blackout in Southern California in 2011 and emergency curtailment in 2014. Understanding how future temperature change might impact electric power systems is critically important to ensure future operational reliability. Using existing spatial projections of peak hour electricity demand for Los Angeles County (LAC)—with consideration for rising air temperatures under RCP 4.5 and RCP 8.5 at 2 square km grid cell resolution, two population growth scenarios, new residential and commercial buildings, higher air conditioner (AC) penetration, and improved AC efficiency—we estimated vulnerabilities in LAC’s electricity infrastructure to 2060. Results show that generators, substations, and transmission lines, except those near Santa Monica Beach, could lose 2–20% of safe operating capacity due to air temperatures above 40°C (104°F). We further allocated spatial forecasts of peak demand to substations—using a Voronoi polygon method and the components’ de-rated capacities during heat waves—and identified where and by how much substations could exceed thermal limits and be automatically tripped by protection gear. Based on recent historical load factors for substations in the Southern California Edison service territory, an additional 848–6,724 MW (4–32%) of delivery system capacity will be needed by 2060 to maintain reliable operations, not including the Los Angeles Department of Water and Power (~1/3 of the county). Some of that system capacity may be met more cost-effectively by investment in distributed energy resources than in new central generation plants, substations, and/or transmission lines. If increases in peak load cannot be mitigated and/or additional infrastructure capacity cannot be added in the corresponding locations, then LAC’s electricity infrastructure will be vulnerable to outages and cascading failures during heat waves.
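A simplified sketch of the allocation step is shown below: forecast grid-cell peak demand is assigned to its nearest substation (equivalent to Voronoi-cell membership for point locations) and substations whose derated heat-wave capacity would be exceeded are flagged. The coordinates, demands, and capacities are made-up illustrative values, not the LAC data.

```python
# Sketch of Voronoi-style allocation of grid-cell peak demand to substations,
# flagging substations whose derated (heat-wave) capacity would be exceeded.
import numpy as np
from scipy.spatial import cKDTree

substations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])   # x, y (km), assumed
derated_capacity_mw = np.array([120.0, 90.0, 110.0])             # assumed capacity during a heat wave

cells = np.array([[1, 1], [9, 1], [5, 7], [6, 6], [2, 2]])       # demand grid-cell centroids
cell_peak_mw = np.array([60.0, 70.0, 80.0, 50.0, 40.0])          # assumed forecast peak demand

# Nearest-substation assignment (same partition as the Voronoi diagram of substations)
_, nearest = cKDTree(substations).query(cells)
allocated = np.bincount(nearest, weights=cell_peak_mw, minlength=len(substations))

for i, (load, cap) in enumerate(zip(allocated, derated_capacity_mw)):
    status = "EXCEEDS derated capacity" if load > cap else "ok"
    print(f"Substation {i}: allocated {load:.0f} MW vs {cap:.0f} MW -> {status}")
```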
The long-term reliability and functioning of the transportation system will increasingly need to consider and plan for climate change and extreme weather events. Transportation systems have largely been designed and operated for historical climate conditions that are now frequently exceeded. Emerging knowledge of how to plan for climate change largely embraces risk-based thinking favoring more robust infrastructure designs. However, there remain questions about whether this approach is sufficient given the uncertainty and non-stationarity of the climate, and many other driving factors affecting transportation systems (e.g., funding, rapid technological change, population and utilization shifts, etc.). This paper examines existing research and knowledge related to the vulnerability of the transportation system to climate change and extreme weather events and finds that there are both direct and indirect “pathways of disruption.” Direct pathways of disruption consist of both abrupt impacts to physical infrastructure and impacts on non-physical factors such as human health, behavior, and decision making. Similarly, indirect pathways of disruption result from interconnectedness with other critical infrastructure and social systems. Currently, the direct pathways appear to receive considerably more focus and assessment than the indirect pathways, and the predominant approach for addressing these pathways of disruption emphasizes strengthening and armoring infrastructure (robustness) guided by risk analysis. However, our analysis reveals that indirect pathways of disruption can have meaningful impacts, while also being less amenable to risk/robustness-based approaches. As a result, we posit that concepts like flexibility and agility appear to be well suited to complement the status quo of robustness by addressing the indirect and non-physical pathways of disruption that often prove challenging - thereby improving the resilience of transportation systems.
Los Angeles County (LAC) is a large urbanized region with 9.7 million residents (as of 2010) and aging infrastructure. Population forecasts indicate that LAC will become home to an additional 1.2–3.1 million residents through 2060. Additionally, climate forecasts based upon representative concentration pathway (RCP) scenarios 4.5 and 8.5 indicate that average air temperatures will increase by 1–4°C (2–7°F) in the region. Both of these factors are expected to result in higher summertime peak electricity demand due to growth in the number of buildings, the percentage of installed air conditioners (ACs), and the additional cooling load on those air conditioners. In order to understand potential power reliability issues, and support infrastructure planning efforts, a long-term peak demand forecast was developed using hourly residential and commercial (R&C) building energy models. Peak hour electricity demand was estimated to increase from 9.5–12.8 GW for R&C sectors, to 13.0–17.3 GW (2–36%) and 14.7–19.2 GW (16–51%) by 2060 for the population forecasts from the California Department of Finance and the Southern California Association of Governments, respectively. While marginal increases in ambient air temperature due to climate change accounted for only 4–8% of future increases in peak demand, differences in annual maximum temperatures within the 20-year periods affected results by 40–66%, indicating a high sensitivity to heat waves. Population growth of at least 1 million people is anticipated to occur mostly in the northern cities of Palmdale, Lancaster, and Santa Clarita, bringing an additional 0.4–1 GW of peak demand in those regions. Building and AC efficiency are anticipated to improve as national and state efficiency standards increase, and as older, less efficient units are replaced; this could offset some of the projected increases in peak demand. Additionally, development of shared wall, multi-family dwelling units could enable population growth of up to 3 million people without increasing peak demand.
Continued growth in the American Southwest depends on the reliable delivery of services by critical infrastructure systems, including water, power, and transportation. As these systems age, they are increasingly vulnerable to extreme heat events that both increase infrastructure demands and reveal complex interdependencies that amplify stressors. While the traditional analytic approach to preparing for such hazards is risk analysis, the experience of Hurricane Katrina provides a warning of the limitations of risk-based approaches for confronting complexity, and the potential scale and impact that can result from cascading failures under extreme stress. By contrast, this research is the first to apply resilience theory to understanding complex infrastructure interdependencies during an extreme heat event in Phoenix, AZ and the role of sensing, anticipating, adapting, and learning (SAAL) for mitigating catastrophe.
As technologies rapidly progress, there is growing evidence that our civil infrastructure do not have the capacity to adaptively and reliably deliver services in the face of rapid changes in demand, conditions of service, and environmental conditions. Infrastructure are facing multiple challenges including inflexible physical assets, unstable and insufficient funding, maturation, utilization, increasing interdependencies, climate change, social and environmental awareness, changes in coupled technology systems, lack of transdisciplinary expertise, geopolitical security, and wicked complexity. These challenges are interrelated and several produce non-stationary effects. Successful infrastructure in the twenty-first century will need to be flexible and agile. Drawing from other industries, we provide recommendations for competencies to realize flexibility and agility: roadmapping, focus on software over hardware, resilience-based thinking, compatibility, connectivity, and modularity of components, organic and change-oriented management, and transdisciplinary education. To begin, we will need to understand how non-technical and technical forces interact to lock in infrastructure and create path dependencies.
Traditional infrastructure adaptation to extreme weather events (and now climate change) has typically been techno-centric and heavily grounded in robustness – the capacity to prevent or minimize disruptions via a risk-based approach that emphasizes control, armoring, and strengthening (e.g., raising the height of levees). However, climate and non-climate challenges facing infrastructure are not purely technological. Ecological and social systems also warrant consideration to manage issues of overconfidence, inflexibility, interdependence, and resource utilization - among others. As a result, techno-centric adaptation strategies can result in unwanted tradeoffs, unintended consequences, and under-addressed vulnerabilities. Techno-centric strategies that “lock-in” today’s infrastructure systems to vulnerable future design, management, and regulatory practices may be particularly problematic by exacerbating these ecological and social issues rather than ameliorating them. Given these challenges, we develop a conceptual model and infrastructure adaptation case studies to argue the following: 1) infrastructure systems are not simply technological, and should be understood as complex and interconnected social, ecological, and technological systems (SETS); 2) infrastructure challenges, like lock-in, stem from SETS interactions that are often overlooked and underappreciated; 3) framing infrastructure with a “SETS lens” can help identify and prevent maladaptive issues like lock-in; and 4) a SETS lens can also highlight effective infrastructure adaptation strategies that may not traditionally be considered. Ultimately, we find that treating infrastructure as SETS shows promise for increasing the adaptive capacity of infrastructure systems by highlighting how lock-in and vulnerabilities evolve, and how multidisciplinary strategies can be deployed to address these challenges by broadening the options for adaptation.
Environmental heat is a growing public health concern in cities. Urbanization and global climate change threaten to exacerbate heat as an already significant environmental cause of human morbidity and mortality. Despite increasing risk, very little is known regarding determinants of outdoor urban heat exposure. To provide additional evidence for building community and national-scale resilience to extreme heat, we assess how US outdoor urban heat exposure varies by city, demography, and activity. We estimate outdoor urban heat exposure by pairing individual-level data from the American Time Use Survey (2004–2015) with corresponding meteorological data for 50 of the largest metropolitan statistical areas in the US. We also assess the intersection of activity intensity and heat exposure by pairing metabolic intensities with individual-level time-use data. We model an empirical relationship between demographic indicators and daily heat exposure with controls for spatiotemporal factors. We find higher outdoor heat exposure among the elderly and low-income individuals, and lower outdoor heat exposure in females, young adults, and those identifying as Black race. Traveling, lawn and garden care, and recreation are the most common outdoor activities to contribute to heat exposure. We also find individuals in cities with the most extreme temperatures do not necessarily have the highest outdoor heat exposure. The findings reveal large contrasts in outdoor heat exposure between different cities, demographic groups, and activities. Resolving the interplay between exposure, sensitivity, adaptive capacity, and behavior as determinants of heat-health risk will require advances in observational and modeling tools, especially at the individual scale.
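The pairing step can be illustrated with a small sketch: time-use activity records are joined to hourly outdoor temperatures by city and hour, and exposure is weighted by activity metabolic intensity. All column names and values below are assumptions for illustration, not the ATUS or meteorological data used in the study.

```python
# Minimal sketch of pairing time-use diary records with hourly weather and
# metabolic intensity (MET) to summarize outdoor heat exposure per person.
import pandas as pd

activities = pd.DataFrame({
    "person_id": [1, 1, 2],
    "city":      ["Phoenix", "Phoenix", "Houston"],
    "hour":      [14, 18, 15],
    "activity":  ["lawn and garden", "travel", "recreation"],
    "outdoors":  [True, True, True],
    "minutes":   [60, 30, 90],
    "met":       [4.0, 2.0, 6.0],        # assumed metabolic intensities
})

weather = pd.DataFrame({
    "city": ["Phoenix", "Phoenix", "Houston"],
    "hour": [14, 18, 15],
    "temp_c": [43.0, 40.0, 36.0],
})

paired = activities.merge(weather, on=["city", "hour"], how="left")
outdoor = paired[paired["outdoors"]].copy()

# Time-weighted mean exposure temperature and a crude activity-weighted index.
outdoor["temp_minutes"] = outdoor["temp_c"] * outdoor["minutes"]
outdoor["index_part"] = outdoor["temp_c"] * outdoor["minutes"] * outdoor["met"]
summary = outdoor.groupby("person_id").agg(
    total_minutes=("minutes", "sum"),
    temp_minutes=("temp_minutes", "sum"),
    activity_weighted_index=("index_part", "sum"),
)
summary["exposure_temp_c"] = summary["temp_minutes"] / summary["total_minutes"]
print(summary[["exposure_temp_c", "activity_weighted_index"]])
```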
The most recent international report on climate change paints a picture of disruption to society unless there are drastic and rapid cuts in greenhouse gas emissions.
Although it’s early days, some cities and municipalities are starting to recognize that past conditions can no longer serve as reasonable proxies for the future.
This is particularly true for the country’s infrastructure. Highways, water treatment facilities and the power grid are at increasing risk to extreme weather events and other effects of a changing climate.
In the coming decades, ambient temperature increase from climate change threatens to reduce not only the availability of water, but also the operational reliability of engineered water systems. Relatively little is known about how temperature stress can increasingly cause hardware components to fail, quality to be affected, and service outages to occur. Changes to the estimated time to failure of major water system hardware and the probability of quality non-compliance were estimated for a modern potable water system that experiences hot summer temperatures, similar to Phoenix, Arizona and Las Vegas, Nevada. A fault tree model was developed to estimate the probability that consequential service outages in quantity and quality will occur. Component failures are projected to increase by 10–89% in scenarios where peak summer temperature increases from 36 to 44°C, creating the conditions for service outages to increase by 13–89%. Increased service outages due to multiple pumping unit failures and water quality non-compliances are the most notable concerns for water utilities. The most effective strategies to prevent temperature-related component failure should focus on maintaining sufficient chlorine residual and cooling pumping unit motors and electronics.
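The fault-tree arithmetic behind such outage probabilities can be sketched with simple AND/OR gates over independent basic events. The component failure probabilities below are illustrative assumptions, not the study's estimates.

```python
# Hedged sketch of fault-tree arithmetic for a service-outage top event,
# assuming independent basic events. Probabilities are illustrative only.

def or_gate(*p):
    """P(at least one event occurs) for independent events."""
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):
    """P(all events occur) for independent events (e.g., loss of redundant pumps)."""
    prod = 1.0
    for pi in p:
        prod *= pi
    return prod

# Assumed annual failure probabilities under a hotter-summer scenario
p_pump_a, p_pump_b = 0.10, 0.10      # redundant pumping units
p_chlorination = 0.03                # chlorine residual non-compliance
p_controls = 0.02                    # control/electronics failure

p_quantity_outage = and_gate(p_pump_a, p_pump_b)        # both pumps must fail
p_quality_outage = or_gate(p_chlorination, p_controls)  # either causes non-compliance
p_service_outage = or_gate(p_quantity_outage, p_quality_outage)

print(f"P(quantity outage) = {p_quantity_outage:.4f}")
print(f"P(quality outage)  = {p_quality_outage:.4f}")
print(f"P(service outage)  = {p_service_outage:.4f}")
```

Rerunning the same gates with temperature-adjusted basic-event probabilities is what produces the kind of percent increases in outage likelihood reported above.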
As a consequence of the U.S. effort to increase infrastructure security and resilience, the Department of Homeland Security (DHS) and other U.S. federal agencies have identified 16 critical infrastructure sectors that are considered vital to the nation’s well-being in terms of economic security, public health, and safety. However, there remains no articulated set of values that justify this particular list of infrastructure systems or how decision-makers might prioritize investments towards one critical sector over another during a crisis. To offer a more integrated and holistic approach to critical infrastructure resilience, this research employs the Capabilities Approach to human development, which offers an alternative view of critical infrastructure that focuses on the services that infrastructure provides rather than its physical condition or vulnerability to threats. This service-based perspective of infrastructure emphasizes the role of infrastructure in enabling and supporting central human capabilities that build adaptive capacity and improve human well-being. We argue that the most critical infrastructure systems are those that are essential for providing and/or supporting central human capabilities. This paper examines the DHS designation of criticality from a capabilities perspective and argues for a capabilities basis for making distinctions between those systems that should be considered most critical and those that might be temporarily sacrificed. A key implication of this work is that an across sector approach is required to reorganize existing critical infrastructure efforts around the most valuable infrastructure services.
A fundamental shift is afoot in the relationship between human and natural systems. It requires a new understanding of what we mean by infrastructure, and thus dramatic changes in the ways we educate the people who will build and manage that infrastructure. Similar shifts have occurred in the past, as when humanity transitioned from building based on empirical methods developed from trial-and-error experience, which was sufficient to construct the pyramids and European cathedrals, to the science-based formal engineering design methods and processes necessary for the electric grid and jet aircraft. But just as the empirical methods were inadequate to meet the challenges of nineteenth- and twentieth-century developments, today’s methods are inadequate to meet the needs of the twenty-first century. Now that we have entered the Anthropocene period in which human activities affect natural systems such as climate, engineers face far more complex design challenges.
Public cooling centers are a recommended component of heat management plans aimed at reducing morbidity and mortality during extreme heat events. Although access to air conditioned space is known to reduce health risks associated with heat exposure, it is not known if these facilities are well positioned to serve those who are most vulnerable to heat. Additionally, other air-conditioned public spaces are also recommended as options by weather and public health agencies. Public cooling centers may provide redundant coverage in some areas with an excess of alternatives. We explored the distribution of two public cooling center networks (Los Angeles County, CA and Maricopa County, AZ) and found that significant fractions of the networks, 46% in Los Angeles and 75% in Maricopa, were located in areas with abundant, publicly available, air-conditioned spaces. To locate these facilities more effectively, underlying socio-economic characteristics that contribute to heat vulnerability and access to existing public cooled spaces should be considered. Using a maximal covering location problem framework and household-scale geospatial data, we show that the existing facility locations were suboptimal. Using a new iterative method of aggregating household level data and ArcGIS location analysis tools, we identified sets of facilities that improve access to cooling centers for those who are more susceptible to heat and without access to potential alternatives. Generally, the results suggest shifting cooling centers from mixed-use urban cores where numerous public air-conditioned spaces exist to dense inner- and outer-suburbs where homogenous land use patterns potentially isolate residents from cooling center alternatives.
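A greedy sketch of the maximal covering location problem (MCLP) used to site cooling centers is given below: choose k facilities that cover the largest vulnerability-weighted demand within a travel threshold. The demand weights and coverage sets are toy values; the study itself relied on household-scale geospatial data and ArcGIS location analysis tools rather than this heuristic.

```python
# Greedy heuristic for the maximal covering location problem (MCLP):
# iteratively pick the candidate site that adds the most uncovered demand weight.

def greedy_mclp(demand_weights, coverage, k):
    """demand_weights: {demand_id: weight}; coverage: {site_id: set(demand_ids)}."""
    chosen, covered = [], set()
    for _ in range(k):
        best_site, best_gain = None, 0.0
        for site, reachable in coverage.items():
            if site in chosen:
                continue
            gain = sum(demand_weights[d] for d in reachable - covered)
            if gain > best_gain:
                best_site, best_gain = site, gain
        if best_site is None:
            break
        chosen.append(best_site)
        covered |= coverage[best_site]
    return chosen, sum(demand_weights[d] for d in covered)

# Toy vulnerability-weighted demand blocks and walk-shed coverage per candidate site
demand_weights = {"blk1": 30, "blk2": 10, "blk3": 25, "blk4": 5, "blk5": 20}
coverage = {
    "siteA": {"blk1", "blk2"},
    "siteB": {"blk3", "blk4"},
    "siteC": {"blk1", "blk3", "blk5"},
}
sites, covered_weight = greedy_mclp(demand_weights, coverage, k=2)
print("Chosen sites:", sites, "| covered vulnerability weight:", covered_weight)
```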
Extreme events are of interest worldwide given their potential for substantial impacts on social, ecological, and technical systems. Many climate-related extreme events are increasing in frequency and/or magnitude due to anthropogenic climate change, and there is increased potential for impacts due to the location of urbanization and the expansion of urban centers and infrastructures. Many disciplines are engaged in research and management of these events. However, a lack of coherence exists in what constitutes and defines an extreme event across these fields, which impedes our ability to holistically understand and manage these events. Here, we review 10 years of academic literature and use text analysis to elucidate how six major disciplines--climatology, earth sciences, ecology, engineering, hydrology, and social sciences--define and communicate extreme events. Our results highlight critical disciplinary differences in the language used to communicate extreme events. Additionally, we found a wide range in definitions and thresholds, with more than half of examined papers not providing an explicit definition, and disagreement over whether impacts are included in the definition. We urge distinction between extreme events and their impacts, so that we can better assess when responses to extreme events have actually enhanced resilience. Additionally, we suggest that all researchers and managers of extreme events be more explicit in their definition of such events as well as be more cognizant of how they are communicating extreme events. We believe clearer and more consistent definitions and communication can support transdisciplinary understanding and management of extreme events.
Commonly adopted engineering pedagogy tends to be lecture based, and places students in a passive and often secondary role in the classroom. Research in the field of engineering education highlights the ineffectiveness of such strategies and advocates adopting strategies that actively engage learners. Various pedagogical techniques promote student engagement; this paper focuses on two specific techniques: problem-based learning (PBL) and vertical integration. The authors created engaging classroom environments through vertically integrated courses that implemented PBL through shared course projects. Specifically, the authors created a framework for pairing two different student bodies across two disciplines, integrating a graduate civil engineering course (32 students) and an undergraduate construction management course (22 students). Implementation of the Spring 2016 framework improves student performance on course projects and students’ self-reported professional skill level and confidence in said skills, developed in part through participation in the framework. Further, the framework has a positive impact on undergraduate students’ intention to stay in their major and both student bodies report more interest in completing an additional advanced degree after participating in the vertically integrated courses. Finally, students report that the experience teaches professional skills they expect will be required in their own future careers. It is notable that undergraduates recognized more benefits of this implementation, especially that they have more potential for improvement than advanced graduate students. This paper contributes to the engineering education body of knowledge by delivering a proof of concept that PBL through vertical integration of different disciplines across undergraduate and graduate students supports improved performance and encourages professional skill development and confidence. The paper presents the framework itself, as well as evaluative results from framework implementation.
The United States is at an infrastructural crossroads. First, the climate is changing faster than built infrastructure and the institutions that manage and maintain it. Recent extreme weather events highlight the precarious state of the nation’s infrastructure and the ability of cities to adapt to climate change. After the nation broiled through its hottest summer on record in 2016, 2017 began with one of the wettest winters on record for California and the Pacific Northwest. The 2017 hurricane season proved to be the most devastating and costly in the nation’s history. Hurricanes Harvey in Texas and Irma in Florida inflicted as much as $290 billion in damages. In the past 60 years, there has never been an Atlantic hurricane as intense as Maria was over the US territory of Puerto Rico. Two months after the hurricane, fewer than half of Puerto Rico’s 3.4 million residents had regained electric power. According to some estimates, Maria may have set the Puerto Rican economy back by a quarter century in just 12 hours. And adding to the list of miseries, a series of wildfires that started during volatile weather conditions in October devastated large areas of northern California and claimed at least 43 lives.
Treatment of drinking water decreases human health risks by reducing pollutants, but the required materials, chemicals, and energy emit pollutants and increase health risks. We explored human carcinogenic and non-carcinogenic disease tradeoffs of water treatment by comparing pollutant dose-response curves against life cycle burden using USEtox methodology. An illustrative wellhead sorbent groundwater treatment system removing hexavalent chromium or pentavalent arsenic and serving 3,200 people was studied. Reducing pollutant concentrations in drinking water from 20 micrograms/L to 10 micrograms/L avoided 37 potential cancer cases and 64 potential non-cancer disease cases. Human carcinogenicity embedded in treatment was 0.2-5.3 cases, and non-carcinogenic toxicity was 0.2-14.3 cases, depending on technology and degree of treatment. Embedded toxicity impacts from treating Cr(VI) using strong-base anion exchange were <10% of those from using weak-base anion exchange. Acidification and neutralization contributed >90% of the toxicity impacts for treatment options requiring pH control. In scenarios where benefits exceeded burdens, tradeoffs still existed. Benefits are experienced by a local population, but burdens are borne externally where the materials and energy are produced, thus exporting the health risks. Even when burdens clearly exceeded benefits, cost considerations may still drive selecting a detrimental treatment level or technology.
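The benefit-burden tradeoff can be summarized with simple arithmetic using the ranges reported above. The pairing of low and high ends below is illustrative only and does not capture the technology-specific results or the local-versus-external distribution of risks.

```python
# Simple tradeoff arithmetic: net cases avoided = cases avoided at the tap
# minus cases embedded in the treatment life cycle (ranges from the abstract).

avoided_cancer, avoided_noncancer = 37, 64     # from reducing 20 to 10 micrograms/L
embedded_cancer = (0.2, 5.3)                   # range across technologies
embedded_noncancer = (0.2, 14.3)

net_cancer = [avoided_cancer - e for e in embedded_cancer]
net_noncancer = [avoided_noncancer - e for e in embedded_noncancer]

print(f"Net cancer cases avoided:     {net_cancer[1]:.1f} to {net_cancer[0]:.1f}")
print(f"Net non-cancer cases avoided: {net_noncancer[1]:.1f} to {net_noncancer[0]:.1f}")
```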
Chapter in Parking and the City.
The environmental impacts of parking and the driving it promotes are often borne by local populations and not the trip-takers themselves. Because abundant free parking encourages solo driving and thus discourages walking, biking, and the use of public transit, it greatly contributes to urban congestion. To determine the full social cost of parking, the authors develop a range of estimates of the United States (US) parking space inventory and determine the energy use and environmental effects of constructing and maintaining this parking. The authors find that for many vehicle trips the environmental cost of the parking infrastructure sometimes equals or exceeds the environmental cost of the vehicles themselves. Evaluating life-cycle effects, including health care and environmental damage costs, the authors determine that emissions from parking infrastructure cost the US between $4 and $20 billion annually, or between $6 and $23 per space per year.
Chapter in the Routledge Handbook of Sustainable and Resilient Infrastructure.
Climate change creates new challenges for those who design, manage, and use infrastructure. There is increasing evidence that our civil infrastructure are vulnerable to climate change. We start by summarizing the evidence for how infrastructure are vulnerable to climate change hazards (including heat, precipitation, wildfires, and flooding). We focus on power, water, and transportation systems but discuss generalizable challenges for any major infrastructure. Next, we discuss how the interdependencies between infrastructure systems create challenges for mitigating climate change vulnerabilities. To date, infrastructure have largely been planned to withstand particular design storms, or environmental hazards that occur with a particular frequency and intensity (e.g., the 100-year storm). We discuss the challenges of this risk-based approach in a future marked by climate non-stationarity and the need for resilience-based design and operation that embraces this uncertainty. Given that infrastructure are long-lasting and climate is changing quickly, we describe the need for agile and flexible infrastructure as central to resilience strategies.
As climate change affects precipitation patterns, urban infrastructure may become more vulnerable to flooding. Flooding mitigation strategies must be developed such that the failure of infrastructure does not compromise people, activities, or other infrastructure. Safe-to-fail is an emerging paradigm that broadly describes adaptation scenarios that allow infrastructure to fail but control or minimize the consequences of the failure. Traditionally, infrastructure is designed as “fail-safe,” providing robust protection when the risks are accurately predicted within a designed safety factor. However, the risks and uncertainties faced by urban infrastructures are becoming so great due to climate change that the “fail-safe” paradigm should be questioned. We propose a framework to assess potential flooding solutions based on multiple infrastructure resilience characteristics, using a multi-criteria decision analysis (MCDA) analytic hierarchy process algorithm to prioritize “safe-to-fail” and “fail-safe” strategies depending on stakeholder preferences. Using urban flooding in Phoenix, Arizona as a case study, we first estimate flooding intensity and evaluate roadway vulnerability using the Storm Water Management Model for a series of downpours that occurred on September 8, 2014. Results show the roadway types and locations that are vulnerable. Next, we identify a suite of adaptation strategies and characteristics of these strategies, and attempt to more explicitly categorize flooding solutions as “safe-to-fail” or “fail-safe” with these characteristics. Lastly, we use MCDA to show how adaptation strategy rankings change when stakeholders have different preferences for particular adaptation characteristics.
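As a concrete illustration of the MCDA step described above, the following minimal sketch shows how an analytic hierarchy process can turn one stakeholder's pairwise criterion comparisons into priority weights and then rank candidate flooding strategies; the criteria, comparison matrix, and strategy scores are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Minimal AHP sketch (hypothetical values, not the study's data).
# Criteria: robustness, recovery speed, consequence containment.
criteria = ["robustness", "recovery_speed", "consequence_containment"]

# Pairwise comparison matrix from a hypothetical stakeholder (Saaty 1-9 scale).
# A[i, j] = how strongly criterion i is preferred over criterion j.
A = np.array([
    [1.0, 1/3, 1/5],
    [3.0, 1.0, 1/2],
    [5.0, 2.0, 1.0],
])

# Priority weights = principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Hypothetical 0-1 performance scores of each flooding strategy on each criterion.
strategies = {
    "fail-safe: upsize storm drains":       [0.9, 0.3, 0.4],
    "safe-to-fail: floodable greenway":     [0.5, 0.8, 0.9],
    "safe-to-fail: sacrificial road lanes": [0.4, 0.7, 0.8],
}

# Rank strategies by weighted score.
scores = {name: float(np.dot(w, s)) for name, s in strategies.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {name}")
```

Re-running the ranking with a different comparison matrix reproduces the central point of the paper: the preferred mix of "safe-to-fail" and "fail-safe" strategies shifts with stakeholder preferences.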
Before Hurricane Harvey made landfall on Aug. 25, there was little doubt that its impact would be devastating and wide-ranging.
Unfortunately, Harvey delivered, and then some, with early estimates of the damage at over US$190 billion, which would make it the costliest storm in U.S. history. The rain dumped on the Houston area by Harvey has been called “unprecedented,” making engineering and floodplain design standards look outdated at best and irresponsible at worst.
But to dismiss this as a once-in-a-lifetime event would be a mistake. With more very powerful storms forming in the Atlantic this hurricane season, we should know better. We must listen to those telling a more complicated story, one that involves decades of land use planning and poor urban design that has generated impervious surfaces at a fantastic pace.
Climate non-stationarity is a challenge for electric power infrastructure reliability; record-breaking heat waves significantly affect peak demand, lower contingency capacities, and expose cities to risk of blackouts due to component failures and security threats. The United States’ electric grid operates safely for a wide range of load, weather, and power quality conditions. Projected increases in ambient air temperatures could, however, create operating conditions that place the grid outside the boundaries of current reliability tolerances. Advancements in long-term forecasting, including projections of rising air temperatures and more severe heat waves, present opportunities to advance risk management methods for long-term infrastructure planning. This is particularly evident in the US Southwest—a relatively hot region expected to experience significant temperature increases affecting electric loads, generation, and delivery systems. Generation capacity is typically built to meet the 90th percentile (T90) hottest peak demand, plus an additional reserve margin of at least 15%, but that may not be sufficient to ensure reliable power services if air temperatures are higher than expected. The problem with this T90 planning approach is that it requires a stationary climate to be completely effective. In reality, annual temperature differences can have more than a 15% effect on system performance. Current long-term infrastructure planning and risk management processes are biased by climate data choices that can significantly underestimate peak demand, overestimate generation capacity, and result in major power outages during heat waves.
This study used downscaled global climate models (GCMs) to evaluate the effects of non-stationarity on air temperature forecasts, and a new high-level statistical approach was developed to consider the subsequent effects on peak demand, power generation, and local reserve margins (LRMs) compared to previous forecasting methods. Air temperature projections under IPCC RCPs 4.5 and 8.5 indicate that increases of up to 6 °C are possible by the end of the century, with highs of 58 °C and 56 °C in Phoenix, Arizona and Los Angeles, California, respectively. In the hottest scenarios, we estimated that LRMs for the two metro regions would be on average 30% less than at their respective T90s, which in the case of Los Angeles (a net importer) would require 5 GW of additional power to meet electrical demand. We calculated these values by creating a structural equation model (SEM) for peak demand based on the physics of common AC units; physics-based models are necessary to predict demand under unprecedented conditions for which historical data do not exist. The SEM forecasts for peak demand were close to the straight-line regression methods used in prior literature between 25 °C and 40 °C (104 °F), but diverged lower at higher temperatures. Power plant generation capacity derating factors were also modeled based on the electrical and thermal performance characteristics of different technologies. Lastly, we discussed several strategic options to reduce the risk of LRM shortages, including technologies, market incentives, and urban forms that reduce peak load and load variance per capita, as well as their tradeoffs with several other stakeholder objectives.
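The divergence described above can be sketched with a toy physics-based demand curve: once air-conditioner compressors reach a 100% duty cycle, household electricity demand stops climbing with temperature, so a straight-line fit calibrated on 25-40 °C data over-predicts at the extremes. The constants below are illustrative assumptions, not the study's structural equation model.

```python
import numpy as np

# Hypothetical sketch of a physics-based peak cooling demand curve that tracks a
# straight-line fit in the 25-40 C range but saturates (diverges lower) at extreme
# temperatures because AC compressors cannot exceed a 100% duty cycle.
T = np.linspace(25, 55, 61)          # outdoor air temperature, deg C
T_SET = 24.0                         # indoor setpoint, deg C
UA = 0.30                            # building heat gain, kW(thermal) per deg C
CAP = 7.0                            # rated cooling capacity of the AC unit, kW(thermal)
P_RATED = 2.5                        # electrical draw at full duty cycle, kW(electric)

heat_gain = UA * np.maximum(T - T_SET, 0.0)        # thermal load on the unit
duty_cycle = np.minimum(heat_gain / CAP, 1.0)      # fraction of time compressor runs
physics_demand = duty_cycle * P_RATED              # kW(electric) per household

# Linear regression fit only on 25-40 C data, then extrapolated to hotter days.
mask = T <= 40
slope, intercept = np.polyfit(T[mask], physics_demand[mask], 1)
linear_demand = slope * T + intercept

for t in (35, 45, 50, 55):
    i = int(np.argmin(np.abs(T - t)))
    print(f"{t:>2} C: physics {physics_demand[i]:.2f} kW vs linear extrapolation {linear_demand[i]:.2f} kW")
```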
It is increasingly acknowledged that urban areas can play a key role in the profound shift that is required in humankind’s ways of understanding and responding to climate and sustainability challenges. These new ways of understanding and responding, however, will require bringing together urban planners, social scientists, business leaders, engineers and other diverse knowledge and power domains - an undertaking that creates its own set of seemingly intractable complications. As documented by scholars studying diverse fields of human endeavor, from pure scientific inquiry of urban weather and climate to governmental planning and to the private sector construction of urban infrastructures, one of the most difficult problems in creating change lies in moving people beyond the mental models, ways of knowing, tools and analytical systems they learn during their academic training and professionalization.
Scholarship on urban risk and vulnerability offers an example of this trend. While research on risk and vulnerability has grown considerably during recent years, it has consisted primarily of case studies based on the assumption that both depend on context. Furthermore, scholars and practitioners working on urban risk and vulnerability have offered often conflicting theories and interpretations that tend to shed light on only certain aspects of the problem, while other areas remain in the dark. This has political, equity and sustainability implications. For instance, the vast majority of epidemiological studies on health risks from heat waves have quantified the relationship between heat waves and health outcomes, while controlling for age and other factors. However, these studies have omitted underlying historical processes of socio-spatial segregation (e.g., land use planning) that explain urban populations’ differentiated access to green areas, air conditioning, health services, and other assets and options, and thus differences in exposure to temperature and in populations’ capacity to adapt to heat stress and mitigate heat risks. The development of approaches that can explain these differences may help move towards cohesive and policy relevant narratives.
This chapter starts with a brief discussion of existing definitions and approaches to understanding the interactions among urbanization, urban risk, and vulnerability. We outline the necessary components of an interdisciplinary understanding of how environmental and societal processes such as global warming and urbanization contribute to intra- and inter-urban vulnerability to heat waves, floods, droughts and other climatic hazards. We highlight some of the mechanisms by which vulnerability and risk are shaped by the dynamics of urbanization, acting upon urban centers as places with unique social and environmental histories, opportunities and constraints. And we close with some concluding remarks on ways forward to reducing risk and enhancing populations’ capacity within and across urban areas.
Public agencies, particularly those responsible for infrastructure and its use, are increasingly being asked to reduce their system’s greenhouse gas emissions. Cities are growing increasingly concerned about the impacts of anthropogenic climate change and how critical services, economic growth, and social well-being can be maintained under increasing environmental stressors. According to the US EPA, transportation can contribute up to 27% (2013) of all greenhouse gas emissions across the country and often makes up a majority share of greenhouse gas emissions from urban activities. These emissions are largely attributed to the combustion of gasoline and diesel fuels for automobile and truck travel. To reduce greenhouse gas emissions, cities often focus on strategies that increase biking, walking, and public transit ridership by shifting travelers away from single-passenger vehicle travel, enhancing congestion reduction strategies, or increasing transit-land use co-benefits. In particular, these can include building new public transit systems, increasing service on existing lines, increasing multi-modal transportation share and use, and encouraging growth in mixed residential and commercial land uses.
How do transportation systems emit greenhouse gas emissions? The emissions assessment of internal combustion engine vehicles such as automobiles, trucks, or diesel trains typically focuses on the so-called tailpipe emissions. Gasoline or diesel fuels are combusted to perform work: the movement of the vehicle and its passengers. During the combustion of carbon-based fuels (such as gasoline or diesel), the vast majority of the carbon in the fuel is turned into carbon dioxide (CO2), a potent greenhouse gas. In the past, the greenhouse gas intensity (for example, CO2 emissions per mile of travel) of transportation modes focused largely on tailpipe emissions. But electrified modes such as light rail have no tailpipe emissions. Recognizing the challenges of identifying who should be responsible for greenhouse gas emissions, the US EPA created a “scopes” classification. Scope 1 emissions are from fuels directly. Scope 2 emissions are from the generation of electricity, heating and cooling, or steam generated offsite but purchased by the entity. Scope 3 emissions result from sources that indirectly support an activity. As such, an electric train’s greenhouse gas footprint would be associated with electricity generation, i.e., Scope 2. However, the transportation “system” is much more than vehicle propulsion. Infrastructure must be constructed and maintained, vehicles must be manufactured and maintained, energy must be produced, and supply chains must exist to support all of these activities. Each of these processes can emit greenhouse gases.
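A minimal sketch of this scopes bookkeeping is shown below; the per-passenger-kilometer emission factors are hypothetical placeholders used only to show how much of a mode's footprint a tailpipe-only (Scope 1) inventory would miss.

```python
# Hypothetical per-passenger-kilometer GHG footprint by scope (g CO2e/PKT).
# Values are illustrative placeholders, not measured inventories.
modes = {
    # scope1 = tailpipe, scope2 = purchased electricity,
    # scope3 = vehicles, infrastructure, and fuel supply chains
    "gasoline sedan":      {"scope1": 220.0, "scope2": 0.0,  "scope3": 90.0},
    "diesel bus":          {"scope1": 95.0,  "scope2": 0.0,  "scope3": 45.0},
    "electric light rail": {"scope1": 0.0,   "scope2": 60.0, "scope3": 55.0},
}

for mode, scopes in modes.items():
    tailpipe_only = scopes["scope1"]
    total = sum(scopes.values())
    print(f"{mode:>20}: tailpipe-only {tailpipe_only:5.0f} g/PKT, "
          f"all scopes {total:5.0f} g/PKT "
          f"({100 * (total - tailpipe_only) / total:.0f}% missed by tailpipe accounting)")
```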
High Speed Rail and Sustainability explores the environmental, economic and social effects of developing a HSR system, presenting new evaluations of the proposed system in California in the US as well as lessons from international experience. Drawing upon the accumulated experience from past HSR system development around the world, leading experts present a diverse set of perspectives as well as diverse contexts of implementation. Assessments of the California case as well as cases from Japan, France, Germany, Italy, Spain, Taiwan, China, and the UK show how governments and stakeholders have bridged the gap between the vision and the realities of connecting metropolitan regions through HSR.
This is a valuable resource for academics, researchers and policy-makers in the areas of urban planning, civil engineering, transportation and environmental design.
Access to air conditioned space is critical for protecting urban populations from the adverse effects of heat exposure. Yet there remains fairly limited knowledge of the penetration of private cooled space (home air conditioning) and the distribution of public cooled space (cooling centers and commercial space) across cities. Furthermore, the deployment of government-sponsored cooling centers is not based on the location of existing cooling resources (residential air conditioning and air conditioned public space), raising questions about the equitability of access to heat refuges. Using Los Angeles County, California and Maricopa County, Arizona (whose county seat is Phoenix), we explore the distribution of private and public cooling resources and access inequities at the household level. We do this by evaluating the presence of in-home air conditioning and developing a walking-based accessibility measure to air conditioned public space using a combined cumulative opportunities-gravity approach. We find significant inequities in the distribution of residential air conditioning across both regions, which are largely attributable to building age and inter/intra-regional climate differences. There are also regional disparities in walkable access to public cooled space. At average walking speeds, we find that official cooling centers are only accessible to a small fraction of households (3% in Los Angeles, 2% in Maricopa) while a significantly higher number of households (80% in Los Angeles, 39% in Maricopa) have access to at least one other type of public cooling resource, which includes libraries and commercial establishments. Aggregated to a neighborhood level, we find that there are areas within each region where access to cooled space (either public or private) is limited, which may increase the health risks associated with heat.
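The combined cumulative opportunities-gravity measure can be sketched as follows: count the cooled sites reachable within a walking-time cutoff, then weight each reachable site by a distance-decay term. The coordinates, walking speed, cutoff, and decay parameter below are assumptions for illustration, not the study's data.

```python
import numpy as np

# Hypothetical sketch of a combined cumulative-opportunities / gravity accessibility
# measure for public cooled space.
WALK_SPEED_KMH = 4.5          # average walking speed
MAX_MINUTES = 20.0            # cumulative-opportunities cutoff
BETA = 0.15                   # gravity decay per minute of walking

households = np.array([[0.0, 0.0], [1.2, 0.4], [3.0, 2.5]])                  # km, planar coords
cooling_sites = np.array([[0.3, 0.1], [0.9, 0.9], [2.8, 2.4], [5.0, 5.0]])

def access_score(home):
    dists_km = np.linalg.norm(cooling_sites - home, axis=1)
    walk_min = dists_km / WALK_SPEED_KMH * 60.0
    reachable = walk_min <= MAX_MINUTES                 # cumulative-opportunities filter
    weights = np.exp(-BETA * walk_min) * reachable      # gravity decay on reachable sites
    return int(reachable.sum()), float(weights.sum())

for i, home in enumerate(households):
    n, score = access_score(home)
    print(f"household {i}: {n} cooled sites within {MAX_MINUTES:.0f} min walk, gravity score {score:.2f}")
```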
Angelenos voted overwhelmingly last month to expand L.A.’s transit network. Measure M passed by a 4-point margin in large part due to Metro’s promise to reduce traffic congestion throughout the county. But while the proposed new rail lines and bus routes will offer additional mobility choices, they will do little to reduce congestion unless Los Angeles also addresses its abundance of parking spaces.
Nanocomposite sorbents are an emerging technology for drinking water treatment of multiple pollutants. Here we used anticipatory life cycle assessment to proactively inform sustainable development by comparing synthesis methods and treatment options, identifying critical steps in deployment, and reducing the environmental and human health impacts such that the nanocomposite sorbents become favorable over the existing technology of using two different materials in a mixed bed (MB). We studied iron (hydr)oxide or titanium dioxide nanoparticles precipitated in an anion exchange resin (Fe-AX and Ti-AX) for targeted removal of chromium and arsenic from drinking water. The Ti-AX had the lowest environmental and human health impacts compared to Fe-AX and MB for nine TRACI (Tool for the Reduction and Assessment of Chemical and Environmental Impacts) categories. The synthesis phase for each sorbent contributed 50%-100% of the total impacts. The greatest opportunity to improve Ti-AX synthesis was reducing oven heating time for nanoparticle hydrolysis: reducing heating from 24 to 4 hours caused only a small loss in sorbent capacity but reduced impacts by 3%-31%. Fe-AX synthesis was improved by increasing pollutant removal capacity so that less sorbent is required to treat the functional unit. This reduced impacts by 26%-42%, making Fe-AX favorable to or on par with MB for six of nine categories. Future development of nanocomposite sorbent synthesis methods should focus on optimizing sorbent capacity, decreasing heating energy demand, and efficiently reusing metal precursors and solvents. This study showed that the benefits of treating drinking water involve environmental and human health tradeoffs, and that impacts associated with treatment are on the same order of magnitude as distribution pressurization.
Climate change may constrain future electricity supply adequacy by simultaneously reducing electric transmission capacity and increasing electricity demand. This study estimates potential climate impacts to electric transmission capacity and peak electricity load in the United States using downscaled global climate model (GCM) projections. Electric power cables suffer decreased transmission capacity under hotter ambient air temperatures; similarly, during the summer peak period, electricity demand typically increases with hotter air temperatures due to increased cooling loads. As atmospheric carbon concentrations increase, higher ambient air temperatures may strain power infrastructure by simultaneously reducing transmission capacity and increasing peak electricity demand. Taken together, these coincident impacts may adversely affect electric power supply adequacy. We estimate the impacts of climate change on both the rated capacity of transmission infrastructure and expected electricity demand for 1,044 electrical utilities across the United States. We estimate climate-attributable capacity reductions to transmission lines by constructing thermal models of representative conductors, then forcing these models with future temperature projections to determine the relative change in rated ampacity. Next, we assess the impact of climate change on electricity demand by using historical relationships between ambient temperature and utility-scale summertime peak load to estimate the extent to which climate change will incur additional peak load increases. We find that by mid-century (2040-2060), climate change may reduce average summertime transmission capacity by 1.9-5.8% relative to the 1990-2010 reference period. At the same time, peak summertime loads may rise by 4.2-15% on average due to increases in ambient air temperature. In the absence of energy efficiency gains, demand-side management programs and transmission infrastructure upgrades, these load increases have the potential to upset current assumptions about future electricity supply adequacy.
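The conductor-level mechanism can be illustrated with a simplified steady-state heat balance in the spirit of IEEE Std 738: the current that heats a conductor to its maximum allowable temperature falls as ambient temperature rises because less heat can be shed by convection and radiation. The resistance, convection, and geometry constants below are hypothetical and solar heating is ignored, so this is a sketch of the direction of the effect rather than a rating calculation for any real line.

```python
import numpy as np

# Minimal steady-state conductor heat balance (simplified; constants are illustrative).
T_COND = 75.0 + 273.15        # maximum allowable conductor temperature, K
R_AC = 8.0e-5                 # AC resistance, ohm per meter (hypothetical)
H_CONV = 1.2                  # convective loss coefficient, W per m per K (low wind)
EPS = 0.8                     # emissivity
SIGMA = 5.67e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
DIAM = 0.028                  # conductor diameter, m
AREA_PER_M = np.pi * DIAM     # radiating surface area per meter of conductor, m^2

def ampacity(ambient_c):
    """Current (A) that heats the conductor exactly to T_COND at this ambient temperature."""
    t_a = ambient_c + 273.15
    q_conv = H_CONV * (T_COND - t_a)                          # W/m lost by convection
    q_rad = EPS * SIGMA * AREA_PER_M * (T_COND**4 - t_a**4)   # W/m lost by radiation
    return np.sqrt((q_conv + q_rad) / R_AC)                   # I^2 * R = heat losses

base = ampacity(30.0)
for t in (30, 35, 40, 45, 50):
    i_rated = ampacity(t)
    print(f"ambient {t} C: ampacity {i_rated:6.0f} A ({100 * (i_rated / base - 1):+.1f}% vs 30 C)")
```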
Vehicle border crossings between Mexico and the United States generate significant amounts of air pollution, which can pose health threats to personnel at the ports of entry (POEs) as well as local communities. Using the Mariposa POE in Nogales, Arizona as a case study, light-duty and heavy-duty vehicle emissions are analyzed with the objective of identifying effective emission reduction strategies such as inspection streamlining, physical infrastructure improvements, and fuel switching. Historical vehicle volumes as well as field data were used to establish a simulation model of vehicle movement in VISSIM. Four simulation scenarios with varied congestion levels were considered to represent real-world seasonal changes in traffic volume. Four additional simulations captured varying levels of expedited processing procedures. The VISSIM output was analyzed using the EPA's MOVES emission simulation software for conventional air pollutants. For the highest congestion scenario, which includes a 200% increase in vehicle volume, total emissions increase by around 460% for PM2.5 and NOx, and 540% for CO, SO2, GHGs, and NMHC over uncongested conditions. Expedited processing and queue reduction can reduce emissions in this highest congestion scenario by as much as 16% for PM2.5, 18% for NOx, 20% for NMHC, 7% for SO2 and 15% for GHGs and CO. Adoption of some or all of these changes would not only reduce emissions at the Mariposa POE, but would have air-quality benefits for nearby populations in both the US and Mexico. Fleet-level changes could have far-reaching improvements in air quality on both sides of the border.
Electric vehicles are an emerging technology with significant potential for reducing carbon dioxide emissions. Yet strategies to minimize carbon dioxide emissions by strategically charging during different times of day have not been rigorously explored. To identify possibilities for minimizing emissions from plug-in electric vehicle use, daily optimized charging strategies over each electricity reliability region of the United States are explored. Optimized schedules of plug-in electric vehicle charging for standard and vehicle-to-grid use were compared with pre-timed charging schedules to characterize the potential for carbon dioxide emission reductions across charging characteristics, regional driving, and marginal energy generation trends. It was found that optimized charging can reduce carbon dioxide emissions over pre-timed charging by as much as 31% for standard use and 59% for vehicle-to-grid use. However, some scenarios of vehicle-to-grid participation were found to increase carbon dioxide emissions by up to 396 g carbon dioxide per mile by displacing stored energy from more carbon-intense energy generation periods. Results also indicate that plug-in electric vehicle charging emissions can vary widely for a given energy efficiency rating. Current energy efficiency ratings may lead to incorrect assumptions of plug-in electric vehicles emissions compared to conventional gasoline vehicles due to varying regional and temporal emissions. To coincide with the push for lower greenhouse gas emissions from transportation, charging times for plug-in electric vehicles should target periods where charging promotes carbon dioxide reductions, and electric vehicle energy efficiency ratings should be reconsidered in order to promote sustainable plug-in electric vehicle use moving forward.
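A minimal sketch of the scheduling idea is below: given hourly marginal emission factors and a plug-in window, charging the required energy in the cleanest hours is compared against charging immediately on arrival. The emission factors, charger power, and travel needs are hypothetical placeholders, not the optimization model or regional data used in the study.

```python
import numpy as np

# Hypothetical hourly marginal emission factors (kg CO2 per kWh) for one day.
mef = np.array([0.55, 0.52, 0.50, 0.48, 0.47, 0.50, 0.58, 0.65,
                0.70, 0.68, 0.60, 0.55, 0.50, 0.48, 0.52, 0.60,
                0.72, 0.80, 0.78, 0.70, 0.65, 0.60, 0.58, 0.56])

ENERGY_NEEDED = 12.0      # kWh required for the next day's driving
CHARGER_KW = 3.3          # level-2 charging rate
PLUG_IN, DEPART = 18, 7   # plugged in at 18:00, departs 07:00 next day

window = [h % 24 for h in range(PLUG_IN, 24 + DEPART)]
hours_needed = int(np.ceil(ENERGY_NEEDED / CHARGER_KW))

# Pre-timed: charge immediately on arrival until full.
pretimed_hours = window[:hours_needed]
# Optimized: pick the lowest-emission hours anywhere in the plug-in window.
optimized_hours = sorted(window, key=lambda h: mef[h])[:hours_needed]

def emissions(hours):
    # The final hour in the list receives the partial (top-off) charge.
    return sum(min(CHARGER_KW, ENERGY_NEEDED - i * CHARGER_KW) * mef[h]
               for i, h in enumerate(hours))

e_pre, e_opt = emissions(pretimed_hours), emissions(optimized_hours)
print(f"pre-timed charging: {e_pre:.2f} kg CO2")
print(f"optimized charging: {e_opt:.2f} kg CO2  ({100 * (1 - e_opt / e_pre):.0f}% lower)")
```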
In an extreme heat event, people can go to air-conditioned public facilities if residential air-conditioning is not available. Residences that heat slowly may also mitigate health effects, particularly in neighborhoods with social vulnerability. We explored the contributions of social vulnerability and these infrastructures to heat mortality in Maricopa County and whether these relationships are sensitive to temperature. Using Poisson regression modeling with heat-related mortality as the outcome, we assessed the interaction of increasing temperature with social vulnerability, access to publicly available air conditioned space, home air conditioning and the thermal properties of residences. As temperatures increase, mortality from heat-related illness increases less in census tracts with more publicly accessible cooled spaces. Mortality from all internal causes of death did not have this association. Building thermal protection was not associated with mortality. Social vulnerability was still associated with mortality after adjusting for the infrastructure variables. To reduce heat-related mortality, the use of public cooled spaces might be expanded to target the most vulnerable.
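The modeling step can be sketched with simulated data: a Poisson regression of daily heat-related deaths with a temperature-by-access interaction, where a negative interaction coefficient indicates that mortality rises more slowly with temperature where publicly accessible cooled space is more available. The data-generating process and coefficients below are synthetic assumptions, not the study's mortality records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated sketch of a Poisson model with a temperature x cooled-space interaction.
rng = np.random.default_rng(0)
n = 2000
temp = rng.uniform(35, 47, n)          # daily max temperature, deg C
access = rng.uniform(0, 1, n)          # share of tract with walkable cooled space
vulnerability = rng.uniform(0, 1, n)   # social vulnerability index

# Synthetic process: mortality rises with temperature, less steeply where access is high.
log_rate = -8.0 + 0.20 * temp - 0.08 * temp * access + 1.0 * vulnerability
deaths = rng.poisson(np.exp(log_rate))

df = pd.DataFrame({"deaths": deaths, "temp": temp, "access": access, "vuln": vulnerability})
model = smf.glm("deaths ~ temp * access + vuln", data=df,
                family=sm.families.Poisson()).fit()
print(model.params)   # a negative temp:access coefficient indicates attenuation
```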
Current life cycle assessment (LCA) interpretation practices typically emphasize hotspot identification and improvement assessment. However, these interpretation practices fail in the context of a decision-driven comparative LCA where the goal is to select the best option from a set of dissimilar alternatives. Interpretation of comparative LCA results requires understanding of the trade-offs between alternatives—instances in which one alternative performs better or worse than another—to identify the environmental implications of a specific decision. In this case, analysis must elucidate relative trade-offs between decision alternatives, rather than absolute description of the alternatives individually. Here, typical practices fail. This article introduces a probability distribution-based approach to assess the significance of performance differences among alternatives that allows LCA practitioners to focus analyses on those aspects most influential to the decision, identify the areas that would benefit the most from data refinement given the level of uncertainty, and complement existing hotspot analyses. In a case study of a comparative LCA of five photovoltaic technologies, findings show that thin-film cadmium telluride and amorphous silicon cell panels are most likely to perform better than other alternatives. Additionally, the impact categories highlighted by the new approach are different from those highlighted by typical normalization practices, suggesting that a decision-driven approach to interpretation would redirect environmental research efforts.
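The distribution-based comparison can be sketched as a Monte Carlo exercise: sample the uncertain characterized results for two alternatives and report, category by category, the probability that one outperforms the other. The lognormal parameters below are hypothetical placeholders, not the photovoltaic case study data.

```python
import numpy as np

# Hypothetical sketch: probability that alternative A outperforms alternative B in each
# impact category, from Monte Carlo samples of uncertain characterized results.
rng = np.random.default_rng(42)
N = 100_000

# (median, geometric standard deviation) pairs are illustrative placeholders.
categories = {
    #                    alternative A     alternative B
    "global warming":   ((45.0, 1.20),   (50.0, 1.25)),
    "ecotoxicity":      ((3.1, 2.00),    (2.8, 1.80)),
    "fossil depletion": ((600.0, 1.15),  (640.0, 1.15)),
}

for name, ((med_a, gsd_a), (med_b, gsd_b)) in categories.items():
    a = rng.lognormal(np.log(med_a), np.log(gsd_a), N)
    b = rng.lognormal(np.log(med_b), np.log(gsd_b), N)
    p_a_better = np.mean(a < b)          # lower impact is better
    print(f"{name:>16}: P(A better than B) = {p_a_better:.2f}")
```

Categories where this probability sits near 0.5 are the ones where data refinement would most change the decision; categories near 0 or 1 are effectively settled despite the uncertainty.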
Heat vulnerability of urban populations is becoming a major issue of concern. The American Southwest is predicted to experience greater numbers of heat days as well as significant impacts to water supply due to climate change. We point to the importance of coupled socio-technical system analysis to understand how people may be impacted by heat and how these systems themselves may structure unequal exposure to heat impacts and result in vulnerability. We suggest it is important to go beyond examining urban heat island effects to a broader scale of analysis that includes power systems and their need for water, urban morphology (e.g., buildings, access to cooling centers, parks, open space, and surface albedo), and vulnerable populations. Rules, regulations, and codes all structure the urban environment (street widths, building insulation, location of cooling centers, and so forth) and form the social regulatory side of the socio-technical system. The rules and regulations are manifest in the hard infrastructures, or technical systems, that then create the built environment: pipes, wires, and power supply. Together they structure the socio-technical system that may have significant unequal impacts on human populations. In this case we discuss the impacts of a warming climate in the Southwest United States and the potential implications for vulnerable populations.
Within residential electricity consumption there exists significant variability from home to home due to differences in the thermal properties of buildings, appliances used, and activities of the inhabitants. Electricity analyses at sub-city scales that use predefined geographies, such as census tracts, might artificially split areas that have homogeneous socio-technical characteristics and thus different patterns of electricity consumption. We investigate the spatial relationships between demographic variables, types of buildings, and electricity consumption by forming new geographies for residential building energy use with a max-p clustering algorithm. Using Los Angeles and New York City as case studies, we compare the differences in variability in energy use within predefined geographies (e.g., census tracts) and geographies defined by clustering on socio-technical characteristics. We find that using a socio-technical clustering approach, regardless of the chosen subset of variables, reduces the variability over pre-defined geopolitical boundaries with high statistical significance. By defining geospatial regions of energy analysis, we reduce intra-regional variability by 13% in Los Angeles and 29% in New York, thereby improving opportunities for prediction and forecasting. To our knowledge, this is the first study to examine the role of spatial boundaries in urban energy assessment. The creation of socio-technical geographies for building electricity assessment creates opportunities for improving predictions and forecasts for future sub- and cross-city energy studies.
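The intuition behind clustering on socio-technical attributes can be sketched with synthetic data, using k-means as a simple stand-in for the max-p regionalization used in the paper: grouping households by building age and income (rather than by arbitrary predefined zones) lowers within-group variability in electricity use.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic sketch comparing within-group variability of household electricity use under
# (a) arbitrary predefined zones and (b) groups clustered on socio-technical attributes.
# KMeans is used here as a simple stand-in for the max-p regionalization in the paper.
rng = np.random.default_rng(1)
n = 5000
building_age = rng.uniform(0, 100, n)              # years
income = rng.lognormal(11, 0.5, n)                 # $/yr
# Synthetic consumption driven by the socio-technical attributes plus noise (kWh/yr).
kwh = 4000 + 30 * building_age + 0.05 * income + rng.normal(0, 800, n)

# (a) Predefined zones: arbitrary equal-probability "tracts" unrelated to the attributes.
n_groups = 25
predefined = rng.integers(0, n_groups, n)

# (b) Socio-technical zones: cluster on standardized attributes (not on kwh itself).
X = np.column_stack([(building_age - building_age.mean()) / building_age.std(),
                     (income - income.mean()) / income.std()])
clustered = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(X)

def mean_within_group_std(labels):
    return np.mean([kwh[labels == g].std() for g in np.unique(labels)])

s_pre, s_clu = mean_within_group_std(predefined), mean_within_group_std(clustered)
print(f"predefined zones:      mean within-group std = {s_pre:6.0f} kWh")
print(f"socio-technical zones: mean within-group std = {s_clu:6.0f} kWh "
      f"({100 * (1 - s_clu / s_pre):.0f}% lower)")
```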
As local governments plan to expand airport infrastructure and build air service, monetized estimates of damages from air pollution are important for balancing environmental impacts. While it is well known that aircraft emissions near airports directly affect nearby populations, it is less clear how airport-specific aircraft operations and impacts translate into monetized damages to human health and the environment. We model aircraft and ground support equipment emissions at major US airports and estimate the monetized human health and environmental damages of near-airport (within 60 miles) emissions. County-specific unit damage costs for PM, SOx, NOx, and VOCs and damage valuations for CO and CO2 are used along with aircraft emissions estimates at airports to determine impacts. We find that near-airport emissions at major US airports caused a total of $1.9 billion in damages in 2013, with airports contributing between $720 thousand and $190 million each. These damages vary by airport from $1 to $9 per seat per one-way flight, and costs per passenger are often greater than airport landing fees. As the US aviation system grows, it is possible to minimize human and environmental costs by shifting airplane technologies and expanding service into airports where fewer impacts are likely to occur.
Passenger and freight transport are among the world’s leading contributors of anthropogenic carbon dioxide and other greenhouse gas (GHG) emissions. It has been suggested that the world can reduce the GHG intensity of the transportation sector in the future through the adoption of new fuel-saving technologies, switching demand between modes, and large-scale implementation of alternative fuels. The future scenarios presented in this study assess the GHG reduction potentials of policies related to these three strategies for major passenger and freight modes across 8 regions of the world. We find that new fuel-saving technologies can significantly reduce the life-cycle GHG footprint of both passenger and freight vehicles. However, this improved fuel efficiency has negative feedbacks on the effectiveness of mode-switching and alternative fuel adoption policies through 2050. Our results suggest that improvements in the fuel efficiency of vehicles alone may cause the marginal benefits of GHG abatement policies to diminish over time. However, this trend may reverse if alternative fuel pathways decarbonize at faster rates than conventional transportation fuels (e.g., petroleum-based fuels). Overall, we find that the largest opportunities for GHG reductions occur in non-OECD countries. Given the many factors that distinguish transportation systems within these countries from the rest of the world (e.g., individual access to financial resources, control over infrastructure, systems to maintain new technologies, etc.), many benefits could be gained through interregional cooperation.
Emissions assessment of goods movement should include local and remote life-cycle effects as well as barriers to mode shifting to effectively reduce future impacts. Using uni- and multi-modal freight movements by truck, rail, and ocean-going vessel within California, a life-cycle assessment is developed to estimate the local and remote emissions that occur from freight activity associated with the state and the potential for reducing emissions to meet legislated goals. California Assembly Bill 32 calls for a greenhouse gas emission reduction of 3.5 million tonnes from the goods movement sector by 2020. Long-run average mode-specific results show that ocean-going vessels emit the fewest emissions per tonne-km of shipment, followed by rail, then by trucks, and that the inclusion of life-cycle processes can increase impacts by up to 36% for energy and GHG emissions and by over 5,000% for conventional air pollutants. Efforts to reduce emissions by mode shifting should recognize that infrastructure and market configurations may preclude the substitution of one mode for another. A uni- and multi-modal shipping emissions assessment is developed for intrastate and California-associated freight movements to illustrate the life-cycle impacts of typical trips for certain types of goods. The trip-based assessment shows how emissions are the result of activities that span multiple geopolitical boundaries. When targeting greenhouse gas reductions in California, it should be recognized that heavy-duty trucks are responsible for 98% of emissions within the state and little opportunity exists to shift freight to other modes. Thus, an assessment of future freight truck technology improvements is performed to estimate effective strategies to meet 2050 greenhouse gas reduction goals. Emission reductions are found to be most sensitive to fuel economy, rather than the rate of adoption of alternative fuel vehicles. One future scenario, where a new hybrid-electric truck fleet is adopted over 35 years and experiences an aggressive 6% annual fuel economy increase, produces emission reductions that are 82% below the projected business-as-usual emissions, thereby meeting long-term climate goals for the freight sector.
The environmental impacts and economic costs associated with passenger transportation are the result of complex interactions between people, infrastructure, urban form, and underlying activities. When it comes to roadway infrastructure, the ongoing resource commitments (which can be measured as embedded impacts) enable vehicle travel, which is a dominant source of air emissions in regional inventories. The relationship between infrastructure and the environmental impacts it enables is not often considered dynamic. Furthermore, the environmental impacts of roadway infrastructure are typically assessed at a fine geospatial and temporal scale (i.e., a short distance of roadway over a short period of time), and there is generally poor knowledge of how the growth of a roadway network over time creates a need for long-term maintenance commitments that create environmental impacts and lock in vehicle travel. A framework and operational life-cycle assessment (LCA) tool (City Road Network (CiRN) LCA) are developed to assess the extent to which roadway commitments result in ongoing and increasing environmental and economic impacts. Known for its extensive road network and automobile reliance, Los Angeles County is used as a case study to explore the relationship between historic infrastructure deployment decisions and the emergent behavior of vehicle travel. The results show that every kilogram of greenhouse gas (GHG) emissions resulting from construction and maintenance has led to 47 kg of GHG emissions in fuel combustion. Similarly, every public dollar invested in the network has created $126-288 in private user spending. As states and regions grapple with financing the upkeep of aging infrastructure, a solid understanding of the relationship between upfront infrastructure capital costs, long-term maintenance costs, and associated long-term environmental effects is critical. In Los Angeles, the infrastructure that exists was largely deployed by 1987. Since then, maintenance costs are estimated to have exceeded city budgets despite minimal growth in infrastructure. The research demonstrates that as infrastructure matures (i.e., progresses through its stages of growth toward completion), it becomes locked in, leading to a transition from a capital financing focus to a focus on securing rehabilitation and maintenance funding, and to a shift in the share of environmental impacts from being somewhat balanced between embedded infrastructure construction and vehicle use to today, where vehicle use creates impacts several orders of magnitude greater than those associated with rehabilitation.
As decision-makers increasingly embrace life-cycle assessment (LCA) and target transportation services for regional environmental goals, it becomes imperative that outcomes from changes to complex systems are accurately communicated. California’s greenhouse gas (GHG) reduction policies have created interest in better understanding how public transit systems reduce emissions. An LCA is developed of the Los Angeles Expo line and a competing car trip that includes vehicle, infrastructure, and energy production processes, in addition to propulsion. Energy use, GHG emissions, and the potential for photochemical smog formation and respiratory impacts are assessed. When results are normalized per passenger kilometer traveled (PKT), life-cycle processes increase impacts by up to 83% for energy use and GHG emissions, and up to 690% for smog and respiratory impact potentials. However, the use of a non-time-based PKT normalization obfuscates a decision-maker’s ability to understand whether the deployment of a transit system reduces emissions below a future year policy target (e.g., 80% below 1990 emissions by 2050). The year-by-year marginal effects of the decision to deploy the Expo line are developed, including the reduction in automobile travel. The time-based marginal results provide clearer explanations for how environmental effects in a region change and the critical life-cycle processes that should be targeted to achieve policy targets. The line can be expected to break even on GHG emissions within two decades, but its ability to meet long-run policy targets is most sensitive to infrastructure construction emissions, mode shifting, a changing electricity mix, and improving automobile fuel economy.
Many cities have adopted minimum parking requirements but we have relatively poor information about how parking infrastructure has grown. We estimate how parking has grown in Los Angeles County from 1900 to 2010 and how parking infrastructure evolves, affects urban form, and relates to changes in automobile travel, using building and roadway growth models. We find that since 1975 the ratio of residential offstreet parking spaces to automobiles in Los Angeles County is close to 1.0 and the greatest density of parking spaces is in the urban core while most new growth in parking occurs outside of the core. 14% of incorporated land in Los Angeles County is committed to parking. Uncertainty in our space inventory is attributed to our building growth model, onstreet space length, and the assumption that parking spaces were created as per the requirements.
The continued use of minimum parking requirements is likely to encourage automobile use at a time when metropolitan areas are actively seeking to manage congestion and increase transit use, biking, and walking. Widely discussed ways to reform parking policies may be less than effective if planners do not consider the remaining incentives to auto use created by the existing parking infrastructure. Planners should encourage the conversion of existing parking facilities to alternative uses.
Cities are increasingly developing greenhouse gas (GHG) mitigation plans and reduction targets based on a growing body of knowledge about climate change risks, and changes to passenger transportation are often at the center of these efforts. Yet little information exists for characterizing how quickly or slowly GHG emissions reductions will accrue given changes in urban form around transit, and whether benefits will accrue quickly enough to meet policy year targets (such as reaching 20% of 1990 GHG emissions levels by 2050). Even more complicated is when cities focus on achieving GHG reductions through integrated transportation and land use planning, as changes in emissions can occur across many sectors (such as transportation, building energy use, and electricity generation). Using the Los Angeles Expo line, a framework is developed to assess how financing schemes change the rate of redevelopment and resulting life-cycle GHG emissions from travel and building energy use. The framework leverages an integrated transportation and land use life-cycle assessment model that captures upfront construction of new development near transit and the long-term changes in household energy use for travel and buildings. The results show that for the same amount of development around the Expo line it is possible to either meet (if aggressive redevelopment happens early) or not meet (if redevelopment starts decades out) state GHG goals by 2050. The time-based approach reveals how specific redevelopment schedules are needed for a city to reduce GHG emissions at a rate that meets future targets.
Cities need to understand and manage their carbon footprint at the level of streets, buildings and communities, urge Kevin Robert Gurney and colleagues.
As California establishes its greenhouse gas emissions cap-and-trade program and considers options for using the new revenues produced under the program, the public and decision-makers have access to only tenuous information on the relative cost-effectiveness of passenger transportation investment options. Towards closing this knowledge gap, the cost-effectiveness of greenhouse gas reductions forecast for High-Speed Rail is compared with that estimated for recent urban transportation projects (specifically light rail, bus rapid transit, and a bicycling/pedestrian pathway) in California. Life-cycle greenhouse gas emissions are joined with full cost accounting to better understand the benefits of cap-and-trade investments. Results are largely dependent on the economic cost allocation method used. Considering only the public subsidy for capital, none of the projects appear to be a cost-effective means to reduce greenhouse gas emissions (i.e., relative to the current price of greenhouse gas emissions in California’s cap-and-trade program at $11.50 per tonne). However, after adjusting for the change in private costs users incur when switching from the counterfactual mode (automobile or aircraft) to the mode enabled by the project, all investments appear to reduce greenhouse gas emissions at a net savings to the public. Policy and decision-makers who consider only the capital cost of new transportation projects can be expected to incorrectly assess alternatives; indirect benefits (i.e., how travelers adapt to the new mass transit alternative) should be included in decision-making processes.
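The two allocation perspectives can be sketched with placeholder numbers: dollars of public capital subsidy per tonne of life-cycle GHG avoided, versus the net social cost per tonne after crediting the private cost savings of travelers who switch modes. All figures below are hypothetical and chosen only to illustrate how the sign of the conclusion can flip.

```python
# Hypothetical sketch of the two cost-allocation perspectives for $/tonne CO2e avoided.
CAP_AND_TRADE_PRICE = 11.50            # $/tonne CO2e, California allowance price cited above

public_subsidy = 900e6                 # public capital subsidy for the project, $
lifetime_ghg_reduction = 2.0e6         # life-cycle GHG avoided over the analysis period, tonnes
private_savings_per_switched_trip = 4.0  # avoided fuel, parking, and vehicle costs per trip, $
switched_trips = 400e6                 # trips shifted from the counterfactual mode

# Perspective 1: public subsidy only.
cost_public_only = public_subsidy / lifetime_ghg_reduction

# Perspective 2: net social cost after crediting private savings to switched travelers.
net_cost = public_subsidy - private_savings_per_switched_trip * switched_trips
cost_net = net_cost / lifetime_ghg_reduction

for label, cost in [("public subsidy only", cost_public_only),
                    ("net of private savings", cost_net)]:
    verdict = "below" if cost < CAP_AND_TRADE_PRICE else "above"
    print(f"{label:>22}: {cost:8.2f} $/tonne ({verdict} the ${CAP_AND_TRADE_PRICE}/tonne allowance price)")
```

A negative net cost per tonne in the second perspective corresponds to the abstract's finding that the investments reduce emissions at a net savings to the public.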
This manuscript supersedes our UCLA Institute of Transportation Studies report Cost-Effectiveness of Reductions in Greenhouse Gas Emissions from California High-Speed Rail and Urban Transportation Projects.
At the core of the intensifying debate over LCA modeling of the environmental impacts of biofuels is doubt that biofuels can mitigate climate change. Two types of LCA, attributional and consequential, have been applied to answer this question with competing results. These results turn on system boundary design, including feedstock considerations and assumptions of indirect land-use impacts. The broadening of the system boundary to include large scale land-use change of biofuel production has challenged the viability of biofuels to meet climate change goals. This paper reviews some of the latest literature in biofuels LCA exemplary of this debate, and discusses the distinctions between attributional and consequential models in biofuels. We also present a generalized boundary map that can be used to convey LCA system boundaries clearly and succinctly within both attributional and consequential LCA.
As new transportation technologies, travel patterns, and fuels emerge there is opportunity to proactively assess environmental impacts to ensure that reductions occur and unintended tradeoffs are avoided. This article summarizes the goals, scope, and findings of a special issue on Transportation Sustainability. The special issue provides an overview of recent research and policies on autonomous vehicles, electric vehicles, on-demand mobility (including carsharing), intelligent transportation systems, and biofuels, and their expected environmental effects. The reviews show that there are efforts underway to understand the environmental impacts of changes in transportation systems that may lead to technology designs and deployment strategies for environmental sustainability.
Building stocks constitute enduring components of urban infrastructure systems, but little research exists on their residence time or changing environmental impacts. Using Los Angeles County, California as a case study, a framework is developed for assessing the changes of building stocks in cities (i.e., a generalizable framework for estimating the construction and deconstruction rates), the residence time of buildings and their materials, and the associated embedded environmental impacts. In Los Angeles, previous land use decisions prove not easily reversible, and past building stock investments may continue to constrain the energy performance of buildings. The average age of the building stock has increased steadily since 1920, and more rapidly after the post-WWII construction surge in the 1950s. Buildings will likely endure for 60 years or longer, making this infrastructure a quasi-permanent investment. The long residence time, combined with the physical limitations on outward growth, suggests that the Los Angeles building stock is unlikely to have substantial spatial expansion in the future. The construction of buildings requires a continuous investment in material, monetary, and energetic resources, resulting in environmental impacts. The long residence time of structures implies a commitment to use and maintain the infrastructure, potentially creating barriers to an urban area’s ability to improve energy efficiency. The immotility of buildings, coupled with future environmental goals, indicates that urban areas will be best positioned by instituting strategies that ensure reductions in life-cycle (construction, use, and demolition) environmental impacts.
Project website: urbantransitions.org/losangelesgrowth/.
Climate change may constrain future electricity generation capacity by increasing the incidence of extreme heat and drought events. We estimate reductions to generating capacity in the Western United States based on long-term changes in streamflow, air temperature, water temperature, humidity and air density. We simulate these key parameters over the next half century by joining downscaled climate forcings with a hydrologic modeling system. For vulnerable power stations (46% of existing capacity), climate change may reduce average summertime generating capacity by 1.1-3.0%, with reductions of up to 7.2-8.8% under a ten-year drought. Currently, power providers do not account for climate impacts in their development plans, meaning that they could be overestimating their ability to meet future electricity needs.
Metropolitan greenhouse gas and air emissions inventories can better account for the variability in vehicle movement, fleet composition, and infrastructure that exists within and between regions to develop more accurate information for environmental goals. With emerging access to high quality data, new methods are needed for informing transportation emissions assessment practitioners of the relevant vehicle and infrastructure characteristics that should be prioritized in modeling to improve the accuracy of inventories. The sensitivity of light-duty and heavy-duty vehicle greenhouse gas (GHG) and conventional air pollutant (CAP) emissions to speed, weight, age, and roadway gradient is examined with second-by-second velocity profiles on freeway and arterial roads under free-flow and congestion scenarios. For GHGs and CAPs, the upper and lower bounds of each factor show the potential variability that could exist in emissions assessments across U.S. cities. When comparing the effects of changes in these characteristics across U.S. cities against average characteristics of the U.S. fleet and infrastructure, significant variability in emissions is found to exist. GHGs from light-duty vehicles could vary from -2% to 11% and CAPs from -47% to 228% when compared to the baseline. For heavy-duty vehicles the variability is -21% to 55% and -32% to 174%, respectively. The results show that cities should more aggressively pursue the integration of emerging big data into regional transportation emissions modeling; the integration of these data is likely to affect GHG and CAP inventories and how aggressively policies should be implemented to meet reductions. A web-tool (available at transportationlca.org/urbanemissions) is developed to aid cities in reducing uncertainty in their emissions inventories.
The expected urbanization of the planet in the coming century coupled with aging infrastructure in developed regions, increasing complexity of man-made systems, and pressing climate change impacts have created opportunities for reassessing the role of infrastructure and technologies in cities and how they contribute to greenhouse gas (GHG) emissions. Modern urbanization is predicated on complex, increasingly coupled infrastructure systems, and energy use continues to be largely met from fossil fuels. Until energy infrastructures evolve away from carbon-based fuels, GHG emissions are critically tied to the urbanization process. Further complicating the challenge of decoupling urban growth from GHG emissions are lock-in effects and interdependencies. This paper synthesizes state-of-the-art thinking for transportation, fuels, buildings, water, electricity, and waste systems and finds that GHG emissions assessments tend to view these systems as static and isolated from social and institutional systems. Despite significant understanding of methods and technologies for reducing infrastructure-related GHG emissions, physical, institutional, and cultural constraints continue to work against us, pointing to knowledge gaps that must be addressed. This paper identifies seven challenges to improving our understanding of the role of infrastructure and technologies during urban development and to positioning these increasingly complex systems for low-carbon growth in both high-income and low- to middle-income regions. The challenges emphasize how we reimagine the role of infrastructure in the future and how people, institutions, and ecological systems interface with infrastructure.
Carbon capture and storage (CCS) for coal power plants reduces carbon dioxide emissions, but also affects other air emissions on and offsite. This research assesses the net societal benefits and costs of monoethanolamine (MEA) CCS, valuing changes in supply chain emissions of CO2, SO2, NOx, NH3 and particulate matter (PM). Geographical variability and stochastic uncertainty for 407 coal power plant locations in the U.S. are analyzed. The main result is that the net environmental benefits and costs of MEA CCS depend critically on location. For a few favorable sites of power plant and upstream processes, CCS realizes a net benefit (benefit-cost ratio > 1) if the social cost of carbon exceeds $51/ton. For much of the U.S. however, the social cost of carbon must be much higher to realize net benefits from CCS, up to $910/ton. While the social costs of carbon are uncertain, typical estimates are in the range of $33-221/ton, much lower than the threshold value for many potential CCS locations. The method developed has broad applications to assess geographic variability in benefits of energy technologies.
Independent lines of research on urbanization, urban areas and the carbon cycle have advanced our understanding of some of the processes through which energy and land uses affect carbon. This paper synthesizes some of these diverse viewpoints as a first step towards a co-produced, integrated framework for understanding urbanization processes, urban areas and their relationships to the carbon cycle. It suggests the need for approaches that complement and combine the plethora of existing insights into interdisciplinary and transdisciplinary explorations of how different urbanization processes, and the socio-ecological and technological components of urban areas, affect the spatial and temporal patterns of carbon emissions differentially over time and within and across cities. It also calls for a more holistic approach to examining the carbon implications of urbanization and urban areas based on such interconnected features of urban development pathways as urban form, economic function, economic growth policies and other governance arrangements. It points to a wide array of uncertainties around urbanization processes, including urban socio-institutional and built-environment systems and their impact on the exchange of carbon flows within and outside urban areas. We must also understand, in turn, how carbon feedbacks, including carbon impacts and potential impacts of climate change, can affect urbanization processes. Finally, the paper explores options, barriers and limits to transitioning to low-carbon urbanization trajectories, and suggests the development of an end-to-end, co-produced and integrated scientific understanding that can more effectively inform the navigation of transitional journeys and the avoidance of obstacles along the way.
Current PV life cycle assessment (LCA) literature relies on past data to quantify the environmental impact of PV technology based on parameters like energy return on investment (EROI), greenhouse gas (GHG) emissions, and energy payback time (EPBT). The net environmental benefit of a PV module, influenced by energy intensive manufacturing processes, is allocated at the time of installation. However, the environmental benefits of a PV system do not accrue immediately after installation but accrue over the entire life cycle of the PV module. This inter-temporal trade-off depends on the magnitude of upfront PV manufacturing GHG emissions and the year-on-year GHGs avoided when PV electricity displaces electricity generated from fossil fuels. Moreover, environmental impact assessments of PV systems based on retrospective LCAs preclude the inclusion of four dynamic factors: the choice of PV technology, varying rates of technology improvement for different PV technologies, the electricity mix of the deployment location, and the electricity mix of the location where PV systems are manufactured.
By not incorporating inter-temporal trade-off analysis and the dynamic factors that influence net environmental impacts of PV deployments, PV capacity additions can inadvertently become counter-productive by increasing net GHG emissions over the short term. Also, policy makers forgo an opportunity to optimize PV capacity additions for minimal short-term GHG impacts. This project designs and implements an optimization model to minimize the short-term CRF impacts of PV system deployments by incorporating the inter-temporal trade-offs involved. When integrated with PV LCAs, this model can help policy makers minimize short-term impacts along with fulfilling long-term GHG reduction goals. The results show that the optimal PV deployment strategy for the three states - California, Wyoming and Arizona - varies depending on the electricity mixes of these states. The optimal PV deployment strategy is sensitive to the state of technology and the choice of PV technology used to fulfill the targets. Also, adopting a sub-optimal PV deployment strategy to meet California's PV policy targets by importing silicon PV modules from China will increase CO2 emissions over the short term.
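The inter-temporal optimization can be sketched as a small linear program: capacity installed early (and with lower embodied emissions) accrues more avoided grid emissions before the horizon year, so both the schedule and the technology choice matter. The horizon, capacity targets, and emission factors below are hypothetical placeholders, not the project's model.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch: schedule PV capacity additions to minimize net GHG impact over a
# short-term horizon. Upfront manufacturing emissions are incurred at install time,
# while avoided grid emissions accrue only in the years remaining before the horizon.
YEARS = 5                                   # planning horizon (years 0..4)
TARGET_MW = 1000.0                          # cumulative capacity target by the horizon
ANNUAL_LIMIT_MW = 400.0                     # deployment limit per year

techs = ["c-Si (imported)", "CdTe (domestic)"]
embodied = np.array([1800.0, 1100.0])       # manufacturing emissions, tCO2 per MW (hypothetical)
avoided_per_yr = np.array([450.0, 430.0])   # avoided grid emissions, tCO2 per MW per year

# Decision variables x[t, k] = MW of technology k installed at the start of year t.
# Net short-term impact per MW = embodied - avoided_per_yr * (YEARS - t).
c = np.array([embodied[k] - avoided_per_yr[k] * (YEARS - t)
              for t in range(YEARS) for k in range(len(techs))])

# Equality constraint: total installed capacity equals the target.
A_eq = np.ones((1, YEARS * len(techs)))
b_eq = [TARGET_MW]

# Inequality constraints: per-year deployment limit across technologies.
A_ub = np.zeros((YEARS, YEARS * len(techs)))
for t in range(YEARS):
    A_ub[t, t * len(techs):(t + 1) * len(techs)] = 1.0
b_ub = np.full(YEARS, ANNUAL_LIMIT_MW)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (YEARS * len(techs)), method="highs")
plan = res.x.reshape(YEARS, len(techs))
for t in range(YEARS):
    print(f"year {t}: " + ", ".join(f"{techs[k]} {plan[t, k]:6.1f} MW" for k in range(len(techs))))
print(f"net short-term GHG impact: {res.fun / 1e3:.1f} kt CO2 (negative = net avoided)")
```

Under these placeholder numbers the optimizer front-loads the lower-embodied-emissions module, which is the qualitative behavior the abstract describes for meeting targets with minimal short-term impacts.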
The environmental and economic assessment of neighborhood-scale transit-oriented urban form changes should include impacts from initial construction through long-term use to fully understand the benefits and costs of smart growth policies. The long-term impacts of moving people closer to transit require the coupling of behavioral forecasting with environmental assessment. Using new light rail and bus rapid transit in Los Angeles, California as a case study, a life-cycle environmental and economic assessment is developed to assess the potential range of impacts resulting from mixed-use infill development. An integrated transportation and land use life-cycle assessment framework is developed to estimate energy consumption, air emissions, and economic (public, developer, and user) costs. Residential and commercial buildings, automobile travel, and transit operation changes are included, and a 60-year forecast is developed that compares transit-oriented growth against growth in areas served predominantly by local bus service. The results show that commercial developments create the greatest potential for impact reductions, followed by residential commute shifts to transit, both of which may be brought about by access to high-capacity transit, reduced parking requirements, and developer incentives. Greenhouse gas emission reductions of 3-540 Gg CO2-equivalents per year can be achieved for as low as $10.50 per tonne. Potential respiratory impacts (PM10-equivalents) and smog formation can be reduced by 26-36%. The shift from business-as-usual growth to transit-oriented development may increase development costs by as much as $610 million, but can decrease user costs by $4,000 per household per year over the building lifetime.
This synthesis article presents an overview of an urban metabolism approach, using mixed methods and multiple sources of data, applied to Los Angeles. We examine electric energy use in buildings, GHG emissions from electricity, embedded infrastructure life-cycle inputs, water use, and solid waste streams in an attempt to better understand the urban flows and sinks in the Los Angeles region (City and County). This quantification is being conducted to help policy-makers better target energy conservation and efficiency programs, pinpoint the best locations for distributed solar generation, and develop policies for greater sustainability. It provides a framework to which many more UM flows can be added to create greater understanding of the County’s resource dependencies. Going forward, together with policy analysis, UM can help untangle the complex intertwined resource dependencies that cities must address as they attempt to become more sustainable.
Water and energy resources are intrinsically linked, yet they are managed separately, even in the water-scarce American Southwest. This study develops a spatially explicit model of water-energy interdependencies in Arizona and assesses the potential for cobeneficial conservation programs. The interdependent benefits of investments in eight conservation strategies are assessed within the context of legislated renewable energy portfolio and energy efficiency standards. The cobenefits of conservation are found to be significant. Water conservation policies have the potential to reduce statewide electricity demand by 0.82–3.1%, satisfying 4.1–16% of the state’s mandated energy-efficiency standard. Adoption of energy-efficiency measures and renewable generation portfolios can reduce nonagricultural water demand by 1.9–15%. These conservation cobenefits are typically not included in conservation plans or benefit-cost analyses. Many cobenefits offer negative costs of saved water and energy, indicating that these measures provide water and energy savings at no net cost. Because the ranges of costs and savings for water-energy conservation measures are somewhat uncertain, future studies should investigate the cobenefits of individual conservation strategies in detail. Although this study focuses on Arizona, the analysis can be extended elsewhere as renewable portfolio and energy efficiency standards become more common nationally and internationally.
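The cobenefit logic lends itself to a very small worked example. The sketch below uses assumed, round-number intensities rather than the study's calibrated Arizona values; it only illustrates how water savings translate into avoided electricity (via the energy embedded in supplying and treating water) and how electricity savings translate into avoided water (via the water consumed by generation).

```python
# Hypothetical intensities for illustration only (not the study's values).
WATER_ENERGY_INTENSITY_KWH_PER_M3 = 3.0   # energy to pump, treat, and deliver water (assumed)
POWER_WATER_INTENSITY_L_PER_KWH = 2.0     # water consumed by the generation mix (assumed)

def energy_cobenefit_kwh(water_saved_m3):
    """Electricity avoided because less water must be supplied."""
    return water_saved_m3 * WATER_ENERGY_INTENSITY_KWH_PER_M3

def water_cobenefit_m3(electricity_saved_kwh):
    """Water avoided because less electricity must be generated."""
    return electricity_saved_kwh * POWER_WATER_INTENSITY_L_PER_KWH / 1000.0

print(energy_cobenefit_kwh(1_000_000))   # kWh avoided by conserving 1 million m^3 of water
print(water_cobenefit_m3(10_000_000))    # m^3 avoided by saving 10 GWh of electricity
```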
The comprehensiveness of environmental assessments of future long-distance travel that include high-speed rail (HSR) is constrained by several methodological, institutional, and knowledge gaps that must and can be addressed. These gaps preclude a robust understanding of the changes in environmental, human health, resource, and climate change impacts that result from the implementation of HSR in the United States. The gaps are also inimical to an understanding of how HSR can be positioned for 21st century sustainability goals. Through a synthesis of environmental studies, the gaps are grouped into five overarching grand challenges: a spatial incompatibility between HSR and other long-distance modes that is often ignored, an environmental review process that obviates modal alternatives, siloed interest in particular environmental impacts, a dearth of data on future vehicle and energy sources, and a poor understanding of secondary impacts, particularly in land use. Recommendations are developed for institutional investment in multimodal research and for knowledge and method building around several topics. Ultimately, the environmental assessment of HSR should be integrated into assessments that seek to understand the complementary and competitive configurations of transportation services, as well as future accessibility.
Purpose: Comparative Life Cycle Assessments (LCAs) today lack robust methods of interpretation that help decision makers understand and identify tradeoffs in the selection process. Truncating the analysis at characterization is misleading, and existing practices for normalization and weighting may unwittingly oversimplify important aspects of a comparison. This paper introduces a novel approach based on a multi-criteria decision analytic method known as Stochastic Multi-attribute Analysis for Life Cycle Impact Assessment (SMAA-LCIA), which uses internal normalization by means of outranking and exploration of feasible weight spaces.
Methods: To contrast different valuation methods, this study performs a comparative LCA of liquid and powder laundry detergents using three approaches to normalization and weighting: (1) characterization with internal normalization and equal weighting, (2) typical valuation consisting of external normalization and weights, and (3) SMAA-LCIA using outranking normalization and stochastic weighting. Characterized results are often represented by LCA software with respect to their relative impacts normalized to 100%. Typical valuation approaches rely on normalization references and single-value weights, and use discrete numbers throughout the calculation process to generate single scores. Alternatively, SMAA-LCIA is capable of incorporating high uncertainty in the input parameters, normalizes internally by pair-wise comparisons (outranking), and allows for the stochastic exploration of weights. SMAA-LCIA yields probabilistic, rather than discrete, comparisons that reflect uncertainty in the relative performance of alternatives.
Results and Discussion: All methods favored liquid over powder detergent. However, each method results in different conclusions regarding the environmental tradeoffs. Graphical outputs at characterization of comparative assessments portray results in a way that is insensitive to magnitude and thus can be easily misinterpreted. Typical valuation generates results that are oversimplified and unintentionally biased towards a few impact categories due to the use of normalization references. Alternatively, SMAA-LCIA avoids the bias introduced by external normalization references, includes uncertainty in the performance of alternatives and weights, and focuses the analysis on identifying the mutual differences most important to the eventual rank ordering.
Conclusions and recommendations: SMAA is particularly appropriate for comparative LCAs because it evaluates mutual differences and weights stochastically. This allows for tradeoff identification and the ability to sample multiple perspectives simultaneously. SMAA-LCIA is a robust tool that can improve understanding of comparative LCA by decision- or policy-makers.
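For readers unfamiliar with the mechanics, the sketch below is a stripped-down, hypothetical illustration of the SMAA-style logic described above: weights are sampled stochastically from the full weight simplex, alternatives are compared impact-by-impact (a simple stand-in for outranking of mutual differences), and the share of weight draws favoring each alternative is reported. It is not the paper's implementation, the impact scores are invented, and uncertainty in the characterized scores themselves is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows: two alternatives; columns: impact categories (hypothetical characterized scores).
impacts = np.array([
    [1.0, 3.0, 2.0],   # alternative A
    [2.0, 2.5, 4.0],   # alternative B
])

def rank_acceptability(impacts, n_samples=10_000):
    """Fraction of sampled weight vectors under which each alternative wins."""
    wins = np.zeros(impacts.shape[0])
    for _ in range(n_samples):
        w = rng.dirichlet(np.ones(impacts.shape[1]))      # stochastic weights on the simplex
        a_better = impacts[0] < impacts[1]                # lower impact is better
        tie = impacts[0] == impacts[1]
        score_a = w[a_better].sum() + 0.5 * w[tie].sum()  # pairwise, internally normalized
        wins[0 if score_a > 0.5 else 1] += 1
    return wins / n_samples

print(rank_acceptability(impacts))   # share of sampled weight space favoring each alternative
```

A full SMAA-LCIA analysis would also sample the impact scores from their uncertainty distributions and report rank acceptability across all ranks, not just the winner.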
The architecture-engineering-construction (AEC) industry faces increasing demands on its projects while budgets appear to be shrinking. Building owners and operators seem to want their buildings to do more for less cost. Although this may seem counterintuitive, it aligns nicely with the sustainable-architecture approach of “less is more.” Moreover, in a shift from exclusively considering first costs for a project, the AEC industry seems to be moving in the direction of life-cycle cost considerations, furthering the opportunity for a more sustainable built environment. Often, sustainable is treated as synonymous with achieving certification [e.g., Leadership in Energy and Environmental Design (LEED) and Infrastructure Voluntary Evaluation Sustainability Tool (INVEST) certification]. While the authors acknowledge that certification can improve particular aspects of sustainability, it is necessary to take a broader approach and consider the economic, environmental, and social dimensions of sustainability. In this paper, the authors explore each of these dimensions and present examples of how the AEC industry can measure, balance, and monetize them.
The environmental outcomes of urban form changes should couple life-cycle and behavioral assessment methods to better understand urban sustainability policy outcomes. Using Phoenix, Arizona light rail as a case study, an integrated transportation and land use life-cycle assessment (ITLU-LCA) framework is developed to assess the changes to energy consumption and air emissions from transit-oriented neighborhood designs. Residential travel, commercial travel, and building energy use are included, and the framework integrates household behavior change assessment to explore the environmental and economic outcomes of policies that affect infrastructure. The results show that upfront environmental and economic investments are needed (through more energy-intense building materials for high-density structures) to produce long-run benefits in reduced building energy use and automobile travel. The annualized life-cycle benefits of transit-oriented developments in Phoenix can range from 1.7 to 230 Gg CO2e depending on the aggressiveness of residential density. Midpoint impact stressors for respiratory effects and photochemical smog formation are also assessed and can be reduced by 1.2-170 Mg PM10e and 41-5,200 Mg O3e annually. These benefits come at an additional construction cost of up to $410 million, resulting in a cost of avoided CO2e of $16-29 alongside household cost savings.
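The cost-effectiveness metric quoted above can be unpacked with simple arithmetic. The sketch below uses straight-line annualization of the added construction cost over a 60-year analysis period, an assumption that may differ from the study's annualization; with the abstract's upper-bound figures it lands near the top of the reported cost range.

```python
# Back-of-the-envelope cost of avoided CO2e: annualized added cost divided by
# annual life-cycle GHG reduction. Straight-line annualization is an assumption.

def cost_of_avoided_co2e(added_capital_usd, annual_avoided_t_co2e, years=60):
    annualized_cost_usd = added_capital_usd / years
    return annualized_cost_usd / annual_avoided_t_co2e

# Upper-bound values from the abstract: $410M added cost, 230 Gg CO2e/yr avoided.
print(round(cost_of_avoided_co2e(410e6, 230e3), 1))   # roughly $30 per tonne CO2e avoided
```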
There is significant interest in reducing urban growth impacts, yet little information exists with which to comprehensively estimate the energy and air quality tradeoffs. An integrated transportation and land-use life-cycle assessment framework is developed to quantify the long-term impacts from residential infill, using the Phoenix light rail system as a case study. The results show that (1) significant reductions in life-cycle energy use, greenhouse gas emissions, and respiratory and smog impacts are possible; (2) building construction, vehicle manufacturing, and energy feedstock effects are significant; and (3) marginal benefits from reduced automobile use and potential household behavior changes exceed marginal costs from new rail service.
Public transportation systems are often part of strategies to reduce urban environmental impacts from passenger transportation, yet comprehensive energy and environmental life-cycle measures, including upfront infrastructure effects and indirect and supply chain processes, are rarely considered. Using the new bus rapid transit and light rail lines in Los Angeles, near-term and long-term life-cycle impact assessments are developed, including consideration of reduced automobile travel. Energy consumption and emissions of greenhouse gases and criteria pollutants are assessed, as well as the potential for smog and respiratory impacts. Results show that life-cycle infrastructure, vehicle, and energy production components significantly increase the footprint of each mode (by 48–100% for energy and greenhouse gases, and up to 6200% for environmental impacts), and that emerging technologies and renewable electricity standards will significantly reduce impacts. Life-cycle results are identified as either local (in Los Angeles) or remote, and show how the decision to build and operate a transit system in a city produces environmental impacts far outside of geopolitical boundaries. Ensuring shifts of 20–30% of transit riders from automobiles will result in passenger transportation greenhouse gas reductions for the city, and the larger the shift, the quicker the payback, which should be considered for time-specific environmental goals.
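The payback intuition in the last sentence can be made concrete with a small calculation. The sketch below is purely illustrative: all inputs are hypothetical, and the real assessment tracks many pollutants and time-varying electricity mixes rather than a single GHG figure.

```python
# Hypothetical payback calculation: upfront infrastructure emissions are repaid
# by the net annual savings from riders shifted out of automobiles.

def payback_years(infrastructure_kg_co2e, riders_shifted_per_day,
                  avg_trip_km, auto_kg_co2e_per_pkt, transit_annual_kg_co2e):
    avoided_per_year = riders_shifted_per_day * 365 * avg_trip_km * auto_kg_co2e_per_pkt
    net_annual_savings = avoided_per_year - transit_annual_kg_co2e
    if net_annual_savings <= 0:
        return float("inf")    # the line never pays back its upfront footprint
    return infrastructure_kg_co2e / net_annual_savings

# The larger the mode shift, the quicker the payback (all values hypothetical).
print(round(payback_years(5e8, 20_000, 15, 0.25, 1.5e7), 1))
print(round(payback_years(5e8, 30_000, 15, 0.25, 1.5e7), 1))
```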
This paper examines the potential for incorporation of life-cycle assessment (LCA) into transportation planning and policy, by drawing upon analysis and precedent-setting policy structures from California. The paper first summarizes a case study of a transportation-system LCA for Los Angeles County, and briefly describes the existing structure of transportation policy, emissions regulation, and the existing partial precedents for the incorporation of LCA and decision criteria into transportation policy and planning. Using standard criteria for good policy, the paper then identifies and describes six possible policy mechanisms for incorporating LCA into the transportation planning process. These include legislative requirements for project planning, a preferential finance program, a planning standard for Regional Transportation Plans, an environmental impact assessment criterion, a criterion for selection of Transportation Control Measures under the federal Clean Air Act, and a cap-and-trade system for transportation-related life-cycle emissions. The advantages and disadvantages of each approach are identified, with an ultimate recommendation to refine and pursue a blended approach focusing on the regional planning scale.
The current institutional process for project-level environmental review, the government-required Environmental Impact Statement (EIS), requires assessment of the proposed project, the no-build alternative, and alternatives to the proposed project. Despite growing academic research to compare the environmental impacts of air and high-speed rail (HSR) infrastructure, there are few instances of multimodal alternatives analysis in airport and HSR EIS documents. In this paper, examples of EISs for air and HSR capacity-enhancement projects are chronicled to identify key challenges to completing modal alternative analysis in the EIS: the spatial heterogeneity of the physical infrastructure for air and HSR, the framing of EIS purpose and need statements, and the complicated interpretations of environmental impact significance thresholds. The paper concludes by proposing strategies to incentivize modal alternative assessments and highlight methods that are needed to perform high-quality comparative analysis to inform decision makers, whether in the context of the EIS or in upstream planning processes.
Urban sustainability assessment should integrate urban metabolism and life-cycle impact assessment to develop an integrated multi-scale framework for evaluating resource depletion and damages to human health and environmental quality. A streamlined framework can be developed by employing emerging neighborhood-scale data, improving resource depletion and damage to human health and environmental quality characterizations, including socio-demographic characteristics, and integrating methods for making decisions with uncertainty. Foundational elements and an analytical path exist to integrate urban metabolism and life-cycle impact assessment in a streamlined manner. Urban sustainability practitioners must eventually develop new methods for integrating social, institutional, and cultural forces instead of focusing on physical systems.
Government subsidies that favor all-electric travel might seem to be the obvious strategy to reduce vehicle air emissions and fossil fuel use, but short range plug-in hybrids (with long range gasoline backup) offer more benefits at lower cost.
Sustainable mobility policy for long-distance transportation services should consider emerging automobiles and aircraft as well as infrastructure and supply chain life-cycle effects in the assessment of new high-speed rail systems. Using the California corridor, future automobiles, high-speed rail and aircraft long-distance travel are evaluated, considering emerging fuel-efficient vehicles, new train designs and the possibility that the region will meet renewable electricity goals. An attributional per passenger-kilometer-traveled life-cycle inventory is first developed including vehicle, infrastructure and energy production components. A consequential life-cycle impact assessment is then established to evaluate existing infrastructure expansion against the construction of a new high-speed rail system. The results show that when using the life-cycle assessment framework, greenhouse gas footprints increase significantly and human health and environmental damage potentials may be dominated by indirect and supply chain components. The environmental payback is most sensitive to the number of automobile trips shifted to high-speed rail, and for greenhouse gases is likely to occur in 20–30 years. A high-speed rail system that is deployed with state-of-the-art trains, electricity that has met renewable goals, and in a configuration that endorses high ridership will provide significant environmental benefits over existing modes. Opportunities exist for reducing the long-distance transportation footprint by incentivizing large automobile trip shifts, meeting clean electricity goals and reducing material production effects.
Carbon dioxide capture and storage (CCS) is increasingly seen as a way for society to enjoy the benefits of fossil fuel energy sources while avoiding the climate disruption associated with fossil CO2 emissions. A decision to deploy CCS technology at scale should be based on robust information on its overall costs and benefits. Life-cycle assessment (LCA) is a framework for holistic assessment of the energy and environmental footprint of a system, and can provide crucial information to policy-makers, scientists, and engineers as they develop and deploy CCS systems. We identify seven key issues that should be considered to ensure that conclusions and recommendations from CCS LCA are robust: energy penalty, functional units, scale-up challenges, non-climate environmental impacts, uncertainty management, policy-making needs, and market effects. Several recent life-cycle studies have focused on detailed assessments of individual CCS technologies and applications. While such studies provide important data and information on technology performance, such case-specific data are inadequate to fully inform the decision making process. LCA should aim to describe the system-wide environmental implications of CCS deployment at scale, rather than a narrow analysis of technological performance of individual power plants.
Automobile air emissions are a well-recognized problem and have been subject to considerable regulation. An increasing concern for greenhouse gas emissions draws additional considerations to the externalities of personal vehicle travel. This paper provides estimates of the costs for automobile air emissions for 86 U.S. metropolitan areas based on county-specific external air emission morbidity, mortality, and environmental costs. Total air emission costs in the urban areas are estimated to be $145 million/day, with Los Angeles, California, and New York City (each $23 million per day) having the highest totals. These external costs average $0.64 per day per person and $0.03 per vehicle mile traveled. Total air emission cost solely due to traffic congestion for the same 86 U.S. metropolitan areas was also estimated to be $24 million per day. These estimates are compared with others in the literature and are found to be generally consistent. These external automobile air emission costs are important for social benefit and cost assessment of transportation measures to reduce vehicle use. However, this study does not include any abatement costs associated with automobile emission controls or government investments to reduce emissions such as traffic signal setting.
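The per-capita and per-mile figures follow from straightforward aggregation of the county-level damage costs. The sketch below shows that arithmetic with invented inputs; it is not the study's dataset or damage model.

```python
# Aggregate hypothetical county-level external costs to metro-level metrics.

def metro_air_emission_costs(county_costs_usd_per_day, population, daily_vmt):
    total = sum(county_costs_usd_per_day)
    return {
        "total_usd_per_day": total,
        "usd_per_person_per_day": total / population,
        "usd_per_vehicle_mile": total / daily_vmt,
    }

print(metro_air_emission_costs(
    county_costs_usd_per_day=[1.2e6, 0.8e6, 0.5e6],   # hypothetical counties
    population=4.0e6,
    daily_vmt=90.0e6,
))
```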
We assess the economic value of life-cycle air emissions and oil consumption from conventional vehicles, hybrid-electric vehicles (HEVs), plug-in hybrid-electric vehicles (PHEVs), and battery electric vehicles in the US. We find that plug-in vehicles may reduce or increase externality costs relative to grid-independent HEVs, depending largely on greenhouse gas and SO2 emissions produced during vehicle charging and battery manufacturing. However, even if future marginal damages from emissions of battery and electricity production drop dramatically, the damage reduction potential of plug-in vehicles remains small compared to ownership cost. As such, to offer a socially efficient approach to emissions and oil consumption reduction, lifetime cost of plug-in vehicles must be competitive with HEVs. Current subsidies intended to encourage sales of plug-in vehicles with large capacity battery packs exceed our externality estimates considerably, and taxes that optimally correct for externality damages would not close the gap in ownership cost. In contrast, HEVs and PHEVs with small battery packs reduce externality damages at low (or no) additional cost over their lifetime. Although large battery packs allow vehicles to travel longer distances using electricity instead of gasoline, large packs are more expensive, heavier, and more emissions intensive to produce, with lower utilization factors, greater charging infrastructure requirements, and life-cycle implications that are more sensitive to uncertain, time-sensitive, and location-specific factors. To reduce air emission and oil dependency impacts from passenger vehicles, strategies to promote adoption of HEVs and PHEVs with small battery packs offer more social benefits per dollar spent.
The US parking infrastructure is vast and little is known about its scale and environmental impacts. The few parking space inventories that exist are typically regionalized, and no known environmental assessment has been performed to determine the energy and emissions from providing this infrastructure. A better understanding of the scale of US parking is necessary to properly value the total costs of automobile travel. Energy and emissions from constructing and maintaining the parking infrastructure should be considered when assessing the total human health and environmental impacts of vehicle travel. We develop five parking space inventory scenarios and from these estimate the range of infrastructure provided in the US to be between 105 million and 2 billion spaces. Using these estimates, a life-cycle environmental inventory is performed to capture the energy consumption and emissions of greenhouse gases, CO, SO2, NOX, VOC (volatile organic compounds), and PM10 (PM: particulate matter) from raw material extraction, transport, asphalt and concrete production, and placement (including direct, indirect, and supply chain processes) of space construction and maintenance. The environmental assessment is then evaluated within the life-cycle performance of sedans, SUVs (sports utility vehicles), and pickups. Depending on the scenario and vehicle type, the inclusion of parking within the overall life-cycle inventory increases energy consumption by 0.1–0.3 MJ (from a baseline of 3.1–4.8 MJ) and greenhouse gas emissions by 6–23 g CO2e (from a baseline of 230–380 g CO2e) per passenger kilometer traveled. Life-cycle automobile SO2 and PM10 emissions show some of the largest increases, by as much as 24% and 89% from the baseline inventory. The environmental consequences of providing the parking spaces are discussed, as well as the uncertainty in allocating paved area between parking and roadways.
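The per-passenger-kilometer increments reported above come from allocating the annualized parking burden over total passenger travel. The sketch below reproduces that allocation step with assumed values for the per-space burden, national vehicle travel, and occupancy; none of these are the study's figures.

```python
# Allocate an annualized parking construction/maintenance burden over passenger
# kilometers traveled and add it to a baseline per-PKT inventory. Assumed values.

def parking_g_co2e_per_pkt(spaces, kg_co2e_per_space_per_yr,
                           annual_vkt, passengers_per_vehicle):
    annual_pkt = annual_vkt * passengers_per_vehicle
    return spaces * kg_co2e_per_space_per_yr / annual_pkt * 1000.0   # kg -> g

baseline_g_per_pkt = 300.0                     # within the 230-380 g CO2e range above
added_g_per_pkt = parking_g_co2e_per_pkt(
    spaces=800e6,                              # one hypothetical inventory scenario
    kg_co2e_per_space_per_yr=80.0,             # annualized burden per space (assumed)
    annual_vkt=4.0e12,                         # hypothetical automobile travel
    passengers_per_vehicle=1.6,
)
print(round(added_g_per_pkt, 1), round(baseline_g_per_pkt + added_g_per_pkt, 1))
```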
The state of California is expected to have significant population growth in the next half-century, resulting in additional passenger transportation demand. Planning for a high-speed rail system connecting San Diego, Los Angeles, San Francisco, and Sacramento, as well as many population centers in between, is now underway. The considerable investment in California high-speed rail has been debated for some time and now includes the energy and environmental tradeoffs. The per-trip energy consumption, greenhouse gas emissions, and other emissions are often compared against the alternatives (automobiles, heavy rail, and aircraft), but typically only considering vehicle operation. An environmental life-cycle assessment of the four modes was created to compare both the direct effects of vehicle operation and the indirect effects from vehicle, infrastructure, and fuel components. Energy consumption, greenhouse gas emissions, and SO2, CO, NOX, VOC, and PM10 emissions were evaluated. The energy and emission intensities of each mode were normalized per passenger kilometer traveled by using high and low occupancies to illustrate the range in modal environmental performance at potential ridership levels. While high-speed rail has the potential to be the lowest energy consumer and greenhouse gas emitter, appropriate planning and continued investment would be needed to ensure sustained high occupancy. The time to environmental payback is discussed, highlighting the ridership conditions where high-speed rail will or will not produce fewer environmental burdens than existing modes. Furthermore, environmental tradeoffs may occur: high-speed rail may lower energy consumption and greenhouse gas emissions per trip but can create more SO2 emissions (given the current electricity mix), leading to environmental acidification and human health issues. The significance of life-cycle inventorying is discussed, as well as the potential of increasing occupancy on mass transit modes.
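The occupancy normalization is simple but consequential, since a train consumes roughly the same energy whether it runs full or nearly empty. The snippet below illustrates the per-passenger-kilometer calculation with a hypothetical life-cycle footprint per train-kilometer.

```python
# Normalize a per-vehicle-kilometer footprint by occupancy (hypothetical values).

def mj_per_pkt(mj_per_vehicle_km, passengers_on_board):
    return mj_per_vehicle_km / passengers_on_board

HSR_MJ_PER_TRAIN_KM = 75.0                  # assumed life-cycle energy per train-km
for occupancy in (150, 600, 1_200):         # low, medium, and high ridership
    print(occupancy, round(mj_per_pkt(HSR_MJ_PER_TRAIN_KM, occupancy), 3))
```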
A comparative life-cycle energy and emissions (greenhouse gas, CO, NOX, SO2, PM10, and VOCs) inventory is created for three U.S. metropolitan regions (San Francisco, Chicago, and New York City). The inventory captures both vehicle operation (direct fuel or electricity consumption) and non-operation components (e.g., vehicle manufacturing, roadway maintenance, infrastructure operation, and material production, among others). While urban transportation inventories have been continually improved, little information exists identifying the particular characteristics of metropolitan passenger transportation and why one region may differ from the next. Using travel surveys and recently developed transportation life-cycle inventories, metropolitan inventories are constructed and compared. Automobiles dominate total regional performance, accounting for 86–96% of energy consumption and emissions. Comparing system-wide averages, New York City shows the lowest end-use energy and greenhouse gas footprint compared to San Francisco and Chicago, influenced by its larger share of transit ridership. While automobile fuel combustion is a large component of emissions, diesel rail, electric rail, and ferry service can also make strong contributions. Additionally, the inclusion of life-cycle processes necessary for any transportation mode results in significant increases (as large as 20 times that of vehicle operation) for the region. In particular, emissions of CO2 from cement production used in concrete throughout infrastructure, SO2 from electricity generation in non-operational components (vehicle manufacturing, electricity for infrastructure materials, and fuel refining), PM10 from fugitive dust released in roadway construction, and VOCs from asphalt result in significant additional inventory. Private and public transportation are disaggregated, as are off-peak and peak travel times. Furthermore, emissions are joined with healthcare and greenhouse gas monetized externalities to evaluate the societal costs of passenger transportation in each region. Results are validated against existing studies. The dominating contribution of automobile end-use energy consumption and emissions is discussed, and strategies for improving regional performance given private travel's disproportionate share are identified.
To appropriately mitigate environmental impacts from transportation, it is necessary for decision makers to consider the life-cycle energy use and emissions. Most current decision-making relies on analysis at the tailpipe, ignoring vehicle production, infrastructure provision, and fuel production required for support. We present results of a comprehensive life-cycle energy, greenhouse gas emissions, and selected criteria air pollutant emissions inventory for automobiles, buses, trains, and airplanes in the US, including vehicles, infrastructure, fuel production, and supply chains. We find that total life-cycle energy inputs and greenhouse gas emissions contribute an additional 63% for onroad, 155% for rail, and 31% for air systems over vehicle tailpipe operation. Inventorying criteria air pollutants shows that vehicle non-operational components often dominate total emissions. Life-cycle criteria air pollutant emissions are between 1.1 and 800 times larger than vehicle operation. Ranges in passenger occupancy can easily change the relative performance of modes.
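Because the percentages above are additions over tailpipe operation, a reader can reconstruct the life-cycle totals with one line of arithmetic, as in the snippet below (the operation value is normalized to 1 for clarity).

```python
# Life-cycle total expressed as operation plus non-operational components
# given as a fraction of operation (fractions from the abstract).

def life_cycle_total(operation, additional_fraction):
    return operation * (1.0 + additional_fraction)

print(life_cycle_total(1.0, 0.63))   # onroad: 1.63x tailpipe energy/GHG
print(life_cycle_total(1.0, 1.55))   # rail:   2.55x
print(life_cycle_total(1.0, 0.31))   # air:    1.31x
```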
As cellulosic ethanol technologies advance, states could use the organic content of municipal solid waste as a transportation fuel feedstock and simultaneously reduce externalities associated with waste disposal. We examine the major processes required to support a lignocellulosic (employing enzymatic hydrolysis) municipal solid waste-to-ethanol infrastructure, computing cost, energy, and greenhouse gas effects for California. The infrastructure is compared against the business-as-usual case in which the state continues to import most of its ethanol from the Midwest. Assuming between 60% and 90% practical yields for ethanol production, California could produce between 1.0 and 1.5 billion gallons per year of ethanol from 55% of the 40 million metric tonnes of waste currently sent to landfills annually. The classification of organic wastes and ethanol plant operation represent almost the entire system cost (between $3.5 and $4.5 billion annually), while distribution has negligible cost effects and the savings from avoided landfilling are small. Fossil energy consumption relative to business as usual decreases by between 82 and 130 PJ, largely due to foregone gasoline consumption. The net greenhouse gas impacts ultimately depend on how well landfills control their emissions of decomposing organics. Based on the current landfill mix in the state, the cellulosic infrastructure would experience a slight gain in greenhouse gas emissions. Net emissions can rise if organics diversion releases carbon that would otherwise be flared or sequestered, whereas emissions would be avoided if landfills cannot effectively control emissions during periods of active waste decay. There is currently considerable uncertainty surrounding the recovery efficiency of landfill emissions controls. In either case, burying lignin appears to be better than burning it because of its decay properties and its energy and carbon content. We estimate the breakeven price for lignocellulosic ethanol at between $2.90 and $3.47/gal (μ = $3.13/gal).
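The production estimate is a chain of simple factors. The sketch below reproduces that arithmetic; the recoverable-gallons-per-tonne figure is an assumption chosen so that the 60-90% practical-yield range roughly reproduces the 1.0-1.5 billion gallon estimate quoted above, not a value taken from the study.

```python
# Annual ethanol potential = landfilled waste x diverted share x gal/tonne x practical yield.

GAL_PER_TONNE = 75.0   # assumed recoverable gallons per tonne of diverted MSW (see note above)

def annual_ethanol_billion_gal(waste_tonnes, diverted_share, practical_yield):
    return waste_tonnes * diverted_share * GAL_PER_TONNE * practical_yield / 1e9

for practical_yield in (0.60, 0.90):
    print(practical_yield, round(annual_ethanol_billion_gal(40e6, 0.55, practical_yield), 2))
```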
Curbside recycling programs can be more cost-effective than landfilling and lead to environmental benefits from the recovery of materials. Significant reductions in energy and emissions are derived from the decrease of energy-intensive production with virgin materials. In many cities, competing priorities can lead to limited consideration given to system-optimal collection and processing strategies that can drive down costs and increase revenue while simultaneously reducing system energy consumption and greenhouse gas (GHG) emissions. We evaluate three alterations to a hypothetical California city’s recycling network to discern the conditions under which the changes constitute system improvements to cost, energy, and emissions. The system initially operates with a collection zoning scheme that does not mitigate the impact of seasonal variations in consumer tonnage. In addition, two collection organizations operate redundantly, collecting recyclables from different customer types on the same street network. Finally, the system is dual stream, meaning recyclables are separated at the curbside. In some scenarios, this practice can limit the consumer participation rate, leading to lower collection quantities. First, we evaluate a “business as usual” (BAU) scenario and find that the system operates at a $1.7 M/yr loss but still avoids a net 18.7 GJ and 1700 kg of greenhouse gas equivalent (GGE) per ton of material recycled. Second, we apply an alternative zoning scheme for collection that creates a uniform daily pickup demand throughout the year, reducing costs by $0.2 M/yr, energy by 30 MJ/ton, and GHG emissions by 2 kg GGE/ton. Next, the two collection organizations are consolidated into a single entity, further reducing vehicle fleet size and weekly vehicle miles traveled and resulting in savings from BAU of $0.3 M/yr, 100 MJ/ton, and 8 kg GGE/ton. Lastly, we evaluate a switch to a single-stream system (where recyclables are commingled). We show that single-stream recycling can increase the total amount of material collected to a degree that lowers overall net cost ($0.2 M/yr) and leads to further reductions in energy use (210 MJ/ton) and emissions (16 kg GGE/ton). However, there can be circumstances in which maintaining a consolidated dual-stream system is preferred over single stream. A sensitivity analysis is also performed and a discussion is presented addressing the applicability of this city network to others.
Construction mismanagement results in multiple problems that can cascade throughout the work force chain, affecting the schedule and leading to damages to multiple parties. Although the problem may start with a single subcontractor, it can result in all contractors feeling some impact to their work. In this paper, a case study is presented of a project with seven different mismanagement scenarios. A description of each scenario is provided as well as a quantification of the damages that result from the problem. A construction claims section is also included that addresses many of the issues that could result from a claim for each of the seven scenarios. A discussion is presented outlining possible preventative steps to minimize the damages from the problems presented.