The Research University (TRU)


Collaborative Research: CIBR: Cyberinfrastructure Enabling End-to-End Workflows for Aquatic Ecosystem Forecasting: University of Florida

Renato Figueiredo

[email protected]

Aquatic ecosystems in the United States and around the globe are experiencing increasing variability due to human activities. Provisioning drinking water in the face of rapid change in environmental conditions motivates the need to develop forecasts of future water quality. Near-term water quality forecasts can guide management actions over day-to-week time scales to mitigate potential disruptions in drinking water and other essential freshwater ecosystem services. To maximize the utility of water quality forecasts for managers and decision-makers, the forecasts must be accessible in near-real time, reliable, and continuously updated with environmental sensor data. However, developing iterative, near-term ecological forecasts requires complex cyberinfrastructure that is widely distributed, from sensors and computers collecting information at freshwater lakes and reservoirs to cloud computing services where forecast models are executed. Consequently, significant software challenges still remain for environmental scientists to easily and effectively deploy forecasting workflows. This project will address this need by designing, implementing, and deploying open-source software, FLARE (Forecasting Lake And Reservoir Ecosystems), that will enable the creation of flexible, scalable, robust, and near-real-time iterative ecological forecasts. This software will be tested and widely disseminated to water utilities, drinking water managers, and many other decision-makers. FLARE will greatly advance the capability of the ecological research community to perform near-real-time aquatic forecasts.

The FLARE forecasting system is novel in its architecture, as it integrates a software-defined virtual distributed infrastructure spanning resources from sensor gateway devices at the edge of the network to cloud computing and storage. FLARE will support the flexible deployment of software in close proximity to water quality sensors in lakes and reservoirs, and in cloud resources for end-to-end data acquisition and processing. FLARE interconnects its distributed resources through a virtual private network to ensure data integrity and privacy in communications, and supports a flexible model applicable across a variety of lakes and reservoirs. Reusing best-of-breed technologies, FLARE builds upon and integrates several contemporary, widely used open-source software frameworks in a manner that lowers the barrier to the deployment and management of ecological forecasting workflows by ecologists. Importantly, this project's development of scalable and open-source cyberinfrastructure tools and end-to-end workflows for creating iterative aquatic forecasts will provide a critical resource for advancing the ecological forecasting research community, as well as provide a template for forecasting in other ecosystems. This project will build on and expand an existing program for cross-disciplinary teaching tools and research exchanges of undergraduate and graduate students to provide training at the intersection of computer science, freshwater science, and ecosystem modeling. Ultimately, this project will develop scalable, robust, secure workflows that will advance the capacity, practice, and training opportunities for ecological forecasting worldwide. Results from this project can be found at http://flare-forecast.org.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
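To make the iterative forecasting cycle described above concrete, the following is a minimal, self-contained Python sketch of one iteration of an ensemble water-temperature forecast: assimilate the latest sensor observation, propagate an ensemble forward, and summarize the forecast with uncertainty. The toy model, the nudging-style assimilation step, and every parameter value are illustrative assumptions for exposition only; they are not FLARE's actual implementation (see http://flare-forecast.org for the real software).

"""Minimal sketch of an iterative, ensemble-based water-temperature forecast
cycle, in the spirit of the FLARE workflow described above. The model, the
assimilation step, and all parameter values are illustrative placeholders."""

import numpy as np

rng = np.random.default_rng(42)

N_ENSEMBLE = 100        # number of ensemble members
HORIZON_DAYS = 7        # forecast horizon (days)
OBS_ERROR_SD = 0.5      # assumed sensor noise (deg C)
PROCESS_SD = 0.3        # assumed model process noise (deg C / day)


def assimilate(ensemble, observation):
    """Nudge each ensemble member toward the latest sensor observation
    (a crude stand-in for a proper ensemble Kalman filter update)."""
    gain = PROCESS_SD**2 / (PROCESS_SD**2 + OBS_ERROR_SD**2)
    perturbed_obs = observation + rng.normal(0.0, OBS_ERROR_SD, size=ensemble.shape)
    return ensemble + gain * (perturbed_obs - ensemble)


def forecast(ensemble, horizon_days):
    """Propagate each member forward with a toy persistence-plus-noise model."""
    trajectories = [ensemble]
    for _ in range(horizon_days):
        trajectories.append(trajectories[-1] + rng.normal(0.0, PROCESS_SD, size=ensemble.shape))
    return np.stack(trajectories)      # shape: (horizon + 1, N_ENSEMBLE)


# One forecast cycle: assimilate today's observation, then forecast a week ahead.
ensemble = rng.normal(20.0, 1.0, size=N_ENSEMBLE)   # prior water temperature (deg C)
latest_obs = 21.2                                    # today's sensor reading (deg C)
ensemble = assimilate(ensemble, latest_obs)
traj = forecast(ensemble, HORIZON_DAYS)

for day, temps in enumerate(traj):
    print(f"day +{day}: mean={temps.mean():.2f} C, "
          f"90% interval=({np.percentile(temps, 5):.2f}, {np.percentile(temps, 95):.2f})")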


CAREER: A Novel and Fast Open-Source Code for Global Simulation of Stratified Convection and Magnetohydrodynamics of the Sun: Clarkson University

Chunlei Liang

[email protected]

Non-technical:
The goal of this project is to create a unique capability for predicting density-stratified magnetohydrodynamics of the Sun. This research is expected to lay a foundation for developing methods for predicting extreme space weather, e.g., the event of a "super solar flare" followed by an extreme geomagnetic storm. Scientific results of this research can help resolve several contradictory predictions from previous studies of the solar convection zone. The Principal Investigator (PI) will develop and disseminate a powerful open-source software package to the space weather and solar physics communities. Success in predicting severe space weather events has significant societal and economic impacts. The PI will design high-order accurate computational algorithms suitable for exascale simulations that can perform a billion billion calculations per second. This software will run on massively parallel distributed-memory computers to predict coupled global and local dynamics of the Sun. The PI will reach out to K-12 students and demonstrate that the science of the Sun and high-performance computing are exciting and important to society. Furthermore, the PI will leverage outreach efforts with the High Altitude Observatory of the National Center for Atmospheric Research and other research centers. This project thus serves the national interest as stated by NSF's mission: to promote the progress of science and to advance the national welfare.

Technical:
The goal of this research program is to develop a novel, fully compressible model and an open-source community code for global simulations of the solar convection zone that include the top near-surface shear layer of the Sun. Current leading global simulations use an anelastic approximation whose computational domains extend from the base of the solar convection zone and must stop at about 0.96 solar radius, short of the top near-surface shear layer where the Mach number can approach unity. This research program will create a powerful open-source community code, CHORUS++, to simulate magnetohydrodynamics of the solar convection zone. CHORUS stands for Compressible High-ORder Unstructured-grid Spectral difference code, which was co-developed by the PI for hydrodynamics of the solar convection zone. CHORUS++ will be equipped with a variable mesh resolution capability to focus on targeted regions of interest. A fast local time-stepping algorithm will be designed and implemented in CHORUS++ for long-period time integration on massively parallel computers. These technical advances can accelerate the original CHORUS code by a factor of over 100. The PI will conduct a series of global simulations of magnetohydrodynamics of the solar convection zone with unprecedented resolutions for predicting the differential rotation, meridional circulation, giant cells, and supergranulation of the Sun.
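As a rough illustration of why the local time-stepping mentioned above can yield large speedups in a stratified domain, the Python sketch below assigns each element a stable local time step from a CFL condition, bins elements into power-of-two time-step classes, and compares the work of subcycling against stepping every element at the single global time step. The grid sizes, wave speeds, and CFL number are made-up placeholders, not CHORUS++ internals.

"""Illustrative sketch of local time-stepping in a stratified domain: a few
restrictive elements force a tiny global CFL time step, while most elements
could take much larger steps, so grouping elements into power-of-two
time-step classes and subcycling only the restrictive ones saves work."""

import numpy as np

rng = np.random.default_rng(0)

n_elements = 10_000
cell_size = rng.uniform(1e4, 1e6, n_elements)    # element size (m); placeholder values
wave_speed = rng.uniform(1e2, 1e4, n_elements)   # local fast-wave speed (m/s); placeholder
cfl = 0.5

local_dt = cfl * cell_size / wave_speed          # stable step for each element
global_dt = local_dt.min()                       # what a single global step must use

# Assign each element to a power-of-two class: class k may step with 2**k * global_dt.
levels = np.floor(np.log2(local_dt / global_dt)).astype(int)

# Work model: per macro step of length 2**levels.max() * global_dt, an element in
# class k is updated 2**(levels.max() - k) times.
macro = levels.max()
work_local = np.sum(2.0 ** (macro - levels))
work_global = n_elements * 2.0 ** macro          # every element subcycles at global_dt

print(f"elements: {n_elements}, time-step classes: {macro + 1}")
print(f"estimated speedup from local time-stepping: {work_global / work_local:.1f}x")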


Empirical Macrofinance: Open-source Textbook and Data-sharing Platform: Princeton University

Atif Mian

[email protected]

The importance of the linkages between finance, debt, and macroeconomic outcomes, such as growing inequality, has become evident since the Great Financial Crisis. To date, no single initiative has organized this recent empirical literature into a meaningful whole. Similarly, the data sources used to address macro-finance research questions have not been integrated. This project fills these gaps by disseminating the new techniques and datasets used in macro-financial research and by creating an open-source textbook that grounds each topic in its theoretical foundations.

The project will develop an open-source textbook and data-sharing platform to promote teaching and research in empirical finance and macroeconomics (macro-finance), filling the gap for an integrated approach in this important area. Data and instructional resources will be freely available to students and researchers via the project website. The codebase will be accessible on a platform for open-source software development. In addition to standardizing currently available data on macro-financial variables, the project will digitize new data on (i) US County Business Patterns, (ii) Global Sectoral Credit and National Accounts, and (iii) International Loan Contracts.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.


Collaborative Research: Empowering Open Law and Science: University of Washington

Nicholas Weber

[email protected]

Research transparency provides immense value across all areas of scholarly inquiry by helping to reveal the rigor, reliability, and relevance of all types of research and by making that research easier to evaluate. Scholars who engage in qualitative inquiry sometimes find it difficult to make their work transparent, i.e., to clearly communicate the meticulous and systematic research procedures and practices that they employ to generate and analyze qualitative data, and to clearly portray the evidentiary value of those data. Annotation for Transparent Inquiry (ATI), an emerging approach to increasing the transparency of published qualitative and multi-method social science, helps to address those challenges. This project aims to develop and test a new software tool that will empower scholars to use ATI to reveal the procedures they followed to generate data, explicate the logic of their analysis, and directly link to underlying data such as interviews or archival documents. The tool will thus help researchers and the public to better understand and evaluate qualitative research and provide easier access to the rich data underlying such work. The partnership between researchers, academic data repositories, and creators of open-source software that the project represents should make a significant contribution to infrastructure for research and education. The project also encourages intellectual democratization, enhancing access to transparency practices, to key insights and findings in social science and legal scholarship, and to research data.

ATI empowers authors to annotate their publications using interoperable web-based annotations that add valuable details about their work's evidentiary basis and analysis, excerpts from data sources that underlie claims, and potentially links to the data sources themselves. The project will develop a prototype for a new open-source tool, Anno-REP, that will allow scholars to Restructure, Edit and Package Annotations. Anno-REP will empower scholars to create and curate web-based annotations at any point in the writing process, signal their motivation, and publish those annotations on a web page in tandem with the scholarly work that they accompany. These innovations will significantly ease the use of ATI and facilitate and encourage its seamless integration into the writing and publishing processes, promoting scientific progress through qualitative inquiry. The project will solicit feedback for Anno-REP's continued development from ten scholars familiar with ATI, and will also evaluate the tool through a workshop including 20 legal scholars (faculty and graduate students). To promote the use and hasten the scholarly adoption of both ATI and Anno-REP, the project will encourage and help scholars to propose work that has been annotated using ATI and Anno-REP for presentation at disciplinary conferences. In addition, it will organize a symposium of articles that use, and analyze the use of, ATI for submission to, review by, and publication in a top legal journal.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
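For readers unfamiliar with "interoperable web-based annotations", the Python snippet below shows what a single ATI-style annotation might look like if expressed in the W3C Web Annotation Data Model (JSON-LD). The choice of that model, and every URL, DOI, and field value shown, are illustrative assumptions; the abstract does not specify Anno-REP's actual schema.

"""Hypothetical sketch of one 'annotation for transparent inquiry' expressed
in the W3C Web Annotation Data Model. All identifiers are placeholders."""

import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": [
        {
            # Free-text note explaining the evidentiary basis of the annotated claim.
            "type": "TextualBody",
            "purpose": "describing",
            "format": "text/plain",
            "value": ("Claim rests on a 1998 ministry memorandum; the cited passage "
                      "was generated from interview IT-07 and the archival scan linked below."),
        },
        {
            # Link to the underlying data source held in a repository (hypothetical DOI).
            "id": "https://doi.org/10.5064/EXAMPLE",
            "purpose": "linking",
        },
    ],
    "target": {
        "source": "https://example.org/article.html",   # the published work (hypothetical URL)
        "selector": {
            # Anchors the annotation to the exact annotated passage in the text.
            "type": "TextQuoteSelector",
            "exact": "the ministry reversed its position in late 1998",
        },
    },
}

print(json.dumps(annotation, indent=2))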


Collaborative Research: Empowering Open Law and Science: Syracuse University

Sebastian Karcher

[email protected]

Research transparency provides immense value across all areas of scholarly inquiry by helping to reveal the rigor, reliability, and relevance of all types of research and by making that research easier to evaluate. Scholars who engage in qualitative inquiry sometimes find it difficult to make their work transparent, i.e., to clearly communicate the meticulous and systematic research procedures and practices that they employ to generate and analyze qualitative data, and to clearly portray the evidentiary value of those data. Annotation for Transparent Inquiry (ATI), an emerging approach to increasing the transparency of published qualitative and multi-method social science, helps to address those challenges. This project aims to develop and test a new software tool that will empower scholars to use ATI to reveal the procedures they followed to generate data, explicate the logic of their analysis, and directly link to underlying data such as interviews or archival documents. The tool will thus help researchers and the public to better understand and evaluate qualitative research and provide easier access to the rich data underlying such work. The partnership between researchers, academic data repositories, and creators of open-source software that the project represents should make a significant contribution to infrastructure for research and education. The project also encourages intellectual democratization, enhancing access to transparency practices, to key insights and findings in social science and legal scholarship, and to research data.

ATI empowers authors to annotate their publications using interoperable web-based annotations that add valuable details about their work's evidentiary basis and analysis, excerpts from data sources that underlie claims, and potentially links to the data sources themselves. The project will develop a prototype for a new open-source tool, Anno-REP, that will allow scholars to Restructure, Edit and Package Annotations. Anno-REP will empower scholars to create and curate web-based annotations at any point in the writing process, signal their motivation, and publish those annotations on a web page in tandem with the scholarly work that they accompany. These innovations will significantly ease the use of ATI and facilitate and encourage its seamless integration into the writing and publishing processes, promoting scientific progress through qualitative inquiry. The project will solicit feedback for Anno-REP's continued development from ten scholars familiar with ATI, and will also evaluate the tool through a workshop including 20 legal scholars (faculty and graduate students). To promote the use and hasten the scholarly adoption of both ATI and Anno-REP, the project will encourage and help scholars to propose work that has been annotated using ATI and Anno-REP for presentation at disciplinary conferences. In addition, it will organize a symposium of articles that use, and analyze the use of, ATI for submission to, review by, and publication in a top legal journal.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.


III: Medium: Spatial Sound Scene Description: New York University

Juan Bello

[email protected]

Sound is rich with information about the surrounding environment. If you stand on a city sidewalk with your eyes closed and listen, you will hear the sounds of events happening around you: birds chirping, squirrels scurrying, people talking, doors opening, an ambulance speeding, a truck idling. You will also likely be able to perceive the location of each sound source, where it's going, and how fast it's moving. This project will build innovative technologies that allow computers to extract this rich information from sound. By not only identifying which sound sources are present but also estimating the spatial location and movement of each source, sound sensing technology will be able to better describe our environments with microphone-enabled everyday devices, e.g., smartphones, headphones, smart speakers, hearing aids, home cameras, and mixed-reality headsets. For hearing-impaired individuals, the developed technologies have the potential to alert them to dangerous situations in urban or domestic environments. For city agencies, acoustic sensors will be able to more accurately quantify traffic, construction, and other activities in urban environments. For ecologists, this technology can help more accurately monitor and study wildlife. In addition, this information complements what computer vision can sense, as sound can include information about events that are not easily visible, such as sources that are small (e.g., insects), far away (e.g., a distant jackhammer), or simply hidden behind another object (e.g., an incoming ambulance around a building's corner). This project also includes outreach activities involving over 100 public school students and teachers, as well as the training and mentoring of postdoctoral, graduate and undergraduate students.

This project will develop computational models for spatial sound scene description: that is, estimating the class, spatial location, direction and speed of movement of living beings and objects in real environments from the sounds they make. The investigators aim for their solutions to be robust across a wide range of sound scenes and sensing conditions: noisy, sparse, natural, urban, indoors, outdoors, with varying compositions of sources, with unknown sources, with moving sources, with moving sensors, etc. While current approaches show promise, they are still far from robust in real-world conditions and thus unable to support any of the above scenarios. These shortcomings stem from data issues, such as a lack of spatially annotated real-world audio and an over-reliance on poor-quality, unrealistic synthesized data, as well as methodological issues, such as excessive dependence on supervised learning and a failure to capture the structure of the solution space. This project plans an approach mixing innovative data collection strategies with cutting-edge machine learning solutions. First, it advances a novel framework for the probabilistic synthesis of soundscape datasets using physical and generative models, with the goal of substantially increasing the amount, realism and diversity of strongly-labeled spatial audio data. Second, it collects and annotates new datasets of real sound scenes via a combination of high-quality field recordings, crowdsourcing, novel VR/AR multimodal annotation strategies and large-scale annotation by citizen scientists. Third, it puts forward novel deep self-supervised representation learning strategies trained on vast quantities of unlabeled audio data. Fourth, these representation modules are paired with hierarchical predictive models, where the top/bottom levels of the hierarchy correspond to coarser/finer levels of scene description. Finally, the project includes collaborations with three industrial partners to explore applications enabled by the proposed solutions. The project will result in novel methods and open-source software libraries for spatial sound scene generation, annotation, representation learning, and sound event detection, localization and tracking, as well as new open datasets of spatial audio recordings, spatial sound scene annotations, synthesized isolated sounds, and synthesized spatial soundscapes.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
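As a point of reference for the localization part of the task, the Python sketch below implements a classical single-source direction-of-arrival baseline (GCC-PHAT on a two-microphone pair) over simulated data. It is a textbook method included only to make "estimating the spatial location of a sound source" concrete; it is not the learning-based approach proposed in this project, and all signal parameters are arbitrary.

"""Classical two-microphone direction-of-arrival estimate via GCC-PHAT.
Residual error comes from rounding the simulated delay to whole samples."""

import numpy as np

fs = 16_000          # sample rate (Hz)
c = 343.0            # speed of sound (m/s)
d = 0.2              # microphone spacing (m)
true_angle = 30.0    # source azimuth (degrees from broadside), used to simulate data

# Simulate a broadband source arriving at two mics with the corresponding delay.
rng = np.random.default_rng(1)
n = 4096
src = rng.normal(size=n)
delay_s = d * np.sin(np.radians(true_angle)) / c
delay_n = int(round(delay_s * fs))
mic1 = src
mic2 = np.roll(src, delay_n)          # mic2 hears the source delay_n samples later

# GCC-PHAT: whiten the cross-spectrum, then find the peak lag.
X1, X2 = np.fft.rfft(mic1), np.fft.rfft(mic2)
G = X2 * np.conj(X1)                  # cross-spectrum of the delayed mic vs. the reference
cc = np.fft.irfft(G / (np.abs(G) + 1e-12), n=n)
cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))   # center zero lag
lag = int(np.argmax(np.abs(cc))) - n // 2
tau = lag / fs

est_angle = np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
print(f"true azimuth: {true_angle:.1f} deg, estimated: {est_angle:.1f} deg")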


Collaborative Research: Elements: EdgeVPN: Seamless Secure Virtual Networking for Edge and Fog Computing: University of Florida

Renato Figueiredo

[email protected]

Edge computing encompasses a variety of technologies that are poised to enable new applications across the Internet that support data capture, storage, processing and communication near the edge of the Internet. Edge computing environments pose new challenges, as devices are heterogeneous, widely distributed geographically, and physically closer to end users, such as mobile and Internet-of-Things (IoT) devices. This project develops EdgeVPN, a software element that addresses a fundamental challenge of networking for edge computing applications: establishing Virtual Private Networks (VPNs) to logically interconnect edge devices while preserving the privacy and integrity of data as it flows through Internet links. More specifically, the EdgeVPN software developed in this project addresses technical challenges in creating virtual networks that self-organize into scalable, resilient systems and can significantly lower the barrier to entry to deploying a private communication fabric in support of existing and future edge applications. A wide range of applications is poised to benefit from EdgeVPN; in particular, this project is motivated by use cases in ecological monitoring and forecasting for freshwater lakes and reservoirs, situational awareness and command-and-control in defense applications, and smart and connected cities. Because EdgeVPN is open-source and freely available to the public, the software will promote the progress of science and benefit society at large by contributing to the set of tools available to researchers, developers and practitioners to catalyze innovation and future applications in edge computing.

Edge computing applications need to be deployed across multiple network providers and to harness low-latency, high-throughput processing of streams of data from large numbers of distributed IoT devices. Achieving this goal will demand not only advances in the underlying physical network, but also a trustworthy communication fabric that is easy to use and operates atop the existing Internet without requiring changes to the infrastructure. The EdgeVPN open-source software developed in this project is an overlay virtual network that allows seamless private networking among groups of edge computing resources, as well as cloud resources. EdgeVPN is novel in how it integrates: 1) a flexible group management and messaging service to create and manage peer-to-peer VPN tunnels grouping devices distributed across the Internet, 2) a scalable structured overlay network topology supporting primitives for unicast, multicast and broadcast, 3) software-defined networking (SDN) as the control plane to support message routing through the peer-to-peer data path, and 4) network virtualization and integration with virtualized compute/storage endpoints with Docker containers to allow existing Internet applications to work unmodified. EdgeVPN self-organizes an overlay topology of tunnels that enables encrypted, authenticated communication among edge devices connected across disparate providers in the Internet, possibly subject to mobility and to constraints imposed by firewalls and Network Address Translation (NAT). It builds upon standard SDN interfaces to implement packet manipulation primitives for virtualization supporting the ubiquitous Ethernet and IP-layer protocols.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
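To illustrate the "scalable structured overlay network topology" component in generic terms, the toy Python example below routes a unicast message greedily over a ring-structured overlay with power-of-two shortcuts, a textbook Chord-style construction. The identifier space, shortcut rule, and routing policy are assumptions for exposition only; they are not EdgeVPN's actual design.

"""Toy greedy unicast routing on a ring-structured overlay with finger-style
shortcuts. Hop counts grow roughly logarithmically with the number of nodes."""

import random

ID_SPACE = 2**16
random.seed(7)
nodes = sorted(random.sample(range(ID_SPACE), 64))   # 64 overlay nodes


def clockwise(a, b):
    """Clockwise distance from identifier a to identifier b on the ring."""
    return (b - a) % ID_SPACE


def neighbors(node):
    """Successor plus power-of-two 'finger' shortcuts, resolved to live nodes."""
    result = set()
    for k in range(ID_SPACE.bit_length()):
        target = (node + 2**k) % ID_SPACE
        # first live node at or after the target, wrapping around the ring
        result.add(min(nodes, key=lambda n: clockwise(target, n)))
    result.discard(node)
    return result


def route(src, dst):
    """Greedy routing: always forward to the neighbor closest to dst (clockwise)."""
    path, current = [src], src
    while current != dst:
        nxt = min(neighbors(current), key=lambda n: clockwise(n, dst))
        if clockwise(nxt, dst) >= clockwise(current, dst):
            break                      # no progress possible (does not occur here)
        path.append(nxt)
        current = nxt
    return path


src, dst = nodes[0], nodes[40]
hops = route(src, dst)
print(f"{len(nodes)} nodes; routed {src} -> {dst} in {len(hops) - 1} hops: {hops}")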


Collaborative Research: Elements: EdgeVPN: Seamless Secure Virtual Networking for Edge and Fog Computing: Virginia Polytechnic Institute and State University

Cayelan Carey

[email protected]

Edge computing encompasses a variety of technologies that are poised to enable new applications across the Internet that support data capture, storage, processing and communication near the edge of the Internet. Edge computing environments pose new challenges, as devices are heterogeneous, widely distributed geographically, and physically closer to end users, such as mobile and Internet-of-Things (IoT) devices. This project develops EdgeVPN, a software element that addresses a fundamental challenge of networking for edge computing applications: establishing Virtual Private Networks (VPNs) to logically interconnect edge devices while preserving the privacy and integrity of data as it flows through Internet links. More specifically, the EdgeVPN software developed in this project addresses technical challenges in creating virtual networks that self-organize into scalable, resilient systems and can significantly lower the barrier to entry to deploying a private communication fabric in support of existing and future edge applications. A wide range of applications is poised to benefit from EdgeVPN; in particular, this project is motivated by use cases in ecological monitoring and forecasting for freshwater lakes and reservoirs, situational awareness and command-and-control in defense applications, and smart and connected cities. Because EdgeVPN is open-source and freely available to the public, the software will promote the progress of science and benefit society at large by contributing to the set of tools available to researchers, developers and practitioners to catalyze innovation and future applications in edge computing.

Edge computing applications need to be deployed across multiple network providers and to harness low-latency, high-throughput processing of streams of data from large numbers of distributed IoT devices. Achieving this goal will demand not only advances in the underlying physical network, but also a trustworthy communication fabric that is easy to use and operates atop the existing Internet without requiring changes to the infrastructure. The EdgeVPN open-source software developed in this project is an overlay virtual network that allows seamless private networking among groups of edge computing resources, as well as cloud resources. EdgeVPN is novel in how it integrates: 1) a flexible group management and messaging service to create and manage peer-to-peer VPN tunnels grouping devices distributed across the Internet, 2) a scalable structured overlay network topology supporting primitives for unicast, multicast and broadcast, 3) software-defined networking (SDN) as the control plane to support message routing through the peer-to-peer data path, and 4) network virtualization and integration with virtualized compute/storage endpoints with Docker containers to allow existing Internet applications to work unmodified. EdgeVPN self-organizes an overlay topology of tunnels that enables encrypted, authenticated communication among edge devices connected across disparate providers in the Internet, possibly subject to mobility and to constraints imposed by firewalls and Network Address Translation (NAT). It builds upon standard SDN interfaces to implement packet manipulation primitives for virtualization supporting the ubiquitous Ethernet and IP-layer protocols.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.


IIBR Informatics: Advancing Bioinformatics Methods using Ensembles of Profile Hidden Markov Models: University of Illinois at Urbana-Champaign

Tandy Warnow

[email protected]

Many steps in biological research pipelines involve the use of machine learning models, which have become standard tools for many basic problems. Elaborations on basic machine learning models ("ensembles" of machine learning models) can provide improvements in accuracy over standard usage for various biological questions. However, the design of these ensembles has been fairly ad hoc, and their use can be computationally intensive, which reduces their appeal in practice. This project will advance this technology by developing statistically rigorous techniques for building ensembles of machine learning models, with the goal of improving accuracy. The project will also develop methods that use these ensembles for new biological problems, including protein structure and function prediction. Broader impacts include software schools, engagement with under-represented groups, and open-source software.

Profile Hidden Markov Models (profile HMMs) are probabilistic graphical models in wide use in bioinformatics. Research over the last decade has shown that ensembles of profile HMMs (e-HMMs) can provide greater accuracy than a single profile HMM for many applications in bioinformatics, including phylogenetic placement, multiple sequence alignment, and taxonomic identification of metagenomic reads. This project will advance the use of e-HMMs by developing statistically rigorous techniques for building e-HMMs, with the goals of improving accuracy and improving understanding of e-HMMs, and will also develop methods that use e-HMMs for protein structure and function prediction. Broader impacts include software schools, engagement with under-represented groups, and open-source software. Project software and papers are available at http://tandy.cs.illinois.edu/eHMMproject.html.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
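The Python sketch below conveys the basic ensemble-of-HMMs idea in miniature: score a query sequence under every model in an ensemble (via the scaled forward algorithm) and route it to the best-scoring member. For brevity it uses plain discrete HMMs with random parameters rather than true profile HMMs with match/insert/delete states, so it illustrates the routing idea only, not the project's methods.

"""Score a query sequence against an ensemble of (toy) HMMs and pick the
best-scoring model, as a simplified stand-in for e-HMM-based assignment."""

import numpy as np

ALPHABET = "ACGT"
rng = np.random.default_rng(3)


def random_hmm(n_states=4, n_symbols=4):
    """Random row-stochastic parameters, standing in for an HMM trained on
    one subset of a sequence alignment."""
    A = rng.dirichlet(np.ones(n_states), size=n_states)    # transition matrix
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)   # emission matrix
    pi = rng.dirichlet(np.ones(n_states))                  # initial distribution
    return A, B, pi


def forward_loglik(seq, hmm):
    """Log-likelihood of an observation sequence via the scaled forward algorithm."""
    A, B, pi = hmm
    obs = [ALPHABET.index(ch) for ch in seq]
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik


# Build an ensemble and route a query sequence to the best-scoring member.
ensemble = [random_hmm() for _ in range(10)]
query = "ACGTGGTCATACGT"
scores = np.array([forward_loglik(query, hmm) for hmm in ensemble])
best = int(np.argmax(scores))
print(f"best-scoring HMM: #{best} (log-likelihood {scores[best]:.2f})")
print("all scores:", np.round(scores, 2))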


Data-Enabled Acceleration of Stochastic Computational Experiments: University of Washington

Youngjun Choe

[email protected]

This project will advance the ability to accelerate stochastic computational experiments with the aid of heterogeneous data (for example, empirical observations, multi-fidelity simulations, and expert knowledge). This work is motivated by the growing role of computational experiments in science and engineering. These experiments increasingly rely on probabilistic models to represent epistemic uncertainties (such as those in physics-based model specification) and aleatory uncertainties (noise in experiments and observational data). To date, crude Monte Carlo simulation dominates such stochastic computational experiments, mainly due to its simplicity. Efforts to accelerate the experiments have generally been ad hoc and narrowly applicable to a particular science or engineering problem. This project will produce methods and tools for domain scientists and engineers with the potential to expedite, or even enable, breakthroughs based on stochastic computational experiments. These methods will help overcome the computational challenge associated with investigating unusual strings of events (for example, nuclear meltdowns, cascading blackouts, and epidemic outbreaks) that are critical to the nation's economy, security, and health. To reach out to domain scientists and engineers as broadly as possible, this project will design and implement an open-source software package implementing the methods. An online workshop will be designed and conducted to demonstrate the software and train researchers and practitioners. To build the capacity of the next generation of researchers and practitioners, the project team will recruit and engage college and high-school students, especially those from underrepresented backgrounds, through a partnership with diversity enhancement programs at the university. Graduate students will be directly involved in designing and executing research, while undergraduate students will participate in software development and testing, being mentored and trained as data-enabled computational researchers.

Even though comprehensive consideration of uncertainties in a scientific or engineering study is commendable, an unguided computational investment in crude Monte Carlo simulation often results in an enormous waste of time and resources. Furthermore, the computational burden of attaining a required accuracy of probabilistic analysis can be a major bottleneck, or even a barrier, to scientific and engineering discovery, especially when the event of interest is extreme, rare, or peculiar. To address this challenge, this project will develop a unified methodological framework that leverages heterogeneous data to speed up stochastic computational experiments without compromising the accuracy of probabilistic analysis. The framework will include methods for identifying and exploiting a low-dimensional manifold (naturally appearing in science and engineering) of the high-dimensional simulation input space, addressing the curse of dimensionality. For the accelerated probabilistic analysis, asymptotically valid confidence bounds will be constructed to ensure the desired analysis accuracy. The framework will prescribe how to adaptively allocate computational resources, balancing exploration of the simulation input space with exploitation of the important input manifold, to minimize computational expenditure while maintaining the desired analysis accuracy. The project will validate the methods, and verify the open-source software developed for broader impacts, using two engineering simulation case studies: structural reliability evaluation of a wind turbine and cascading failure analysis of a power grid.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
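As a small illustration of the accuracy-per-sample gap this abstract alludes to, the Python sketch below estimates a rare-event probability (a Gaussian tail) by crude Monte Carlo and by importance sampling with a shifted proposal, reporting normal-approximation confidence intervals for both. The target probability and the proposal are textbook choices used only for exposition; they are not the project's framework.

"""Crude Monte Carlo vs. importance sampling for a rare-event probability,
with 95% normal-approximation confidence intervals for each estimate."""

import numpy as np

rng = np.random.default_rng(5)
THRESHOLD = 4.0          # "failure" threshold; true P(Z > 4) is about 3.17e-5
N = 100_000


def summarize(samples):
    """Estimate, standard error, and 95% normal-approximation confidence interval."""
    est = samples.mean()
    se = samples.std(ddof=1) / np.sqrt(len(samples))
    return est, se, (est - 1.96 * se, est + 1.96 * se)


# Crude Monte Carlo: sample from the nominal model and count exceedances.
z = rng.normal(size=N)
crude = summarize((z > THRESHOLD).astype(float))

# Importance sampling: draw from a proposal centered at the threshold and
# reweight by the likelihood ratio phi(x) / phi(x - THRESHOLD) = exp(-t*x + t^2/2).
x = rng.normal(loc=THRESHOLD, size=N)
weights = np.exp(-THRESHOLD * x + THRESHOLD**2 / 2)
is_est = summarize((x > THRESHOLD) * weights)

for name, (est, se, ci) in [("crude MC", crude), ("importance sampling", is_est)]:
    print(f"{name:>20}: estimate {est:.3e}, 95% CI ({ci[0]:.3e}, {ci[1]:.3e})")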
