
The Research University (TRU)

Categories: NSF, Open Education Resources, The Research University (TRU)

Collaborative Research: Empowering Open Law and Science: University of Washington

Nicholas Weber

[email protected]

Research transparency provides immense value across all areas of scholarly inquiry by helping to reveal the rigor, reliability, and relevance of, and to make more evaluable, all types of research. Scholars who engage in qualitative inquiry sometimes find it difficult to make their work transparent, i.e., to clearly communicate the meticulous and systematic research procedures and practices that they employ to generate and analyze qualitative data, and to clearly portray the evidentiary value of those data. Annotation for Transparent Inquiry (ATI), an emerging approach to increasing the transparency of published qualitative and multi-method social science, helps to address those challenges. This project aims to develop and test a new software tool that will empower scholars to use ATI to reveal the procedures they followed to generate data, explicate the logic of their analysis, and directly link to underlying data such as interviews or archival documents. The tool will thus help researchers and the public to better understand and evaluate qualitative research and provide easier access to the rich data underlying such work. The partnership between researchers, academic data repositories, and creators of open-source software that the project represents should make a significant contribution to infrastructure for research and education. The project also encourages intellectual democratization, enhancing access to transparency practices, to key insights and findings in social science and legal scholarship, and to research data.

ATI empowers authors to annotate their publications using interoperable web-based annotations that add valuable details about their work's evidentiary basis and analysis, excerpts from data sources that underlie claims, and potentially links to the data sources themselves. The prototype for a new open-source tool that the project will develop will allow scholars to Restructure, Edit and Package Annotations (Anno-REP). Anno-REP will empower scholars to create and curate web-based annotations at any point in the writing process; signal their motivation; and publish those annotations on a web page in tandem with the scholarly work that they accompany. These innovations will significantly ease the use of ATI and facilitate and encourage its seamless integration into the writing and publishing processes, promoting scientific progress through qualitative inquiry. The project will solicit feedback for Anno-REP's continued development from ten scholars with familiarity with ATI, and will also evaluate the tool through a workshop including 20 legal scholars (faculty and graduate students). In order to promote the use and hasten the scholarly adoption of both ATI and Anno-REP, the project will encourage and help scholars to propose work that has been annotated using ATI and Anno-REP for presentation at disciplinary conferences. In addition, it will organize a symposium of articles that use, and analyze the use of, ATI for submission to, review by, and publication in a top legal journal.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
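The "interoperable web-based annotations" referred to here are typically expressed in the W3C Web Annotation data model, which tools in this space build on. As a rough illustration of what such an annotation carries (a target passage, a commentary body, and an optional link to underlying data), here is a minimal sketch in Python; the URLs, field choices, and helper name are hypothetical, not Anno-REP's actual format:

```python
import json

def make_ati_annotation(target_url, quote, note, data_link=None):
    """Build a W3C Web Annotation (JSON-LD) pointing at a passage in a publication."""
    body = [{"type": "TextualBody", "value": note, "purpose": "commenting"}]
    if data_link:
        # Optional link to an underlying data source (e.g., an archived document)
        body.append({"type": "SpecificResource", "source": data_link, "purpose": "linking"})
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "body": body,
        "target": {
            "source": target_url,
            "selector": {"type": "TextQuoteSelector", "exact": quote},
        },
    }

anno = make_ati_annotation(
    "https://example.org/article",                      # hypothetical publication URL
    "the committee voted unanimously",                  # annotated passage
    "Claim supported by meeting minutes; see linked archival document.",
    data_link="https://example.org/data/minutes.pdf",   # hypothetical data link
)
print(json.dumps(anno, indent=2))
```

Because the model is a standard, annotations like this remain readable by any conforming client, which is what makes the "publish in tandem with the scholarly work" workflow portable.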


Collaborative Research: Empowering Open Law and Science: Syracuse University

Sebastian Karcher

[email protected]

Research transparency provides immense value across all areas of scholarly inquiry by helping to reveal the rigor, reliability, and relevance of, and to make more evaluable, all types of research. Scholars who engage in qualitative inquiry sometimes find it difficult to make their work transparent, i.e., to clearly communicate the meticulous and systematic research procedures and practices that they employ to generate and analyze qualitative data, and to clearly portray the evidentiary value of those data. Annotation for Transparent Inquiry (ATI), an emerging approach to increasing the transparency of published qualitative and multi-method social science, helps to address those challenges. This project aims to develop and test a new software tool that will empower scholars to use ATI to reveal the procedures they followed to generate data, explicate the logic of their analysis, and directly link to underlying data such as interviews or archival documents. The tool will thus help researchers and the public to better understand and evaluate qualitative research and provide easier access to the rich data underlying such work. The partnership between researchers, academic data repositories, and creators of open-source software that the project represents should make a significant contribution to infrastructure for research and education. The project also encourages intellectual democratization, enhancing access to transparency practices, to key insights and findings in social science and legal scholarship, and to research data.

ATI empowers authors to annotate their publications using interoperable web-based annotations that add valuable details about their work's evidentiary basis and analysis, excerpts from data sources that underlie claims, and potentially links to the data sources themselves. The prototype for a new open-source tool that the project will develop will allow scholars to Restructure, Edit and Package Annotations (Anno-REP). Anno-REP will empower scholars to create and curate web-based annotations at any point in the writing process; signal their motivation; and publish those annotations on a web page in tandem with the scholarly work that they accompany. These innovations will significantly ease the use of ATI and facilitate and encourage its seamless integration into the writing and publishing processes, promoting scientific progress through qualitative inquiry. The project will solicit feedback for Anno-REP's continued development from ten scholars with familiarity with ATI, and will also evaluate the tool through a workshop including 20 legal scholars (faculty and graduate students). In order to promote the use and hasten the scholarly adoption of both ATI and Anno-REP, the project will encourage and help scholars to propose work that has been annotated using ATI and Anno-REP for presentation at disciplinary conferences. In addition, it will organize a symposium of articles that use, and analyze the use of, ATI for submission to, review by, and publication in a top legal journal.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.


III: Medium: Spatial Sound Scene Description: New York University

Juan Bello

[email protected]

Sound is rich with information about the surrounding environment. If you stand on a city sidewalk with your eyes closed and listen, you will hear the sounds of events happening around you: birds chirping, squirrels scurrying, people talking, doors opening, an ambulance speeding, a truck idling. In addition, you will also likely be able to perceive the location of each sound source, where it's going, and how fast it's moving. This project will build innovative technologies to allow computers to extract this rich information out of sound. By not only identifying which sound sources are present but also estimating the spatial location and movement of each sound source, sound sensing technology will be able to better describe our environments with microphone-enabled everyday devices, e.g., smartphones, headphones, smart speakers, hearing aids, home cameras, and mixed-reality headsets. For hearing-impaired individuals, the developed technologies have the potential to alert them to dangerous situations in urban or domestic environments. For city agencies, acoustic sensors will be able to more accurately quantify traffic, construction, and other activities in urban environments. For ecologists, this technology can help them more accurately monitor and study wildlife. In addition, this information complements what computer vision can sense, as sound can include information about events that are not easily visible, such as sources that are small (e.g., insects), far away (e.g., a distant jackhammer), or simply hidden behind another object (e.g., an incoming ambulance around a building's corner). This project also includes outreach activities involving over 100 public school students and teachers, as well as the training and mentoring of postdoctoral, graduate and undergraduate students.

This project will develop computational models for spatial sound scene description: that is, estimating the class, spatial location, direction and speed of movement of living beings and objects in real environments by the sounds they make. The investigators aim for their solutions to be robust across a wide range of sound scenes and sensing conditions: noisy, sparse, natural, urban, indoors, outdoors, with varying compositions of sources, with unknown sources, with moving sources, with moving sensors, etc. While current approaches show promise, they are still far from robust in real-world conditions and thus unable to support any of the above scenarios. These shortcomings stem from important data issues such as a lack of spatially annotated real-world audio data, and an over-reliance on poor quality, unrealistic synthesized data; as well as methodological issues such as excessive dependence on supervised learning and a failure to capture the structure of the solution space. This project plans an approach mixing innovative data collection strategies with cutting-edge machine learning solutions. First, it advances a novel framework for the probabilistic synthesis of soundscape datasets using physical and generative models. The goal is to substantially increase the amount, realism and diversity of strongly-labeled spatial audio data. Second, it collects and annotates new datasets of real sound scenes via a combination of high-quality field recordings, crowdsourcing, novel VR/AR multimodal annotation strategies and large-scale annotation by citizen scientists. Third, it puts forward novel deep self-supervised representation learning strategies trained on vast quantities of unlabeled audio data. Fourth, these representation modules are paired with hierarchical predictive models, where the top/bottom levels of the hierarchy correspond to coarser/finer levels of scene description. Finally, the project includes collaborations with three industrial partners to explore applications enabled by the proposed solutions. The project will result in novel methods and open source software libraries for spatial sound scene generation, annotation, representation learning, and sound event detection/localization/tracking; and new open datasets of spatial audio recordings, spatial sound scene annotations, synthesized isolated sounds, and synthesized spatial soundscapes.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
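To give a concrete sense of the localization side of the task: the classic starting point is estimating the time difference of arrival (TDOA) of a sound between two microphones via cross-correlation, which constrains the source's direction. The project targets far more robust, learned methods, but this minimal sketch with a synthetic click shows the underlying signal geometry:

```python
import numpy as np

def tdoa_delay(sig_left, sig_right):
    """Estimate the sample delay between two microphone channels via cross-correlation."""
    corr = np.correlate(sig_right, sig_left, mode="full")
    # Shift the peak index so that 0 means "no delay between channels"
    return int(np.argmax(corr)) - (len(sig_left) - 1)

# Synthetic test signal: a short noise burst arriving 5 samples later at the right mic
rng = np.random.default_rng(0)
click = rng.standard_normal(64)
left = np.concatenate([click, np.zeros(20)])
right = np.concatenate([np.zeros(5), click, np.zeros(15)])

delay = tdoa_delay(left, right)
print(delay)  # 5 samples: the source is closer to the left microphone
```

Multiplying the delay by the speed of sound and dividing by the microphone spacing gives the sine of the arrival angle; arrays of more than two microphones extend this to full direction estimates.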


Collaborative Research: Elements: EdgeVPN: Seamless Secure Virtual Networking for Edge and Fog Computing: University of Florida

Renato Figueiredo

[email protected]

Edge computing encompasses a variety of technologies that are poised to enable new applications across the Internet that support data capture, storage, processing and communication near the edge of the Internet. Edge computing environments pose new challenges, as devices are heterogeneous, widely distributed geographically, and physically closer to end users, such as mobile and Internet-of-Things (IoT) devices. This project develops EdgeVPN, a software element that addresses a fundamental challenge of networking for edge computing applications: establishing Virtual Private Networks (VPNs) to logically interconnect edge devices, while preserving privacy and integrity of data as it flows through Internet links. More specifically, the EdgeVPN software developed in this project addresses technical challenges in creating virtual networks that self-organize into scalable, resilient systems that can significantly lower the barrier to entry to deploying a private communication fabric in support of existing and future edge applications. There are a wide range of applications that are poised to benefit from EdgeVPN; in particular, this project is motivated by use cases in ecological monitoring and forecasting for freshwater lakes and reservoirs, situational awareness and command-and-control in defense applications, and smart and connected cities. Because EdgeVPN is open-source and freely available to the public, the software will promote progress of science and benefit society at large by contributing to the set of tools available to researchers, developers and practitioners to catalyze innovation and future applications in edge computing.

Edge computing applications need to be deployed across multiple network providers, and harness low-latency, high-throughput processing of streams of data from large numbers of distributed IoT devices. Achieving this goal will demand not only advances in the underlying physical network, but also require a trustworthy communication fabric that is easy to use, and operates atop the existing Internet without requiring changes to the infrastructure. The EdgeVPN open-source software developed in this project is an overlay virtual network that allows seamless private networking among groups of edge computing resources, as well as cloud resources. EdgeVPN is novel in how it integrates: 1) a flexible group management and messaging service to create and manage peer-to-peer VPN tunnels grouping devices distributed across the Internet, 2) a scalable structured overlay network topology supporting primitives for unicast, multicast and broadcast, 3) software-defined networking (SDN) as the control plane to support message routing through the peer-to-peer data path, and 4) network virtualization and integration with virtualized compute/storage endpoints with Docker containers to allow existing Internet applications to work unmodified. EdgeVPN self-organizes an overlay topology of tunnels that enables encrypted, authenticated communication among edge devices connected across disparate providers in the Internet, possibly subject to mobility and constraints imposed by firewalls and Network Address Translation (NAT). It builds upon standard SDN interfaces to implement packet manipulation primitives for virtualization supporting the ubiquitous Ethernet and IP-layer protocols.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
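The abstract does not spell out which structured overlay topology EdgeVPN uses, but the standard idea behind "scalable structured overlay" designs is Chord-style routing: each node on an identifier ring keeps links ("fingers") at power-of-two distances, so any node is reachable in a logarithmic number of hops. A toy illustration of that idea, not EdgeVPN's actual code:

```python
def build_finger_tables(nodes, id_space=2**8):
    """For each node, keep links to the successors at power-of-two ring distances."""
    nodes = sorted(nodes)
    def successor(k):
        for n in nodes:
            if n >= k % id_space:
                return n
        return nodes[0]  # wrap around the ring
    return {n: [successor(n + 2**i) for i in range(id_space.bit_length() - 1)]
            for n in nodes}

def route(fingers, start, target, id_space=2**8):
    """Greedy ring routing: hop to the finger with the least remaining distance."""
    dist = lambda a, b: (b - a) % id_space
    path, node = [start], start
    while node != target:
        node = min(fingers[node], key=lambda f: dist(f, target))
        path.append(node)
    return path

nodes = [3, 20, 45, 87, 130, 170, 200, 240]
fingers = build_finger_tables(nodes)
print(route(fingers, 3, 200))  # reaches 200 in few hops despite no direct link
```

Each hop roughly halves the remaining ring distance, which is why such topologies stay efficient as the number of edge devices grows.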


Collaborative Research: Elements: EdgeVPN: Seamless Secure VirtualNetworking for Edge and Fog Computing: Virginia Polytechnic Institute and State University

Cayelan Carey

[email protected]

Edge computing encompasses a variety of technologies that are poised to enable new applications across the Internet that support data capture, storage, processing and communication near the edge of the Internet. Edge computing environments pose new challenges, as devices are heterogeneous, widely distributed geographically, and physically closer to end users, such as mobile and Internet-of-Things (IoT) devices. This project develops EdgeVPN, a software element that addresses a fundamental challenge of networking for edge computing applications: establishing Virtual Private Networks (VPNs) to logically interconnect edge devices, while preserving privacy and integrity of data as it flows through Internet links. More specifically, the EdgeVPN software developed in this project addresses technical challenges in creating virtual networks that self-organize into scalable, resilient systems that can significantly lower the barrier to entry to deploying a private communication fabric in support of existing and future edge applications. There are a wide range of applications that are poised to benefit from EdgeVPN; in particular, this project is motivated by use cases in ecological monitoring and forecasting for freshwater lakes and reservoirs, situational awareness and command-and-control in defense applications, and smart and connected cities. Because EdgeVPN is open-source and freely available to the public, the software will promote progress of science and benefit society at large by contributing to the set of tools available to researchers, developers and practitioners to catalyze innovation and future applications in edge computing.

Edge computing applications need to be deployed across multiple network providers, and harness low-latency, high-throughput processing of streams of data from large numbers of distributed IoT devices. Achieving this goal will demand not only advances in the underlying physical network, but also require a trustworthy communication fabric that is easy to use, and operates atop the existing Internet without requiring changes to the infrastructure. The EdgeVPN open-source software developed in this project is an overlay virtual network that allows seamless private networking among groups of edge computing resources, as well as cloud resources. EdgeVPN is novel in how it integrates: 1) a flexible group management and messaging service to create and manage peer-to-peer VPN tunnels grouping devices distributed across the Internet, 2) a scalable structured overlay network topology supporting primitives for unicast, multicast and broadcast, 3) software-defined networking (SDN) as the control plane to support message routing through the peer-to-peer data path, and 4) network virtualization and integration with virtualized compute/storage endpoints with Docker containers to allow existing Internet applications to work unmodified. EdgeVPN self-organizes an overlay topology of tunnels that enables encrypted, authenticated communication among edge devices connected across disparate providers in the Internet, possibly subject to mobility and constraints imposed by firewalls and Network Address Translation (NAT). It builds upon standard SDN interfaces to implement packet manipulation primitives for virtualization supporting the ubiquitous Ethernet and IP-layer protocols.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.


IIBR Informatics: Advancing Bioinformatics Methods using Ensembles of Profile Hidden Markov Models: University of Illinois at Urbana-Champaign

Tandy Warnow

[email protected]

Many steps in biological research pipelines involve the use of machine learning models, and these have become standard tools for many basic problems. Elaborations on basic machine learning models ("ensembles" of machine learning models) can provide improvements in accuracy compared to standard usage, for various biological questions. However, the design of these ensembles has been fairly ad hoc, and their use can be computationally intensive, which reduces their appeal in practice. This project will advance this technology by developing statistically rigorous techniques for building ensembles of machine learning models, with the goal of improving accuracy. The project will also develop methods that use these ensembles for new biological problems, including protein structure and function prediction. Broader impacts include software schools, engagement with under-represented groups, and open-source software.

Profile Hidden Markov Models (i.e., profile HMMs) are probabilistic graphical models that are in wide use in bioinformatics. Research over the last decade has shown that ensembles of profile HMMs (e-HMMs) can provide greater accuracy than a single profile HMM for many applications in bioinformatics, including phylogenetic placement, multiple sequence alignment, and taxonomic identification of metagenomic reads. This project will advance the use of e-HMMs by developing statistically rigorous techniques for building e-HMMs with the goal of improving accuracy and improving understanding of e-HMMs, and will also develop methods that use e-HMMs for protein structure and function prediction. Broader impacts include software schools, engagement with under-represented groups, and open-source software. Project software and papers are available at http://tandy.cs.illinois.edu/eHMMproject.html.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
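The basic mechanic of an ensemble of HMMs (score a query against every member model and let the best-fitting member speak for the ensemble) can be illustrated with ordinary, non-profile HMMs and the forward algorithm. The toy two-state models and the max-score combination rule below are illustrative assumptions, not the project's actual construction:

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under an HMM (forward algorithm)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return np.log(alpha.sum())

def ensemble_score(obs, hmms):
    """Score against an ensemble of HMMs; report the best-fitting member's likelihood."""
    return max(forward_loglik(obs, *h) for h in hmms)

# Two toy 2-state HMMs over a binary alphabet with opposite emission biases
hmm_a = (np.array([0.5, 0.5]),
         np.array([[0.9, 0.1], [0.1, 0.9]]),
         np.array([[0.9, 0.1], [0.8, 0.2]]))   # prefers symbol 0
hmm_b = (np.array([0.5, 0.5]),
         np.array([[0.9, 0.1], [0.1, 0.9]]),
         np.array([[0.1, 0.9], [0.2, 0.8]]))   # prefers symbol 1

seq = [0, 0, 1, 0, 0, 0]  # mostly 0s, so hmm_a should fit best
print(ensemble_score(seq, [hmm_a, hmm_b]) == forward_loglik(seq, *hmm_a))  # True
```

In an e-HMM, the members are profile HMMs built from different subsets of a multiple sequence alignment, so each member captures a different region of sequence space; the query is matched against the member that models it best rather than against one averaged model.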


Data-Enabled Acceleration of Stochastic Computational Experiments: University of Washington

Youngjun Choe

[email protected]

This project will advance the ability to accelerate stochastic computational experiments with the aid of heterogeneous data (for example, empirical observations, multi-fidelity simulations, and expert knowledge). This work is motivated by the growing prevalence of computational experiments in science and engineering. These experiments increasingly rely on probabilistic models to represent epistemic uncertainties (such as those in physics-based model specification) and aleatory uncertainties (noise in experiments and observational data). To date, crude Monte Carlo simulation has dominated such stochastic computational experiments, mainly due to its simplicity. Efforts to accelerate the experiments have generally been ad hoc and narrowly applicable to a particular science or engineering problem. This project will produce methods and tools for domain scientists and engineers with a potential to expedite or even enable breakthroughs based on stochastic computational experiments. These methods will help overcome the computational challenge associated with investigating unusual strings of events (for example, nuclear meltdown, cascading blackout, and epidemic outbreak) that are critical to the nation's economy, security, and health. To maximally reach out to domain scientists and engineers, this project will design and implement an open-source software package of the methods. An online workshop will be designed and conducted to demonstrate the software and train researchers and practitioners. To build the capacity of the next generation of researchers and practitioners, the project team will recruit and engage with college and high-school students, especially those from underrepresented backgrounds, through a partnership with diversity enhancement programs in the university. Graduate students will be directly involved in designing and executing research, while undergraduate students will participate in software development and testing, being mentored and trained as data-enabled computational researchers.

Even though comprehensive consideration of uncertainties in a scientific or engineering study is commendable, an unguided computational investment in crude Monte Carlo simulation often results in an enormous waste of time and resources. Furthermore, to attain a required accuracy of probabilistic analysis, the associated computational burden can be a major bottleneck or even a barrier to scientific and engineering discovery, especially when the event of interest is extreme, rare, or peculiar. To address this challenge, this project will develop a unified methodological framework that leverages heterogeneous data for speeding up stochastic computational experiments without compromising the accuracy of probabilistic analysis. The framework will include methods for identifying and exploiting a low-dimensional manifold (naturally appearing in science and engineering) of high-dimensional simulation input space to speed up stochastic computational experiments by addressing the curse of dimensionality. For the accelerated probabilistic analysis, asymptotically valid confidence bounds will be constructed to ensure the desired analysis accuracy. The framework will prescribe how to adaptively allocate computational resources for exploring the simulation input space while exploiting the important input manifold, so as to minimize computational expenditure without sacrificing analysis accuracy. The project will validate the methods and verify the open-source software developed for broader impacts, based on two engineering simulation case studies, namely, structural reliability evaluation of a wind turbine and cascading failure analysis of a power grid.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
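The contrast drawn above, crude Monte Carlo wasting samples on rare events versus guided methods with valid confidence bounds, can be seen in miniature with importance sampling for a Gaussian tail probability. This is a generic textbook technique shown for illustration, not the project's framework:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)
threshold = 4.0
true_p = 0.5 * (1 - erf(threshold / sqrt(2)))   # P(X > 4) for X ~ N(0, 1), ~3.2e-5

n = 100_000
# Crude Monte Carlo: almost no samples land in the rare-event region
crude = (rng.standard_normal(n) > threshold).mean()

# Importance sampling: draw from N(threshold, 1) and reweight by the density ratio
y = rng.normal(threshold, 1.0, n)
w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - threshold)**2)   # phi(y) / q(y)
vals = w * (y > threshold)
is_est = vals.mean()
# Asymptotic 95% confidence half-width from the sample standard deviation
half_width = 1.96 * vals.std(ddof=1) / np.sqrt(n)

print(f"true={true_p:.2e}  crude={crude:.2e}  IS={is_est:.2e} +/- {half_width:.1e}")
```

With the same budget, the crude estimate rests on a handful of hits (or none), while the shifted proposal places nearly every sample in the region of interest and yields a tight, statistically valid interval, which is the kind of guarantee the project's confidence bounds generalize to adaptive, high-dimensional settings.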


CICI: SSC: Real-Time Operating System and Network Security for Scientific Middleware: University of Colorado at Colorado Springs

Gedare Bloom

[email protected]

Remote monitoring and control of industrial control systems are protected using firewalls and user passwords. Cyberattacks that get past firewalls have unfettered access to command industrial control systems with potential to harm digital assets, environmental resources, and humans in proximity to the compromised system. To prevent and mitigate such harms in scientific industrial control systems, this project enhances the security of open-source cyberinfrastructure used for high energy physics, astronomy, and space sciences. The results of this project enhance the security of scientific instruments used in particle accelerators, large-scale telescopes, satellites, and space probes. The benefits to science and the public include greater confidence in the fidelity of experimental data collected from these scientific instruments, and increased reliability of scientific cyberinfrastructure that reduces the costs associated with accidental misconfigurations or malicious cyberattacks.

The objective of this project is to enhance the security of the open-source Real-Time Executive for Multiprocessor Systems (RTEMS) real-time operating system and the Experimental Physics and Industrial Control System (EPICS) software and networks; RTEMS and EPICS are widely used cyberinfrastructure for controlling scientific instruments. The security enhancements span eight related project activities: (1) static analysis and security fuzzing as part of continuous integration; (2) cryptographic security for the open-source software development life cycle; (3) secure boot and update for remotely-managed scientific instruments; (4) open-source cryptographic libraries for secure communication; (5) real-time memory protection; (6) formal modeling and analysis of network protocols; (7) enhanced security event logging; and (8) network-based intrusion detection for scientific industrial control systems. The project outcomes provide a roadmap for enculturating cybersecurity best practices in open-source, open-science communities while advancing the state-of-the-art research in cyberinfrastructure software engineering and industrial control system security.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
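Activity (3), secure update for remotely-managed instruments, boils down to "verify before you flash": the device refuses any image whose authentication tag does not check out. Real secure-boot chains use asymmetric signatures over the image; the sketch below substitutes an HMAC with a shared key to stay standard-library-only, an assumption made purely for illustration (the key and firmware blob are hypothetical):

```python
import hmac, hashlib

def sign_image(image: bytes, key: bytes) -> bytes:
    """Producer side: tag the firmware image (real systems use asymmetric signatures)."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_install(image: bytes, tag: bytes, key: bytes) -> bool:
    """Device side: install only if the tag matches, using a constant-time compare."""
    if not hmac.compare_digest(sign_image(image, key), tag):
        return False       # reject tampered or mis-signed update
    # ... flash `image` to the device here ...
    return True

key = b"shared-provisioning-key"           # hypothetical device key
firmware = b"\x7fELF...rtems-app-v2"       # stand-in firmware blob
tag = sign_image(firmware, key)

print(verify_and_install(firmware, tag, key))               # accepted
print(verify_and_install(firmware + b"\x00", tag, key))     # rejected: modified image
```

The constant-time comparison matters even in this toy: naive byte-by-byte equality leaks timing information an attacker on the control network could exploit.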


Collaborative Research: SHF: Small: Interactive Synthesis and Repair For Robot Programs: University of Massachusetts Amherst

Arjun Guha

[email protected]

Over the past few years, robots have started to be deployed in unstructured human environments. There are hundreds of robots deployed in hospitals, hotels, and supermarkets. Unfortunately, the software that runs on robots is programmed using low-level abstractions and languages, and is hard to transfer across robots and environments. In addition, robotic software requires complex control logic to ensure that robots are safe and well-behaved in all situations. Thus, robot software is extraordinarily hard to write and maintain. This research project develops tools and techniques to make robot software safer, easier to write, and easier to maintain.

The intellectual merits of the project are the development of (1) techniques for fixing bugs in robot software, based on advances to automatic program repair and program synthesis; (2) abstractions for writing robot software that can automatically handle certain kinds of failures, based on new programming-language design; (3) methods for checking the correctness of robot software, based on new program-verification techniques. The project's broader significance and importance are that it helps make robot software easier to write and maintain, and cheaper, safer, and more reliable. The project encourages further research at the intersection of programming languages and robotics by publishing research results and releasing open-source software. The project also involves high-school outreach workshops to broaden participation in computing.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.


Collaborative Research: Frameworks: Machine learning and FPGA computing for real-time applications in big-data physics experiments: University of Illinois at Urbana-Champaign

Eliu Huerta Escudero

[email protected]

The cyberinfrastructure needs for gravitational wave astrophysics, high energy physics, and large-scale electromagnetic surveys have rapidly evolved in recent years. The construction and upgrade of the facilities used to enable scientific discovery in these disparate fields of research have led to a common pair of computational grand challenges: (i) datasets with ever-increasing complexity and volume; and (ii) data mining analyses that must be performed in real-time with oversubscribed computational resources. Furthermore, the convergence of gravitational wave astrophysics with electromagnetic and astroparticle surveys, the very birth of Multi-Messenger Astrophysics, has already provided a glimpse of the transformational discoveries that it will enable in years to come. Given the unique potential for scientific discovery with the Large Hadron Collider (LHC) and the combination of the Laser Interferometer Gravitational-wave Observatory (LIGO) and the Large Synoptic Survey Telescope (LSST) for Multi-Messenger Astrophysics, the community needs to accelerate the development and exploitation of deep learning algorithms that will outperform existing approaches. This project serves the national interest, as stated by NSF's mission, by promoting the progress of science. It will push the frontiers of deep learning at scale, demonstrating the versatility and scalability of these methods to accelerate and enable new physics in the big data era. Because these methods are also applicable to many other parts of our national and global economy and society, this work will positively impact many fields. The students and junior scientists to be mentored and trained in this research will interact closely with industry partners, creating new career opportunities, and strengthening synergies between academia and industry. The team will share the algorithms with the community through open-source software repositories, and through tutorials and workshops will train the community regarding software credit and software citation.

In this project, the PIs will build upon their recent work developing high-quality deep learning algorithms for real-time data analytics of time-series and image datasets, as open source software. This work combines scalable deep learning algorithms, trained with TB-size datasets within minutes using thousands of GPUs/CPUs, with state-of-the-art approaches to endow the predictions of deterministic deep learning models with complete posterior distributions. The team will also investigate the use of Field Programmable Gate Arrays (FPGAs) to accelerate low-latency inference of machine learning algorithms to minimize the demands of future computing, which is a central goal for Multi-Messenger Astrophysics and particle physics. The open source tools to be developed as part of these activities will be readily shared with and adopted by LIGO, LHC, and LSST as core data analytics algorithms that will significantly increase the speed and depth of existing algorithms, enabling new physics while requiring minimal computational resources for real-time inference analyses. The team will organize deep learning workshops and bootcamps to train students and researchers on how to use and contribute to the framework, creating a wide network of contributors and developers across key science missions. The team will leverage existing open source and interactive model repositories, such as the Data and Learning Hub for Science (DLHub) at Argonne, to reach out to a large cross-section of communities that analyze open datasets from LIGO, LHC, and LSST, and that will benefit from the use of these technologies that require minimal computational resources for inference tasks.

This project is supported by the Office of Advanced Cyberinfrastructure in the Directorate for Computer & Information Science & Engineering and the Division of Physics in the Directorate for Mathematical and Physical Sciences.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
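One widely used way to "endow the predictions of deterministic deep learning models with complete posterior distributions" is Monte Carlo dropout: keep dropout active at inference time and treat repeated stochastic forward passes as approximate posterior samples. Whether this is the PIs' exact method is not stated in the abstract; the tiny untrained network below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
# A tiny fixed one-hidden-layer network standing in for a trained model
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 1))

def predict(x, dropout=0.5, samples=500):
    """Monte Carlo dropout: repeat the forward pass with random masks to obtain
    a predictive distribution instead of a single point estimate."""
    outs = []
    for _ in range(samples):
        h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
        mask = rng.random(h.shape) > dropout   # random dropout mask
        h = h * mask / (1.0 - dropout)         # inverted-dropout scaling
        outs.append((h @ W2).item())
    outs = np.array(outs)
    return outs.mean(), outs.std()             # predictive mean and spread

mean, std = predict(rng.standard_normal(8))
print(f"prediction = {mean:.2f} +/- {std:.2f}")
```

The spread distinguishes inputs the model is confident about from ones it is not, which is exactly the information a real-time trigger needs before committing oversubscribed follow-up resources to a candidate event.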
