All times are Pacific (California) time. The schedule is subject to change in time and location.
This schedule, with more information (slides, authors, etc.), is also available in EasyChair on the detailed Schedule page.
TUESDAY, JULY 18, 2023
In its 86-year history, the Jet Propulsion Laboratory (JPL), NASA’s only federally funded research and development center, has been at the forefront of scientific discovery for the benefit of humanity. From launching the very first American satellite in 1958 to landing rovers on Mars, from fighting climate change to discovering thousands of other worlds – JPL dares mighty things by imagining and then achieving what others might think impossible. With a full manifest of missions, the future is bright at JPL as it seeks to have an expanded positive impact on the space ecosystem for decades to come.
The first images of light bending around a black hole, captured by the ground-based Event Horizon Telescope (EHT), have unlocked a new extreme laboratory of gravity that has already begun ushering in a new era of precision black hole physics on horizon scales. This talk will present the methods and procedures used to produce the first images of the M87 and Sagittarius A* black holes, and also highlight how remaining scientific questions motivate us to improve this computational telescope to see black hole phenomena still invisible to us. Space-based approaches may address these pressing questions in black hole science over the coming decades. In particular, we will discuss three broad classes of mission architectures: those involving one or more orbiters in a) low- or b) medium-Earth orbit, and those involving a single spacecraft at c) a considerable distance from Earth. The first two architectures would rapidly fill an otherwise sparse aperture in order to increase image fidelity, while the latter architecture would provide extreme angular resolution. We discuss how these mission architectures, along with advances in analysis techniques, could help address the following science questions: testing theories of gravity, understanding jet formation and launching, and understanding black hole growth. This talk will also briefly discuss future directions currently being pursued and how we are developing techniques that will, in the future, allow us to extract the evolving structure of a black hole over the course of a night, perhaps even in three dimensions.
Autonomy for Space Robots: Past, Present, and Future
TBA
Advanced Air Mobility (AAM) including passenger transport and Uncrewed Aircraft Systems (UAS) requires autonomy capable of safely managing contingency responses as well as routine flight. This talk will describe pathways from aviation today to a fully autonomous AAM of the future. Research toward comprehensive low-altitude flight environment mapping will be summarized. Assured Contingency Landing Management (ACLM) requires a pipeline in which hazards/failures that risk loss of vehicle controllability or loss of landing site reachability trigger contingency response. Pre-flight preparation of contingency landing plans to prepared landing sites is supplemented by online planning when necessary. Dynamic airspace geofencing in support of UAS Traffic Management (UTM) will be defined and compared with traditional fixed airspace corridor solutions. The talk will conclude with a high-level mapping of presented aviation solutions to space applications.
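As a rough illustration of the contingency-triggering logic sketched above, the Python snippet below picks the nearest prepared landing site inside a crude glide-range footprint and falls back to online planning when none is reachable. The site list, glide-ratio model, and all numbers are hypothetical and are not taken from the talk.

```python
import math

# Hypothetical prepared landing sites: (name, x_km, y_km)
PREPARED_SITES = [("Alpha", 2.0, 1.5), ("Bravo", -4.0, 3.0), ("Charlie", 6.5, -2.0)]

def reachable_radius_km(altitude_m, glide_ratio):
    """Crude glide-range model: ground distance coverable from the current
    altitude under a degraded (power-off) glide ratio."""
    return altitude_m * glide_ratio / 1000.0

def select_contingency_site(pos_xy_km, altitude_m, glide_ratio=8.0):
    """Pick the nearest prepared site inside the reachable footprint.
    Returning None signals that online planning for an unprepared
    landing area must take over."""
    radius = reachable_radius_km(altitude_m, glide_ratio)
    in_range = []
    for name, sx, sy in PREPARED_SITES:
        d = math.hypot(sx - pos_xy_km[0], sy - pos_xy_km[1])
        if d <= radius:
            in_range.append((d, name))
    return min(in_range)[1] if in_range else None

if __name__ == "__main__":
    site = select_contingency_site(pos_xy_km=(0.0, 0.0), altitude_m=500.0)
    print("Divert to prepared site:" if site else "Trigger online planner", site or "")
```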
Brief Introduction to the Workshop
For several decades, NASA has employed in-space systems to enhance the performance and extend the useful life of operational orbital assets. In at least one case, an operational mission was not only enhanced but enabled – the International Space Station was made possible by crewed and robotic in-space assembly and continues to support installation and operation of new science and technology payloads. In several cases (Hubble Space Telescope, Intelsat 401, Westar, and Palapa), major operational assets were rescued or repaired soon after launch when otherwise mission-ending anomalies occurred or were detected. In addition to the original rescue, Hubble was upgraded four times, enabling high-demand, world-class science over four decades. More recently, two Northrop Grumman Mission Extension Vehicles have captured two Intelsat spacecraft near the end of their fuel life to take over maneuvering duties.
Despite these recent operational achievements, and except for large human exploration vehicles and large space telescopes, space architects rarely consider in-orbit servicing and assembly capabilities in their future planning. Technologies such as multi-launch mission architectures (and rendezvous and proximity operations systems), docking systems, external robotics, advanced tools, modular systems and structures, and fluid transfer systems are available today to support these missions. In-space manufacturing will soon be operational, enabling resilient missions that recover from on-orbit failures and expand the utilization of space. We envision a future that includes these capabilities and discuss the cultural, engineering, and technological challenges to achieving this vision, along with the status of the space industry’s slow but steady march toward widespread operational use of in-space servicing, assembly, and manufacturing.
Next generation space science missions can utilize in-space servicing, assembly, and manufacturing (ISAM) to enable and enhance the architectures needed to answer the key scientific questions of the future. This talk will summarize the work NASA’s Goddard Space Flight Center, together with government, industry, academic, and international partners, is doing to advance some of the ISAM technologies needed for these missions. Software and hardware-in-the-loop simulations on the ground evaluate designs and prepare for and support on-orbit operations. Space-based demonstrations are conducted as part of the Robotic Refueling Mission and Raven on the International Space Station, as well as the On-orbit Servicing, Assembly, and Manufacturing-1 mission in Low Earth Orbit. An overview of these simulations and demonstrations will be presented, along with some of the lessons learned.
As every Hubble-hugger knows, the ability to service that space observatory made it into a multi-generational telescope, with the longevity of major mountaintop observatories on the ground. But it was the ability to replace science instruments with new ones incorporating more capable designs and technologies that really kept Hubble on the cutting edge of astrophysics and planetary science for over three decades. The observatory was transformed with every servicing mission and Hubble today studies objects that weren’t even known to exist when the telescope was launched.
Arguably, in-space servicing and upgrading is a scientific capability in its own right, one which allows us to re-invent a mission after design and launch. NASA has embraced this philosophy for the Habitable Worlds Observatory, the future large space telescope recommended by the National Academies’ Astro2020 Decadal Survey. Furthermore, we may think about going beyond Hubble-style servicing into new mission development approaches that take advantage of in-space assembly and manufacturing. In this talk, I’ll give a recap of Hubble servicing and highlight the new science enabled after each servicing mission, then move on to opportunities and challenges for servicing Habitable Worlds Observatory. Finally, I’ll briefly discuss ideas for using in-space assembly and/or manufacturing to enable other transformative science missions.
Workshop Introduction and Welcome
This briefing will provide a broad overview of the state of the industry surrounding augmented reality, virtual reality, and the digital engineering ecosystem, specifically focusing on elements related to space system development. It will offer a glimpse of where industry is heading within the next 5-10 years, what it means for the IT landscape, and how to prepare to leverage its benefits.
The harsh nature and conditions of the lunar environment make robots indispensable for lunar activities. Future lunar missions will explore potentially hazardous areas, such as shaded regions and icy, rocky polar areas, which could jeopardize missions if rovers become immobilized or damaged. As a result, present and future activities rely on close monitoring and human supervision throughout their operational cycle. Even if autonomous solutions are used, humans are still needed to supervise these operations, as mission safety is of utmost importance. Furthermore, to increase efficiency, reliability, and security, lunar rovers carrying out these missions require humans in the loop to teleoperate them. However, teleoperating in space presents a significant challenge due to various limiting factors, including communication barriers, the unstructured nature of the environment, and limited resources. Additionally, current approaches fail to provide enough information for operators to understand their environment and feel immersed in it while teleoperating and exploring an area. Reliance on cameras aboard rovers for navigation in unfamiliar environments with poor or variable illumination requires much time and caution. Instead, virtual reconstruction of the environment using sensors such as RGB-D cameras and LiDAR could improve operator understanding and spatial awareness, leading to more efficient and robust teleoperation. To this end, we develop a system to enhance lunar teleoperation by implementing advanced monitoring and control methods, using eXtended Reality (XR) and Artificial Intelligence (AI) based on the Robot Operating System (ROS) and the Unity3D game engine (see Figure 1). The proposed system is tested in an analogue lunar facility, the LunaLab at the University of Luxembourg, where the rover uses RGB-D camera input for a visual SLAM algorithm, acting as our mapping tool to recreate a point cloud of the lunar environment. As the point cloud is procedurally generated, its data is sent to Unity3D for 3D reconstruction using a triangulation technique. The reconstructed environment is presented to an operator wearing a VR rig, who can teleoperate the rover and access various telemetry data via a Graphical User Interface (GUI). Furthermore, the operator can inspect and move around the virtual 3D reconstructed environment while it is being procedurally generated. This approach effectively provides a higher degree of immersion than traditional solutions. Finally, AI models that detect rocks in the rover’s vicinity and terrain characteristics are used, permitting us to label the 3D reconstructed environment with additional data for enhanced operational awareness.
An explainable video using the developed technology with a real robot in a lunar analogue facility (LunaLab) is available at the following link: https://www.youtube.com/watch?v=XokCArhHUdQ
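As a self-contained sketch of the point-cloud-to-mesh step described in the abstract above, the following Python snippet turns a SLAM-style point cloud into vertex and triangle arrays of the kind a game engine such as Unity3D consumes. The abstract does not specify the triangulation method; a 2.5D Delaunay triangulation is assumed here purely for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_terrain(points_xyz):
    """Build a 2.5D mesh from a point cloud by triangulating the horizontal
    (x, y) coordinates and keeping z as terrain height. Returns vertex and
    triangle-index arrays in the layout a game-engine mesh expects."""
    tri = Delaunay(points_xyz[:, :2])       # triangulate in the ground plane
    return points_xyz.astype(np.float32), tri.simplices.astype(np.int32)

if __name__ == "__main__":
    # Stand-in for an incoming point-cloud chunk; the real system streams
    # these from a visual-SLAM node over ROS.
    rng = np.random.default_rng(0)
    cloud = rng.uniform(-5, 5, size=(500, 3))
    cloud[:, 2] = 0.1 * np.sin(cloud[:, 0]) + 0.05 * rng.standard_normal(500)  # gentle terrain
    verts, tris = triangulate_terrain(cloud)
    print(f"{len(verts)} vertices, {len(tris)} triangles ready to send to Unity")
```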
The space compute solution NASA/JPL flies today has architectural origins dating back to the mid-1990s. With NASA/JPL missions on a trajectory to employ higher and higher levels of autonomy in the coming years to meet our science and exploration objectives, the current space compute solution cannot adequately serve that trajectory. This presentation covers the major limitations of current space compute solutions when viewed through the lens of powering the level of autonomy that next-generation space exploration is likely to require. Conversely, it also covers what we see as the key needs for a modern space compute solution – again with autonomy as the foundation.
System-on-a-chip (SoC) devices promise lighter, smaller, cheaper, more capable, and more reliable space electronic systems. This paper describes the focal plane interface electronics – digital (FPIE-D) Xilinx Zynq-based data acquisition, cloud-screening, compression, storage, and downlink computing system developed by the Jet Propulsion Laboratory (JPL), Alpha Data, Correct Designs, and Mercury Systems for imaging spectrometers such as the NASA Earth Surface Mineral Dust Source Investigation (EMIT). EMIT is an imaging spectrometer that acquires 1280 cross-track by 328 band images at 216 images/sec. Following launch (14 July 2022), EMIT has been installed outside the International Space Station (ISS) and is collecting data from science targets in arid dust source regions of the Earth. EMIT will be used to study the mineral dust cycle, which has multiple impacts on the Earth system. The science objective of EMIT is to close the gap in our understanding of the heating and cooling impact of mineral dust on the Earth, now and in the future, by determining the surface mineralogy of mineral dust sources. The FPIE-D board design is based on a standard Alpha Data COTS Zynq7100 board in an XMC form factor. The FPIE-D Alpha Data hardware and components, including a Mercury Systems 440 GByte RH3440 Solid-State Data Recorder (SSDR), fit into a 280mm×170mm×40mm assembly. The FPIE-D peak power usage is 40 W. The computing element is a Xilinx Zynq Z7100, which includes a Kintex-7 FPGA and a dual-core ARM Cortex-A9 processor. The COTS board was re-spun to make it suitable for space (replacing components with space-grade equivalents) and to add features needed for the mission. The FPIE-D board is designed to be very flexible and not specific to the EMIT mission. The FPIE-D assembly with its Zynq SoC controls the other assemblies on the EMIT instrument. The FPIE-D Zynq Processing System is responsible for running the flight software, which includes command & data handling, command & telemetry with the ISS over 1553, and science data downlink over a 7.4 Mbps Ethernet interface to the ISS. The Zynq Programmable Logic (PL) of the FPIE-D interfaces with the SSDR through a 3.125 Gbps Serial RapidIO interface. The SSDR alleviates the effect of two data rate bottlenecks in the FPIE-D system: data compression implemented in the Zynq PL with a data compression (input) rate of 370 Mbps, and the data transfer to the ISS at 7.4 Mbps. The FPIE-D includes three processing elements implemented in the Zynq PL: (1) the Fast Lossless extended (FLEX) data compression block (a modified implementation of the CCSDS-123.0-B-2 recommended standard), which provides 3.4:1 lossless compression (compared to 16-bit samples obtained after co-adding) and 21 MSamples/sec throughput; (2) co-adding of two successive images so that shorter exposures can be used, helping to avoid saturation of the Focal Plane Array during acquisition; and (3) cloud detection and screening so that cloudy images can be dropped prior to compression, saving SSDR space and downlink time.
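The co-adding step and the compression arithmetic can be illustrated with a short sketch. The frame geometry (1280 × 328), the ~3.4:1 FLEX ratio, and the purpose of co-adding are taken from the abstract; the data types and random frames below are illustrative only and are not the flight implementation.

```python
import numpy as np

ROWS, BANDS = 1280, 328          # EMIT frame geometry from the abstract

def coadd_pair(frame_a, frame_b):
    """Co-add two successive short-exposure frames so that individual
    exposures stay below FPA saturation; the sum is carried in 32-bit and
    handed to the downstream compressor as 16-bit samples."""
    return frame_a.astype(np.uint32) + frame_b.astype(np.uint32)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.integers(0, 2**15, size=(ROWS, BANDS), dtype=np.uint16)
    b = rng.integers(0, 2**15, size=(ROWS, BANDS), dtype=np.uint16)
    coadded = coadd_pair(a, b)

    raw_bits = coadded.size * 16                 # 16-bit samples after co-adding
    compressed_bits = raw_bits / 3.4             # ~3.4:1 lossless ratio reported
    print(f"frame: {raw_bits/8e6:.2f} MB raw -> {compressed_bits/8e6:.2f} MB after FLEX")
```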
Small changes can have seismic impact. Recent advancements in microelectronics are enabling a transformation of satellite capabilities (and technology) on par with the transition from the flip phone to the iPhone. The resulting adaptability and autonomy could be the key to satellite network resilience, whether the threat is natural or man-made.
The NASA Electronic Parts Assurance Group (NEPAG) operates under the Mission Assurance Standards and Capabilities (MASC) division of the NASA Office of Safety and Mission Assurance (OSMA). All NASA missions, large or small, are important to mission assurance; the success of each mission counts. This presentation will describe the efforts underway in which NEPAG has worked with the DLA (Defense Logistics Agency), JC-13 (the manufacturers of government products), and CE-12 (the users of active devices) committees to ensure current military/aerospace standards address many challenges, one example being the insertion of new technology through the Class Y initiative. Class Y represents advancements in packaging technology, increasing functional density, and increasing operating frequency. The front-runner Class Y suppliers are offering functions such as processors, application-specific integrated circuits, and very high-speed analog-to-digital converters.
Efficient computing is critical to optimize size, weight and power for next generation space missions. Operating the logic and SRAM technology at the same voltage and leveraging dynamic voltage and frequency scaling can deliver maximum performance and efficiency but is often limited by voltage capability, speed and reliability of the SRAM macros. A new SRAM technology, optimized on state-of-the-art CMOS technology, targets high speed, robust/reliable operation over a wide voltage range from 0.45V to 0.8V and above. The high speed, low voltage SRAM technology can be combined with efficient microarchitecture and optimized physical design to maximize performance per watt.
TBA
Machine learning (ML) technologies are being investigated for use in the embedded software for manned and unmanned aircraft. ML will be needed to implement advanced functionality for increasingly autonomous aircraft and can also be used to reduce computational resources (memory, CPU cycles) in embedded systems. However, ML implementations such as neural networks are not amenable to verification and certification using current tools and processes. This talk will discuss current efforts to address the gaps and barriers to certification of ML for use onboard aircraft. We will discuss new verification and assurance technologies being developed for neural networks. This includes formal methods analysis tools, new testing methods and coverage metrics, and architectural mitigation strategies, with the goal of enabling autonomous systems containing neural networks to be safely deployed in critical environments. We will also discuss the new certification guidance that is under development to address the gaps in current processes. The overall strategy is to start with approvals of low-complexity and low-criticality applications, and gradually expand to include more complex and critical applications that involve perception.
Significant advances have been made in the last decade in constructing autonomous systems, as evidenced by the proliferation of a variety of unmanned vehicles. These advances have been driven by innovations in several areas, including sensing and actuation, computing, and modeling and simulation, but most importantly deep machine learning, which is increasingly being adopted for real-world autonomy. In spite of these advances, deployment and broader adoption of learning techniques in safety-critical applications remain challenging. This talk will present some of the challenges posed by the use of these techniques towards assurance of system behavior, and summarize advances made in DARPA’s Assured Autonomy program towards establishing trustworthiness at the design stage and providing resilience to the unforeseeable yet inevitable variations encountered during the operation stage. The talk will also discuss related work in creating frameworks for assurance-driven software development.
Safety certification of autonomous vehicles is a major challenge due to the complexity of the environments in which they are intended to operate. In this talk I will discuss recent work in establishing the mathematical and algorithmic foundations of test and evaluation by combining advances in formal methods for specification and verification of reactive, distributed systems with algorithmic design of multi-agent test scenarios and algorithmic evaluation of test results. Building on previous results in synthesis of formal contracts for performance of agents and subsystems, we are creating a mathematical framework for specifying the desired characteristics of multi-agent systems involving cooperative, adversarial, and adaptive interactions, developing algorithms for verification and validation (V&V) as well as test and evaluation (T&E) of the specifications, and performing proof-of-concept implementations that demonstrate the use of formal methods for V&V and T&E of autonomous systems. These results provide more systematic methods for describing the desired properties of autonomous systems in complex environments and new algorithms for verification of system-level designs against those properties, synthesis of test plans, and analysis of test results.
Panel Discussion
There are ongoing efforts to develop plans for launching and operating the Habitable Worlds Observatory approximately two decades from now. There is a desire to utilize emerging commercial servicing capabilities to robotically service and maintain the observatory. However, there currently are significant architectural questions regarding how the observatory should be built to facilitate servicing and how it can be effectively serviced. While engineering expertise and judgment will play a crucial role in making these decisions, current projections about future technological advancements may also impact these choices. There is a risk that the uncertainty surrounding the maturity of these technologies could lead to either exaggerated or underestimated claims about their impact. Hence, one perspective suggests that it could be advantageous to draw upon the experiences gained from the James Webb Space Telescope mission and make minimal modifications to its architecture when considering future servicing options. Conversely, the NASA In-space Assembled Telescope (ISAT) study suggests a contrasting approach to the observatory's architecture, emphasizing a more granular architecture by relying heavily on in-space robotic assembly. This presentation will introduce architectural aspects that seek to strike a balance between the projected servicing and assembly capabilities while considering the heritage of JWST, in order to address the unique challenges of the Habitable Worlds Observatory.
The Habitable Worlds Observatory is NASA's response to the 2020 decadal survey's call for a 6 m optical/UV/IR telescope to search for habitable exoplanets, with a launch in the early 2040s. Astrophysics Division Chief Dr. Mark Clampin has laid out key tenets for success in developing HWO. He states that a key aspect of ensuring the funding and science performance of this future Great Observatory is on-orbit serviceability. Perhaps the HWO could be much like a "mountain top" observatory, where the fundamental structure is in place for decades of life with expendable and upgradable systems. Upgrades to systems can allow for extended life in the harsh environment of space and for continued science relevance. But serviceability must also serve the here and now, enabling ground assembly, test, and repair to achieve greater pre-launch efficiencies. What are the unique needs of servicing Great Observatories at L2? This talk will present possibilities and challenges for forging a path to serviceable observatories at L2.
Launching in 2025, the Interstellar Mapping and Acceleration Probe (IMAP) mission investigates two of the most important issues in space physics today — the acceleration of energetic particles and interaction of the solar wind with the interstellar medium. In this talk, we present the IMAP VR software which provides an interactive experience demonstrating the spacecraft's science instruments with an animation of how it will collect data, visualized in a model of its operational space environment. The software combines real scientific data from IBEX and mechanical models of IMAP with VR technology to create a simulated environment for exploring and understanding the spacecraft's mission. The talk will discuss the design and implementation of the software, its educational and scientific applications, and its potential for advancing space exploration.
As a precursor to our panel discussion, this presentation will outline the existing challenges of adopting AR / VR technologies, spatial computing, and Digital Engineering (DE) best practices. Using these broad challenges, we will transition into a panel discussion with selected speakers.
Group Discussion
New methods for characterizing FPGA performance and risk in space-radiation environments are presented. Application of the new methods is illustrated by walking through a NASA mission use case. A requirement for the mission is to “work through” a worst-week radiation environment with minimal ground intervention. Mitigation insertion becomes a necessity but is limited by device capacity. This presentation shows that old test and evaluation methods are insufficient, while new methods provide better characterization and assistance in determining suitable design/mitigation strategies.
The GR765 is a radiation-tolerant and fault-tolerant octa-core system-on-chip that is currently in development. During SCC 2021 we described the status of the architecture definition and the development of GR765 engineering samples. Since then, development has progressed, and several extensions have been made to the architecture. Most notably the GR765 is now a design that implements both the SPARC instruction set architecture and the RISC-V instruction set architecture. The selection between the two different processor core architectures is done through a bootstrap signal.
The NewSpace era has drastically increased the use of COTS (commercial-off-the-shelf) components to meet new requirements: lower costs, shorter lead times, and better performance. However, the radiation risks associated with non-radiation-hardened components are especially relevant in this context. Therefore, new approaches are necessary to address this challenge and assure radiation hardness. This work presents standard and parameterized radiation databases and shows how they can be used to numerically assess the critical lot-to-lot variability in response to gamma radiation based on the coefficient of variation.
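A minimal sketch of the coefficient-of-variation calculation the abstract refers to is shown below; the lot names and post-irradiation leakage-current values are hypothetical and only illustrate how lot-to-lot variability could be flagged numerically.

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = sigma / mu of a post-irradiation parameter; a large CV flags
    lots whose gamma response cannot be assumed interchangeable."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

if __name__ == "__main__":
    # Hypothetical post-TID leakage currents (uA) for the same COTS part
    # drawn from three different procurement lots.
    lots = {"lot A": [1.1, 1.2, 1.0], "lot B": [2.9, 3.3, 3.1], "lot C": [1.3, 1.2, 1.4]}
    lot_means = [np.mean(v) for v in lots.values()]
    pooled = [v for lot in lots.values() for v in lot]
    print(f"lot-to-lot CV = {coefficient_of_variation(lot_means):.2f}")
    print(f"pooled CV     = {coefficient_of_variation(pooled):.2f}")
```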
Radiation-hardened (rad-hard) processors are designed to be reliable in extreme radiation environments, but they typically have lower performance than commercial-off-the-shelf (COTS) processors. For space missions that require more computational performance than rad-hard processors can provide, alternative solutions such as COTS-based systems-on-chips (SoCs) may be considered. One such SoC, the NVIDIA Tegra K1 (TK1), has achieved adequate radiation tolerance for some classes of space missions. Several vendors have developed radiation-tolerant single-board computer solutions targeted primarily for low Earth orbit (LEO) space missions that can utilize COTS-based hardware due to shorter planned lifetimes with lower radiation requirements. With an increased interest in space-based computing using advanced SoCs such as the TK1, a need exists for an improved understanding of its computational capabilities. This research study characterizes the performance of each computational element of the TK1, including the ARM Cortex-A15 MPCore CPU, the NVIDIA Kepler GK20A GPU, and their constituent computational units. Hardware measurements are generated using the SpaceBench benchmarking library on a TK1 development board. Software optimizations are studied for improved parallel performance using OpenMP for CPU multithreading, ARM NEON for single-instruction multiple-data (SIMD) operations, Compute Unified Device Architecture (CUDA) for GPU parallelization, and optimized Basic Linear Algebra Subprograms (BLAS) software libraries. By characterizing the computational performance of the TK1 and demonstrating how to optimize software effectively for each computational unit within the architecture, future designers can better understand how to successfully port their applications to COTS-based SoCs to enable improved capabilities in space systems. Experimental outcomes show that both the CPU and GPU achieved high levels of parallel efficiency with the optimizations employed and that the GPU outperformed the CPU for nearly every benchmark, with single-precision floating-point (SPFP) operations achieving the highest performance.
In a heterogeneous computing environment, there exist a variety of computational units such as multicore CPUs, GPUs, DSPs, FPGAs, analog modules, and ASICs. IP vendors, engineers, and scientists working with heterogeneous computing systems face numerous challenges, including integration of IP cores and components from different vendors, system reliability, hardware-software partitioning, task mapping, the interaction between compute and memory, and reliable communication. For advanced designs, the industry typically develops a system-on-a-chip (SoC), where different functions are shrunk at each node and packed onto a monolithic die. But this approach is becoming more complex and expensive at each node. Another way to develop a system-level design is to assemble complex dies in an advanced package. Chiplets are a way of modularizing that approach. Chiplets can be combined with other chiplets on an interposer in a single package. This provides several advantages over a traditional system-on-chip (SoC) or integrated board in terms of reusable IP, heterogeneous integration, and verifying die functional behavior. In our work, a system-level model composed of chiplets – an I/O chiplet, a low-power core chiplet, a high-performance core chiplet, an audio/video chiplet, and an analog chiplet – is interconnected using the Universal Chiplet Interconnect Express (UCIe) standard. We looked at different scenarios and configurations, including advanced and standard packages, different traffic profiles, sizing of resources, and a retimer to extend the reach and evaluate events on timeout. We were able to identify the strengths and weaknesses of the UCIe interconnect in the scope of mission applications and obtain the optimal configuration for each of the subsystems to meet the performance, power, and functional requirements.
SpaceFibre is a high-performance, high-reliability and high-availability datalink and network technology designed specifically for demanding payload data-handling applications. The capabilities and characteristics of SpaceFibre will be described and a typical application architecture summarised. With the recent addition by STAR-Dundee of Remote Direct Memory Access (RDMA) capabilities, SpaceFibre is also suitable for low-overhead inter-processor and multi-processor communications. SpaceFibre RDMA will be introduced.
The constant increase in complexity of space applications drives ongoing research into more powerful computation capabilities. This includes more integrated system-on-chip (SoC) solutions. There is a strong push focused on increasing computing performance while integrating the main peripheral functions of the processing solutions embedded in aerospace systems.
Conventional thinking places the processor IP core performance as the primary concern, and therefore the temptation is to increase the number of cores. However, is the inherent performance of the controller the main driver of overall system performance? This presentation is intended to highlight that processor MIPS capability is not the only factor in an application, and that an advanced architecture can provide much more benefit in optimizing system performance than the pure computing capability of a processor.
Silicon (Si)-based semiconductor microcomputing has been core to manned and unmanned exploration of the solar system. However, Si-based electronics are limited in operation given that they cannot adequately function in extreme high-temperature and radiation environments. In contrast, Silicon Carbide (SiC) semiconductor electronic devices have the potential of bringing electronics functionality to extreme radiation and temperature environments physically beyond the reach of Si semiconductor devices. In particular, SiC integrated circuits based on Junction Field Effect Transistor (JFET) technology have produced the world’s first microcircuits of moderate complexity that have demonstrated sustained operation at 500°C [1]. A major distinguishing aspect of this NASA Glenn Research Center (GRC) JFET Integrated Circuit (IC) work is the long-term durability (greater than a year) of these circuits and packaging at 500°C [2] and for 60 days in simulated Venus surface conditions [3]. Other NASA GRC work has shown operation of circuits across a total temperature range from low (-190°C) to high temperatures (961°C), a span of more than 1000°C [4]. Further, TID radiation testing to 7 Mrad(Si) without failure was conducted on an earlier generation of legacy SiC JFET logic chips [5]. No other IC approach has accomplished this level of high-temperature durability, even for less complicated circuits. These properties enable the potential for improved capability for exploration across the solar system, from Ocean Worlds to the interior of gas giants to the surfaces of Venus or Mercury. Although these SiC electronics are presently comparable in complexity to standard-environment commercial electronics of the ~1970s, such electronics nevertheless enabled historic breakthroughs during the Viking and Voyager missions. The maturity of SiC electronics has now advanced to where a SiC microprocessor with unprecedented extreme-environment durability can be built. NASA GRC is presently fabricating prototypes of a next-generation family of SiC microcircuits. Specifically, this fabrication run, denoted as “Gen. 12”, aims to produce the first ever practical digital processing chipset hardened for durable operation in broad-temperature-range, high-radiation environments. This paper describes that microprocessor. The limited component availability and complexity of this prototype chipset mandate an ultra-simple, augmentable computing topography. The approach is to use design methods implemented in earlier electronics. The result is a hybrid programmable logic controller (PLC)-like logic core operating inside of a transport-triggered architecture. Programmable logic controllers from the 1960s-1970s were configurable devices used in industry to replace the hardwired relationships between sensors and actuators with software-based reading of those sensors and software-based commanding of actuation. In particular, this SiC microprocessor design implements an efficient foundational 8-bit Transport-Triggered Architecture (TTA) processor core with a 1-level stack that is designed to be packaged/interfaced with SiC Read-Only Memory (ROM) and Random-Access Memory (RAM), and other supporting peripheral SiC ICs also being prototyped in the Gen. 12 processing run. The physical IC layout of the microprocessor uses “Gen. 12” process rules [6] and is spread across two separate chip designs that will be interconnected into a microprocessor unit at the package/board level. An overview of the design, fabrication, and testing of this microprocessor will be provided.
It is concluded that while this microprocessor provides unique and game-changing capabilities, future generations of this technology will enable even more capable tools for planetary exploration with significant terrestrial applications.
1. https://www1.grc.nasa.gov/research-and-engineering/silicon-carbide-electronics-and-sensors/technical-publications/
2. D. J. Spry, P. G. Neudeck, L. Chen, D. Lukco, C. W. Chang, G. M. Beheim, M. J. Krasowski, and N. F. Prokop, “Processing and Characterization of Thousand-Hour 500 °C Durable 4H-SiC JFET Integrated Circuits”, Additional Conferences (Device Packaging, HiTEC, HiTEN, & CICMT), May 2016, Vol. 2016, No. HiTEC, pp. 000249-000256. http://dx.doi.org/10.4071/2016-HITEC-249
3. P. Neudeck, L. Chen, R. D. Meredith, D. Lukco, D. J. Spry, L. M. Nakley, and G. W. Hunter, “Operational Testing of 4H-SiC JFET ICs for 60 Days Directly Exposed to Venus Surface Atmospheric Conditions”, IEEE Journal of the Electron Devices Society, vol. 7, pp. 100-110, 2018.
4. P. G. Neudeck, D. J. Spry, M. J. Krasowski, N. F. Prokop, and L. Chen, “Demonstration of 4H-SiC JFET Digital ICs Across 1000 °C Temperature Range Without Change to Input Voltages”, Materials Science Forum, vol. 963, pp. 813-817, 2019.
5. J. M. Lauenstein, P. G. Neudeck, K. L. Ryder, E. P. Wilcox, L. Y. Chen, M. A. Carts, S. Y. Wrbanek, and J. D. Wrbanek, “Room Temperature Radiation Testing of a 500 °C Durable 4H-SiC JFET Integrated Circuit Technology”, Proceedings of the 2019 Nuclear and Space Radiation Effects Conference Data Workshop, San Antonio, TX, 2019. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8906528
6. P. Neudeck and D. Spry, “Graphical Primer of NASA Glenn SiC JFET Integrated Circuit (IC) Version 12 Layout”, 2019. https://ntrs.nasa.gov/citations/20190025716
The promise of recoverable missions, increased performance through upgrades, and lengthened service life through in-flight servicing has been proven on the Hubble Space Telescope and the International Space Station. In both cases, crew and robotics have recovered the mission from failures, improved performance, and enabled a 30+ year productive mission life. Robots played a small but critical part in the Hubble servicing missions, but their use and operational maturity have grown significantly over the life of the ISS. Safely operated by personnel on the ground, all ISS dexterous servicing operations are performed in a supervised autonomous fashion, including ‘unprepared’ servicing tasks such as refueling with legacy fill/drain valves and mating 38999 power and data connectors. This briefing illustrates the maturation of servicing robotic operations beginning with Hubble, through ISS and Orbital Express, and looking ahead to Gateway. It provides examples of the servicing capabilities available to observatory designers that deliver a reliable method for recovering from surprises, lengthening productive life, and opportunistically increasing performance.
The Space Shuttle missions, and the subsequent assembly of the International Space Station (ISS) provide myriad examples and benefits of designing space assets for serviceability. The benefits of serviceability are more easily realized with the use of standard interfaces. Common interfaces on the ISS permit the transfer of critical resources on-orbit, from Orbital Replacement Units (ORUs) comprising power bays, batteries, and instruments, to refueling and free-flyer capture.
The Consortium for Execution of Rendezvous and Servicing Operations (CONFERS) is an industry-led initiative that identifies and leverages best practices from government and industry to develop standards for In-space Servicing, Assembly, and Manufacturing (ISAM).
CONFERS Technical Working Groups (CTWGs) identify proven interfaces and the resources transferred at each interface, and then develop standards and guidelines for implementation. While heritage interfaces continue to be used on current and future missions, the international collaboration fostered by CONFERS permits the modification and adaptation of such interfaces, given the foundation of knowledge and establishment of standards for flight readiness and mission success.
The Next Great Observatory could leverage the commonality and standards developed by CONFERS to ensure the science community is a primary beneficiary of ISAM advancements.
Ending Remarks
Workshop Recap, Closing Remarks, & Action Items
This demonstration will showcase a prototype VR tool that visualizes spacecraft magnetic fields for Europa Clipper and Psyche. Understanding a spacecraft's magnetic field is important for electromagnetic compatibility work, so that scientific sensors (e.g., magnetometers) can successfully read signals of interest. While previous approaches visualize spacecraft magnetic fields as static 2D images or animated videos, this tool allows real-time, interactive VR viewing of the field. The tool allows users to adjust several parameters for how the field line tracing algorithm runs, including adjusting starting points and placing new magnetic sources.
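As a hedged illustration of what a field-line-tracing algorithm of this kind might look like (this is not the demonstrated tool's implementation; the dipole sources, step size, and seed point below are made up), the sketch marches along the direction of a superposed dipole field from a user-placed starting point.

```python
import numpy as np

# Hypothetical spacecraft magnetic sources modelled as point dipoles:
# (position [m], dipole moment [A*m^2]); a real tool would derive these
# from the spacecraft EMC model rather than hard-coding them.
SOURCES = [(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])),
           (np.array([1.5, 0.0, 0.0]), np.array([0.2, 0.0, -0.5]))]

def b_field(r):
    """Superposed dipole field at position r (constants dropped; only the
    field *direction* matters for line tracing)."""
    b = np.zeros(3)
    for pos, m in SOURCES:
        d = r - pos
        dist = np.linalg.norm(d)
        if dist < 1e-6:
            continue
        b += 3.0 * d * np.dot(m, d) / dist**5 - m / dist**3
    return b

def trace_field_line(seed, step=0.02, n_steps=2000):
    """March along the local field direction from a user-placed seed point,
    the same streamline idea an interactive viewer animates."""
    line = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        b = b_field(line[-1])
        norm = np.linalg.norm(b)
        if norm < 1e-12:
            break
        line.append(line[-1] + step * b / norm)
    return np.array(line)

if __name__ == "__main__":
    pts = trace_field_line(seed=[0.3, 0.0, 0.3])
    print(f"traced {len(pts)} points; end point = {pts[-1].round(3)}")
```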
Voice control has been identified as a potential human-computer interface (HCI) candidate for human spaceflight. It may be a viable HCI solution to address the risks that the Human Research Program has identified for future space missions: small crews, complex systems to control, and long communication delays with mission control. At NASA, voice control is being investigated to aid crew members in extravehicular activity (EVA) procedures, automate command/control environments, interface with RFID chips to query for misplaced items, support space-to-ground communication, provide speech-to-text transcription, and substitute for ground support. While speech recognizers are becoming more common in Earth-bound environments, space flight can present a unique and challenging physical and acoustical environment. Due to the time delays to/from the Moon, and even more so Mars, terrestrial-type server farm solutions in space are not possible. Rather, every artificially intelligent system must be 100% offline in a cloudless solution, where 100% offline is defined as not dependent on a server farm such as those used for Siri, Alexa, or Google.
Not only is a lack of a cloud-based speech control infrastructure a challenge to overcome in space flight, but current and future spaceflight missions do not have the luxury of extensive network bandwidth as we do on Earth. Network constraints requiring minimum bandwidth consumption often result in lower quality voice communications. Voice communication is deemed a critical component in every human spaceflight mission, yet crew member speech that has been compressed and coupled with various acoustic background noise can be difficult to understand. If compressed and noisy audio is processed by a speech recognition system, it may be prone to errors.
When building voice control for human spaceflight, a system needs to denoise crew member speech that includes high background noise, both stationary and non-stationary acoustic noise, and to adapt for reverberation within different environments. Reverberation varies between space suits with helmet bubbles, tight spaces inside rovers, and crew member habitat construction on different planets, making common speech recognition difficult to adapt to these diverse acoustical environments. Current commercially available speech recognition engines are not tuned to spacecraft acoustic environments. To make matters more difficult, there is typically limited training data available, which is necessary to improve performance within spacecraft and spacesuit systems. For extra-vehicular activities, a helmet creates a challenging reverberant acoustical environment that can smear spoken words, making it problematic for the speech recognition system to understand what was said. Therefore, commands, acronyms, background noise, and other acoustical factors make voice control in human spaceflight a unique problem to overcome.
Finally, another key challenge to overcome is radiation hardness. An on-premises speech recognition solution that works offline on Earth will not necessarily work in space due to radiation effects on the computer processing unit (CPU). Current radiation-tolerant CPUs pose a challenge for voice control due to their lack of processing power and speed for a query to be processed.
This presentation will cover key challenges in developing voice control applications for human spaceflight. Next, it will describe the efforts that various teams at the Johnson Space Center are undertaking to find viable solutions for voice control in deep space missions. Finally, it will explain why voice control is imperative to human spaceflight if mankind wants to engage in deep space missions. For instance, a spaceflight-specific voice control system could be most useful when ground communication has a long latency or is nonexistent and crew members need artificially intelligent aid or a ground support substitute to perform long duration missions.
The formation and evolution of giant planets define the dominant characteristics of our planetary system. Giant-planet exploration can improve our understanding of heat flow, radiation balance, and chemistry, and can serve as ground truth for exoplanets. Atmospheres of giant planets are larger and, in many respects, simpler than that of Earth. Studying giant planets’ atmospheres and environments can provide laboratories for the fundamental physical and dynamical processes of the Earth's atmosphere. On the other hand, exploring the relevant environments that affect the Earth's atmosphere can help us develop a sound technical and scientific basis for giant planets. In particular, climate change on Earth is central to the question of understanding the roles of physics, geology, and dynamics in driving atmospheres and climates on Jupiter.
While the Juno mission has significantly enhanced our understanding of the Jovian atmosphere in every orbit through remote sensing, in-situ observations are essential for validating the models, studying the composition, and capturing the dynamic processes of gas giants. The singular in-situ observation made by the Galileo Probe in 1995 disagrees substantially with standard Jovian atmospheric models, leading scientists to believe the entry site may have happened to be one of the least cloudy areas on Jupiter. This suspicion highlights a strong limitation of free-fall probing. A logical next step, then, would be a mission with actively controllable probes that stay in the atmosphere of a gas giant for an extended duration. The 2016 NIAC Phase I study on ``WindBots'' by Stoica et al. found that an adaptive-wing glider and a lightweight, quasi-buoyant vehicle would be viable options for obtaining lift from updrafts; however, autonomously navigating in the highly uncertain and turbulent flow field remains a major challenge.
The closest terrestrial analog to Jupiter's atmosphere would be tropical cyclones (TCs) on the Earth. Although the formation mechanism of a TC is very different from that of the Great Red Spot (GRS) on Jupiter, the resulting phenomena, namely a highly turbulent, strong wind field and a localized, dynamic nature, are similar, hence providing a suitable test ground for future missions to Jupiter's atmosphere. This paper develops autonomous small unmanned aircraft system (sUAS) in-situ TC sensing through a simulated environment of cooperative control of distributed autonomous multi-agent systems deployed into the eye of a TC. The main objective is to test various observing systems from single and multiple sUAS platforms close to the eyewall of the storm to capture essential measurements to be used in explaining the Jovian environment. Preliminary results demonstrated successful sUAS flight optimization for maximizing improvement of the quality of key measurements (e.g., 3-dimensional wind velocity, pressure, temperature, and humidity). Simulated sUAS flight toward the inner core of the TC boundary layer made high-resolution meteorological observations and supplemented existing partial knowledge for a better estimate of TC intensity. This was a great addition to the task of testing sUAS technology, but few methods have focused on the critical region of the storm environment, and no data-driven optimal system design had been developed to gain more information through exploring target locations. Anticipatory sUAS routing lowered the overall energy usage and maximized the reduction of forecasting error by exploring and sampling unobserved cells along the path to a target location.
This paper analyzes how online updating from sUAS-collected meteorological data would benefit hurricane intensity forecasting, considering the temporal variation of the uncertainty. Unobserved heterogeneity and randomness of the data produce multiple modes in the probability distribution at each location. A huge collection of multiple sources of granular microscopic data at each location may result in the loss of multivariate information if not retrieved properly. It is important to quantify the uncertainty of the prior belief and update the posterior when critical observations are obtained. However, traditional entropy theory cannot handle i) sequential learning of multimodal multivariate information; ii) dynamic spatiotemporal correlation; and iii) the importance of an observation for posterior approximation. In this paper, we advance autonomous in-situ sensing under highly uncertain and turbulent flow through multivariate multimodal learning, analyzing the similarities between the different types of mixture distributions of multiple variables and allocating a cluster to each group across high-dimensional time and space. Specifically, our method can track the structural information flow of temporally correlated multivariate multimodal data and automatically update the posterior, with weights given to new observations through an iterative process. Extensive experiments on hurricane ensemble forecasting data demonstrate the superior performance of our method over state-of-the-art baselines across various settings.
Recent advances in computer vision for space exploration have handled prediction uncertainties well by approximating multimodal output distributions rather than averaging the distribution. While these advanced multimodal deep learning models could enhance the scientific and engineering value of autonomous systems by making optimal decisions in uncertain environments, sequential learning of this approximated information has depended on unimodal or bimodal probability distributions. In a sequence of information learning and transfer decisions, traditional reinforcement learning cannot accommodate the noise in the data that could be useful for gaining information from other locations, and thus cannot handle multimodal and multivariate gains in its transition function. Still, little attention has been paid to learning and transferring multimodal space information effectively to maximally remove the uncertainty. In this study, a new information-theoretic approach overcomes the traditional entropy approach by actively sensing and learning information in a sequence. In particular, the autonomous navigation of a team of heterogeneous unmanned ground and aerial vehicle systems on Mars outperforms benchmarks through indirect learning.
Formulating a cost function with an appropriate valuation of information is necessary when knowledge of the information at one time and/or space gives conditional attributes of information at another time and/or space. This model, Sequential Multimodal Multivariate Learning (SMML), outputs informed decisions conditioned on the cost of exploration and the benefit of uncertainty removal. For instance, given an observable input, SMML is trained to infer the posterior from samples taken from the same multimodal and multivariate distribution, approximate gains, and make optimal decisions. The utility is the usual metric to be optimized based on the difference between the prior and posterior tasks, and it tells us how well the model is improving the data distribution after observation. Recall that, in general, it does not suffice to learn from average values, as in the standard reinforcement learning problem, to solve this kind of task; for this reason, SMML extends the capabilities of deep learning models for reinforcement learning whose reward for each action is restricted to unimodal or univariate distributions. In highly uncertain conditions, this reduction of entropy is vital to any optimization platform employed in the robust, efficient, autonomous exploration of the search space. To overcome Shannon's limitation in multimodal learning, we consider both standard deviation and entropy. We target cells with the highest importance of information, distinguishing between two cells with identical entropy but different values of information.
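A toy illustration of scoring cells by combining entropy with standard deviation is sketched below; the weighting and the histogram-based entropy estimate are assumptions for illustration only, not the SMML formulation itself.

```python
import numpy as np

def cell_information_value(samples, n_bins=10, w_entropy=1.0, w_std=1.0):
    """Score a grid cell by combining the Shannon entropy of its (possibly
    multimodal) sample histogram with the sample standard deviation, so that
    two cells with identical entropy but different spread rank differently."""
    hist, _ = np.histogram(samples, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()
    return w_entropy * entropy + w_std * np.std(samples)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Two cells: unimodal vs. widely separated bimodal wind-speed samples.
    cell_a = rng.normal(20.0, 1.0, 500)
    cell_b = np.concatenate([rng.normal(10.0, 1.0, 250), rng.normal(40.0, 1.0, 250)])
    scores = {"cell A": cell_information_value(cell_a),
              "cell B": cell_information_value(cell_b)}
    print("explore first:", max(scores, key=scores.get), scores)
```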
Predictive routing is effective in knowledge transfer; however, it ignores information gained from probability distributions with more than one peak. Consider a network with a grid laid on top, where each cell represents a small geographical region. To find an optimal route from an origin cell to a destination, forecasting the conditions of intermediate cells is critical. The routing literature has not used data observed at one location to forecast conditions at distant, non-contiguous, unobserved locations. We aggregate the data from all the grid cells and cluster cells that have similar combinations of probability distributions. When one cell of a cluster is explored, the information gained from the explored cell can partially remove uncertainty about the conditions in distant, non-contiguous, unexplored cells of the same cluster. With this new framework, we explore the best options to travel with partial, sequential, and mixed information gain.
We use observations obtained en route to infer the most likely conditions in unobserved locations. While distant unobserved locations may not share any inherent correlation with locally observed locations, classification errors by the image classifier may be correlated with certain image features found in different locations on the Martian surface. By clustering pixels with similar classifications, we gather evidence en route that either supports or fails to support the hypothesis that the image classifier is correctly classifying the different terrain types. The two-step process (clustering, then posterior update) updates the state estimation of unobserved locations when navigating the Jezero region of Mars in environments where a prior belief is provided but contains high uncertainty. In 55 out of 100 runs of the Monte Carlo simulation, the optimal expected-travel-time path based on the prior SPOC map resulted in the rover becoming stuck due to misclassification. Real-world misclassifications are expected to be lower than those of the ground truth map used in this research. Excluding runs in which the rover would have been stuck using the prior map, the median travel time was improved by 1 hour using the posterior. Worst-case outlier travel time is improved by 2.32 hours, and 75th-percentile performance is improved by 1.61 hours. Over a mission spanning many months, this saving adds significant additional time for scientific experiments.
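The clustering-then-posterior idea can be sketched in a few lines. The cells, clusters, prior probabilities, and classifier-reliability parameter p_correct below are all hypothetical, and the update is a deliberately simplified stand-in for the method described above.

```python
# Prior probability (from the orbital terrain-class map) that each cell is
# "safe", grouped into clusters of cells whose image features look alike.
prior_safe = {"c1": 0.9, "c2": 0.85, "c3": 0.4}
clusters = {"A": ["c1", "c2"], "B": ["c3"]}

def update_cluster(prior_safe, clusters, observed_cell, observed_safe, p_correct=0.8):
    """Two-step update: find the observed cell's cluster, then apply a Bayes
    update to every unobserved cluster-mate, treating the en-route observation
    as (imperfect) evidence about the prior map's trustworthiness for that
    cluster. p_correct is a hypothetical classifier reliability."""
    cluster = next(k for k, cells in clusters.items() if observed_cell in cells)
    posterior = dict(prior_safe)
    posterior[observed_cell] = 1.0 if observed_safe else 0.0
    for cell in clusters[cluster]:
        if cell == observed_cell:
            continue
        p = prior_safe[cell]
        # Likelihood of the observation given the cluster-mate's map label.
        like_safe = p_correct if observed_safe else 1.0 - p_correct
        like_unsafe = 1.0 - p_correct if observed_safe else p_correct
        posterior[cell] = like_safe * p / (like_safe * p + like_unsafe * (1.0 - p))
    return posterior

if __name__ == "__main__":
    # The rover drives over c1 and finds it is actually unsafe (misclassified);
    # belief in cluster-mate c2 being safe drops accordingly.
    print(update_cluster(prior_safe, clusters, observed_cell="c1", observed_safe=False))
```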
Modern space missions are requiring more autonomy and on-board processing capabilities. To accomplish this, high-performance computers that can operate in harsh space environments (vibration, thermal, and radiation) are required. The poster will explore processor, form factor standard, architecture, and analysis decisions made during the development of a GPU-based single-board computer for space applications.
The intent is that this poster would touch on the following Topics of Interest:
Components, Radiation, and Packaging: Current Rad-Hard-by-Design processors may not provide the performance needed for on-board processing. Qualifying modern commercial processors for use in space applications is one path to solve the performance bottleneck. This may involve radiation testing of the processor and associated DDR memories, along with a PEMs qualification and screening, to gain confidence that the parts will operate in the space environment. The poster would briefly describe radiation testing performed on the GPU device.
Computing Architectures: Using a commercial processor requires mitigation techniques to manage and recover from SEUs and SEFIs. The poster would briefly describe the choices made and the implementation of the fault mitigation approach.
Avionics: Adopting an industry standard for board form factor and protocols allows the board to be integrated into higher-level systems, providing modularity and scalability. VITA 78 SpaceVPX was chosen to provide these features. Additionally, leveraging open-source software tools and libraries allows for fast development and test efforts. The poster would briefly describe industry standards utilized to enable modularity and scalability.
Machine Learning/Neural Computing: The poster would briefly describe the benefits of using GPUs for AI/ML applications and the advantages of having this capability on the satellite.
This paper discusses the adaptation of Model-Based Systems Engineering (MBSE) into CubeSat lifecycle development. This adaptation involves transitioning from traditional satellite design practices to a model-based design approach by developing system models required for analysis, trade-offs, verification and validation (V&V). This approach has been applied to the launch-ready CubeSat, EIRSAT-1, which has been designed and developed by students and staff at University College Dublin (UCD), Ireland. The model contains several key features and components of EIRSAT-1. This poster presents the integration and verification of the communication link between the spacecraft and the Ground Segment (GS) within an MBSE framework.
Future planned space telescopes, such as the HabEx and LUVOIR telescope concepts and the recently proposed Habitable Worlds Observatory, will use high-contrast imaging and coronagraphy to directly image exoplanets for both detection and characterization. Such instruments will achieve the ~10^10 contrast level necessary for Earth-like exoplanet imaging by controlling thousands of actuators and sensing with thousands of pixels. Updates to the wavefront control actuators will need to be computed within seconds, placing unprecedented requirements on the real-time computational ability of radiation-hardened processors. In this work we characterize the wavefront sensing and control algorithms and estimate their performance based on publicly available benchmark results for currently available space-rated processors.
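A back-of-the-envelope estimate of the per-update compute load can be sketched as below. The pixel and actuator counts, the assumed Jacobian-based (electric-field-conjugation-style) update structure, and the sustained-throughput figures are illustrative assumptions, not results from this work.

```python
def control_update_flops(n_pix, n_act):
    """Rough FLOP count for one Jacobian-based wavefront-control step:
    a Jacobian-transpose multiply (2*n_pix x n_act, with the complex
    focal-plane field stored as 2 reals per pixel) followed by a dense
    regularized solve in actuator space."""
    matvec = 2 * (2 * n_pix) * n_act          # J^T e (one multiply + one add per entry)
    solve = (2.0 / 3.0) * n_act ** 3          # Cholesky-style solve of (J^T J + a*I)
    return matvec + solve

if __name__ == "__main__":
    n_pix, n_act = 5000, 2 * 48 ** 2          # hypothetical dark-hole pixels, two 48x48 DMs
    flops = control_update_flops(n_pix, n_act)
    # Sustained-throughput figures below are placeholders, not measured benchmarks.
    for name, gflops in [("hypothetical 0.2 GFLOPS rad-hard CPU", 0.2),
                         ("hypothetical 4 GFLOPS rad-hard multicore", 4.0)]:
        print(f"{name}: ~{flops / (gflops * 1e9):.0f} s per update ({flops / 1e9:.1f} GFLOP)")
```

If the regularized pseudo-inverse is instead precomputed, the on-board cost per step drops to roughly the matrix-vector term alone, which is the kind of trade-off such a characterization would weigh.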
Autonomous Cyber-Physical Systems (CPS) play a substantial role in many domains, such as aerospace, transportation, critical infrastructure, and industrial manufacturing. However, despite the popularity of autonomous CPS, their susceptibility to errant behavior is a considerable concern for safety-critical applications.
Testing and simulation are the most common methods used in practice to ensure correctness of autonomous CPS, because of their ability to scale to complex systems. In many domains, CPS complexity and scale have been growing exponentially and will continue to expand due to the rapid integration of machine learning components and increasing levels of autonomy, as in unmanned aerial vehicles or self-driving cars.
Traditional software test methodologies that depend extensively on code coverage are expensive, difficult to manage, and ineffective in verifying CPS behavior. Moreover, these methodologies lack flexibility when dynamic CPS control requirements and plant parameters evolve through continuous state space and time.
We investigate ways to improve automated test case generation for autonomous CPS through coverage-guided state-space exploration, which systematically generates trajectories to explore desired (or undesired) outcomes. For this, we introduce a novel coverage metric and integrate it with techniques such as fuzz testing and model predictive control (MPC) to generate test cases.
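A minimal sketch of coverage-guided test generation in this spirit is shown below: coverage is approximated by a discretization of the visited state space, and mutated inputs that reach new cells are kept as seeds. The simulator, mutation operator, and cell mapping are assumed callables, not the paper's actual metric or tooling.

    import random

    def coverage_guided_fuzz(simulate, sample_input, mutate, cell_of, budget=1000):
        """simulate(x) -> (trajectory, violated); cell_of(state) -> hashable cell id."""
        covered, corpus, failures = set(), [sample_input()], []
        for _ in range(budget):
            candidate = mutate(random.choice(corpus))
            trajectory, violated = simulate(candidate)        # roll out the CPS model
            if violated:
                failures.append(candidate)                    # undesired outcome found
            new_cells = {cell_of(state) for state in trajectory} - covered
            if new_cells:                                     # reward novel state coverage
                covered |= new_cells
                corpus.append(candidate)
        return failures, covered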
The Resilient Exploration And Lunar Mapping System 2 (REALMS2) continues the REALMS project [1]. It focuses on exploring and mapping lunar environments, with a special focus on resilience, using ROS 2. REALMS2 uses multiple sensors for more robust mapping results and a multi-robot architecture based on a mesh network communication system. Multiple rovers can interact, exchange data, and act as relays for other rovers to communicate with the base station via their ad-hoc mesh network.
Software verification is an important approach to establishing the reliability of critical systems. One important area of application is in the field of robotics, as robots take on more tasks in both day-to-day areas and highly specialised domains. Our particular interest is in checking the plans that robots are expected to follow to detect errors that would lead to unreliable behaviour. Python is a popular programming language in the robotics domain through the use of the Robot Operating System (ROS) and various other libraries. Python’s Turtle package provides a mobile agent, which we formally model here using Communicating Sequential Processes (CSP). Our interactive toolchain CSP2Turtle with CSP models and Python components enables plans for the turtle agent to be verified using the FDR model-checker before being executed in Python. This means that certain classes of errors can be avoided, providing a starting point for more detailed verification of Turtle programs and more complex robotic systems. We illustrate our approach with examples of robot navigation and obstacle avoidance in a 2D grid-world. We evaluate our approach and discuss future work, including how our approach could be scaled to larger systems.
Active debris removal in space has become a necessary activity to maintain and facilitate orbital operations. Current approaches tend to adopt autonomous robotic systems which are often furnished with a robotic arm to safely capture debris by identifying a suitable grasping point. These systems are controlled by mission-critical software, where a software failure can lead to mission failure which is difficult to recover from since the robotic systems are not easily accessible to humans. Therefore, verifying that these autonomous robotic systems function correctly is crucial. Formal verification methods enable us to analyse the software that is controlling these systems and to provide a proof of correctness that the software obeys its requirements. However, robotic systems tend not to be developed with verification in mind from the outset, which can often complicate the verification of the final algorithms and systems. In this poster, we describe the process that we used to verify a pre-existing system for autonomous grasping which is to be used for active debris removal in space. In particular, we formalise the requirements for this system using the Formal Requirements Elicitation Tool (FRET). We formally model specific software components of the system and formally verify that they adhere to their corresponding requirements using the Dafny program verifier. From the original FRET requirements, we synthesise runtime monitors using ROSMonitoring and show how these can provide runtime assurances for the system. We also describe our experimentation and analysis of the testbed and the associated simulation. We provide a detailed discussion of our approach and describe how the modularity of this particular autonomous system simplified the usually complex task of verifying a system post-development.
The complexity and flexibility of autonomous robotic systems necessitate a range of distinct verification tools. This presents new challenges not only for design verification but also for assurance approaches. Combining distinct formal verification tools while maintaining sufficient formal coherence to provide compelling assurance evidence is difficult, and such efforts are often abandoned in favor of less formal approaches. In this poster we demonstrate, through a case study, how a variety of distinct formal techniques can be brought together to develop a justifiable assurance case. We use the AdvoCATE assurance case tool to guide our analyses and to integrate the artifacts from the formal methods that we use, namely FRET, CoCoSim, and Event-B. While we present our methodology as applied to a specific Inspection Rover case study, we believe that this combination provides benefits in maintaining coherent formal links across development and assurance processes for a wide range of autonomous robotic systems.
WEDNESDAY, JULY 19, 2023
At the start of the 21st century, we are in an exciting time in planetary science with an increased cadence of missions, an increasing number of nations and entities engaged in exploration, and near-term human missions to the Moon and beyond. These are occurring within a broader set of changes in aerospace: lowered barriers to accessing deep space, more space-qualified off-the-shelf commercial hardware and software, and a vibrant start-up culture. From the perspective of a planetary scientist working missions to the Moon, Mars, Venus, and the outer solar system as well as Earth observation, I will highlight some of the challenges and opportunities at different destinations and mission classes. On-board science data processing with real-time feedback to mission operations, autonomous fault tolerance/handling, lower cost avionics as mission enabling technology, and approaches for integration of real-time science instrument data into astronaut lunar traverses are some key opportunities in planetary exploration.
Introduction
The Lightweight Surface Manipulation System (LSMS) is a family of robotically operated cranes designed to handle a wide range of payload off-loading and general coarse manipulation tasks on a planetary surface, such as the Moon or Mars. In this talk, some of the sources of uncertainty that arise in the manipulation of this family of cable-actuated cranes will be discussed, as well as some of the mechanisms leveraged to mitigate that uncertainty.
This presentation will discuss a subset of the sources of uncertainty that impact autonomous in-space servicing, assembly, and manufacturing missions. These include robotic manipulator modeling uncertainties in both kinematics and dynamics, perception error associated with machine learning models for pose estimation, and sensor noise. Mitigation strategies will be discussed, including the incorporation of capture envelopes in the design of robotic tools and the selection of robot goal poses to minimize end-effector sensitivity in manipulators with redundant degrees of freedom.
Discussion
Software designed to run in space has traditionally been bespoke, one-off, low-compute-overhead applications that perform a specific task on specific hardware. Scalability and composability have not been design requirements for spaceflight software because there’s been no need. However, as Earth orbit is commoditized and deep space infrastructure is deployed for the next wave of interplanetary exploration, the need to scale, compose, and upgrade services over the duration of a mission goes from being a fault recovery mechanism to being a necessity.
NASA and our industrial and international partners are planning to establish a sustained presence on the lunar surface as part of the Artemis campaign. Robotics and autonomous systems are key technology areas which will enable remote operations on the lunar surface in applications such as logistics, maintenance, science utilization, construction and outfitting. The lunar surface will serve as a proving ground for robotics with increasing levels of autonomy, flexibility, and resilience that will enable the human exploration of Mars. I will be giving a brief overview of the challenges associated with robotic remote operations in architectures designed around human explorers and the plans that NASA is developing to bridge the gap between terrestrial innovation and the demands of space applications.
Deployment of robots will revolutionize space exploration in the coming years, both for manned and unmanned missions; however, the success of these robots is linked as much to advances in sensors, manipulators, and AI algorithms as it is to the robustness of the underlying computational architectures that support the software and hardware. Most space missions require the use of specialized – computationally limited – radiation-tolerant hardware, which in turn depends upon specialized flight software (FSW). This is as true for robots as it is for the ISS or Gateway. Because of this specialization, FSW has traditionally been developed via “clone-and-own” processes, where software from a previous mission is copied and adapted. An alternative approach, increasingly accepted by both the robotics and space-flight communities, suggests that developing and sharing component-based, reusable software will increase the number, scope, and innovation of space missions. This requires that complex robot and flight software be developed using a common framework of shared libraries and tools, such as ROS2. As such, TRACLabs and Johns Hopkins APL have recently been developing the BRASH (Bridge for ROS2 Application to Space Hardware) toolkit of software components to enable ROS2 to interoperate with existing flight software tools. As there is no “one size fits all” mission architecture, we have instead focused on developing a number of components that mission designers can draw upon to meet their specific needs. While still a work in progress, our toolkit includes ROS2-to-FSW bridge utilities for message translation and conversion between ROS2 and common FSW frameworks such as NASA core Flight Software (cFS) and JPL’s F´; plugins for integration with both ground and flight systems; utilities for time synchronization, parameter and event management; and integration with TRACLabs' PRIDE electronic procedure application software. You can find more about our tools, which are being made available to the community, at https://traclabs-brash.bitbucket.io/.
Ransomware is a prevailing concern across sectors, wreaking havoc on operations and causing cascading failures - often cyber-physical in nature. The space sector has not been immune to ransomware attacks; however, there has yet to be a publicly disclosed ransomware built for a space vehicle itself. A ransomware attack against a space vehicle would need to be carefully crafted to mitigate the risk of destroying the underlying functionality of the spacecraft, while still achieving its purpose - denying access until a ransom is paid. Through static code analysis, this paper proposes an approach for deploying a ransomware attack against a space vehicle running NASA's core Flight System (cFS).
Space flight software is no longer a closely guarded secret for space vehicle developers, owners and operators - it is open-sourced and available as a commercial-off-the-shelf module. Despite its wide availability, limited security research has been conducted on flight software in an unclassified environment. This paper proposes a research agenda that outlines critical challenges for space flight software and proposes a series of research and development efforts that could ultimately aid in developing inherently secure space vehicles.
Two issues to be discussed in the available 30-minute time slot: 1) mission critical networks for space flight systems, and 2) deep nano-meter, radiation hardened embedded FPGA technology advancements and advantages.
As aerospace and defense firms work toward developing future air and space platforms, decisions about which hardware and software elements to use in their designs need to be made. One significant design choice is the underlying instruction set architecture (ISA) that defines the interface between the software and hardware of a processor. A relatively new ISA called RISC-V has emerged as an open-source alternative to commercial ISAs. Verifiable security, a frozen base specification for long-term stability, design for extensibility, and no license fee for modifications make RISC-V particularly well suited for aerospace and defense applications. It is important for the architect to evaluate not only the RISC-V core but also the interaction of the core with other subsystems, data accesses, and interrupts, in the scope of the target application at early design stages, in order to minimize design bugs, reduce cost, and optimize the design. In this work, we developed system models of not only the RISC-V core but also the entire SoC into which the RISC-V cores were plugged. Using the system model, we are able to run target applications/benchmarks on the RISC-V core and evaluate performance and power for different clock frequencies, custom instructions, topologies, cache associativity degrees, cache replacement policies, cache sizes, write-back policies, bus widths, buffer sizes, bus speeds, memory types, memory widths, and interconnect protocols such as TileLink. For each system model run, the simulation generates various statistics, including details on pipeline stalls, pipeline utilization, execution unit utilization, execution unit buffer occupancy, instruction and data cache accesses, cache hit ratio, number of evictions, write-backs, memory throughput, cycles per instruction, memory access latency, and network latency. Using the system model, we were able to obtain debug logs from each SoC subsystem, including the RISC-V cores. Using the pipeline traces for each instruction, we were able to verify the behavior of custom instructions introduced to improve our application performance. Performance and functional requirements were provided as input to the system model, and faults were injected into the system model to determine the expected performance of the system under failure. Based on these early results, our SoC design was then updated to improve fault tolerance and application performance under faults by 30% by adding redundant cores and error correction mechanisms.
The growing need to provide reliable high bandwidth, low latency communication services to remote and underserved locations is being addressed in both the commercial and defense space sectors. The result is the proliferation of low earth orbit (LEO) satellite constellations with complex radio frequency (RF) communications payloads. To fully realize the potential of networked communications satellites in LEO, satellite designers are creating electronically-steerable multi-beam antenna systems. These antenna systems rely on digital beamforming, a technique which enables an active electronic array antenna to efficiently direct RF energy to the intended recipient. Digital beamforming is a complex technique which requires significant parallel computing resources to operate in real-time at rates of hundreds of billions of complex multiply and accumulate operations per second. Availability of highly integrated semiconductor devices with heterogeneous computing resources is enabling unprecedented flexibility for designers of satellite RF systems. With newly available products, designers can implement reconfigurable processing platforms which, under software control, can respond to evolving mission parameters with a variety of disparate compute resources which enable efficient digital beamforming as well as optimal real-time control of modulation techniques, in-flight updates of waveforms, and AI-based decision-making. AMD Versal ACAP (Adaptive Compute Acceleration Platform) devices combine application class processors, real-time processors with lock-step capabilities, AI acceleration vector processors, DSP blocks capable of 27 x 24 multiplication operations with 58-bit accumulation, and an infinitely reconfigurable programmable logic fabric in a single chip form-factor with sufficient radiation tolerance for LEO and GEO space missions. The density, variety, and performance of the various computing resources in the Versal ACAP devices enable a significant leap forward in the drive toward miniaturization of satellite payload processing systems, and allow designers to achieve autonomous decision making on orbit based on highly complex analysis of high bandwidth data from multiple sensors. In this paper we review the application of Versal ACAP technology to a digital beamforming application, including a discussion of the implementation of the matrix multiplication using the array of vector processing engines integrated into the monolithic Versal device. We will also provide the latest information on the radiation characterization of the Versal ACAP family, including test data on the Adaptive Intelligent Engines and the gigabit transceivers. Current status of qualification for space-flight applications and product availability will also be covered.
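As a simple illustration of the beamforming kernel discussed above, the sketch below forms multiple beams as a complex weight matrix multiplied by the element sample matrix; on a Versal-class device this multiply would be mapped onto the vector/AI engines and programmable logic. The array sizes, random weights, and the sample rate used in the MAC-rate comment are illustrative placeholders, not the paper's design parameters.

    import numpy as np

    n_elements, n_beams, n_samples = 256, 16, 1024
    rng = np.random.default_rng(0)
    element_samples = (rng.standard_normal((n_elements, n_samples))
                       + 1j * rng.standard_normal((n_elements, n_samples)))
    # One complex steering weight per (beam, element) pair.
    weights = np.exp(-1j * rng.uniform(0.0, 2.0 * np.pi, (n_beams, n_elements)))
    beams = weights @ element_samples      # (n_beams, n_samples) beamformed outputs
    # At an assumed 1 GS/s sample rate, this multiply implies
    # n_beams * n_elements * 1e9 (~4.1e12 here) complex multiply-accumulates per
    # second, hence the need for massively parallel on-chip compute resources.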
Commercial, government, and defense space systems will operate in some level of naturally occurring radiation. Additionally, terrestrial systems are often exposed to natural or man-made radiation effects. These radiation environments include space, nuclear events, and atmospheric neutrons.
In this session we will discuss how designers can use automated methods to meet their radiation environmental requirements in their microelectronic designs. There are three key areas of focus: device-level radiation modeling, functional safety (FuSa) design methodologies, and fault tolerant IPs for radiation designs. An interesting aspect of this discussion is that Synopsys is working with JPL and other partners to leverage commercial capabilities that enable various levels of radiation environments, from low earth orbit to strategic radiation hardened solutions.
Another area of focus is high-performance space computing. An existing IP library and suite of EDA tools allow designers to prioritize low power, processing bandwidth, fault tolerance, and radiation performance. The EDA tools can manage dual-core lock-step (DCLS) processors with very little manual intervention, within an emulation and verification environment that can model single- or multiple-core designs.
Access to space is increasingly fundamental to our modern world. Launch volume is rapidly increasing each year, requiring increasing spaceport and ground support capabilities. The design, ownership, and operation of spaceports are bespoke, and their cybersecurity considerations are always an afterthought. This paper focuses on the critical role spaceports play in access to space and their vulnerabilities to cyber attacks. Further, it demonstrates that without strong cybersecurity, adversaries can easily deny access to space.
The Cyber Analysis Visualization Environment (CAVE) software system is a collaboration between the Systems Engineering and Mission Systems and Operations Divisions at JPL. CAVE is an easy-to-use, model-based cyber threat visualization, analysis, and assessment platform designed to respond to the growing threat of cyber attacks targeting space missions. CAVE was originally designed and developed as a Python-based desktop application with integrated 3D visualization, user interface, and a cyber analysis engine, before a recent redesign to the current client/server web-based application architecture. This new client/server web application architecture allows users to use CAVE on multiple platforms, requiring minimal or no software installation on their local systems while still allowing for a level of customizability at the user level. CAVE's client/server architecture ensures that sensitive or proprietary data always resides securely on the CAVE analysis server, thereby avoiding the storage of any sensitive data on the end user's local machine. At a high level, the CAVE web server exposes a REST API to allow the web GUI to communicate with the analysis engine. The web server has user authentication capabilities, and restricts access to model data based on the assigned group of the authenticated user. The server is architected with a plugin structure in mind for analyses, so that anyone opting to deploy the CAVE server can easily write and deploy additional model analyses. Model data is stored in a Mongo database and delivered to the client via authenticated HTTPS requests. The client handles all interactions with the web server and provides feedback to the user on the status of server requests. CAVE's web-browser user interface provides an intuitive, 3D presentation of complex network models, containing both physical and virtual network assets, which allows mission cybersecurity engineers to easily swap between a selection of visual layouts and execute a variety of cybersecurity analysis algorithms designed to help users better understand possible network vulnerabilities and defenses. This web-based user interface model allows for ease of expansion and potential future additions to the user experience, including real-time collaboration and sharing capabilities, and the interactive exploration of large cybersecurity datasets. In this paper we present CAVE's software architecture and use cases, and discuss CAVE's value as an intuitive tool for cyber threat identification and assessment.
Mirabilis Design enables creation and reuse across the continuum of concept to mission lifecycle. VisualSim Architect is a system modeling and simulation platform with IP modules for rapid model construction and simulation framework with decision support system including reporting, optimization capabilities and visualization. The environment integrates with SysML use cases, workload traces, requirements database and existing simulators. VisualSim can be used in the design of semiconductors, embedded systems, software and networking.
Autonomous agents performing in-space assembly tasks must make informed decisions about the sequence in which they assemble a structure. In order for these algorithms to be trustworthy, care must be taken to ensure that there is reproducibility between low fidelity and high-fidelity simulation results as well as a strong correlation between the simulation results and the physical behavior of the hardware performing the operations. A method of assembly sequence optimization using surrogate models will be presented.
In this talk, we examine the issue of maintaining desirable safety characteristics in an operational environment under uncertain conditions. While operating in an uncertain environment, agents are faced with both an uncertain model of themselves, and of other agents. This leads to challenges in enforcing operational safety measures, such as minimum separation, desired speed, or spatial constraints. To address this challenge, we consider the issue of an autonomous quadcopter operating in an unsafe and uncertain environment. We first demonstrate derivation of an Exponential Control Barrier Function (ECBF) from quadrotor dynamics, assuming no parametric uncertainty or external disturbance, in order to enforce minimum separation. This exercise highlights both the impact of external disturbances on vehicle safety and establishes a baseline comparative method to build upon. We will then extend this derivation to adapt to parametric uncertainty, resulting from (for example) external disturbance or faulty sensors. After establishing this uncertainty-robust control method, we will explore ways to incorporate desirable airspace safety measures using Control Lyapunov Functions (CLFs), and how such safety measures can be made likewise uncertainty-robust in order to assure both safe flight (through ECBF satisfaction) and coordinated flight (through CLF optimization). Finally, we will discuss how approaches used in atmospheric flight uncertainty management and adaptation can be extended to in-space autonomy.
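A highly simplified sketch of an exponential CBF safety filter for minimum separation is given below, using a double-integrator approximation of relative motion rather than the full quadrotor dynamics derived in the talk. The safety function, gains, and minimum-norm correction are illustrative assumptions.

    import numpy as np

    def ecbf_filter(p, v, u_nom, d_min=2.0, k1=3.0, k2=2.0):
        """p, v: relative position/velocity (3,); u_nom: nominal relative acceleration."""
        h = p @ p - d_min ** 2                 # safety function (>= 0 means separated)
        h_dot = 2.0 * (p @ v)
        # ECBF condition for relative degree 2:  h_ddot + k1*h_dot + k2*h >= 0,
        # with h_ddot = 2*(v@v + p@u), rearranged as a half-space constraint on u.
        b = -(v @ v) - 0.5 * (k1 * h_dot + k2 * h)
        if p @ u_nom >= b:
            return u_nom                       # nominal command already satisfies ECBF
        # Minimum-norm correction onto the constraint boundary p @ u = b.
        return u_nom + ((b - p @ u_nom) / (p @ p)) * p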
All actions in any system are a result of solving various decision-making problems, at various scales. Successful and safe functioning of the system depends on the ability of decision makers, human or machine, to reach successful solutions in time – the factor to which system uncertainty may be reduced. In this talk, we discuss one approach to reducing system uncertainty: increasing the likelihood that the upcoming decision-making problems are solvable on a required time budget, in principle. As a side effect, the approach points to a dynamic determination of the entity entrusted with solving a specific decision-making problem in multiagent environments.
The architecture of robotic systems must take into account many unique factors not readily apparent to the outside observer. Using the Mars Sample Recovery Helicopter's tube acquisition subsystem as a case study, the factors driving the architecture decisions and the methods used to determine the path forward will be discussed. Current limitations in space robotics that are pain points will be highlighted as opportunities for the community to advance the field.
The lunar environment is significantly harsher than that of Mars. Lunar surface explorers will be operating in vacuum and have to cope with extreme temperature swings, extreme cold (30–60 K, down to 18 K in the coldest Permanently Shadowed Regions (PSRs)), extended periods of darkness, tribocharging (static electricity build-up), and abrasive and hard-to-remove lunar dust. In addition, many areas of interest such as PSRs and lava tubes have steep slopes, uneven terrain, and temporary or permanent communication blackouts that preclude teleoperation. As a result, the model of supervised autonomy developed for operating Martian rovers is unlikely to meet the needs of lunar surface robots. Operations that have been customarily performed by mission ops on Earth will need to be performed by software on the robot instead. This presentation will discuss the environmental challenges of the Moon and the ways in which they will demand new capabilities from robotic software.
In this talk we discuss the need for a new paradigm in robotic space exploration that can effectively explore challenging destinations beyond Mars. The current incremental approach used for Mars exploration, based on detailed environmental knowledge, is not applicable in environments with long cruise times and limited launch opportunities. The proposed paradigm, called "into the unknown," emphasizes versatility and intelligent, risk-aware autonomy. The talk introduces the Exobiology Extant Life Surveyor (EELS), a highly versatile and intelligent snake-like robot designed for exploring Enceladus vents and other challenging targets. EELS has a high degree of mechanical flexibility, allowing it to adapt to unknown environments. It is equipped with a novel autonomy framework called NEO, which enables control and decision-making in extreme and unknown environments. Prototypes of EELS have been developed and successfully tested for surface and vertical mobility. The talk highlights the importance of versatility and intelligence in addressing environmental uncertainty and illustrates how EELS outperforms Mars rovers in coping with unknown terrain. The proposed paradigm and the development of EELS represent a significant shift in robotic exploration strategies for future missions beyond Mars.
Field Programmable Gate Array (FPGA) technology has been used extensively in space applications where the natural radiation environment presents major challenges to electronic parts. Commercial FPGA technology is trending to deep nanometer silicon processes, which impacts the availability of radiation-resilient FPGA chips. Space systems require long timeframes for development and launch, and often the electronics and code may become obsolete or require updating before the system can be launched. FPGA logic/fabric size continues to grow dramatically, which allows and practically requires more and more IP cores to be integrated within a chip. New IP cores and tools will be needed to enable space designs with commercial FPGA technology to withstand radiation. This paper discusses the challenges in designing FPGA-based space systems and potential open-source and commercial technologies that will be useful to space application developers. It also references an ongoing FPGA-based space telescope spectrometer design to discuss different aspects of complex FPGA design with mixed analog and digital circuits.
Advanced algorithms dictated by future mission needs are pushing space-borne computing requirements. Current platforms used in manned and unmanned spaceflight are limited by the intersection of radiation tolerance, power consumption, computing performance and safety critical features. Until recently, advanced instruction set architectures (ISAs), algorithm specific instructions, high speed external interfaces and high performance on-chip networks were eschewed from processor designs and current production spacecraft processors are based on past computing paradigms.
Advances in processor manufacturing, emerging ISAs and machine learning techniques will significantly impact future system-on-chip (SoC) designs, enabling true high-performance space computing. To better understand the computational requirements of modern spacecraft, a comprehensive set of benchmarks that include basic system characterization, high performance computing, navigation and landing, image recognition, route finding, data mining and machine learning are necessary to characterize candidate architectures. The analysis of these space flight focused algorithms will drive the design of next generation space computing SoCs.
SiFive is the Founder and Brand Standard for RISC-V processor IP. The RISC-V portfolio includes robust processor offerings that are integrated into solutions supporting many aspects required for present and future space missions, including hi-rel. SiFive will provide an overview of the RISC-V portfolio, in addition to engagement overviews with both the government and DIB and SiFive's desire to provide differentiation to the space community.
Welcome and introductory comments
He will explain how JPL's Chief Technology and Innovation Office explores and researches AI, analytics, and innovation, and how the Information Technology and Solutions Directorate (ITSD) supports advanced analytics, AI, and machine learning for Smarter Rovers, a Smarter Campus, and beyond.
We introduce model-agnostic approaches for explaining machine learning predictions and testing algorithms to ensure their robustness. The goal is to create tools that are broadly applicable to any existing and future machine learning algorithms and produce intuitive, user-friendly descriptions to encourage wider adoption. Our ultimate objective is to advocate for more responsible use of machine learning, which can have a deep and transformative impact on its applications.
Intelligent exploration is the practice of using AI to guide a user through the analysis of massive, multidimensional datasets. Generative AI can be used to create content (such as text and visualizations) and to ask questions in a more intuitive way. By coupling intelligent exploration and LLMs, we can create tools that explore complex datasets more effectively. This can lead to new discoveries and insights that would not be possible with traditional methods.
Panel Discussion: Explainable AI
This talk looks back at the Descent Image Motion Estimation System (DIMES), used successfully to reduce a critical mission risk during the landings of NASA's two Mars Exploration Rovers on Mars in 2004. We will see how the DIMES designers addressed challenges of uncertainty present in the environment, as well as those inherent in the time- and computationally constrained envelope within which their system had to operate.
Panel and Open-Mic Discussion
Space and planetary surface missions will continue to rely on and leverage robotics technology. As the scope of missions to new science and exploration destinations evolves, so will the robotics problems and associated software-centric solutions. Persistent robotics software gaps remain while new gaps will emerge driven by future mission needs. What would help is a greater collective effort to address the needs via use of common software frameworks that offer a range of solutions spanning robotic In-space Servicing, Assembly, and Manufacturing capabilities. This talk considers robotics software-related topics encountered over the course of several decades of experience. It touches on a variety of what are considered to be current gaps related to software-based robotic capabilities needed for on-orbit and surface missions. Considerations associated with flight and ground software for robotics are highlighted along with mission/flight system matters, including relationships to computing, cybersecurity, sensing & perception, control modes, interoperability, testing, and human interaction.
Panelists: Ivan Perez Dominguez, Shuan Azimi and Kalind Carpenter
Reinforcement Learning (RL) has become an increasingly important research area as the success of machine learning algorithms and methods grows. To combat the safety concerns surrounding the freedom given to RL agents while training, there has been an increase in work concerning Safe Reinforcement Learning (SRL). However, these new and safe methods have been held to less scrutiny than their unsafe counterparts. For instance, comparisons among safe methods often lack fair evaluation across similar initial condition bounds and hyperparameter settings, use poor evaluation metrics, and cherry-pick the best training runs rather than averaging over multiple random seeds. In this work, we conduct an ablation study using evaluation best practices to investigate the impact of Run Time Assurance (RTA), which monitors the system state and intervenes to assure safety, on effective learning. By studying multiple RTA approaches in both on-policy and off-policy RL algorithms, we seek to understand which RTA methods are most effective, whether the agents become dependent on the RTA, and the importance of reward shaping versus safe exploration in RL agent training. Our conclusions shed light on the most promising directions of SRL, and our evaluation methodology lays the groundwork for creating better comparisons in future SRL work.
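The following sketch shows the general RTA pattern studied in this kind of ablation: the RL policy proposes an action, and the RTA substitutes a backup action whenever a one-step prediction indicates a safety violation, with interventions logged for later analysis. The prediction, safety check, and backup controller are hypothetical callables, not the specific RTA approaches evaluated in the paper.

    def rta_step(state, policy, predict_next, is_safe, backup_controller):
        """Wrap one environment step with a Run Time Assurance filter."""
        action = policy(state)                    # unconstrained RL action
        if is_safe(predict_next(state, action)):
            return action, False                  # no intervention needed
        return backup_controller(state), True     # RTA intervention (logged for ablation)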
This paper describes the Autonomy System that flew on the Double Asteroid Redirection Test (DART) mission. A detailed description of the rules-based logic is provided, including the heritage of the system, how it was tailored to meet DART’s fault management philosophy, build process, testing, and necessary in-flight updates. The purpose of the Autonomy software was to implement fault protection against components’ off-nominal behaviors and to implement a safing strategy designed to place the spacecraft in a known, safe configuration in response to critical faults while Mission Operations worked on a resolution. Aside from maintenance and fault protection tasks, the onboard Autonomy software was key to guaranteeing the execution of time-critical sequences during the mission. The resulting system was also instrumental in being the tool of choice to address anomalies in other subsystems that were discovered as early as a month into flight and as late as a month before impact.
Accurate mapping of software requirements to tests can assure high software reliability through robust traceability, test coverage, and improved transparency. Software requirements change frequently across mission phases, so testable and measurable requirement maintenance and tracing are essential in all phases of the mission life cycle. In the development phase, predictable and controlled software system deployment, test, and integration can firmly support a mission's rapid innovation. In the operations phase, patches need to be applied periodically, and prompt evaluation and verification turnaround are critical. By integrating and exercising Natural Language Processing (NLP) and Machine Learning (ML) assisted model-based test engineering (MBTE), many of these challenges can be overcome. This paper will present a novel method, and lessons learned, from formalizing and automating the software requirement-to-test mapping, which allows engineers to review the recommendations generated by the automated system.
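One common way to bootstrap such a requirement-to-test recommendation, sketched below under the assumption that scikit-learn is available, is to rank tests for each requirement by TF-IDF cosine similarity and surface the top matches for engineer review; this is a generic NLP baseline, not the paper's specific MBTE pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def recommend_tests(requirements, tests, top_k=3):
        """Return, for each requirement string, the indices of the top_k most similar tests."""
        vec = TfidfVectorizer(stop_words="english")
        matrix = vec.fit_transform(requirements + tests)
        req_vecs, test_vecs = matrix[: len(requirements)], matrix[len(requirements):]
        sims = cosine_similarity(req_vecs, test_vecs)
        return {req: sorted(range(len(tests)), key=lambda j: -sims[i, j])[:top_k]
                for i, req in enumerate(requirements)}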
AI inferencing in space opens new opportunities for enhanced payload data processing and autonomous decision making on orbit. In this talk we review the silicon architecture and software tool flows which enable designers to use AMD’s radiation tolerant Versal adaptive SoCs for AI inferencing in space.
A method and design are described for a system that processes multiple data streams, utilizing a multicore asymmetric processing architecture, that eliminates data interrupts to the application processors. The design supports a deterministic environment for flight software in NASA’s Safe and Precise Landing – Integrated Capabilities Evolution (SPLICE) project. The SPLICE project develops sensor, algorithm, and compute technologies for Precision Landing and Hazard Avoidance (PL&HA) capabilities. The compute technology for SPLICE is the Descent and Landing Computer (DLC). The DLC hosts several SPLICE algorithms with high computational resource requirements that must be executed in a real-time and deterministic manner. The software runs on a custom Single Board Computer (SBC), with a Xilinx Ultrascale+ Multiprocessor System-on-a-Chip (MPSoC). Input data for the flight software is from a variety of sensors, unique with respect to data rate and packet size. A data path between the SPLICE sensors and algorithms is designed to efficiently deliver this data to the flight software using the MPSoC asymmetric processing cores and Field Programmable Gate Array (FPGA) fabric. This is implemented in a manner that isolates the application processors running the flight software from interrupts associated with the input data. By leveraging real-time processors on the MPSoC, and a structure with the appropriate interfaces in the shared memory on the SBC, the flight software can use the full set of application processors. The available utilization for each processor in this set is also maximized for the SPLICE applications, providing a sufficiently deterministic execution environment without the cost and overhead of a real-time operating system.
ROS2 (Robot Operating System version 2) is an open-source software framework and suite of libraries that facilitates robotics application development and multi-core processing. ROS2 is becoming the de facto standard for autonomous robotics applications, and recently, with the sponsorship of NASA and Blue Origin, Space ROS, a pre-qualified implementation of ROS2, has also started to be used for autonomous space applications such as landers and space robotics.
The ROS2 data workflow has been designed to provide robust and uniform access to sensor data via different supported middleware, including DDS and memory-sharing solutions. While this design has largely simplified the development effort required to write applications based on ROS2, processing performance has been compromised, and it is not uncommon to face long latencies in receiving sensor data or even non-negligible data losses.
The ROS2 community, and specifically the Real-Time Working Group (RTWG), has made a significant effort to improve this situation, resulting in the creation of the Static Single-Threaded and Multi-Threaded Executors. These executors have yielded substantial performance improvements in specific scenarios. However, the community has been unable to find a general approach to resolving the performance bottlenecks in ROS2 and has left it to the developer to find the optimal combination of executors for the use case at hand.
Other industries, such as finance, have reached ground-breaking speeds by parallelizing data processing algorithms, enabling real-time applications based on the so-called lock-free programming technique. This technique, although very complex to program correctly, can yield substantial performance gains when used appropriately. The authors of this paper have implemented a lock-free ring buffer and validated it on different onboard computers, with outstanding performance results for sensor data aggregation, processing, and even artificial intelligence.
The research presented here shows a ROS2 executor implemented on top of our lock-free ring buffer that can not only increase the data processing rate severalfold with respect to the above-mentioned executors, but also substantially reduce processing resource consumption. Moreover, a ROS2 multi-node variant of this executor has been implemented, combined with offline optimization based on genetic algorithms, that can improve the performance of virtually any ROS2 application scenario.
Successful performance results of this novel ring-buffer ROS2 executor and the multi-node variant are presented using the so-called reference system and validated on several space computers, including Unibap's iX5, Teledyne e2v's LS1046, and Xilinx UltraScale+.
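For readers unfamiliar with the technique, the sketch below shows the core of a single-producer/single-consumer ring buffer in which each side advances only its own index, so no mutex is required. It is written in Python purely to illustrate the structure; a production implementation such as the one described above would rely on atomic operations and careful memory ordering in compiled code.

    class SpscRingBuffer:
        """Minimal single-producer/single-consumer ring buffer (illustrative only)."""
        def __init__(self, capacity=1024):
            self._buf = [None] * capacity
            self._capacity = capacity
            self._head = 0   # advanced only by the producer
            self._tail = 0   # advanced only by the consumer

        def push(self, item):
            next_head = (self._head + 1) % self._capacity
            if next_head == self._tail:
                return False            # buffer full: drop or apply back-pressure
            self._buf[self._head] = item
            self._head = next_head      # publish only after the slot is written
            return True

        def pop(self):
            if self._tail == self._head:
                return None             # buffer empty
            item = self._buf[self._tail]
            self._tail = (self._tail + 1) % self._capacity
            return item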
NASA's space-based observation platforms generate unprecedented amounts of data, requiring new capabilities: enabling significantly greater computing capability onboard and across robotic spacecraft to drive autonomy, interpretation, and automation, and supporting new computing architectures that bring space and ground systems seamlessly together. We will discuss the challenges and opportunities for future space observing systems and the integration of these capabilities into the next generation of such systems.
For space missions that venture far from Earth, real-time decision-making isn't always possible due to numerous challenges. Many deep-space science cases revolve around change detection and anomaly detection. We will outline a few science cases and show how advances in data fusion and machine learning pave the way for progress and innovation in such contexts. Finally, we propose steps to equip near-future missions to make the best of such capabilities.
The Director of the National Reconnaissance Office, Dr. Chris Scolese, remarked in his keynote address at the Space Symposium in April 2023, “We’re also increasing automation, multi-intelligence processes and machine learning capabilities so we can operate at the speed of machines, and deliver the right information at the right time to the right place”. This contribution will highlight the potential impact of advancements in scalable in-space computing, data and networking services from the perspective of Intelligence, Surveillance, and Reconnaissance applications.
Panel Discussion: Space Clouds
Concluding Remarks
The German Aerospace Center (DLR) is developing ScOSA (Scalable On-board Computing for Space Avionics) as a distributed on-board computing architecture for future space missions. The ScOSA architecture consists of commercial off-the-shelf (COTS) and radiation-tolerant nodes interconnected by a SpaceWire network. The system software provides services to enable parallel computing and system reconfiguration. This allows ScOSA to adapt to node errors and failures that COTS hardware is susceptible to in the space environment. In the ongoing ScOSA Flight Experiment project, a ScOSA system consisting of eight Xilinx Zynq systems-on-chip with dual-core ARM-based processors and a LEON3 radiation-tolerant processor is being built for launch on DLR's next CubeSat in late 2024. In this flight experiment, not only all 18 cores but also the programmable logic will be used for high performance on-board data processing. This paper presents the current hardware and software architecture of ScOSA. The scalability of ScOSA is highlighted from both hardware and software perspectives. We present benchmark results of the ScOSA system and experiments of the ScOSA system software on ESA's OPS-SAT in orbit in combination with a machine learning application for image classification.
The James Webb Space Telescope is the successor to the Hubble Space Telescope. It is the largest space telescope ever constructed and is giving humanity its first high-definition view of the infrared universe. The Webb is observing early epochs of the universe that the Hubble cannot see to reveal how its galaxies and structure have evolved over cosmic time. The Webb is exploring how stars and planetary systems form and evolve and is searching exoplanet atmospheres for evidence of life. The Webb's science instrument payload includes four sensor systems that provide imagery, coronagraphy, and spectroscopy over the near- and mid-infrared spectrum. NASA developed the JWST in partnership with the European and Canadian Space Agencies, with science observations proposed by the international astronomical community in a manner similar to the Hubble. Launch of the Webb occurred on Christmas Day 2021. In-flight commissioning was completed during June 2022 and science operations are now underway.
THURSDAY, JULY 20, 2023
The human space enterprise has experienced significant changes over the past decade, ranging from the explosive growth of new commercial space launch and on-orbit services and capabilities, to the global realization of the advantages that space provides in military conflict and the impacts that the loss of space capabilities would have on countries that depend on them. Independently, there has been unprecedented growth in a number of technologies that are either driving our terrestrial computational tech base (e.g., machine learning) or have the potential to upend it (e.g., quantum computation/quantum information science). This talk will discuss some of the current forces and trends that are driving the requirements for space processing from a military perspective, and will touch on several current space computing projects under development at AFRL addressing these issues.
The goal of NASA's Mars 2020 mission is to study Mars' habitability and seek signs of past microbial life. The mission uses an X-ray fluorescence spectrometer to identify chemical elements at sub-millimeter scales on the Martian surface. The instrument captures high spatial resolution observations comprising several thousand individually measured points by raster-scanning an area of the rock surface. This paper will show how different methods, including linear regression, k-means clustering, image segmentation, similarity functions, and Euclidean distances, perform when analyzing datasets provided by the X-ray fluorescence spectrometer to assist scientists in understanding the distribution and abundance variations of chemical elements making up the scanned surface. We also created an interactive map to correlate the X-ray spectrum data with a visual image acquired by an RGB camera.
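As an illustration of one of the listed methods, the sketch below clusters per-point X-ray fluorescence spectra with k-means so that points of similar composition can be mapped back onto the scanned area. The data shapes, normalization, and cluster count are placeholders, and the availability of scikit-learn is assumed.

    import numpy as np
    from sklearn.cluster import KMeans

    n_points, n_channels = 5000, 2048        # raster points x spectral channels (assumed)
    spectra = np.random.default_rng(0).random((n_points, n_channels))
    spectra /= spectra.sum(axis=1, keepdims=True)       # normalize each spectrum
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(spectra)
    # `labels` assigns every scanned point to a compositional cluster that can be
    # mapped back onto the raster grid and compared with the RGB context image.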
The Modular Unified Space Technology Avionics for Next Generation (MUSTANG) is a small integrated avionics system including Command and Data Handling (C&DH), Power System Electronics (PSE), Attitude Control System Interfaces (ACS), and Propulsion Electronics. The MUSTANG avionics architecture is built upon many years of knowledge capture and lessons learned at the Goddard Space Flight Center. With a motivation towards modularity and keeping board redesign costs to a minimum, MUSTANG offers flexibility in features with a backplane-less design and allows the user to choose the options (cards) needed for their system. It incorporates a distributed power system that provides secondary power to all its subcomponents, reducing the number of primary services needed by the avionics. MUSTANG can be integrated into one system or divided into several smaller components. MUSTANG supports redundancy and cross-strap ability for a more robust and reliable avionics system. A variation of MUSTANG, called iMUSTANG, exists for instrument electronics and allows the user to select functionality applicable to the instrument electronics. MUSTANG is not meant to replace avionics for all spacecraft. There are limitations due to its relatively compact size, but the MUSTANG design has proven broadly applicable to many spacecraft and instrument bus avionics architectures.
Single-event latchup (SEL) in a semiconductor device is an undesired induced high-current state, typically rendering the affected device non-functional and compromising its operating lifetime. The lower-current SEL phenomenon, micro-SEL, is often difficult to detect, particularly when the normal operating current of the protected device is variable and the magnitude of micro-SEL currents differs under different operating conditions. In machine learning (ML) based detection, this variable current inadvertently perturbs the features of the input current profile required for micro-SEL detection, severely reducing detection accuracy. In this paper, we propose a data pre-processing module to improve the accuracy of ML-based micro-SEL detection under such current conditions. Prior to classification by ML, the input current profile is processed by the pre-processing module, which employs a proposed background subtraction algorithm and a proposed adaptive normalization algorithm. By filtering out the irrelevant base current and normalizing the micro-SEL current based on the base current value, the pre-processing module provides more accurate features of the input current profile and widens the gap between normal samples and micro-SEL samples in the feature space. Ultimately, the proposed module enables ML algorithms to generate a more accurate decision boundary. The outcome is a ~13% accuracy improvement (from ~79% to ~92%) in micro-SEL detection for a device operating with variable currents.
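A minimal rendition of the described pre-processing, under the assumption that a simple median over an initial window is an acceptable baseline estimate, might look like the following; the paper's actual background subtraction and adaptive normalization algorithms may differ.

    import numpy as np

    def preprocess_current_profile(samples, baseline_window=200):
        """Background-subtract and normalize a current profile before ML classification."""
        samples = np.asarray(samples, dtype=float)
        baseline = np.median(samples[:baseline_window])    # estimate of the normal base current
        residual = samples - baseline                       # background subtraction
        normalized = residual / max(baseline, 1e-9)         # adaptive normalization by base current
        return normalized                                   # features handed to the ML classifier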
By looking at the press headlines, we've learned that open source is already being used in space applications that have safety considerations today. Details about the safety analysis performed are behind NDAs and are not available to developers in the open source projects being used. To make the challenge even more interesting, the processes the safety standards expect are behind paywalls and are not readily accessible to the wider open source community of maintainers and developers. Figuring out pragmatic steps to adopt in open source projects, like the Linux kernel, requires the safety assessor communities, the product creators, and open source developers to communicate openly. There are some tasks that can be done today that help, like knowing exactly what source is included in a system and how it was configured and built. Automatic creation of accurate Software Bills of Materials (SBOMs) is one pragmatic step that has emerged as a best practice for security and safety analysis. There are also other practices that various open source projects are adopting that can help with safety analysis. This talk will overview some of the methods being applied in different open source projects, as we try to establish other pragmatic steps that will help solve this challenge.
TBD
Introduction to Trust Readiness Levels
Ethical Scenario
Breakout Discussion on Trust Readiness Level 1 Artifacts and Metrics: All Participants
Group Outbriefs
As NASA exploration moves beyond low-Earth-orbit (LEO), the need for interoperable avionics systems becomes more important due to the cost, complexity, and the need to maintain distant systems for long periods.
The existing SpaceVPX industry standard addresses some of the needs of the space avionics community, but falls short of an interoperability standard that would enable reuse and common sparing on long duration missions and reduce NRE for missions in general.
A NASA Engineering & Safety Center (NESC) study was conducted to address the deficiencies in the SpaceVPX standard for NASA missions and define the recommended use of the SpaceVPX standard within NASA. Subsequently, the broader spaceflight avionics community has been engaged to work towards a more interoperable variant of the SpaceVPX standard. This presentation will provide a background on SpaceVPX interoperability, proposed solutions, and an update on efforts to develop a variant of the standard.
TSN is a set of IEEE 802.1 standards and technologies that bring bounded latency, low packet delay variation and guaranteed packet delivery to conventional Ethernet networks. While TSN has been deployed in residential, automotive and telecommunication Ethernet networks, its applicability to space applications is now being studied by the IEEE P802.1DP (TSN for Aerospace Onboard Ethernet Communications) Task Group.
This presentation provides an overview of how TSN can bring fault tolerance, resiliency, latency reduction and determinism to space, and how TSN Ethernet can change the economics and ecosystem for space. The author will also give a status update on the work at P802.1DP.
Computing systems for the space market have historically been highly optimised for their intended application with little priority given to commonality or reuse. A modular approach based on open standards and Commercial Off-The-Shelf (COTS) products can increase flexibility and reuse, whilst reducing total costs and timescales. Alpha Data presents examples of how this has been achieved using AMD FPGAs, and highlights off-the-shelf solutions using Adaptive Systems on Chips for the next generation of computing systems in space.
We present a compositional approach to modeling and analyzing space mission operation sequences with steps across multiple viewpoints. We consider different tasks such as communication, science observation, trajectory correction, and battery charging; and separate their interactions across discipline viewpoints. In each sequence step, these tasks are modeled as assume-guarantee contracts. They make assumptions on the initial state of a step, and if these assumptions are satisfied, they guarantee desirable properties of the state at the end of the step. These models are then used in Pacti, a tool for reasoning about contracts. We demonstrate a design methodology leveraging Pacti's operations: contract composition for computing the contract for the end-to-end sequence of steps and contract merging for combining them across viewpoints. We also demonstrate applying Pacti's optimization techniques to analyze the figures of merit of admissible sequences satisfying operational requirements for a CubeSat-sized spacecraft performing a small-body asteroid rendezvous mission. We show that analyzing tens of thousands of combinations of sequences and operational requirements takes just over one minute, confirming the scalability of the approach. The methodology presented in this paper supports the early design phases, including requirement engineering and task modeling.
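The sketch below conveys the flavor of step-wise assume-guarantee composition using simple interval contracts; it deliberately does not use the Pacti API, and the battery state-of-charge numbers are invented for illustration.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class StepContract:
        name: str
        assume: Tuple[float, float]      # admissible (min, max) state at step entry
        guarantee: Tuple[float, float]   # guaranteed (min, max) state at step exit

    def compose(sequence):
        """Check that each step's guarantee discharges the next step's assumption."""
        for prev, nxt in zip(sequence, sequence[1:]):
            (g_lo, g_hi), (a_lo, a_hi) = prev.guarantee, nxt.assume
            if not (a_lo <= g_lo and g_hi <= a_hi):
                raise ValueError(f"{prev.name} does not discharge {nxt.name}'s assumption")
        return StepContract("sequence", sequence[0].assume, sequence[-1].guarantee)

    # Example: battery state of charge (%) across observe -> downlink -> charge steps.
    plan = compose([StepContract("observe",  (60, 100), (45, 95)),
                    StepContract("downlink", (40, 100), (30, 90)),
                    StepContract("charge",   (20, 100), (70, 100))])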
A `hybrid space architecture' has been proposed to facilitate robust and resilient satellite data downlink, integration and analysis; however, the technical details for what may comprise a hybrid space architecture are severely lacking. Thus far, `hybrid' principally entails the diversity of commercial providers. While diverse suppliers can contribute to hybrid space architectures, we argue that robustness and resilience will only be achieved through heterogeneous network and asset architectures. A connected satellite services ecosystem composed of the union of different networks with different characteristics would limit single points of failure, thereby generating high levels of redundancy, resilience and scalability. This research outlines parameters of a hybrid space architecture, documents satellite service reference architectures and provides a comparative analysis of the features for each architecture. Further, through a case study of existing satellite service providers, we propose how a hybrid space architecture could be piloted in Northern Europe and the High North.
This paper presents a trade space comparing forty-eight different configurations for redundant computer systems in spacecraft, in terms of their availability versus lifetime metrics. Each of these configurations uses a different redundancy scheme. Failure modes include transient failures due to radiation effects in space, such as Single Event Upsets (SEUs) and Single Event Functional Interrupts (SEFIs), and permanent failures due to degradation. Configurations include various combinations of up to four total processors, with at least one of them being a prime and the others hot or cold spares. Dual, Triple, and Quad Modular Redundancy are covered, along with some deployed spacecraft configurations, e.g. the Parker Solar Probe, and the Curiosity rover during its Entry, Descent, Landing (EDL) phase. Some hypothetical designs lie outside the convex hull of previously known configurations.
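As a toy illustration of this kind of trade, the sketch below estimates, by Monte Carlo, the probability that a few simple redundancy configurations remain functional at end of mission under permanent failures alone. The failure rate, mission length and configurations are placeholders for illustration, not the paper's models or results.

# Illustrative Monte Carlo sketch of the kind of availability-versus-lifetime
# trade explored in the paper, restricted to permanent failures. Every value
# below (failure rate, mission length, configurations) is a placeholder.
import random

PERM_FAIL_RATE = 0.05    # assumed permanent failures per processor-year
MISSION_YEARS  = 10.0
TRIALS         = 100_000

def survives(n_processors: int, k_needed: int) -> bool:
    """True if at least k_needed of n_processors survive the whole mission.

    All units are treated as hot (powered from launch), which is pessimistic
    for cold spares whose dormant degradation would be lower.
    """
    lifetimes = [random.expovariate(PERM_FAIL_RATE) for _ in range(n_processors)]
    return sum(t > MISSION_YEARS for t in lifetimes) >= k_needed

def end_of_mission_reliability(n: int, k: int) -> float:
    return sum(survives(n, k) for _ in range(TRIALS)) / TRIALS

print("single prime        :", end_of_mission_reliability(1, 1))
print("prime + 1 hot spare :", end_of_mission_reliability(2, 1))
print("TMR (2-of-3 voting) :", end_of_mission_reliability(3, 2))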
AdvoCATE (Assurance Case Automation Toolset) is a tool that supports the development and management of assurance cases. An assurance case is a comprehensive, defensible, and valid justification that a system will function as intended for the specific mission and operating environment. A Dynamic Assurance Case (DAC) is an assurance case that combines static and dynamic elements to assure the validity of the captured justifications. AdvoCATE supports a range of notations and modeling formalisms, including Goal Structuring Notation (GSN) to document safety cases and Bow-Tie Diagrams (BTDs) for risk modelling. AdvoCATE implements an assurance metamodel that allows all of the artifacts relevant from the safety assurance perspective to be explicitly defined and their relations captured. Some of the artifacts can be created directly in AdvoCATE (e.g., hazard log, safety arguments, safety architecture), while other artifacts, such as formal verification results, can be imported into the tool so that the evidence can be collectively viewed. AdvoCATE enables the creation of different dynamic views of the assurance case, where it can receive and evaluate external data to highlight the current state of the captured assurance case justifications. In this talk, we give an overview of the current capabilities and future developments in supporting DACs with AdvoCATE.
TBD
Slingshot and SatCat5 Overview
TBD
Quick Re-Introduction to Trust Readiness Levels, Morning Recap
Aerospace Human/Automation Dissonance Examples
Breakout Discussion on Trust Readiness Level 2 Artifacts and Metrics
Group Outbriefs
The Vehicle System Manager (VSM) is the highest-level software control system in the Gateway hierarchical Autonomous System Management Architecture (ASMA). A key objective in ASMA design is to focus on infrastructure and systems to allow autonomous operations aboard Gateway. VSM will integrate modules and visiting vehicles to assist ground controllers and onboard crew in operation of Gateway as the head of a distributed and hierarchical system. The VSM provides four function categories: Mission Management and Timeline Execution, Resource Management, Fault Management, and Vehicle Control and Operation. VSM provides various levels of automation ranging from fully autonomous operations with no flight crew and minimal ground monitoring to advisory automation when Gateway is crewed and has full ground monitoring. Trustworthiness is achieved via verified specification, comprehensive development verification, and real-time verification using assume-guarantee contracts (AGCs). VSM is heavily data-driven. Development verification includes semantic verification of the data model via peer review and testing. Development AGCs are implemented using the PlusCal/TLA+ environment to model key state machines, with the AGCs implemented as assertions and linear temporal logic formulas, checked using the TLC model checker. VSM uses runtime AGCs, implemented in R2U2 using assertions in propositional logic and guarantees in mission-time linear temporal logic (MLTL). R2U2 was selected because it permits formulas to be written in a mathematically concise, unambiguous notation. Additionally, R2U2 is optimized for speed and size, and the inferencing engine has been proven correct with respect to the operator space. R2U2 is integrated into VSM via a runtime monitor that feeds the necessary telemetry data to R2U2 and receives and responds to the R2U2 verdict stream. The full-lifecycle verification approach and use of AGCs provide increased trustworthiness to VSM. Preliminary results provide encouragement that VSM can be both autonomous and trustworthy.
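The following toy Python monitor illustrates the flavor of such a runtime guarantee, checking a bounded response property ("every command is acknowledged within a fixed number of telemetry steps") over a telemetry stream. It is not the R2U2 engine or its MLTL input language; the signal names and bound are hypothetical.

# Minimal sketch of a runtime monitor for a mission-time-bounded guarantee:
# "whenever a command is issued, an acknowledgment must follow within BOUND
# telemetry steps". Illustration only; not R2U2 or its specification language.
BOUND = 5  # assumed mission-time bound, in telemetry steps

def monitor(telemetry):
    """Yield (step, verdict) pairs; verdict is False once the bound is violated."""
    pending = []  # steps at which a command was issued and not yet acknowledged
    for step, sample in enumerate(telemetry):
        if sample["ack"]:
            pending.clear()                 # an ack satisfies all outstanding commands
        if sample["cmd"]:
            pending.append(step)
        violated = any(step - issued >= BOUND for issued in pending)
        yield step, not violated

stream = [
    {"cmd": True,  "ack": False},
    {"cmd": False, "ack": False},
    {"cmd": False, "ack": True},   # acknowledged within the bound
    {"cmd": True,  "ack": False},
    {"cmd": False, "ack": False},
    {"cmd": False, "ack": False},
    {"cmd": False, "ack": False},
    {"cmd": False, "ack": False},
    {"cmd": False, "ack": False},  # bound exceeded here -> verdict False
]
for step, verdict in monitor(stream):
    print(step, verdict)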
Robots capable of human-level manipulation are key to industrial space viability and the commercial space economy. Beyond building space-rated robot hardware, advanced software capabilities including artificial intelligence are critical to enabling practical, everyday robotics usage. In this talk, Dr. Coleman will propose an approach for mostly autonomous control software with minimal human supervision. A key to this approach is designing multi-purpose robots that can adapt to a variety of IVA, EVA, and lunar manipulation needs in dynamically changing environments.
Dr. Dave Coleman is CEO of PickNik Robotics and an industry thought leader in robotics. PickNik has been successfully delivering robotics innovation on and off Earth to over 60 customers over the past seven years. Before founding PickNik, Dave worked at Google Robotics, Open Robotics, and Willow Garage. Dave is an international advocate of open source software and robotic interoperability, and an expert in autonomous motion control. He has been collaborating with NASA on various robotic programs and SBIRs since 2014.
This paper introduces the space radiation-tolerant edge processing solutions from Teledyne e2v: multicore processors, high-speed DDR4 memories, and integrated computing modules. An introduction to the Teledyne e2v space manufacturing flow will be presented before diving into the radiation testing and mitigation strategy. Finally, specific examples of use cases in space will be proposed.
NASA’s plans to return humans to the Lunar surface require overcoming a variety of challenging technical and operational obstacles. In 2022, NASA formed the Extravehicular Activity (EVA) and Human Surface Mobility (HSM) Program (EHP) at the Johnson Space Center with responsibilities including development of space suits and surface mobility systems for Lunar missions. This program includes a Technology Development and Partnerships office chartered to identify high priority gaps in capabilities for Lunar surface mobility and to coordinate resources to close those gaps. This presentation details the EHP technology roadmap for “Informatics and Decision Support,” a subset of spacecraft avionics focused on effective and autonomous crew interaction with spaceflight systems. The gaps, grouped into displays, audio systems, and information technology infrastructure, are largely driven by the unique interaction requirements for human spacecraft and the severe radiation environments beyond low earth orbit. The roadmap identifies ongoing activities and paths to technology infusion into Lunar spacecraft. NASA is seeking input on the content and ideas for alternative paths to gap closure. Closing these gaps is important to successful human operations on the Lunar surface and vital to NASA’s long-term goal of human missions to Mars.
Emerging space missions involving complex autonomous operations, high-end sensor processing, swarm/constellation management, planetary/lunar landing, and autonomous exploration demand processing performance that rad-hard processors cannot achieve and fault tolerance that COTS processors alone cannot provide. Troxel Aerospace's SEFI/SEU Mitigation Middleware (SMM) provides a path to enabling these types of missions by allowing high-performance COTS processors to operate with a high degree of fault tolerance without the need for rad-hard electronics, and thus to simultaneously meet advanced mission criticality and performance requirements. Troxel Aerospace has demonstrated operate-through capability through extensive radiation testing on multicore processors and GPUs, most recently for several critical missions including astronaut displays for NASA Johnson and a few DoD missions. This presentation will highlight SMM's history, features, recent radiation test results, and upcoming missions.
TBD
High-fidelity simulated lunar environments play a key role in making lunar missions successful. From landing on the Moon's surface to exploring it, simulators allow teams to plan missions and refine software and hardware components. Yet, as of today, high-quality lunar simulators are either proprietary or expensive closed-source solutions. Worse, they only incorporate a subset of the features required to simulate complete lunar missions. This talk aims to engage the audience in answering the following question: how, as a community, could we build a high-fidelity open-source simulator for lunar missions?
TBD
Choose Your Own Adventure - Group decides deep dive area
Breakout Discussions
Group Outbriefs
Workshop Summary and Next Steps
The NASA Artemis campaign will return humans to the Moon. This time, with the help of commercial and international partners, the campaign's objective is a permanent Moon base. The Moon base infrastructure, including an orbiting station and surface assets, will be developed for astronauts to stay for the long haul to learn to live and work on another planet in preparation for an eventual Humans-to-Mars mission. As the roundtrip communication delays increase in deep space exploration, the crew will need more onboard systems autonomy and functionality to maintain and control the vehicle or habitat. These mission constraints will change the current Earth-based spacecraft-to-ground-control support approach and will demand safer, more efficient, and more effective Computer-Human Interface (CHI) control. For Artemis, CHI is defined as the elements that the crew interfaces with: audio, imagery, lighting, displays, and crew controls subsystems. Understanding how CHI will need to evolve to support deep space missions will be critical for the Artemis campaign, especially crew controls, which are the focus of this paper. How does NASA ensure crew controls are reliable enough to control complex systems and prevent a catastrophic event due to human error, especially when the astronauts could be physiologically and/or psychologically impaired? NASA's approach to mitigating catastrophic hazards in human spaceflight system development, such as crew controls, is through a holistic systems engineering and Human System Integration (HSI) methodology. This approach focuses on incorporating NASA's Human-Rating Requirements to ensure consideration of human performance characteristics to control and safely recover the crew from hazardous situations. This paper first discusses, at a high level, CHI for the Artemis campaign. Next, it discusses what it means to human-rate a space system's crew controls and how trust in CHI begins with NASA's human-rating requirements. Finally, it discusses how systems engineering and the HSI process ensure that the crew control implementation incorporates NASA's human-rating requirements.
The Callisto Technology Demonstration Payload was installed inside the cabin of the Artemis I Orion spacecraft, where it completed a perfect string of 21 successful operational sessions throughout the spectacular 25.5-day lunar mission. A joint partnership between Lockheed Martin, Amazon Alexa, and Webex by Cisco, this demonstration explored multiple unique crew interfaces that may improve a crew's operational efficiency and quality of life aboard future exploration missions. Augmented by payload-provided cabin lights, cameras, and an intercom, Callisto realized its primary intent by implementing crew displays and Cisco's Webex collaboration technology on an iPad, as well as a 'local voice control' version of Amazon's Alexa on a COTS single-board computer. Via the first-ever integration between a secondary payload and Orion's flight software, Callisto made Orion's extensive telemetry available to both the crew displays and Alexa. With no crew aboard Artemis I to test Callisto, the system was instead evaluated by "virtual crew members" (VCM) invited to the payload operations suite in the Johnson Space Center Mission Control Center, thus meeting Callisto's secondary objective to engage the public throughout the mission.
Having operated flawlessly in every engineering respect, and having been met with enthusiasm by every VCM, Callisto not only met both of its main objectives but also lent further weight to our growing appreciation of just how far smartly implemented COTS hardware can be pushed into the challenging realm of space. This presentation provides an overview of Callisto's development as a platform intended, at least in part, to further explore such COTS possibilities in space, and of its subsequent operational success during its trip around the Moon.
Recent commitments to return humans to the lunar surface and to long-duration crewed missions beyond the protection of the Earth's atmosphere and magnetosphere require examination of technologies and challenges that are unique to human inhabitants. Electronic displays serve as a critical, real-time informational interchange between crew and the plethora of support technologies that contribute to a successful crewed mission (e.g. scientific instrumentation, safety and health monitors, computer interfaces, etc.). Critical components utilized in space-based applications must reliably operate through a variety of hostile environments, such as the particle radiation environment composed of galactic cosmic rays, trapped particle belts, and solar particle emissions. These highly energetic particles interact with materials at the atomic level, temporarily distorting free charge carrier populations and modifying intrinsic material parameters that in turn impact the performance of devices built upon that material. Present-day utilization of electronic displays on the International Space Station and in space tourism is confined to well-shielded spacecraft at low Earth orbit altitudes with non-polar orbits, which results in significantly attenuated energetic particle populations and radiation dose seen by on-board components. In contrast, crewed missions to the lunar surface will subject electronic displays to a particle radiation environment without geomagnetic shielding and in some cases with little to no shielding at all (e.g. displays on an unpressurized lunar rover, surface-based instrumentation, etc.). Given the realities of the small quantities required for space-based applications compared to the broader market, it is critical to evaluate potential design sensitivities while also understanding the tunable parameters (if any) within the electronic display fabrication and assembly process. In anticipation of extensive usage in future crewed missions, preemptive examination of the radiation tolerance of commercially available electronic displays serves to reduce the risk posed by the inclusion of electronic displays in upcoming crewed missions (NASA HEOMD-405 Integrated Exploration Capability Gaps List Tier 1 Gap 02-02).
Notionally, an electronic display can be decomposed into 1) the pixel screen responsible for manipulation of light and 2) the support electronics required to drive pixels on the screen to portray an image. Based on existing physics-of-failure understanding of radiation effects, the constituent components of an electronic display can be interrogated for their most susceptible degradation mechanisms. In the case of the screen, the primary degradation mechanisms are cumulative over the lifetime of the screen, with the light emission layers (e.g. light emitting diodes) being sensitive to atomic displacements (displacement damage) and the thin-film transistor backplane and plastic/glass overlayers being susceptible to excess charge accumulation in oxides and passivation of charge centers (total ionizing dose). While the support electronics are also susceptible to cumulative dose effects, the primary concern is the potential disruption caused by instantaneous charge injection from an individual high-energy particle (single event effect). At a system level, this radiation-induced degradation in the screen presents as reduced output luminosity and potential color shift of display images, while single event effects in the support circuitry can result in temporary and persistent visual distortions as well as unrecoverable electrical failure.
To further develop the foundation of radiation effects in electronic displays, a multi-institution collaboration has conducted initial heavy ion (single event effect) and 64 MeV proton irradiation (cumulative dose) test campaigns to 1) develop the characterization and analysis techniques for electronic displays and 2) collect test data for broadly assessing the susceptibility of display technologies. In accordance with the shift towards utilization of commercial-off-the-shelf components and systems, it is pragmatic to evaluate a range of commercially available pixel technologies that could be selected by designers or original equipment manufacturers based on the trade space of performance, cost, and resource requirements. From these test results, radiation-induced degradation in the pixel technology of organic light emitting diodes (OLEDs), backlit thin-film transistor liquid crystal displays (TFT-LCDs), and light emitting diode (LED) dot arrays was demonstrated and used to examine the significance of the red, green, and blue pixels degrading at distinct rates (non-uniform degradation). Additionally, heavy ion tests allowed for cataloguing of non-destructive visual single event error signatures to better categorize error signatures as acceptable/unacceptable and to preemptively develop and identify software mitigation approaches for the computer systems that ultimately drive the electronic displays. The intent of this presentation is to socialize the necessity of radiation-tolerant electronic displays for future crewed missions to the broader space computing community, outline the characterization and analysis techniques used to assess radiation-induced degradation in human-interface applications, and summarize radiation test results from a cross-section of commercially available display technologies to grow the body of knowledge in anticipation of the need for reliable electronic displays for crewed missions.
Towards A Radiation-Tolerant Display System
To ensure safe and economically valuable operation of a battery system over its whole lifetime, a battery management system is used for measuring and monitoring battery parameters and controlling the battery system. Since battery performance decreases over the battery's lifetime, a precise on-board aging estimation is needed to identify significant capacity degradation endangering the functionality and safety of a battery system. Especially for aviation and space applications, such degradation can result in catastrophic scenarios. Therefore, in this work, a generic battery management system approach is presented that considers aerospace application requirements. The modular hardware and software architecture and its components are described. Moreover, it is shown that the developed battery management system supports the execution of data-driven state of health estimation algorithms. For this purpose, aging estimation models are developed that receive only eight high-level parameters of partial charging profiles as input, without executing further feature extraction steps, and can thus be easily provided by a battery management system. Three different neural network architectures are implemented and evaluated: a fully connected neural network, a 1D convolutional neural network and a long short-term memory network. It is shown that all three aging models provide a precise state of health estimation using only the obtained high-level parameters. The fully connected neural network provides the best trade-off between required memory resources and accuracy, with an overall mean absolute percentage error of 0.41%.
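As a rough illustration of the fully connected variant, the sketch below maps eight partial-charge features to a single state-of-health estimate. The layer sizes, activation functions and training target are assumptions for illustration, not the architecture or data evaluated in the paper.

# Minimal sketch of a fully connected state-of-health estimator: eight
# high-level features of a partial charging profile in, one SOH value out.
# Layer sizes, activations and the dummy training target are assumptions.
import torch
import torch.nn as nn

class SOHEstimator(nn.Module):
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid(),   # SOH expressed as a 0..1 fraction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = SOHEstimator()
features = torch.rand(4, 8)             # batch of 4 partial-charge feature vectors
soh = model(features)                    # shape (4, 1), each value in [0, 1]
loss = nn.functional.l1_loss(soh, torch.full((4, 1), 0.95))  # MAE-style dummy target
print(soh.shape, loss.item())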
New generations of spacecraft are required to perform tasks with an increased level of autonomy. Space exploration, Earth observation, space robotics and related fields are all growing areas that require more sensors and more computational power. Furthermore, new sensors on the market produce better-quality data at higher rates, while new processors can substantially increase the available computational power. Therefore, near-future spacecraft will be equipped with large numbers of sensors producing data at rates not seen before in space, while at the same time on-board processing power will increase significantly.
In use cases such as guidance, navigation and control, vision-based navigation has become increasingly important in a variety of space applications for enhancing autonomy and dependability. Future missions such as Active Debris Removal will rely on novel high-performance avionics to support image processing and Artificial Intelligence algorithms with large workloads. Similar requirements come from Earth observation applications, where on-board data processing can be critical in order to provide reliable real-time information to Earth.
This new scenario of advanced space applications, with its growth in data volumes and processing power, brings new challenges: low determinism, excessive power needs, data losses and large response latencies.
In this article, a novel approach to on-board artificial intelligence (AI) is presented that is based on state-of-the-art academic research into the well-known technique of data pipelining. Algorithm pipelining has seen a resurgence in high-performance computing due to its low power use and high throughput. The approach presented here provides a threading model that combines pipelining and parallelization techniques applied to deep neural networks (DNNs), making these types of AI applications much more efficient and reliable. The approach has been validated with several DNN models developed for space applications (including asteroid landing, cloud detection and coronal mass ejection detection) and on two different computer architectures. The results show that the data processing rate and power savings of the applications increase substantially with respect to standard AI solutions, enabling real AI in space.
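The toy sketch below illustrates the underlying pipelining idea: each stage runs in its own thread and stages are connected by queues, so preprocessing of one frame overlaps with inference on the previous one. The stage functions and threading details are placeholders rather than the authors' models or framework.

# Toy sketch of a three-stage processing pipeline built from threads and
# queues. The stage functions are stand-ins, not the authors' DNN models.
import threading, queue

def stage(fn, inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:            # poison pill: shut the pipeline down
            outbox.put(None)
            break
        outbox.put(fn(item))

preprocess  = lambda frame: frame * 0.5          # stand-in for image normalization
infer       = lambda frame: frame + 1            # stand-in for DNN inference
postprocess = lambda score: f"detection={score}" # stand-in for thresholding

q_in, q_mid, q_out, q_done = (queue.Queue() for _ in range(4))
for fn, src, dst in [(preprocess, q_in, q_mid),
                     (infer, q_mid, q_out),
                     (postprocess, q_out, q_done)]:
    threading.Thread(target=stage, args=(fn, src, dst), daemon=True).start()

for frame in range(5):
    q_in.put(frame)
q_in.put(None)

while (result := q_done.get()) is not None:
    print(result)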
FRIDAY, JULY 21, 2023