
DARPA Funds Space Lasers to Bring Non-Sectarian Internet Communications to Outer Space

Have you noticed that low-Earth orbit (LEO) is getting crowded? Several companies are developing globe-spanning satellite networks to provide Internet access to every square inch of the Earth’s surface. Elon Musk’s SpaceX, with its Starlink constellation, may be the most visible venture, but other entrants in this derby include Amazon’s Project Kuiper, SpaceLink, Viasat, and Telesat. On the one hand, it’s going to be very handy to have multiple companies competing to spread Internet-based communications everywhere. On the other hand, an issue of compatibility is rearing its ugly head. Each of these companies is developing its own communication satellite constellation, and it’s very apparent that the satellites within each constellation will be communicating with each other to minimize latency through the network. It’s also quite apparent that the links among these satellites will be based on optical communications. In other words, space lasers.
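
As a quick illustration of why those inter-satellite laser links matter for latency, here’s a rough Python sketch comparing light in vacuum with light crawling through terrestrial fiber over a long route. The 550 km orbital altitude and the 10,000 km path are my own illustrative assumptions, not figures from any of these constellations.

# Rough back-of-the-envelope sketch (illustrative numbers, not vendor data):
# over long distances, light in vacuum via inter-satellite laser links can beat
# light traveling through terrestrial optical fiber.

C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
C_FIBER_KM_S = 199_861       # ~c/1.5 in silica fiber, km/s
LEO_ALTITUDE_KM = 550        # assumed Starlink-class orbital altitude

def one_way_delay_ms(path_km: float, speed_km_s: float) -> float:
    """One-way propagation delay in milliseconds, ignoring switching and queuing."""
    return path_km / speed_km_s * 1e3

ground_path_km = 10_000                                  # a long intercontinental route (assumed)
space_path_km = 2 * LEO_ALTITUDE_KM + ground_path_km     # up to orbit, across via lasers, back down

print(f"Terrestrial fiber, {ground_path_km} km: {one_way_delay_ms(ground_path_km, C_FIBER_KM_S):.1f} ms")
print(f"LEO laser links, {space_path_km} km:  {one_way_delay_ms(space_path_km, C_VACUUM_KM_S):.1f} ms")

Even with the extra distance up to orbit and back, the vacuum path wins over intercontinental distances, which is a big part of the appeal of optical inter-satellite links.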

One of the worries being raised at the moment is the lack of a standard protocol for these inter-satellite optical comms. Sure, satellites within one vendor’s constellation will likely be able to communicate with each other, at least until the next optical communications stack upgrade, but there’s no standard protocol in sight that would allow satellites from different vendors to interoperate. Wouldn’t that be a good idea?

One organization that thinks compatibility would be a good idea is the latest incarnation of the US government agency that started this whole Internet thing. That would be DARPA, the Defense Advanced Research Projects Agency. DARPA’s predecessor, ARPA (that’s DARPA minus the “D” for “defense” because, back then, the “D” was silent), funded the development of the ARPANET – the Internet’s predecessor – starting in 1966. ARPANET is where we got packet switching, distributed network control, and the TCP/IP protocol suite – the bedrock technologies of today’s Internet.

DARPA’s trying to avoid a repeat of the networking situation we had in the 1970s and 1980s, when many computer companies had developed mutually incompatible networking protocols. Digital Equipment Corporation (DEC) had DECnet. IBM had SNA (Systems Network Architecture) and used a token-ring protocol. Telephone carriers like AT&T were using ISDN. Depending on your perspective, ISDN reportedly meant “Integrated Services Digital Network,” “I See Dollars Now,” or “I Still Don’t Know.” Datapoint pushed its ARCNET (Attached Resource Computer Network). General Motors played with a Token Bus networking protocol for a while. And then, of course, Xerox PARC developed Ethernet, which was so successful that Xerox exited the computer business altogether.

DARPA would prefer that history not repeat itself in space, where no one can hear you scream, so the agency has a new program designed to bring some organization to the situation. The project’s name is the Space-Based Adaptive Communications Node program, abbreviated as Space-BACN and, of course, pronounced “space bacon,” which is not to be confused with regular BACN, the US Air Force’s Battlefield Airborne Communications Node. See, we’re well on the way to minimizing confusion here.

According to DARPA’s Website, the Space-BACN initiative “aims to create a low-cost, reconfigurable optical communications terminal that adapts to most optical intersatellite link standards, translating between diverse satellite constellations.”

It seems, at present, that DARPA is not trying to develop a single inter-satellite networking standard. That’s probably a very good idea because these are early days and experimentation is still ongoing. Instead, DARPA seems to want to develop a universal optical modem for satellites that can be readily adapted to a range of protocols.

Space-BACN Phase 0, already completed, developed an architectural design. Phase 1, which consists of three “Technical Areas,” is now underway. Technical Area 1 (TA1) aims to develop a flexible, low-SWaP-C (size, weight, power, and cost) optical aperture or optical head, which is responsible for pointing, acquisition, and tracking of the target, as well as the optical transmit and receive functions. Here be lasers. The three companies that DARPA selected for TA1 are CACI, MBRYONICS, and Mynaric.

TA2’s goal is to develop a reconfigurable optical modem that supports data rates of up to 100 Gbps on a single wavelength of light. TA2 electronics connect to the TA1 space lasers via optical fiber. The DARPA-selected TA2 participants include II-VI Aerospace and Defense, Arizona State University, and Intel Federal, LLC.
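
For a sense of what 100 Gbps on a single wavelength asks of the electronics, here’s a minimal Python sketch of the symbol rates a coherent modem would need under a few common modulation formats. The formats and the 15 percent FEC overhead are illustrative assumptions on my part, not Space-BACN requirements.

# Minimal sketch: symbol rate needed to carry 100 Gbps on one wavelength
# for a few common coherent modulation formats. The formats and the FEC
# overhead are illustrative assumptions, not Space-BACN specifics.

TARGET_GBPS = 100
FEC_OVERHEAD = 0.15            # assume ~15% of the line rate is FEC parity

formats = {
    "QPSK, single polarization": 2,    # information bits per symbol
    "DP-QPSK (dual polarization)": 4,
    "DP-16QAM (dual polarization)": 8,
}

line_rate_gbps = TARGET_GBPS * (1 + FEC_OVERHEAD)

for name, bits_per_symbol in formats.items():
    symbol_rate_gbaud = line_rate_gbps / bits_per_symbol
    print(f"{name}: ~{symbol_rate_gbaud:.1f} GBd to net {TARGET_GBPS} Gbps")

Even the friendliest of these formats keeps the modem’s DSP working in the tens of gigabaud, which goes a long way toward explaining the advanced process node chosen for the digital chiplet described below.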

TA3 will “identify critical command and control elements required to support cross-constellation optical intersatellite link communications and develop the schema necessary to interface between Space-BACN and commercial partner constellations.” The companies participating in TA3 are the satellite constellation suppliers: Space Exploration Technologies (SpaceX), Telesat, SpaceLink, Viasat, and Kuiper Government Solutions, an Amazon subsidiary.

The TA2 effort needs to develop several capabilities, and I discussed those needs with Jose Alvarez, a Senior Director in Intel’s Office of the CTO, representing the Intel Programmable Solutions Group. That’s the group within Intel responsible for FPGAs. Alvarez explains that Intel is working on the optical modem’s electronic section, which will require significant DSP capabilities to implement the modem’s FEC (forward error correction) algorithm and to compensate for the Doppler shift on the received light. After all, the orbiting satellites are moving fast enough relative to one another for that shift on the optical carrier to become significant. These DSP capabilities are likely to be based on FPGA implementations for the foreseeable future, at least until there’s enough experience to codify the algorithms into standards.
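
To put a rough number on that Doppler shift, here’s an illustrative estimate of the frequency offset a coherent optical receiver must track between two LEO satellites. The 1550 nm wavelength and the closing velocities are assumptions for the sake of the example, not program parameters.

# Illustrative estimate of the Doppler shift on an optical inter-satellite link.
# Wavelength and closing velocities are assumptions, not Space-BACN parameters.

C_M_S = 299_792_458                 # speed of light, m/s
WAVELENGTH_M = 1550e-9              # typical telecom-band laser (assumed)

carrier_hz = C_M_S / WAVELENGTH_M   # roughly 193 THz

# LEO orbital speed is ~7.5 km/s, so head-on closing speeds can approach ~15 km/s.
for closing_m_s in (1_000, 7_500, 15_000):
    doppler_hz = carrier_hz * closing_m_s / C_M_S   # first-order Doppler shift
    print(f"closing speed {closing_m_s/1e3:>4.1f} km/s -> shift ~{doppler_hz/1e9:.1f} GHz")

Shifts of several gigahertz on the optical carrier are far too large to ignore, so the modem’s DSP must estimate and remove them continuously as the satellites’ relative geometry changes.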

In addition, Intel is engaging technologists from its Assembly Test Technology Development (ATTD) division and researchers from Intel Labs. For its role in the TA2 effort, Intel plans to develop three new chiplets that will be integrated into an MCP (multi-chip package) device using Intel’s EMIB (embedded multi-die interconnect bridge) and AIB (advanced interface bus) packaging technologies. EMIB and AIB technologies are commercially proven Intel packaging technologies that the company has used for years to manufacture its most advanced processors and FPGAs. The most extensive use of these packaging technologies is in the company’s Ponte Vecchio device, which combines 47 active chiplets into one package to create its GPGPU for high-performance computing (aka supercomputers).

The three planned chiplets include:

  • A DSP/FEC chiplet implemented with the Intel 3 process, which is currently Intel’s most advanced digital process node.
  • A data converter, optical transimpedance amplifier (TIA), and driver chiplet implemented with the Intel 16 process node, which includes RF FinFETs that can be used for analog RF signal processing, high-speed data converters, TIAs, and drivers.
  • A photonic IC (PIC) chiplet based on Tower Semiconductor’s photonic process technologies that can implement low-loss waveguides and etched V-groove mechanical interfaces for optical fiber.

Intel has started its Phase 1 part of the TA2 program, beginning with the design of the three chiplets, and it is working with the other performers to fully define the interfaces between the system components in each of the other technical areas. Phase 1 will last 14 months and conclude with a preliminary design review. After Phase 1 concludes, DARPA will select Phase 1 participants from the first two technical areas to participate in an 18-month Phase 2 project to develop engineering design units based on the Phase 1 work. All this work will hopefully produce interoperable satellite constellations in the coming years.
