Turning the Body Into a Wire



In 2007, U.S. vice president Dick Cheney ordered his doctors to disable all wireless signals to and from his Internet-connected pacemaker. Cheney later said that the decision was motivated by his desire to prevent terrorists from being able to hack his pacemaker and use it to lethally shock his heart. Cheney's command to his doctors might seem to some to be overly cautious, but wirelessly connected medical devices have a history of exploitable vulnerabilities. At a series of conferences in 2011 and 2012, for example, New Zealand hacker Barnaby Jack showed that connected medical devices could be remotely attacked. Jack used a high-gain antenna to capture the unencrypted electromagnetic signals transmitted by an insulin pump on a mannequin 90 meters away. He then used those signals to hack into the pump and adjust the level of insulin the pump delivered. He also hacked a pacemaker and made it deliver deadly electric shocks.

Eight years after those demonstrations, connected medical devices remain vulnerable. In June 2020, for example, the U.S. Department of Homeland Security recalled a model of connected insulin pumps. The pumps were transmitting sensitive information without encryption, making the data accessible to anyone nearby who might want to listen in.

Medical devices are only the tip of the iceberg when it comes to the wireless devices people are putting in or on their bodies. The list includes wireless earbuds, smartwatches, and virtual-reality headsets. Technologies still in development, such as smart contact lenses that display information and digital pills that transmit sensor data after being swallowed, will also be at risk.

All of these devices need to transmit data securely at low power and over a short range. That's why researchers have started to think about them as individual components of a single human-size wireless network, referred to as a body-area network. The term “Internet of Bodies" (IoB) is also coming into use, taking a cue from the Internet of Things.

At the moment, IoB devices use established wireless technologies, mainly Bluetooth, to communicate. While these technologies are low power, well understood, and easy to implement, they were never designed for IoB networks. One of Bluetooth's defining features is the ability for two devices to easily find and connect to one another from meters away. That feature is precisely what allows a hypothetical attacker to snoop on or attack the devices on someone's body. Wireless technologies were also designed to send signals through air or vacuum, not through the human body, so they are less efficient than a communication method designed from the start to use the body as its medium.

Through our research at Purdue University, we have developed a new method of communication that will keep medical devices, wearables, and any other devices on or near the body more secure than they are using low-power wireless signals to communicate with one another. The system capitalizes on the human body's innate ability to conduct tiny, harmless electrical signals to turn the entire body into a wired communication channel. By turning the body into the network, we will make IoB devices more secure.

Sensitive personal data like medical information should always be encrypted when it's transmitted, whether wirelessly or in an email or via some other channel. But there are three other especially good reasons to prevent an attacker from gaining access to medical devices locally.

The first is that medical data should be containable. You don't want a device to be broadcasting information that someone might eavesdrop on. The second reason is that you don't want the integrity of the device to be compromised. If you have a glucose monitor connected to an insulin pump, for example, you don't want the pump to deliver the wrong dose of insulin because the monitor's data was compromised. Not enough glucose in the blood can cause headaches, weakness, and dizziness, while too much can lead to vision and nerve problems, kidney disease, and strokes. Either situation can eventually lead to death. The third reason is that the device's information always needs to be available. If an attacker were to jam the signals from an insulin pump or a pacemaker, the device might not even know it needed to respond to a sudden problem in the body.

So if security and privacy are so important, why not use wires? A wire creates a dedicated channel between two devices. Someone can eavesdrop on a wired signal only if they physically tap the wire itself. That's going to be hard to do if the wire in question is on or inside your body.

Setting aside the benefits of security and privacy, there are some important reasons why you wouldn't want wires crisscrossing your body. If a wire isn't properly insulated, the body's own biochemical processes can corrode the metal in the wire, which could in turn cause heavy-metal poisoning. It's also a matter of convenience. Imagine needing to repair or replace a pacemaker with wires. Rethreading the wires through the body would be a very delicate task.

Rather than choose between wireless signals, which are easy for eavesdroppers to snoop, and wired signals, which bring risk to the body, why not a third option that combines the best of both? That's the inspiration behind our work to use the human body as the communication medium for the devices in someone's body-area network.

People are putting more and more devices in and on their bodies. Whether they're medical devices like pacemakers, insulin pumps, and body-temperature sensors, or consumer tech like wireless earbuds, smartwatches, and fitness trackers, they all have one thing in common. None of them need to send data beyond the range of the human body. Any communications beyond the body can be handled by a central wireless hub.

The network that these devices create is called an Internet of Bodies (IoB) network, borrowing from the Internet of Things concept. IoB networks share some of the same needs as a smart home, for example. A smart home can be filled with wildly different devices—an Amazon Alexa, a smart fridge, and a system that adjusts lights automatically when people enter and exit rooms—that all use Bluetooth or Wi-Fi to communicate. Likewise, consumer tech and medical devices in an IoB network can both use the common medium of the body itself to send signals.

We call the method of sending signals directly through the body electro-quasistatic human-body communication. That's a mouthful, so let's just think of it as a body channel. The important takeaway is that by exploiting the body's own conductive properties, we can avoid the pitfalls of both wired and wireless channels.

Metal wires are great conductors of electric charge. It's a simple matter to transmit data by encoding 1s and 0s as different voltages. You need only define 1s as some voltage, which would cause current to flow through the wire, and 0s as zero voltage, which would mean no current flowing through the wire. By measuring the voltage over time at the other end of the wire, you end up with the original sequence of 1s and 0s. However, given that you don't want metal wires running through or around the body, what can you do instead?
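
To make that wire example concrete, here is a minimal sketch, in Python, of encoding bits as two voltage levels and recovering them with a simple threshold. The voltage values are illustrative, not taken from any particular system.

```python
# Minimal sketch (not any specific implementation): map bits to voltage
# levels on a wire and recover them by comparing against a threshold.

V_HIGH = 1.0   # volts representing a 1 (illustrative value)
V_LOW = 0.0    # volts representing a 0

def encode(bits):
    """Map each bit to a voltage level."""
    return [V_HIGH if b else V_LOW for b in bits]

def decode(voltages, threshold=0.5):
    """Recover bits by comparing each measured voltage to the threshold."""
    return [1 if v > threshold else 0 for v in voltages]

bits = [1, 0, 1, 1, 0]
assert decode(encode(bits)) == bits
```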

The average adult human is about 60 percent water by weight. And though pure water is a terrible electrical conductor, water filled with conductive particles like electrolytes and salts conducts electricity better. Your body is filled with a watery solution called the interstitial fluid that sits underneath your skin and around the cells of your body. The interstitial fluid is responsible for carrying nutrients from the bloodstream to the body's cells, and is filled with proteins, salts, sugars, hormones, neurotransmitters, and all sorts of other molecules that help keep the body going. Because interstitial fluid is everywhere in the body, it allows us to establish a circuit among two or more communicating devices sitting pretty much anywhere on the body.

Imagine someone with diabetes who uses an insulin pump and a separate monitor on the abdomen to manage blood glucose levels. Suppose they want their smartwatch, among its many other functions, to display current glucose levels and the operational status of the pump. Traditionally, these devices would have to be connected wirelessly, which would make it theoretically possible for anyone to grab a copy of the user's personal data. Or worse, potentially attack the pump itself. Today, many medical devices still aren't encrypted, and even for those that are, encryption is not a guarantee of security.

Here's how it would work with a body channel instead. The pump, the monitor, and the smartwatch would each be outfitted with a small copper electrode on its back, in direct contact with the skin. Each device also has a second electrode not in contact with the skin that functions as a sort of floating ground, which is a local electrical ground that is not directly connected with Earth's ground. When the monitor takes a blood glucose measurement, it will need to send that data to both the pump, in case the insulin level needs to be adjusted, and to the smartwatch, so that the individual can see the level. The smartwatch can also store data for long-term monitoring, or encrypt it and send it to the user's computer, or their doctor's computer, for remote storage and analysis.

The monitor communicates its glucose measurements by encoding the data into a series of voltage values. Then, it transmits these values by applying a voltage between its two copper electrodes—the one touching the human body, and the one acting as a floating ground.

This applied voltage very slightly changes the potential of the entire body with respect to Earth's ground. This tiny change in potential between the body and Earth's ground is just a fraction of the potential difference between the monitor's two electrodes. But it's enough to be picked up, as an even smaller fraction after crossing the body, by the other devices on the body. Because both the pump on the waist and the smartwatch on the wrist are on the body, they can detect this change in potential across their own two electrodes—both on-body and floating. The pump and the smartwatch then convert these potential measurements back into data. All without the actual signal ever traveling beyond the skin.
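
Conceptually, the receive side works like the hedged sketch below: one transmitter modulates the body's potential, and every device touching the skin sees a strongly attenuated copy that it recovers by thresholding. The coupling factors are invented placeholders, not measured body-channel values.

```python
# Hypothetical illustration of one monitor broadcasting over the shared body
# channel to two receivers (pump and smartwatch). The coupling fractions are
# made up for the sketch; real values depend on electrode size and placement.

TX_SWING_V = 1.0                            # monitor's electrode-to-electrode swing
COUPLING = {"pump": 2e-3, "watch": 1e-3}    # assumed fraction of the swing each device sees

def transmit(bits):
    return [b * TX_SWING_V for b in bits]

def receive(device, body_potentials):
    seen = [v * COUPLING[device] for v in body_potentials]   # attenuated copy on the body
    threshold = 0.5 * TX_SWING_V * COUPLING[device]          # halfway between 0 and full swing
    return [1 if v > threshold else 0 for v in seen]

glucose_bits = [1, 0, 0, 1, 1]
tx = transmit(glucose_bits)
assert receive("pump", tx) == glucose_bits
assert receive("watch", tx) == glucose_bits
```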

One of the biggest challenges in realizing this method of body communication is selecting the best frequencies for the electrical signals. The frequencies we're considering here are much lower than those used for RF wireless communications, so their wavelengths are much longer.

Staying Grounded: David Yang [right], a Ph.D. student of Shreyas Sen [left], wears a transmitter on his right wrist. The transmitter sends a code through his body to unlock a computer connected to the receiver in his left hand. On the left is a close up of the wrist wearable and another prototype of the receiver that has been miniaturized into a USB insert.

The reason selecting a frequency is a challenge is that there is a range of frequencies at which the human body itself can become an antenna. An ordinary radio antenna creates a signal when an alternating current causes the electrons in its material to oscillate and create electromagnetic waves. The frequency of the transmitted waves depends on the frequency of the alternating current fed into the antenna. Likewise, an alternating current at certain frequencies applied to the human body will cause the body to radiate a signal. This signal, while weak, is still strong enough to be picked up with the right equipment and from some distance away. And if the body is acting as an antenna, it can also pick up unwanted signals from elsewhere that might interfere with wearables' and implants' ability to talk with one another.

For the same reason you don't want to use technologies like Bluetooth, you want to keep electrical signals confined to the body and not accidentally radiating from or to it. So you have to avoid electrical frequencies at which the human body becomes an antenna, which are in the range of 10 to 100 megahertz. Above that are the wireless bands, and we've already mentioned the problems there. The upshot is that you need to use frequencies in the range of 0.1 to 10 MHz, in which signals will stay confined to the body.
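
As a quick illustration of that band-selection rule, the sketch below classifies a candidate carrier frequency using the approximate cutoffs from the text; the boundaries are rough figures, not hard limits from the authors' design.

```python
# Rough band-selection rule from the text (frequencies in Hz).

def band_for(freq_hz):
    if 0.1e6 <= freq_hz <= 10e6:
        return "electro-quasistatic: signal stays largely confined to the body"
    if 10e6 < freq_hz <= 100e6:
        return "avoid: the body itself starts to act as an antenna"
    if freq_hz > 100e6:
        return "conventional wireless bands (Bluetooth, Wi-Fi, etc.)"
    return "below ~0.1 MHz: not used here"

print(band_for(1e6))    # inside the 0.1-10 MHz body-channel band
print(band_for(50e6))   # inside the 10-100 MHz body-as-antenna region
```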

Earlier attempts to use the human body to communicate have usually shied away from these lower frequencies because the body is typically high loss at low frequencies. In other words, signals at these lower frequencies require more power to guarantee they reach their destination. Without a significant boost in power, a signal from a glucose monitor on the abdomen might be unreadable by the time it reaches a smartwatch on the wrist. These previous efforts were high loss because they focused on sending direct electrical signals, rather than information encoded in potential changes. We've found that the parasitic capacitance between a device and the body is key to creating a working channel.

Capacitance refers to the ability of an object to store electrical charge. Parasitic capacitance is unwanted capacitance that arises unintentionally between any two objects: between two conductors in close proximity on a circuit board, for example, or between a person's hand and their phone. Typically, parasitic capacitance is a nuisance, although it also enables certain applications, like touch screens.

Astute readers may have picked up that we haven't mentioned one key aspect of circuits before now: A circuit needs to be a closed loop for electrical communication to be possible. Up until now, we've restricted our discussion to the forward path, meaning the part of the circuit from the transmitting electrode to the receiving electrode. But we need a path back. We have one thanks to parasitic capacitance between the floating ground electrodes on the devices and Earth's ground.

Here's how to picture the circuit we're using. First, imagine two circuit loops. The first loop begins with the transmitting device, at the electrode touching the skin. The circuit then goes through the body, down through the feet to the actual ground, and then back up through the air to the other (floating) electrode on the transmitting device. We should note here that this is not a loop through which direct current can flow. But because parasitic capacitances exist between any two objects, such as your feet and your shoes, and your shoes and the ground, a small alternating current can exist.

The second loop, in a similar fashion, begins with the receiving device, at its electrode that is touching the skin. It then goes through the body—both loops share this segment—to the ground, and back through the air to the floating-ground electrode on the receiving device.

The key here is to understand that the circuit loops are important not because we have to push a current through them necessarily, but because we need a closed path of capacitors. In a circuit, if the voltage changes across one capacitor—for example, the two electrodes of the transmitting device—it creates a slight alternating current in the loop. The other capacitors, meaning both the body and the air, “see" this current and, because of their impedances, or resistances to the current, their voltages change as well.
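
In circuit terms, that reasoning boils down to the impedance of a capacitor and a capacitive voltage divider. The symbols below are generic placeholders rather than values from the Purdue work.

```latex
% Impedance of an ideal capacitor C at frequency f:
Z_C = \frac{1}{j\,2\pi f C}
% For two capacitances in series around the loop (C_1 and C_2), a change in
% voltage across one divides capacitively; the fraction appearing across C_2 is
\frac{V_2}{V_\mathrm{tx}} = \frac{Z_2}{Z_1 + Z_2} = \frac{C_1}{C_1 + C_2}
```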

Remember that the circuit loop with the transmitting device and the one with the receiving device share the body as a segment of their respective loops. Because they share that segment, the receiving device also responds to the slight change in the body's voltage. The two electrodes making up the receiving device's capacitor detect the body's changing voltage and allow that measurement to be decoded as meaningful information.

Using the body as the communications channel for IoB devices avoids the fundamental problem with radiative technologies like Wi-Fi and Bluetooth by keeping the electrical signals under the skin.

Take an insulin pump that needs to send blood-glucose levels to a smartwatch. The technique creates two circuit loops in the body. The first loop [blue] starts with the pump. Two electrodes, one touching the body, the other floating, create an electric-potential difference between them. The voltage change also slightly modulates the body's potential. This causes a slight alternating current to flow from the pump, through the air to the ground, and up through the body.

The change in the body's potential causes a similar voltage change between the electrodes of the smartwatch, which also has one touching the body and one floating. This creates a second small alternating current [yellow] to flow in a similar loop. These two closed circuits make it possible to send electrical signals between the devices.

We have found that we want any IoB device's capacitor to present a high impedance. If this is the case, relatively high voltages created by the transmitting device will result in extremely low currents in the body itself. Obviously, this makes sense from a safety perspective: We don't want to run high current through the body, after all. But it also makes the communications channel low loss. That's because a high-impedance capacitor will be particularly sensitive to minor changes in current. The upshot is that we can keep the current low (and safe) and still get clear voltage measurements at the receiving device. We've found that our technique results in a reduction in loss of two orders of magnitude compared with previous attempts to create a wireless channel in the body, which relied on sending an electrical signal via current directly through the body.
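
A back-of-the-envelope calculation shows how a high impedance keeps the current tiny. The capacitance and drive voltage below are illustrative guesses, not measured parameters from the authors' prototypes.

```python
# Rough check of the "high impedance keeps the current tiny" point.
import math

f = 1e6        # carrier frequency in Hz (within the 0.1-10 MHz band above)
C = 1e-12      # assumed return-path capacitance through the air, ~1 picofarad
V = 1.0        # transmit swing in volts (illustrative)

Z = 1 / (2 * math.pi * f * C)   # impedance of that capacitance, in ohms
I = V / Z                       # resulting current

print(f"impedance ~ {Z/1e3:.0f} kOhm, current ~ {I*1e6:.1f} microamps")
# ~159 kOhm and ~6.3 microamps: far too small for the body to notice.
```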

Our method for turning the human body into a communications channel shifts the distance at which signals can be intercepted from the 5- to 10-meter range of Bluetooth and similar signals to below 15 centimeters. In other words, we've reduced the distance over which an attacker can both intercept and interfere with signals by two orders of magnitude. With our method, an attacker would need to be so close to the target that there's no way to hide.

Not only does our method offer more privacy and security for anyone with a medical implant or device, but as a bonus, the communications are far more energy efficient as well. Because we've developed a system that is low loss at low frequencies, we can send information between devices using far less power. Our method requires less than 10 picojoules per transferred bit. For reference, that's about 0.01 percent of the energy required by Bluetooth. Using 256-bit encryption, it drew 415 nanowatts of power to transmit 1 kilobit per second, which is more than three orders of magnitude below Bluetooth (which draws between 1 and 10 milliwatts).
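
Those figures can be related through simple unit arithmetic. The snippet below assumes the 415-nanowatt number covers the whole encrypted link running at 1 kilobit per second, and takes the low end of the Bluetooth power range for comparison.

```python
# Straightforward unit arithmetic on the figures quoted above.
p_body_w = 415e-9        # 415 nW, with 256-bit encryption, at 1 kb/s
bitrate_bps = 1e3        # 1 kilobit per second
print(f"{p_body_w / bitrate_bps * 1e12:.0f} pJ per encrypted bit")   # ~415 pJ/bit for the full link

p_bluetooth_w = 1e-3     # low end of the 1-10 mW range cited for Bluetooth
print(f"power ratio ~{p_bluetooth_w / p_body_w:.0f}x")               # ~2400x: more than three orders of magnitude
```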

Medical devices like pacemakers and insulin pumps have been around for decades. Bluetooth earbuds and smartwatches may be newer, but neither life-saving medical equipment nor consumer tech is leaving our bodies any time soon. It only makes sense to make both categories of devices as secure as possible. Data is always most vulnerable to a malicious attack when it is moving from one point to another, and our IoB communication technique can finally close the loop on keeping personal data from leaving your body.

This article appears in the December 2020 print issue as “To Safeguard Sensitive Data, Turn Flesh and Tissue Into a Secure Wireless Channel."

Shreyas Sen is an associate professor of electrical and computer engineering at Purdue University. He is a Senior Member of the IEEE. Shovan Maity and Debayan Das are graduate students of Sen at Purdue University.

The company’s Earth-2 supercomputer is taking on climate change

Kathy Pretz is editor in chief for The Institute, which covers all aspects of IEEE, its members, and the technology they're involved in. She has a bachelor's degree in applied communication from Rider University, in Lawrenceville, N.J., and holds a master's degree in corporate and public communication from Monmouth University, in West Long Branch, N.J.

Nvidia’s CTO Michael Kagan is an IEEE senior member.

In 2019 Michael Kagan was leading the development of accelerated networking technologies as chief technology officer at Mellanox Technologies, which he and eight colleagues had founded two decades earlier. Then in April 2020 Nvidia acquired the company for US $7 billion, and Kagan took over as CTO of that tech goliath—his dream job.

Nvidia is headquartered in Santa Clara, Calif., but Kagan works out of the company’s office in Israel.

At Mellanox, based in Yokneam Illit, Israel, Kagan had overseen the development of high-performance networking for computing and storage in cloud data centers. The company made networking equipment such as adapters, cables, and high-performance switches, as well as a new type of processor, the DPU. The company’s high-speed InfiniBand products can be found in most of the world’s fastest supercomputers, and its high-speed Ethernet products are in most cloud data centers, Kagan says.

The IEEE senior member’s work is now focused on integrating a wealth of Nvidia technologies to build accelerated computing platforms, whose foundation is three chips: the GPU, the CPU, and the DPU, or data-processing unit. The DPU offloads, accelerates, and isolates data center infrastructure workloads, easing the load on the CPU and GPU.

“At Mellanox we worked on the data center interconnect, but at Nvidia we are connecting state-of-the-art computing to become a single unit of computing: the data center,” Kagan says. Interconnects are used to link multiple servers and combine the entire data center into one, giant computing unit.

“I have access and an open door to Nvidia technologies,” he says. “That’s what makes my life exciting and interesting. We are building the computing of the future.”

Kagan was born in St. Petersburg, Russia—then known as Leningrad. After he graduated high school in 1975, his family moved to Israel. As with many budding engineers, his curiosity led him to disassemble and reassemble things to figure out how they worked. And, with many engineers in the family, he says, pursuing an engineering career was an easy decision.

He attended the Technion, Israel’s Institute of Technology, because “it was one of the best engineering universities in the world,” he says. “The reason I picked electrical engineering is because it was considered to be the best faculty in the Technion.”

Kagan graduated in 1980 with a bachelor’s degree in electrical engineering. He joined Intel in Haifa, Israel, in 1983 as a design engineer and eventually relocated to the company’s offices in Hillsboro, Ore., where he worked on the 80387 floating-point coprocessor. A year later, after returning to Israel, Kagan served as an architect of the i860 XP vector processor and then led and managed the design of the Pentium MMX microprocessor.

During his 16 years at Intel, he worked his way up to chief architect. In 1999 he was preparing to move his family to California, where he would lead a high-profile project for the company. Then a former coworker at Intel, Eyal Waldman, asked Kagan to join him and five other acquaintances to form Mellanox.

Alma mater: Technion, Israel’s Institute of Technology, Haifa

Kagan had been turning down offers to join startups nearly every week, he recalls, but Mellanox, with its team of cofounders and vision, drew him in. He says he saw it as a “compelling adventure, an opportunity to build a company with a culture based on the core values I grew up on: excellence, teamwork, and commitment.”

During his more than 21 years there, he said, he had no regrets.

“It was one of the greatest decisions I’ve ever made,” he says. “It ended up benefiting all aspects of my life: professionally, financially—everything.”

InfiniBand, the startup’s breakout product, was designed for what today is known as cloud computing, Kagan says.

“We took the goodies of InfiniBand and bolted them on top of the standard Ethernet,” he says. “As a result, we became the vendor of the most advanced network for high-performance computing. More than half the machines on the TOP500 list of supercomputers use the Mellanox interconnect, now the Nvidia interconnect.

“Most of the cloud providers, such as Facebook, Azure, and Alibaba, use Nvidia’s networking and compute technologies. No matter what you do on the Internet, you’re most likely running through the chip that we designed.”

Kagan says the partnership between Mellanox and Nvidia was “natural,” as the two companies had been doing business together for nearly a decade.

“We delivered quite a few innovative solutions as independent companies,” he says.

One of Kagan's key priorities is Nvidia’s BlueField DPU. The data center infrastructure on a chip offloads, accelerates, and isolates a variety of networking, storage, and security services.

As CTO of Nvidia for the past two years, Kagan has shifted his focus from pure networking to the integration of multiple Nvidia technologies including building BlueField data-processing units and the Omniverse real-time graphics collaboration platform.

He says Nvidia’s vision for the data center of the future is based on its three chips: CPU, DPU, and GPU.

“These three pillars are connected with a very efficient and high-performance network that was originally developed at Mellanox and is being further developed at Nvidia,” he says.

Development of the BlueField DPUs is now a key priority for Nvidia. It is a data center infrastructure on a chip, optimized for high-performance computing. It also offloads, accelerates, and isolates a variety of networking, storage, and security services.

“In the data center, you have no control over who your clients are,” Kagan says. “It may very well happen that a client is a bad guy who wants to penetrate his neighbors’ or your infrastructure. You’re better off isolating yourself and other customers from each other by having a segregated or different computing platform run the operating system, which is basically the infrastructure management, the resource management, and the provisioning.”

Kagan is particularly excited about the Omniverse, a new Nvidia product that uses Pixar’s Universal Scene Description software for creating virtual worlds—what has become known as the metaverse. Kagan describes the 3D platform as “creating a world by collecting data and making a physically accurate simulation of the world.”

Car manufacturers are using the Omniverse to test-drive autonomous vehicles. Instead of physically driving a car on different types of roads under various conditions, data about the virtual world can be generated to train the AI models.

“You can create situations that the car has to handle in the real world but that you don’t want it to meet in the real world, like a car crash,” Kagan says. “You don’t want to crash the car to train the model, but you do need to have the model be able to handle hazardous conditions on the road.”

Kagan joined IEEE in 1997. He says membership gives him access to information about technical topics that would otherwise be challenging to obtain.

“I enjoy this type of federated learning and being exposed to new things,” he says.

He adds that he likes connecting with members who are working on similar projects, because he always learns something new.

“Being connected to these people from more diverse communities helps a lot,” he says. “It inspires you to do your job in a different way.”

The Omniverse platform can generate millions of kilometers of synthetic driving data orders of magnitude faster than actually driving a car could.

Nvidia is investing heavily in technology for self-driving cars, Kagan says.

The company is also building what it calls the most powerful AI supercomputer for climate science: Earth-2, a digital twin of the planet. Earth-2 is designed to continuously run models to predict climate and weather events at both the regional and global levels.

Kagan says the climate modeling technology will enable people to try mitigation techniques for global warming and see what their impact is likely to be in 50 years.

The company is also working closely with the health care industry to develop AI-based technologies. Its supercomputers are helping to identify cancer by generating synthetic data to enable researchers to train their models to better identify tumors. Its AI and accelerated computing products also assist with drug discovery and genome research, Kagan says.

“We are actually moving forward at a fairly nice pace,” he says. “But the thing is that you always need to reinvent yourself and do the new thing faster and better, and basically win with what you have and not look for infinite resources. This is what commitment means.”

Standard handsets on Earth, in some locations, will soon connect directly to satellites for remote roaming

Lucas Laursen is a journalist covering global development by way of science and technology with special interest in energy and agriculture. He has lived in and reported from the United States, United Kingdom, Switzerland, and Mexico.

Lynk Tower 1 launched in April 2022, deploying the world’s first commercial cell tower in space.

The next generation of cellphone networks won’t just be 5G or 6G—they will be zero g. In April, Lynk Global launched the first direct-to-mobile commercial satellite, and on 15 August a competitor, AST SpaceMobile, confirmed plans to launch an experimental direct-to-mobile satellite of its own in mid-September. Inmarsat and other companies are working on their own low Earth orbit (LEO) cellular solutions as launch prices drop, satellite fabrication methods improve, and telecoms engineers push new network capabilities.

LEO satellite systems such as SpaceX’s Starlink and Amazon’s Kuiper envision huge constellations of satellites. However, the U.S. Federal Communications Commission just rejected SpaceX’s application for some of the US $9 billion federal rural broadband fund—in part because the Starlink system requires a $600 ground station. Space-based cell service would not require special equipment, making it a potential candidate for rural broadband funds if companies can develop solutions to the many challenges that face satellite-based smartphone service.

“The main challenge is the link budget,” says electrical engineer Symeon Chatzinotas of the University of Luxembourg, referring to the amount of power required to transmit and receive data between satellites and connected devices. “Sending signals to smartphones outdoors could be feasible by using low Earth orbit satellites with sizable antennas in the sky. However, receiving info would be even more challenging since the smartphone antennas usually disperse their energy in all directions.”

The typical distance from a phone to an LEO satellite might be 500 kilometers, at least two orders of magnitude more than typical signal-transmission distances in urban settings. Because free-space spreading grows with the square of distance, the phone's power would be dispersed at least 10,000 times more before reaching the satellite, and reception would be further complicated by the phone's orientation. It is unlikely that a satellite-smartphone connection would work well when the handset is inside a building, for example.
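
For a rough sense of the numbers, the sketch below compares free-space spreading loss for an urban-scale link and a 500-kilometer satellite link at an assumed 2-gigahertz carrier; the distances and frequency are illustrative, not any operator's actual parameters.

```python
# Rough feel for the link budget: free-space path loss grows with the square
# of distance, so stretching an urban link out to 500 km costs tens of decibels.
import math

def spreading_loss_db(d_m, f_hz):
    """Free-space path loss in dB for distance d_m (meters) at frequency f_hz."""
    c = 3e8
    return 20 * math.log10(4 * math.pi * d_m * f_hz / c)

f = 2e9                                  # ~2 GHz, a typical cellular band (illustrative)
urban = spreading_loss_db(5e3, f)        # ~5 km macrocell link
sat = spreading_loss_db(500e3, f)        # ~500 km to a LEO satellite
print(f"extra loss: {sat - urban:.0f} dB")   # ~40 dB, i.e. ~10,000x weaker
```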

Lynk Global’s initial offering, which it predicts will be available in late 2022, is narrowband—meaning limited voice calls, texting, and Internet of Things (IoT) traffic. That might not allow plutocrats to make 4K video calls from their ocean-faring yachts, but it would be enough for ship insurance companies or rescue services to remain in contact with vessels in places where they couldn’t be reached before, using off-the-shelf cellular devices. AST SpaceMobile, by contrast, is aiming for 4G and 5G broadband service for mobile phones.

AST satellites will use a phased-array antenna, which consists of many antennas fanned out around the satellite. Each portion of the antenna will transmit within a well-defined cone terminating at the Earth’s surface; that will be the space-to-Earth equivalent of a cell originating from a single ground base station. The company plans for an initial fleet of 20 satellites to cover the equator and help fund the launch of subsequent satellites providing more global coverage.

The size of the coverage zone on the ground should exceed the limited size of those created by Alphabet’s failed balloon-based Project Loon. Broader coverage areas should allow AST to serve more potential customers with the same number of antennas. The low Earth orbit AST is experimenting with yields round-trip signal travel times of around 25 milliseconds or less, an order of magnitude faster than is the case for higher-orbit geostationary satellites that have provided satellite telephony until now.
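
The latency gap follows directly from propagation distance. This sketch uses round-number altitudes rather than any specific satellite's orbit.

```python
# One-way propagation delay is just distance over the speed of light; the
# altitudes below are round numbers, not the orbits of particular satellites.
c = 3e8               # speed of light, m/s

leo_m = 700e3         # a typical LEO altitude, satellite straight overhead
geo_m = 35_786e3      # geostationary altitude

for name, d in [("LEO", leo_m), ("GEO", geo_m)]:
    round_trip_ms = 2 * d / c * 1e3
    print(f"{name}: ~{round_trip_ms:.0f} ms round trip")
# LEO: ~5 ms, GEO: ~239 ms -- roughly the order-of-magnitude gap cited above.
```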

Plenty of behind-the-scenes technical work remains. The relatively high speed of LEO satellites will also cause a Doppler shift in the signals for which the network will have to compensate, according to a recent review in IEEE Access. New protocols for handoffs between satellites and terrestrial towers will also have to be created so that an active call can be carried from one cell to the next.
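
To see the scale of that Doppler shift, here is a rough calculation assuming a typical LEO orbital speed and a 2-gigahertz carrier; both numbers are illustrative.

```python
# Ballpark of the Doppler shift the network must compensate for: a LEO
# satellite moves at roughly 7.5 km/s, so at cellular frequencies the
# worst-case shift is tens of kilohertz.
v = 7.5e3     # orbital speed of a LEO satellite, m/s (approximate)
c = 3e8       # speed of light, m/s
f = 2e9       # ~2 GHz carrier (illustrative)

max_shift_hz = v / c * f   # satellite moving directly toward or away from the phone
print(f"max Doppler shift ~ {max_shift_hz/1e3:.0f} kHz")   # ~50 kHz
```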

The international telecoms standards group 3GPP began providing guidelines for so-called nonterrestrial networks in March in the 17th iteration of its cellular standards. “Nonterrestrial networks” refers not just to LEO satellites but also high-altitude platforms such as drones or balloons. Nonterrestrial networks will need further updates to 3GPP’s standards to accommodate their new network architecture, such as the longer distances between cell base stations and devices.

For example, Stratospheric Platforms earlier this year tested a drone-based network prototype that would fly at altitudes greater than 18,000 meters. Its behavior as part of a 5G network will differ from that of a Lynk Global or AST satellite.

“From a nerdy engineering perspective, what’s happening is that network architectures are diverging. On the one hand, small cells are replacing Wi-Fi. On the other hand [telecom operators] are going to satellite-based systems with very wide coverage. In the middle, traditional macrocells, which are kind of difficult economically, are being squeezed,” says Derek Long, head of telecommunications at Cambridge Consultants. The company has advised Stratospheric Platforms and other companies working on nonterrestrial networks.

If telecom operators succeed, users won’t even notice their space-age smartphone networks.

“When you buy a phone, you expect it to work. Not just where someone says it will work, but everywhere. This is a step toward making that a possibility,” Long says.
