MIT team races to fill Covid-19-related ventilator shortage

It was clear early on in the unfolding Covid-19 pandemic that a critical need in the coming weeks and months would be for ventilators, the potentially life-saving devices that keep air flowing into a patient whose ability to breathe is failing.

Seeing a potential shortfall of hundreds of thousands of such units, professor of mechanical engineering Alex Slocum Sr. and other engineers at MIT swung into action, rapidly pulling together a team of volunteers with expertise in mechanical design, electronics, and controls, and a team of doctors with clinical experience in treating respiratory conditions. They started working together nonstop to develop an inexpensive alternative and share what they learned along the way. The goal was a design that could be produced quickly enough, potentially worldwide, to make a real difference in the immediate crisis.

In a very short time, they succeeded.

Just four weeks after the team convened, production of the first devices based directly on its work has begun in New York City. A group including 10XBeta, Boyce Technologies, and Newlab is producing a version called Spiro Wave, in close collaboration with the MIT team. The consortium expects to quickly deliver hundreds of units to meet the immediate needs of hospitals in New York and, eventually, other hospitals around the country.

Meanwhile, the team, called MIT Emergency Ventilator, has continued its research to develop the design further. The next iteration will be more compact, have a slightly different drive system, and add a key respiratory function. The overarching goal remains safety and straightforward functionality and fabrication. 10XBeta, in both New York and Johannesburg, along with Vecna Technologies and NN Life Sciences in the Boston area, are participating in this effort. 10XBeta was founded by MIT alumnus Marcel Botha SM ’06.

One version of the MIT Emergency Ventilator team’s design undergoes testing in the lab. Courtesy of the MIT Emergency Ventilator Team

A complex design challenge

Alexander Slocum Jr. SB ’08, SM ’10, PhD ’13, a mechanical engineer who is now a surgical resident at the Medical College of Wisconsin, worked closely with his father, Slocum Sr., and MIT research scientist Nevan Hanumara MS ’06, PhD ’12 to help lead the initial ramp-up.

“The numbers are frightening, to put it bluntly,” Slocum Jr. says. “This project started around the time of news reports from Italy describing ventilators being rationed due to shortages, and available data at that time suggested about 10 percent of Covid patients would require an ICU.” One of his first tasks was to estimate the potential ventilator shortage, using resources like the CDC’s Pandemic Response Plan and literature on critical care resource utilization. “We estimated a shortage of around 100,000 to 200,000 ventilators was possible by April or May,” he says.

Hanumara, who is one of the project leads for the Emergency Ventilator team, says the team intends to offer open-source guidelines, rather than detailed plans or kits, that will serve as resources to enable skilled teams around the country and world — such as hospital-based engineering groups, biomedical device manufacturing companies, and industry groups — to develop their own specific versions, taking into account local supply chains.

“There’s a reason we don’t have a single exact plan [on the website],” Hanumara says. “We have information and reference designs, because this isn’t something a home hobbyist should be making. We want to emphasize that it’s not trivial to create a system that can provide ventilation safely.”

“We saw all these designs being posted online, which is awesome that so many people wanted to help,” says Slocum Jr. “We thought the best first step would be to identify the minimum clinical functional requirements for safe ventilation, compare that to reported methods for managing ventilated patients with Covid, and use that to help us choose a design.”

The principle behind the existing device is certainly simple enough: Take an emergency resuscitator bag (Ambu is a common brand), which hospitals already have in large numbers and which is designed to be squeezed by hand. Automating the squeezing — using a pair of curved paddles driven by a motor — would allow rapid scale-up. But there’s a lot more to it, Hanumara says: “The controls are really tricky, and they have required many iterations as our understanding of the clinical and safety challenge grew.”

Slocum Jr. adds, “Covid patients often require ventilation for a week or more, and in longer cases that would mean about a million breaths. The paddles are specifically designed to encourage rolling contact in order to minimize wear on the bag.”

The starting point was a design developed a decade ago as a student team project in MIT class 2.75 (Medical Device Design), taught by Slocum Sr. and Hanumara. That team’s paper gave the new project a significant head start in tackling the design problem, allowing rapid progress in close consultation with clinical practitioners.

That integral involvement of clinicians “is one key difference between us and a lot of the others” working on this engineering problem, says Kimberly Jung, an MIT master’s student in mechanical engineering.

Jung — who previously served five years in the U.S. Army, earned an MBA at Harvard University, and started a spice business that is currently the largest employer of women in Afghanistan — has been acting as the team’s executive officer as well as part of the engineering team. She says, “There are a lot of individuals and many small companies who are trying to make solutions for low-cost ventilators. The problem is that they just haven’t adhered to clinical guidelines, such as the tidal volume, inspiration-to-expiration ratio, breaths-per-minute rate, maximum pressures, and key monitoring for safety. Developing these clinical requirements and translating them into engineering design requirements takes a lot of time and effort. This is a year-long research and development process that has been condensed into several weeks.”
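
As a rough illustration of what translating such clinical guidelines into engineering requirements can look like, the sketch below checks proposed ventilator settings against hard safety limits. All ranges here are hypothetical placeholders invented for illustration, not the team's published specifications:

```python
# Illustrative sketch: validating ventilation settings against clinical limits.
# The ranges below are hypothetical placeholders, NOT the MIT team's
# published values; real limits must come from clinicians.

SAFE_RANGES = {
    "tidal_volume_ml": (200, 800),        # volume delivered per breath
    "breaths_per_minute": (8, 40),        # respiratory rate
    "ie_ratio": (1/4, 1/1),               # inspiration:expiration ratio
    "peak_pressure_cm_h2o": (0, 40),      # must never be exceeded
}

def validate_settings(settings: dict) -> list[str]:
    """Return a list of violations; an empty list means all settings are in range."""
    violations = []
    for name, (lo, hi) in SAFE_RANGES.items():
        value = settings.get(name)
        if value is None:
            violations.append(f"{name}: missing")
        elif not (lo <= value <= hi):
            violations.append(f"{name}: {value} outside [{lo}, {hi}]")
    return violations

alarms = validate_settings({
    "tidal_volume_ml": 450,
    "breaths_per_minute": 18,
    "ie_ratio": 1/2,
    "peak_pressure_cm_h2o": 55,   # too high: triggers an alarm
})
print(alarms)  # ['peak_pressure_cm_h2o: 55 outside [0, 40]']
```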

A team assembles

Others got pulled into the team as the project ramped up. Coby Unger, an industrial designer and instructor at the MIT Hobby Shop, started building the first prototypes in the machine shop. Jung recruited her classmate and neighbor, Shakti Shaligram SM ’19, to help with machining, and also brought in Michael Detienne, an electrical engineer and member of the MITERS makerspace. Two students at the MIT Maker Workshop helped with initial fabrication, using stock borrowed from the MIT Laboratory for Manufacturing and Productivity shop. Looking for pressure sensors, Hanumara reached out to David Hagan PhD ’20, CEO of an MIT spinoff company called QuantAQ, and he joined the team. The website was rapidly deployed by Eric Norman, a communications expert who had worked with Hanumara on another MIT project.

Realizing that feedback and control systems were crucial to the device’s safe operation, the team decided early on that they needed help from specialists in that area. Daniela Rus, head of MIT’s Computer Science and Artificial Intelligence Laboratory, joined the team and took responsibility for the control system along with several members of her research group. Rus also suggested that research scientist Murad Abu-Khalaf and graduate students Teddy Ort and Brandon Araki join the volunteer team. They eagerly accepted the invitation. Ort’s roommate, Amado Antonini SM ’18, also joined the team to assist with motor controls.

Meanwhile, alumnus Albert Kwon SB ’08, HST ’13, an anesthesiologist at Westchester Medical Center and assistant professor of anesthesiology at New York Medical College, was recruited by Slocum Jr. early on. Kwon was granted leave from his job at Westchester to devote time to the project, providing clinical guidance on the kinds of controls and safety systems needed for the device to work safely. “Westchester Medical Center gave him up, which is very special, and he’s been working to translate the technical to clinical, and explain the scenarios that fit with a stripped-down system like this,” Hanumara says. Jay Connor, a surgeon at Mt. Auburn Hospital and part of the Medical Device Design course teaching team; Christoph Nabzdyk, a cardiothoracic anesthesiologist and critical care physician at Mayo Clinic and long-time colleague of Kwon; and Dirk Varelmann, an anesthesiologist at Brigham and Women’s Hospital, are among the many clinicians who have advised the MIT Emergency Ventilator team.

A spark to help others fill the gap

“While our design cannot replace a full-featured ventilator,” Hanumara stresses, “it does provide key ventilation functions that will allow health care facilities under pressure to better ration their ICU ventilators and human resources, in a bad scenario.”

In a way, he says, “we’re turning the clock back, going back to the core parameters of ventilation.” Before today’s electronic sensors and controls were available, “doctors were trained to adjust the ventilators based on looking directly at physiological responses of the patient. So, we know that’s doable. … The patient himself is a reasonable sensor.”

While the federal government has now established contracts with large manufacturing companies to start producing ventilators to help meet the urgent need, that process will take time, Jung says, leaving a significant gap in the meantime. “The fastest these large manufacturers can spin up is about two months,” she says.

“This need will probably be even more pronounced in the emerging markets,” Hanumara adds.

The team doesn’t plan to directly launch their own production, or even to provide a single, detailed set of plans. “Our goal is to put out a really solid reference design,” Hanumara says, “and to a limited extent help big groups scale it. We have shared great learnings with our local industry collaborators.” It will be up to local teams to adapt the design to the materials and parts that they can reliably obtain and to the particular needs of their hospitals.

He says, “Your mechanical and electrical engineering team will have to inquire as to what’s in their supply chain and what fabrication methods they have easily available to them, and adapt the design. The base designs are intended to be really adaptable, but they may require modifications. What motors can they source? What motor drivers and controllers does the electrical team need to look at? What level of controls and safeties do their clinicians require for their patient population, and how should this be reflected in the code? So, we can’t put out an exact kit.”

The hope is to provide a spark to start teams everywhere to further develop and adapt the concept, Hanumara says. “Provided clinical safety is shown, we’ll probably see many of these around the world, with some shared DNA from us, as well as local flavors. And I think that will be beautiful, because it will mean that people all over are working hard to help their communities.”

“I’m super proud of the team,” Jung says, “for how each of us has stepped up to the plate and stuck with it despite the internal and external challenges. All of us have one mission in mind, which is to save lives, and that’s what has kept us together and turned us into a quirky MIT family.”

This article has been updated to reflect a change to the project’s name, from MIT E-Vent to MIT Emergency Ventilator.

Deploying more conversational chatbots

The comedian Bill Burr has said he refuses to call into automated customer service lines for fear that, years later on his death bed, all he’ll be able to think about are the moments he wasted dealing with chatbots.

Indeed, the frustrating experience of trying to complete even the most straightforward task through an automated customer service line is enough to make anyone question the purpose of life.

Now the startup Posh is trying to make conversations with chatbots more natural and less maddening. It’s accomplishing this with an artificial intelligence-powered system that uses “conversational memory” to help users complete tasks.

“We noticed bots in general would take what the user said at face value, without connecting the dots of what was said before in the conversation,” says Posh co-founder and CEO Karan Kashyap ’17, SM ’17. “If you think about your conversations with humans, especially in places like banks with tellers or in customer service, what you said in the past is very important, so we focused on making bots more humanlike by giving them the ability to remember historical information in a conversation.”

Posh’s chatbots are currently used by over a dozen credit unions across voice- and text-based channels. The well-defined customer base has allowed the company to train its system on only the most relevant data, improving performance.

The founders plan to gradually partner with companies in other sectors to gather industry-specific data and expand the use of their system without compromising performance. Down the line, Kashyap and Posh co-founder and CTO Matt McEachern ’17, SM ’18 plan to provide their chatbots as a platform for developers to build on.

The expansion plans should attract businesses in a variety of sectors: Kashyap says some credit unions have successfully resolved more than 90 percent of customer calls with Posh’s platform. The company’s expansion may also help alleviate the mind-numbing experience of calling into traditional customer service lines.

“When we deploy our telephone product, there’s no notion of ‘Press one or press two,’” Kashyap explains. “There’s no dial tone menu. We just say, ‘Welcome to whatever credit union, how can I help you today?’ In a few words, you let us know. We prompt users to describe their problems via natural speech instead of waiting for menu options to be read out.”

Bootstrapping better bots

Kashyap and McEachern became friends while pursuing their degrees in MIT’s Department of Electrical Engineering and Computer Science. They also worked together in the same research lab at the Computer Science and Artificial Intelligence Laboratory (CSAIL).

But their relationship quickly grew outside of MIT. In 2016, the students began software consulting, in part designing chatbots for companies to handle customer inquiries around medical devices, flight booking, personal fitness, and more. Kashyap says they used their time consulting to learn about and take business risks.

“That was a great learning experience, because we got real-world experience in designing these bots using the tools that were available,” Kashyap says. “We saw the market need for a bot platform and for better bot experiences.”

From the start, the founders executed a lean business strategy that made it clear the engineering undergrads were thinking long term. Upon graduation, the founders used their savings from consulting to fund Posh’s early operations, giving themselves salaries and even hiring some contacts from MIT.

It also helped that they were accepted into the delta v accelerator, run by the Martin Trust Center for MIT Entrepreneurship, which gave them a summer of guidance and free rent. Following delta v, Posh was accepted into the DCU Fintech Innovation Center, connecting it with one of the largest credit unions in the country and netting the company another 12 months of free rent.

With DCU serving as a pilot customer, the founders got a “crash course” in the credit union industry, Kashyap says. From there they began a calculated expansion to ensure they didn’t grow faster than Posh’s revenue allowed, freeing them from having to raise venture capital.

The disciplined growth strategy at times forced Posh to get creative. Last year, as the founders were looking to build out new features and grow their team, they secured about $1.5 million in prepayments from eight credit unions in exchange for discounts on their service along with a peer-driven profit-sharing incentive. In total, the company has raised $2.5 million using that strategy.

Now on more secure financial footing, the founders are poised to accelerate Posh’s growth.

Pushing the boundaries

Even referring to today’s automated messaging platforms as chatbots seems generous. Most of the ones on the market today are only designed to understand what a user is asking for, something known as intent recognition.

The result is that many of the virtual agents in our lives, from the robotic telecom operator to Amazon’s Alexa to the remote control, take directions but struggle to hold a conversation. Posh’s chatbots go beyond intent recognition, using what Kashyap calls context understanding to figure out what users are saying based on the history of the conversation. The founders have a patent pending for the approach.
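
The patent-pending details aren't public, but the general idea of carrying context across turns can be sketched. In this minimal example (the intents, keyword rules, and account balances are all invented for illustration), a follow-up question inherits the intent of the previous one:

```python
# Minimal sketch of "conversational memory": the bot reuses the last
# recognized intent when a follow-up turn supplies only a new slot value.
# A real system would use trained language models, not keyword rules.

class ContextualBot:
    def __init__(self):
        self.last_intent = None   # remembered across turns

    def respond(self, utterance: str) -> str:
        text = utterance.lower()
        if "balance" in text:
            self.last_intent = "check_balance"
            account = "savings" if "savings" in text else "checking"
            return f"Your {account} balance is $1,234."  # placeholder amount
        # A follow-up like "what about my savings?" names no intent, so
        # fall back to the intent remembered from the previous turn.
        if self.last_intent == "check_balance" and "savings" in text:
            return "Your savings balance is $5,678."     # placeholder amount
        return "Sorry, could you rephrase that?"

bot = ContextualBot()
print(bot.respond("What's my checking balance?"))
print(bot.respond("And what about my savings?"))  # resolved via memory
```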

“[Context understanding] allows us to more intelligently understand user inputs and handle things like changes in topics without having the bots break,” Kashyap says. “One of our biggest pet peeves was, in order to have a successful interaction with a bot, you as a user have to be very unnatural sometimes to convey what you want to convey or the bot won’t understand you.”

Kashyap says context understanding is a lot easier to accomplish when designing bots for specific industries. That’s why Posh’s founders decided to start by focusing on credit unions.

“The platforms on the market today are almost spreading themselves too thin to make a deep impact in a particular vertical,” Kashyap says. “If you have banks and telcos and health care companies all using the same [chatbot] service, it’s as if they’re all sharing the same customer service rep. It’s difficult to have one person trained across all of these domains meaningfully.”

To onboard a new credit union, Posh uses the customer’s conversational data to train its deep learning model.

“The bots continue to train even after they go live and have actual conversations,” Kashyap says. “We’re always improving it; I don’t think we’ll ever deploy a bot and say it’s done.”

Customers can use Posh’s bots for online chats, voice calls, SMS messaging, and through third-party channels like Slack, WhatsApp, and Amazon Echo. Posh also offers an analytics platform to help customers analyze what users are calling about.

For now, Kashyap says he’s focused on quadrupling the number of credit unions using Posh over the next year. Then again, the founders have never let short-term business goals cloud their larger vision for the company.

“Our perspective has always been that [the robot assistant] Jarvis from ‘Iron Man’ and the AI from the movie ‘Her’ are going to be reality sometime soon,” Kashyap says. “Someone has to pioneer the ability for bots to have contextual awareness and memory persistence. I think there’s a lot more that needs to go into bots overall, but we felt by pushing the boundaries a little bit, we’d succeed where other bots would fail, and ultimately people would like to use our bots more than others.”

Reducing delays in wireless networks

MIT researchers have designed a congestion-control scheme for wireless networks that could help reduce lag times and increase quality in video streaming, video chat, mobile gaming, and other web services.

To keep web services running smoothly, congestion-control schemes infer information about a network’s bandwidth capacity and congestion based on feedback from the network routers, which is encoded in data packets. That information determines how fast data packets are sent through the network.

Choosing a good sending rate can be a tough balancing act. Senders don’t want to be overly conservative: If a network’s capacity constantly varies from, say, 2 megabytes per second to 500 kilobytes per second, the sender could always send traffic at the lowest rate. But then your Netflix video, for example, will be unnecessarily low-quality. On the other hand, if the sender constantly maintains a high rate, even when network capacity dips, it could overwhelm the network, creating a massive queue of data packets waiting to be delivered. Queued packets can increase the network’s delay, causing, say, your Skype call to freeze.

Things get even more complicated in wireless networks, which have “time-varying links,” with rapid, unpredictable capacity shifts. Depending on various factors, such as the number of network users, cell tower locations, and even surrounding buildings, capacities can double or drop to zero within fractions of a second. In a paper at the USENIX Symposium on Networked Systems Design and Implementation, the researchers presented “Accel-Brake Control” (ABC), a simple scheme that achieves about 50 percent higher throughput, and about half the network delays, on time-varying links.

The scheme relies on a novel algorithm that enables the routers to explicitly communicate how many data packets should flow through a network to avoid congestion but fully utilize the network. It provides that detailed information from bottlenecks — such as packets queued between cell towers and senders — by repurposing a single bit already available in internet packets. The researchers are already in talks with mobile network operators to test the scheme.

“In cellular networks, your fraction of data capacity changes rapidly, causing lags in your service. Traditional schemes are too slow to adapt to those shifts,” says first author Prateesh Goyal, a graduate student in CSAIL. “ABC provides detailed feedback about those shifts, whether it’s gone up or down, using a single data bit.”

Joining Goyal on the paper are Anup Agarwal, now a graduate student at Carnegie Mellon University; Ravi Netravali, now an assistant professor of computer science at the University of California at Los Angeles; Mohammad Alizadeh, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and CSAIL; and Hari Balakrishnan, the Fujitsu Professor in EECS. The authors have all been members of the Networks and Mobile Systems group at CSAIL.

Achieving explicit control

Traditional congestion-control schemes rely on either packet losses or information from a single “congestion” bit in internet packets to infer congestion and slow down. A router, such as a base station, will mark the bit to alert a sender — say, a video server — that its sent data packets are in a long queue, signaling congestion. In response, the sender will then reduce its rate by sending fewer packets. The sender also reduces its rate if it detects a pattern of packets being dropped before reaching the receiver.

In attempts to provide greater information about bottlenecked links on a network path, researchers have proposed “explicit” schemes that include multiple bits in packets that specify current rates. But this approach would mean completely changing the way the internet sends data, and it has proved impossible to deploy. 

“It’s a tall task,” Alizadeh says. “You’d have to make invasive changes to the standard Internet Protocol (IP) for sending data packets. You’d have to convince all Internet parties, mobile network operators, ISPs, and cell towers to change the way they send and receive data packets. That’s not going to happen.”

With ABC, the researchers still use the available single bit in each data packet, but they do so in such a way that the bits, aggregated across multiple data packets, can provide the needed real-time rate information to senders. The scheme tracks each data packet in a round-trip loop, from sender to base station to receiver. The base station marks the bit in each packet with “accelerate” or “brake,” based on the current network bandwidth. When the packet is received, the marked bit tells the sender to increase or decrease the “in-flight” packets — packets sent but not received — that can be in the network.

If the sender receives an accelerate command, it means the packet made good time and the network has spare capacity. The sender then sends two packets: one to replace the packet that was received and another to utilize the spare capacity. When told to brake, the sender decreases its in-flight packets by one, meaning it doesn’t replace the packet that was received.
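
That accelerate/brake rule is simple enough to sketch. The following illustration mirrors the behavior described above; it is not the authors' actual implementation:

```python
# Sketch of the sender-side ABC rule: on "accelerate", replace the
# acknowledged packet and add one more; on "brake", let the window shrink
# by not replacing it. Mirrors the article's description, not the real code.

def packets_to_send(feedback_bit: str) -> int:
    """How many new packets to send when one sent packet is acknowledged."""
    if feedback_bit == "accelerate":
        return 2   # replace the delivered packet + use spare capacity
    if feedback_bit == "brake":
        return 0   # don't replace it: in-flight count drops by one
    raise ValueError(f"unknown feedback: {feedback_bit}")

in_flight = 10
for bit in ["accelerate", "accelerate", "brake", "accelerate"]:
    in_flight += packets_to_send(bit) - 1   # minus the packet that arrived
    print(bit, "->", in_flight, "packets in flight")
```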

Used across all packets in the network, that one bit of information becomes a powerful feedback tool that tells senders how to set their sending rates with high precision. Within a couple hundred milliseconds, it can vary a sender’s rate between zero and double. “You’d think one bit wouldn’t carry enough information,” Alizadeh says. “But, by aggregating single-bit feedback across a stream of packets, we can get the same effect as that of a multibit signal.”

Staying one step ahead

At the core of ABC is an algorithm that predicts the aggregate rate of the senders one round-trip ahead to better compute the accelerate/brake feedback.

The idea is that an ABC-equipped base station knows how senders will behave — maintaining, increasing, or decreasing their in-flight packets — based on how it marked the packet it sent to a receiver. The moment the base station sends a packet, it knows how many packets it will receive from the sender in exactly one round-trip’s time in the future. It uses that information to mark the packets to more accurately match the sender’s rate to the current network capacity.
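
As a toy version of that router-side logic, a base station might mark just enough packets "accelerate" that senders, following the rule above, converge on the link's target rate. The control law below is a simplification invented for illustration, not the formula from the paper:

```python
# Toy sketch of base-station marking: choose the feedback bit for each
# packet so the fraction of "accelerate" marks steers the aggregate sender
# rate toward the link's target rate. Illustrative only.

def mark_packet(predicted_arrival_rate: float, target_rate: float,
                accelerated_so_far: int, seen_so_far: int) -> str:
    """Decide the feedback bit for one outgoing packet."""
    if seen_so_far == 0:
        return "accelerate"
    # If a fraction f of acks say "accelerate" (2 packets each) and the rest
    # say "brake" (0 packets), senders replace 2f packets per delivery, so
    # steady state needs 2f * arrival_rate == target_rate.
    desired_fraction = min(1.0, target_rate / (2 * predicted_arrival_rate))
    current_fraction = accelerated_so_far / seen_so_far
    return "accelerate" if current_fraction < desired_fraction else "brake"

accelerated = seen = 0
for _ in range(6):
    bit = mark_packet(predicted_arrival_rate=100.0, target_rate=100.0,
                      accelerated_so_far=accelerated, seen_so_far=seen)
    accelerated += (bit == "accelerate")
    seen += 1
    print(bit)   # alternates, holding the senders at the target rate
```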

In simulations of cellular networks, compared to traditional congestion-control schemes, ABC achieves around 30 to 40 percent greater throughput for roughly the same delays. Alternatively, it can cut delays by a factor of about two to four while maintaining the same throughput as traditional schemes. Compared to existing explicit schemes that were not designed for time-varying links, ABC reduces delays by half for the same throughput. “Basically, existing schemes get low throughput and low delays, or high throughput and high delays, whereas ABC achieves high throughput with low delays,” Goyal says.

Next, the researchers are trying to see if apps and web services can use ABC to better control the quality of content. For example, “a video content provider could use ABC’s information about congestion and data rates to pick the resolution of streaming video more intelligently,” Alizadeh says. “If it doesn’t have enough capacity, the video server could lower the resolution temporarily, so the video will continue playing at the highest possible quality without freezing.”

Bluetooth signals from your smartphone could automate Covid-19 contact tracing while preserving privacy

Imagine you’ve been diagnosed as Covid-19 positive. Health officials begin contact tracing to contain infections, asking you to identify people with whom you’ve been in close contact. The obvious people come to mind — your family, your coworkers. But what about the woman ahead of you in line last week at the pharmacy, or the man bagging your groceries? Or any of the other strangers you may have come close to in the past 14 days?

A team led by MIT researchers and including experts from many institutions is developing a system that augments “manual” contact tracing by public health officials, while preserving the privacy of all individuals. The system relies on short-range Bluetooth signals emitted from people’s smartphones. These signals represent random strings of numbers, likened to “chirps” that other nearby smartphones can remember hearing.

If a person tests positive, they can upload the list of chirps their phone has put out in the past 14 days to a database. Other people can then scan the database to see if any of those chirps match the ones picked up by their phones. If there’s a match, a notification will inform that person that they may have been exposed to the virus, and will include information from public health authorities on next steps to take. Vitally, this entire process is done while maintaining the privacy of those who are Covid-19 positive and those wishing to check if they have been in contact with an infected person.

“I keep track of what I’ve broadcasted, and you keep track of what you’ve heard, and this will allow us to tell if someone was in close proximity to an infected person,” says Ron Rivest, MIT Institute Professor and principal investigator of the project. “But for these broadcasts, we’re using cryptographic techniques to generate random, rotating numbers that are not just anonymous, but pseudonymous, constantly changing their ‘ID,’ and that can’t be traced back to an individual.”
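
The team's exact cryptographic scheme isn't spelled out in this article, but the flavor of rotating, unlinkable chirps can be sketched as follows. The key derivation here is an illustrative assumption, not the actual protocol:

```python
# Simplified sketch of rotating pseudonymous "chirps": each time interval,
# a fresh identifier is derived from a device-local secret, so overheard
# chirps can't be linked back to a person. The derivation below is an
# illustrative assumption, not the MIT team's actual cryptographic scheme.

import hashlib
import secrets

class ChirpGenerator:
    def __init__(self):
        self.seed = secrets.token_bytes(32)   # never leaves the device
        self.sent = []                        # log of what we broadcast

    def chirp(self, interval: int) -> str:
        """Pseudonymous ID for one time interval (e.g., minutes since epoch)."""
        digest = hashlib.sha256(self.seed + interval.to_bytes(8, "big"))
        c = digest.hexdigest()[:16]
        self.sent.append(c)
        return c

# Phones log chirps they hear; after a positive test, the sender's log is
# uploaded, and other phones check it against what they overheard.
alice, bob = ChirpGenerator(), ChirpGenerator()
bob_heard = {alice.chirp(t) for t in range(3)}         # Bob was near Alice
uploaded = set(alice.sent)                             # Alice tests positive
print("possible exposure:", bool(bob_heard & uploaded))  # True
```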

This approach to private, automated contact tracing will be available in a number of ways, including through the privacy-first effort launched at MIT in response to Covid-19 called SafePaths. This broad set of mobile apps is under development by a team led by Ramesh Raskar of the MIT Media Lab. The design of the new Bluetooth-based system has benefited from SafePaths’ early work in this area.

Bluetooth exchanges

Smartphones already have the ability to advertise their presence to other devices via Bluetooth. Apple’s “Find My” feature, for example, uses chirps from a lost iPhone or MacBook to catch the attention of other Apple devices, helping the owner of the lost device to eventually find it. 

“Find My inspired this system. If my phone is lost, it can start broadcasting a Bluetooth signal that’s just a random number; it’s like being in the middle of the ocean and waving a light. If someone walks by with Bluetooth enabled, their phone doesn’t know anything about me; it will just tell Apple, ‘Hey, I saw this light,’” says Marc Zissman, the associate head of MIT Lincoln Laboratory’s Cyber Security and Information Science Division and co-principal investigator of the project.

With their system, the team is essentially asking a phone to send out this kind of random signal all the time and to keep a log of these signals. At the same time, the phone detects chirps it has picked up from other phones, and only logs chirps that would be medically significant for contact tracing — those emitted from within an approximate 6-foot radius and picked up for a certain duration of time, say 10 minutes.
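
That filtering step can also be sketched. In the toy example below, the signal-strength cutoff and time window are invented placeholders; real systems must calibrate distance estimation per phone model:

```python
# Sketch of the logging filter described above: keep only chirps heard
# strongly enough (a crude proxy for ~6 feet) for long enough (10 minutes).
# The RSSI threshold and windowing are placeholders, not calibrated values.

RSSI_NEAR_DBM = -65      # hypothetical signal-strength cutoff
MIN_DURATION_S = 600     # 10 minutes

def significant_contacts(observations):
    """observations: list of (chirp_id, timestamp_s, rssi_dbm) tuples."""
    first_seen, last_seen = {}, {}
    for chirp, ts, rssi in observations:
        if rssi < RSSI_NEAR_DBM:
            continue                      # too far away; ignore
        first_seen.setdefault(chirp, ts)
        last_seen[chirp] = ts
    return [c for c in first_seen
            if last_seen[c] - first_seen[c] >= MIN_DURATION_S]

obs = [("abc123", t, -60) for t in range(0, 700, 60)]   # 11+ minutes nearby
obs += [("zzz999", 0, -80)]                             # too weak: dropped
print(significant_contacts(obs))   # ['abc123']
```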

Phone owners would get involved by downloading an app that enables this system. After a positive diagnosis, a person would receive a QR code from a health official. By scanning the code through the app, that person can upload their log to the cloud. Anyone with the app could then have their phone scan these logs. If there’s a match, a notification could tell a user how long they were near an infected person and at what approximate distance.

Privacy-preserving technology

Some countries most successful at containing the spread of Covid-19 have been using smartphone-based approaches to conduct contact tracing, yet the researchers note these approaches have not always protected individuals’ privacy. South Korea, for example, has implemented apps that notify officials if a diagnosed person has left their home, and that can tap into people’s GPS data to pinpoint exactly where they’ve been.

“We’re not tracking location, not using GPS, not attaching your personal ID or phone number to any of these random numbers your phone is emitting,” says Daniel Weitzner, a principal research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-principal investigator of this effort. “What we want is to enable everyone to participate in a shared process of seeing if you might have been in contact, without revealing, or forcing anyone to reveal, anything.”

Choice is key. Weitzner sees the system as a virtual knock on the door that preserves people’s right to not answer it. The hope, though, is that everyone who can opt in would do so to help contain the spread of Covid-19. “We need a large percentage of the population to opt in for this system to really work. We care about every single Bluetooth device out there; it’s really critical to make this a whole ecosystem,” he says.

Public health impact

Throughout the development process, the researchers have worked closely with a medical advisory team to ensure that this system would contribute effectively to contact tracing efforts. This team is led by Louise Ivers, who is an infectious disease expert, associate professor at Harvard Medical School, and executive director of the Massachusetts General Hospital Center for Global Health.

“In order for the U.S. to really contain this epidemic, we need to have a much more proactive approach that allows us to trace contacts for confirmed cases much more widely. This automated and privacy-protecting approach could really transform our ability to get the epidemic under control here and could be adapted for use in other global settings,” Ivers says. “What’s also great is that the technology can be flexible to how public health officials want to manage contacts with exposed cases in their specific region, which may change over time.”

For example, the system could notify someone that they should self-isolate, or it could request that they check in through the app to connect with specialists regarding daily symptoms and well-being. In other circumstances, public health officials could request that this person get tested if they were noticing a cluster of cases.

The ability to conduct contact tracing quickly and at a large scale can be effective not only in flattening the curve of the outbreak, but also for enabling people to safely enter public life once a community is on the downward side of the curve. “We want to be able to let people carefully get back to normal life while also having this ability to carefully quarantine and identify certain vectors of an outbreak,” Rivest says.

Toward implementation

Lincoln Laboratory engineers have led the prototyping of the system. One of the hardest technical challenges has been achieving interoperability, that is, making it possible for a chirp from an iPhone to be picked up by an Android device and vice versa. A test at the laboratory late last week showed that they had achieved this capability: chirps were successfully picked up by other phones of various makes and models.

A vital next step toward implementation is engaging with the smartphone manufacturers and software developers — Apple, Google, and Microsoft. “They have a critical role here. The aim of the prototype is to prove to these developers that this is feasible for them to implement,” Rivest says. As those collaborations are forming, the team is also demonstrating its prototype system to state and federal government agencies.

Rivest emphasizes that collaboration has made this project possible. These collaborators include the Massachusetts General Hospital Center for Global Health, CSAIL, MIT Lincoln Laboratory, Boston University, Brown University, MIT Media Lab, The Weizmann Institute of Science, and SRI International.

The team also aims to play a central, coordinating role with other efforts around the country and in Europe to develop similar, privacy-preserving contact-tracing systems.

“This project is being done in true academic style. It’s not a contest; it’s a collective effort on the part of many, many people to get a system working,” Rivest says.  

Sprayable user interfaces

For decades, researchers have envisioned a world where digital user interfaces are seamlessly integrated with the physical environment, until the two are virtually indistinguishable from one another.

This vision, though, is held back by a few obstacles. First, it’s difficult to integrate sensors and display elements into our tangible world due to various design constraints. Second, most methods for doing so are limited to smaller scales, bound by the size of the fabricating device.

Recently, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with SprayableTech, a system that lets users create room-sized interactive surfaces with sensors and displays. The system, which uses airbrushing of functional inks, enables applications like interactive sofas with embedded sensors to control your television, and walls with sensors for adjusting lighting and temperature.

SprayableTech lets users channel their inner Picassos: After a user designs interactive artwork in the 3D editor, the system automatically generates stencils for airbrushing the layout onto a surface. Once the stencils have been cut from cardboard, the user can add sensors to the desired surface, whether it’s a sofa, a wall, or even a building, to control various appliances like a lamp or television. (An alternative to physical stencils is projecting them digitally.)

“Since SprayableTech is so flexible in its application, you can imagine using this type of system beyond walls and surfaces to power larger-scale entities like interactive smart cities and interactive architecture in public places,” says Michael Wessely, postdoc in CSAIL and lead author on a new paper about SprayableTech. “We view this as a tool that will allow humans to interact with and use their environment in newfound ways.”

The race for the smartest home has been underway for some time, with great interest in sensor technology. It’s a big advance from the enormous glass wall displays with quick-shifting images and screens we’ve seen in countless dystopian films.

The MIT researchers’ approach focuses on scale and creative expression. By using airbrush technology, they’re no longer limited by the size of the printer, the area of the screen-printing net, or the size of the hydrographic bath, and there are thousands of possible design options.

Let’s say a user wanted to design a tree symbol on their wall to control the ambient light in the room. To start the process, they would use a toolkit in a 3D editor to design their digital object, and customize for things like proximity sensors, touch buttons, sliders, and electroluminescent displays. 

Then, the toolkit would output the choice of stencils: fabricated stencils cut from cardboard, which are great for high-precision spraying on simple, flat surfaces, or projected stencils, which are less precise but better for doubly curved surfaces.

Designers can then spray on the functional ink, which is ink with electrically functional elements, using an airbrush. As a final step to get the system going, a microcontroller is attached that connects the interface to the board that runs the code for sensing and visual output.
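
The article doesn't describe the firmware, but a sensing loop for a sprayed touch button might look roughly like the sketch below, where `read_capacitance` is a hypothetical stand-in for whatever sensor interface a given board provides:

```python
# Illustrative sensing loop for a sprayed touch button. `read_capacitance`
# is a hypothetical stand-in for the board's actual sensor interface;
# thresholds would be calibrated per surface and ink layout.

import random
import time

TOUCH_THRESHOLD = 0.5   # hypothetical calibrated value

def read_capacitance() -> float:
    """Stand-in for reading the sprayed electrode; returns a random value."""
    return random.random()

def on_touch():
    print("toggle the lamp")   # e.g., signal a connected appliance

def sensing_loop(iterations: int = 5):
    was_touched = False
    for _ in range(iterations):
        touched = read_capacitance() > TOUCH_THRESHOLD
        if touched and not was_touched:   # fire only on the rising edge
            on_touch()
        was_touched = touched
        time.sleep(0.05)                  # poll at ~20 Hz

sensing_loop()
```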

The team tested the system on a variety of items, including:

  • a musical interface on a concrete pillar;
  • an interactive sofa that’s connected to a television;
  • a wall display for controlling light; and
  • a street post with a touchable display that provides audible information on subway stations and local attractions.

Since the stencils need to be created in advance via the digital editor, it reduces the opportunity for spontaneous exploration. Looking forward, the team wants to explore so-called “modular” stencils that create touch buttons of different sizes, as well as shape-changing stencils that adjust themselves based on a desired user interface shape. 

“In the future, we aim to collaborate with graffiti artists and architects to explore the future potential for large-scale user interfaces in enabling the internet of things for smart cities and interactive homes,” says Wessely.

Wessely wrote the paper alongside MIT PhD student Ticha Sethapakdi, MIT undergraduate students Carlos Castillo and Jackson C. Snowden, MIT postdoc Isabel P.S. Qamar, MIT Professor Stefanie Mueller, University of Bristol PhD student Ollie Hanton, University of Bristol Professor Mike Fraser, and University of Bristol Associate Professor Anne Roudaut. 

Learning about artificial intelligence: A hub of MIT resources for K-12 students

In light of the recent events surrounding Covid-19, learning for grades K-12 looks very different than it did a month ago. Parents and educators may be feeling overwhelmed about turning their homes into classrooms. 

With that in mind, a team led by Media Lab Associate Professor Cynthia Breazeal has launched aieducation.mit.edu to share a variety of online activities for K-12 students to learn about artificial intelligence, with a focus on how to design and use it responsibly. Learning resources provided on this website can help to address the needs of the millions of children, parents, and educators worldwide who are staying at home due to school closures caused by Covid-19, and are looking for free educational activities that support project-based STEM learning in an exciting and innovative area. 

The website is a collaboration between the Media Lab, MIT Stephen A. Schwarzman College of Computing, and MIT Open Learning, serving as a hub to highlight diverse work by faculty, staff, and students across the MIT community at the intersection of AI, learning, and education. 

“MIT is the birthplace of Constructionism under Seymour Papert. MIT has revolutionized how children learn computational thinking with hugely successful platforms such as Scratch and App Inventor. Now, we are bringing this rich tradition and deep expertise to how children learn about AI through project-based learning that dovetails technical concepts with ethical design and responsible use,” says Breazeal. 

The website will serve as a hub for MIT’s latest work in innovating learning and education in the era of AI. In addition to highlighting research, it also features up-to-date project-based activities, learning units, child-friendly software tools, digital interactives, and other supporting materials, highlighting a variety of MIT-developed educational research and collaborative outreach efforts across and beyond MIT. The site is intended for use by students, parents, teachers, and lifelong learners alike, with resources for children and adults at all learning levels, and with varying levels of comfort with technology, for a range of artificial intelligence topics. The team has also gathered a variety of external resources to explore, such as Teachable Machine by Google, a browser-based platform that lets users train classifiers for their own image-recognition algorithms in a user-friendly way.

In the spirit of “mens et manus” — the MIT motto, meaning “mind and hand” — the vision of technology for learning at MIT is about empowering and inspiring learners of all ages in the pursuit of creative endeavors. The activities highlighted on the new website are designed in the tradition of constructionism: learning through project-based experiences in which learners build and share their work. The approach is also inspired by the idea of computational action, where children can design AI-enabled technologies to help others in their community.

“MIT has been a world leader in AI since the 1960s,” says MIT professor of computer science and engineering Hal Abelson, who has long been involved in MIT’s AI research and educational technology. “MIT’s approach to making machines intelligent has always been strongly linked with our work in K-12 education. That work is aimed at empowering young people through computational ideas that help them understand the world and computational actions that empower them to improve life for themselves and their communities.”

Research in computer science education and AI education highlights the importance of having a mix of plugged and unplugged learning approaches. Unplugged activities include kinesthetic or discussion-based activities developed to introduce children to concepts in AI and its societal impact without using a computer. Unplugged approaches to learning AI are found to be especially helpful for young children. Moreover, these approaches can also be accessible to learning environments (classrooms and homes) that have limited access to technology. 

As computers continue to automate more and more routine tasks, inequity of education remains a key barrier to future opportunities, where success depends increasingly on intellect, creativity, social skills, and having specific skills and knowledge. This accelerating change raises the critical question of how to best prepare students, from children to lifelong learners, to be successful and to flourish in the era of AI.

It is important to help prepare a diverse and inclusive citizenry to be responsible designers and conscientious users of AI. In that spirit, the activities on aieducation.mit.edu range from hands-on programming to paper prototyping, to Socratic seminars, and even creative writing about speculative fiction. The learning units and project-based activities are designed to be accessible to a wide audience with different backgrounds and comfort levels with technology. A number of these activities leverage learning about AI as a way to connect to the arts, humanities, and social sciences, too, offering a holistic view of how AI intersects with different interests and endeavors. 

The rising ubiquity of AI affects us all, but today a disproportionately small slice of the population has the skills or power to decide how AI is designed or implemented; worrying consequences have been seen in algorithmic bias and perpetuation of unjust systems. Democratizing AI through education, starting in K-12, will help to make it more accessible and diverse at all levels, ultimately helping to create a more inclusive, fair, and equitable future.

Computational thinking class enables students to engage in Covid-19 response

When an introductory computational science class, which is open to the general public, was repurposed to study the Covid-19 pandemic this spring, the instructors saw student registration rise from 20 students to nearly 300.

Introduction to Computational Thinking (6.S083/18.S190), which applies data science, artificial intelligence, and mathematical models using the Julia programming language developed at MIT, was introduced in the fall as a pilot half-semester class. It was launched as part of the MIT Stephen A. Schwarzman College of Computing’s computational thinking program and spearheaded by Department of Mathematics Professor Alan Edelman and Visiting Professor David P. Sanders. They quickly fast-tracked the curriculum to focus on applications to Covid-19 responses; students were equally fast in jumping on board.

“Everyone at MIT wants to contribute,” says Edelman. “While we at the Julia Lab are doing research in building tools for scientists, Dave and I thought it would be valuable to teach the students about some of the fundamentals related to computation for drug development, disease models, and such.” 

The course is offered through MIT’s Department of Electrical Engineering and Computer Science and the Department of Mathematics. “This course opens a trove of opportunities to use computation to better understand and contain the Covid-19 pandemic,” says MIT Computer Science and Artificial Intelligence Laboratory Director Daniela Rus.

The fall version of the class had a maximum enrollment of 20 students, but the spring class ballooned to nearly 300 students in a single weekend, almost all from MIT. “We’ve had a tremendous response,” Edelman says. “This definitely stressed the MIT sign-up systems in ways that I could not have imagined.”

Sophomore Shinjini Ghosh, majoring in computer science and linguistics, says she was initially drawn to the class to learn Julia, “but also to develop the skills to do further computational modeling and conduct research on the spread and possible control of Covid-19.”

“There’s been a lot of misinformation about the epidemiology and statistical modeling of the coronavirus,” adds sophomore Raj Movva, a computer science and biology major. “I think this class will help clarify some details, and give us a taste of how one might actually make predictions about the course of a pandemic.” 

Edelman says that he has always dreamed of an interdisciplinary modern class that would combine the machine learning and AI of a “data-driven” world, the modern software and systems possibilities that Julia allows, and the physical models, differential equations, and  scientific machine learning of the “physical world.” 
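
The course material itself is written in Julia; as a flavor of the "physical world" modeling involved, here is a minimal SIR epidemic model in Python. The parameter values are arbitrary and purely illustrative:

```python
# Minimal SIR epidemic model of the kind such a course might cover.
# The class itself uses Julia; this Python sketch and its parameter
# values are purely illustrative.

import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i,             # susceptible -> infected
            beta * s * i - gamma * i,  # infected -> recovered
            gamma * i]                 # recovered

# beta: transmission rate, gamma: recovery rate (arbitrary example values)
sol = solve_ivp(sir, t_span=(0, 160), y0=[0.99, 0.01, 0.0],
                args=(0.3, 0.1), t_eval=np.linspace(0, 160, 5))
for t, i in zip(sol.t, sol.y[1]):
    print(f"day {t:5.1f}: {i:.1%} of the population infected")
```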

He calls this class “a natural outgrowth of Julia Lab’s research, and that of the general cooperative open-source Julia community.” For years, this online community has collaborated to create tools that speed up the drug approval process, aid in scientific machine learning and differential equations, and predict infectious disease transmission. “The lectures are open to the world, following the great MIT tradition of MIT open courses,” says Edelman.

So when MIT turned to virtual learning to de-densify campus, the transition to an online, remotely taught version of the class was not too difficult for Edelman and Sanders.

“Even though we have run open remote learning courses before, it’s never the same as being able to see the live audience in front of you,” says Edelman. “However, MIT students ask such great questions in the Zoom chat, so that it remains as intellectually invigorating as ever.”

Sanders, a Marcos Moshinsky research fellow currently on leave as a professor at the National University of Mexico, is working on techniques for accelerating global optimization. Involved with the Julia Lab since 2014, Sanders has worked with Edelman on various teaching, research, and outreach projects related to Julia, and his YouTube tutorials have reached over 100,000 views. “His videos have often been referred to as the best way to learn the Julia language,” says Edelman.

Edelman will also be enlisting some help from Philip, his family’s Corgi who until recently had been a frequent wanderer of MIT’s halls and classrooms. “Philip is a well-known Julia expert whose image has been classified many times by Julia’s AI Systems,” says Edelman. “Students are always happy when Philip participates in the online classes.”

Researching from home: Science stays social, even at a distance

With all but a skeleton crew staying home from each lab to minimize the spread of Covid-19, scores of Picower Institute researchers are immersing themselves in the considerable amount of scientific work that can be done away from the bench. With piles of data to analyze; plenty of manuscripts to write; new skills to acquire; and fresh ideas to conceive, share, and refine for the future, neuroscientists have full plates, even when they are away from their, well, plates. They are proving that science can remain social, even if socially distant.

Ever since the mandatory ramp down of on-campus research took hold March 20, for example, teams of researchers in the lab of Troy Littleton, the Menicon Professor of Neuroscience, have sharpened their focus on two data-analysis projects that are every bit as essential to their science as acquiring the data in the lab in the first place. Research scientist Yulia Akbergenova and graduate student Karen Cunningham, for example, are poring over a huge amount of imaging data showing how the strength of connections between neurons, or synapses, matures and how that depends on the molecular components at the site. Another team, composed of Picower postdoc Suresh Jetti and graduate students Andres Crane and Nicole Aponte-Santiago, is analyzing another large dataset, this time of gene transcription, to learn what distinguishes two subclasses of motor neurons that form synapses of characteristically different strength.

Work is similarly continuing among researchers in the lab of Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience. Since heading home, Senior Research Support Associate Kendyll Burnell has been looking at microscope images tracking how inhibitory interneurons innervate the visual cortex of mice throughout their development. By studying the maturation of inhibition, the lab hopes to improve understanding of the role of inhibitory circuitry in the experience-dependent changes, or plasticity, and development of the visual cortex, she says. As she’s worked, her poodle Soma (named for the central body structure of a neuron) has been by her side.

Despite extra time with comforts of home, though, it’s clear that nobody wanted this current mode of socially distant science. For every lab, it’s tremendously disruptive and costly. But labs are finding many ways to make progress nonetheless.

“Although we are certainly hurting because our lab work is at a standstill, the Miller lab is fortunate to have a large library of multiple-electrode neurophysiological data,” says Picower Professor Earl Miller. “The datasets are very rich. As our hypotheses and analytical tools develop, we can keep going back to old data to ask new questions. We are taking advantage of the wet lab downtime to analyze data and write papers. We have three under review and are writing at least three more right now.”

Miller is inviting new collaborations regardless of the physical impediment of social distancing. A recent lab meeting held via the videoconferencing app Zoom included MIT Department of Brain and Cognitive Sciences Associate Professor Ila Fiete and her graduate student, Mikail Khona. The Miller lab has begun studying how neural rhythms move around the cortex and what that means for brain function. Khona presented models of how timing relationships affect those waves. While this kind of an interaction between labs of the Picower Institute and the McGovern Institute for Brain Research would normally have taken place in person in MIT’s Building 46, neither lab let the pandemic get in the way.

Similarly, the lab of Li-Huei Tsai, Picower Professor and director of the Picower Institute, has teamed up with that of Manolis Kellis, professor in the MIT Computer Science and Artificial Intelligence Laboratory. They’re forming several small squads of experimenters and computational experts to launch analyses of gene expression and other data to illuminate the fate of individual cell types like interneurons or microglia in the context of the Alzheimer’s disease-afflicted brain. Other teams are focusing on analyses of questions such as how pathology varies in brain samples carrying different degrees of genetic risk factors. These analyses will prove useful for stages all along the scientific process, Tsai says, from forming new hypotheses to wrapping up papers that are well underway.

Remote collaboration and communication are proving crucial to researchers in other ways, too, showing that online interactions, though distant, can be quite personally fulfilling.

Nicholas DiNapoli, a research engineer in the lab of Associate Professor Kwanghun Chung, is making the best of time away from the bench by learning about the lab’s computational pipeline for processing the enormous amounts of imaging data it generates. He’s also taking advantage of a new program within the lab in which Senior Computer Scientist Lee Kamentsky is teaching Python computer programming principles to anyone in the lab who wants to learn. The training occurs via Zoom two days a week.

As part of a crowded calendar of Zoom meetings, or “Zeetings” as the lab has begun to call them, Newton Professor Mriganka Sur says he makes sure to have one-to-one meetings with everyone in the lab. The team also has organized into small subgroups around different themes of the lab’s research.

But the lab has also maintained its cohesion by banding together informally to create novel work and social experiences.

Graduate student Ning Leow, for example, used Zoom to create a co-working session in which participants kept a video connection open for hours at a time, just to be in each other’s virtual presence while they worked. Among a group of Sur lab friends, she read a paper related to her thesis and did a substantial amount of data analysis. She also advised a colleague on an analysis technique via the connection.

“I’ve got to say that it worked out really well for me personally because I managed to get whatever I wanted to complete on my list done,” she says, “and there was also a sense of healthy accountability along with the sense of community.”

Whether in person or via an officially imposed distance, science is social. In that spirit, graduate student K. Guadalupe “Lupe” Cruz organized a collaborative art event via Zoom for female scientists in brain and cognitive sciences at MIT. She took a photo of Rosalind Franklin, the scientist whose work was essential for resolving the structure of DNA, and divided it into nine squares to distribute to the event attendees. Without knowing the full picture, everyone drew just their section, talking all the while about how the strange circumstances of Covid-19 have changed their lives. At the end, they stitched their squares together to reconstruct the image.

Examples abound of how Picower scientists, though mostly separate and apart, are still coming together to advance their research and to maintain the fabric of their shared experiences.


Accelerating data-driven discoveries

As technologies like single-cell genomic sequencing, enhanced biomedical imaging, and medical “internet of things” devices proliferate, key discoveries about human health are increasingly found within vast troves of complex life science and health data.

But drawing meaningful conclusions from that data is a difficult problem that can involve piecing together different data types and manipulating huge data sets in response to varying scientific inquiries. The problem is as much about computer science as it is about other areas of science. That’s where Paradigm4 comes in.

The company, founded by Marilyn Matz SM ’80 and Turing Award winner and MIT Professor Michael Stonebraker, helps pharmaceutical companies, research institutes, and biotech companies turn data into insights.

It accomplishes this with a computational database management system that’s built from the ground up to host the diverse, multifaceted data at the frontiers of life science research. That includes data from sources like national biobanks, clinical trials, the medical internet of things, human cell atlases, medical images, environmental factors, and multi-omics, a field that includes the study of genomes, microbiomes, metabolomes, and more.

On top of the system’s unique architecture, the company has also built data preparation, metadata management, and analytics tools to help users find the important patterns and correlations lurking within all those numbers.

In many instances, customers are exploring data sets the founders say are too large and complex to be represented effectively by traditional database management systems.

“We’re keen to enable scientists and data scientists to do things they couldn’t do before by making it easier for them to deal with large-scale computation and machine-learning on diverse data,” Matz says. “We’re helping scientists and bioinformaticists with collaborative, reproducible research to ask and answer hard questions faster.”

A new paradigm

Stonebraker has been a pioneer in the field of database management systems for decades. He has started nine companies, and his innovations have set standards for the way modern systems allow people to organize and access large data sets.

Much of Stonebraker’s career has focused on relational databases, which organize data into columns and rows. But in the mid-2000s, Stonebraker realized that a lot of the data being generated would be better stored not in rows or columns but in multidimensional arrays.

For example, satellites break the Earth’s surface into large squares, and GPS systems track a person’s movement through those squares over time. That operation involves vertical, horizontal, and time measurements that aren’t easily grouped or otherwise manipulated for analysis in relational database systems.

Stonebraker recalls his scientific colleagues complaining that available database management systems were too slow to work with complex scientific datasets in fields like genomics, where researchers study the relationships between population-scale multi-omics data, phenotypic data, and medical records.

“[Relational database systems] scan either horizontally or vertically, but not both,” Stonebraker explains. “So you need a system that does both, and that requires a storage manager down at the bottom of the system which is capable of moving both horizontally and vertically through a very big array. That’s what Paradigm4 does.”
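To make the array idea concrete, here is a minimal NumPy sketch of the satellite-and-GPS example above. It is only an illustration of why a single multidimensional array supports both access patterns, not a model of Paradigm4’s actual storage manager, and the grid dimensions are invented.

```python
import numpy as np

# Hypothetical sensor grid: 180 latitude bands x 360 longitude bands
# x 24 hourly snapshots, held in one multidimensional array.
readings = np.random.rand(180, 360, 24)

# "Horizontal" scan: every longitude and time for one latitude band.
band = readings[90, :, :]                  # shape (360, 24)

# "Vertical" scan: the full time series for a single grid cell.
series = readings[90, 180, :]              # shape (24,)

# Array-wide linear algebra: the global mean at each hour, in one call.
hourly_mean = readings.mean(axis=(0, 1))   # shape (24,)
```

A row-oriented relational store would have to gather many scattered rows to assemble that single-cell time series; in an array store, both slices are equally natural.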

In 2008, Stonebraker began developing a database management system at MIT that stored data in multidimensional arrays. He confirmed the approach offered major efficiency advantages, allowing analytical tools based on linear algebra, including many forms of machine learning and statistical data processing, to be applied to huge datasets in new ways.

Stonebraker decided to spin the project into a company in 2010, when he partnered with Matz, a successful entrepreneur who co-founded Cognex Corporation, a large industrial machine-vision company that went public in 1989. The founders and their team, including Alex Poliakov BS ’07, went to work building out key features of the system, including its distributed architecture that allows the system to run on low-cost servers, and its ability to automatically clean and organize data in useful ways for users.

The founders describe their database management system as a computational engine for scientific data, and they’ve named it SciDB. On top of SciDB, they developed an analytics platform, called the REVEAL discovery engine, shaped by users’ day-to-day research activities and goals.

“If you’re a scientist or data scientist, Paradigm’s REVEAL and SciDB products take care of all the data wrangling and computational ‘plumbing and wiring,’ so you don’t have to worry about accessing data, moving data, or setting up parallel distributed computing,” Matz says. “Your data is science-ready. Just ask your scientific question and the platform orchestrates all of the data management and computation for you.”

SciDB is designed to be used by both scientists and developers, so users can interact with the system through graphical user interfaces or by leveraging statistical and programming languages like R and Python.

“It’s been very important to sell solutions, not building blocks,” Matz says. “A big part of our success in the life sciences with top pharmas and biotechs and research institutes is bringing them our REVEAL suite of application-specific solutions to problems. We’re not handing them an analytical platform that’s a set of Spark LEGO blocks; we’re giving them solutions that handle the data they deal with daily, and solutions that use their vocabulary and answer the questions they want to work on.”

Accelerating discovery

Today Paradigm4’s customers include some of the biggest pharmaceutical and biotech companies in the world as well as research labs at the National Institutes of Health, Stanford University, and elsewhere.

Customers can integrate genomic sequencing data, biometric measurements, data on environmental factors, and more into their inquiries to enable new discoveries across a range of life science fields.

In a recent benchmark, Matz says, SciDB ran 1 billion linear regressions in less than an hour, and it can scale well beyond that. That kind of throughput could speed up discoveries and lower costs for researchers who have traditionally had to extract their data from files and then rely on less efficient cloud-computing-based methods to apply algorithms at scale.
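The article doesn’t describe how that benchmark was set up, but the batched style of computation it points to can be sketched in a few lines of NumPy: many independent regressions expressed as one vectorized linear-algebra operation rather than a loop over extracted files. The problem sizes below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scale: 10,000 independent simple regressions
# (e.g., one per gene), each with 200 samples.
n_models, n_samples = 10_000, 200
x = rng.standard_normal((n_models, n_samples))
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal((n_models, n_samples))

# Closed-form least squares, vectorized across all models at once:
# slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x).
x_mean = x.mean(axis=1, keepdims=True)
y_mean = y.mean(axis=1, keepdims=True)
slope = ((x - x_mean) * (y - y_mean)).sum(axis=1) / ((x - x_mean) ** 2).sum(axis=1)
intercept = y_mean.ravel() - slope * x_mean.ravel()

print(slope[:3])      # each close to 2.0
print(intercept[:3])  # each close to 1.0
```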

“If researchers can run complex analytics in minutes that used to take days, that dramatically changes the number of hard questions you can ask and answer,” Matz says. “That is a force-multiplier that will transform research daily.”

Beyond life sciences, Paradigm4’s system holds promise for any industry dealing with multifaceted data, including earth sciences, where Matz says a NASA climatologist is already using the system, and industrial IoT, where data scientists consider large amounts of diverse data to understand complex manufacturing systems. Matz says the company will focus more on those industries next year.

In the life sciences, however, the founders believe they already have a revolutionary product that’s enabling a new world of discoveries. Down the line, they see SciDB and REVEAL contributing to national and worldwide health research that will allow doctors to provide the most informed, personalized care imaginable.

“The query that every doctor wants to run, when you come into his or her office and display a set of symptoms, is: ‘Who in this national database has genetics that look like mine, symptoms that look like mine, lifestyle exposures that look like mine? And what was their diagnosis? What was their treatment? And what was their morbidity?’” Stonebraker explains. “This is cross-correlating you with everybody else to do very personalized medicine, and I think this is within our grasp.”


Q&A: Markus Buehler on setting coronavirus and AI-inspired proteins to music

The proteins that make up all living things are alive with music. Just ask Markus Buehler: The musician and MIT professor develops artificial intelligence models to design new proteins, sometimes by translating them into sound. His goal is to create new biological materials for sustainable, non-toxic applications. In a project with the MIT-IBM Watson AI Lab, Buehler is searching for a protein to extend the shelf-life of perishable food. In a new study in Extreme Mechanics Letters, he and his colleagues offer a promising candidate: a silk protein made by honeybees for use in hive building. 

In another recent study, in APL Bioengineering, he went a step further and used AI to discover an entirely new protein. As both studies went to print, the Covid-19 outbreak was surging in the United States, and Buehler turned his attention to the spike protein of SARS-CoV-2, the appendage that makes the novel coronavirus so contagious. He and his colleagues are trying to unpack its vibrational properties through molecular-based sound spectra, which could hold one key to stopping the virus. Buehler recently sat down to discuss the art and science of his work.

Q: Your work focuses on the alpha helix proteins found in skin and hair. What makes this protein so intriguing?

A: Proteins are the bricks and mortar that make up our cells, organs, and body. Alpha helix proteins are especially important. Their spring-like structure gives them elasticity and resilience, which is why skin, hair, feathers, hooves, and even cell membranes are so durable. But they’re not just mechanically tough; they also have built-in antimicrobial properties. With IBM, we’re trying to harness this biochemical trait to create a protein coating that can slow the spoilage of quick-to-rot foods like strawberries.

Q: How did you enlist AI to produce this silk protein?

A: We trained a deep learning model on the Protein Data Bank, which contains the amino acid sequences and three-dimensional shapes of about 120,000 proteins. We then fed the model a snippet of an amino acid chain for honeybee silk and asked it to predict the protein’s shape, atom-by-atom. We validated our work by synthesizing the protein for the first time in a lab — a first step toward developing a thin, antimicrobial, structurally durable coating that can be applied to food. My colleague, Benedetto Marelli, specializes in this part of the process. We also used the platform to predict the structure of proteins that don’t yet exist in nature. That’s how we designed our entirely new protein in the APL Bioengineering study.

Q: How does your model improve on other protein prediction methods? 

A: We use end-to-end prediction. The model builds the protein’s structure directly from its sequence, translating amino acid patterns into three-dimensional geometries. It’s like translating a set of IKEA instructions into a built bookshelf, minus the frustration. Through this approach, the model effectively learns how to build a protein from the protein itself, via the language of its amino acids. Remarkably, our method can accurately predict protein structure without a template. It outperforms other folding methods and is significantly faster than physics-based modeling. Because the Protein Data Bank is limited to proteins found in nature, we needed a way to visualize new structures to make new proteins from scratch.
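As a toy illustration of the input side of sequence-to-structure models in general (the article doesn’t detail Buehler’s architecture, and this sketch is not it), amino acid sequences are commonly encoded as numeric arrays, such as one-hot matrices, before a network can map them to geometry. The sequence fragment below is invented.

```python
import numpy as np

# The 20 standard amino acids, by one-letter code.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(sequence: str) -> np.ndarray:
    """Encode an amino acid sequence as a (length x 20) one-hot matrix."""
    encoding = np.zeros((len(sequence), len(AMINO_ACIDS)))
    for pos, aa in enumerate(sequence):
        encoding[pos, AA_INDEX[aa]] = 1.0
    return encoding

# An invented fragment, not the honeybee silk sequence.
features = one_hot_encode("MKVLAAGIT")
print(features.shape)  # (9, 20)
```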

Q: How could the model be used to design an actual protein?

A: We can build atom-by-atom models for sequences found in nature that haven’t yet been studied, as we did in the APL Bioengineering study using a different method. We can visualize the protein’s structure and use other computational methods to assess its function by analyzing its stability and the other proteins it binds to in cells. Our model could be used in drug design or to interfere with protein-mediated biochemical pathways in infectious disease.

Q: What’s the benefit of translating proteins into sound?

A: Our brains are great at processing sound! In one sweep, our ears pick up all of its hierarchical features: pitch, timbre, volume, melody, rhythm, and chords. We would need a high-powered microscope to see the equivalent detail in an image, and we could never see it all at once. Sound is such an elegant way to access the information stored in a protein. 

Typically, sound is made from vibrating a material, like a guitar string, and music is made by arranging sounds in hierarchical patterns. With AI we can combine these concepts, and use molecular vibrations and neural networks to construct new musical forms. We’ve been working on methods to turn protein structures into audible representations, and translate these representations into new materials. 
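The interview doesn’t specify the team’s actual mapping from molecular vibrations to notes, but a minimal sonification sketch along these lines might assign each amino acid a pitch and render a sequence as audio. The scale, note length, and sequence below are all assumptions made for illustration.

```python
import numpy as np
import wave

SAMPLE_RATE = 44_100
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
# Assumed mapping: the 20 amino acids onto 20 semitones above A3 (220 Hz).
FREQS = {aa: 220.0 * 2 ** (i / 12) for i, aa in enumerate(AMINO_ACIDS)}

def sonify(sequence: str, note_seconds: float = 0.25) -> np.ndarray:
    """Render an amino acid sequence as a chain of pure tones."""
    t = np.linspace(0, note_seconds, int(SAMPLE_RATE * note_seconds), endpoint=False)
    notes = [np.sin(2 * np.pi * FREQS[aa] * t) for aa in sequence]
    return np.concatenate(notes)

# An invented fragment; write the result as a 16-bit mono WAV file.
audio = sonify("MKVLAAGIT")
with wave.open("protein.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```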

Q: What can the sonification of SARS-CoV-2’s “spike” protein tell us?

A: Its protein spike contains three protein chains folded into an intriguing pattern. These structures are too small for the eye to see, but they can be heard. We represented the physical protein structure, with its entangled chains, as interwoven melodies that form a multi-layered composition. The spike protein’s amino acid sequence, its secondary structure patterns, and its intricate three-dimensional folds are all featured. The resulting piece is a form of counterpoint music, in which notes are played against notes. Like a symphony, the musical patterns reflect the protein’s intersecting geometry realized by materializing its DNA code.

Q: What did you learn?

A: The virus has an uncanny ability to deceive and exploit the host for its own multiplication. Its genome hijacks the host cell’s protein manufacturing machinery, and forces it to replicate the viral genome and produce viral proteins to make new viruses. As you listen, you may be surprised by the pleasant, even relaxing, tone of the music. But it tricks our ear in the same way the virus tricks our cells. It’s an invader disguised as a friendly visitor. Through music, we can see the SARS-CoV-2 spike from a new angle, and appreciate the urgent need to learn the language of proteins.  

Q: Can any of this address Covid-19, and the virus that causes it?

A: In the longer term, yes. Translating proteins into sound gives scientists another tool to understand and design proteins. Even a small mutation can limit or enhance the pathogenic power of SARS-CoV-2. Through sonification, we can also compare the biochemical processes of its spike protein with previous coronaviruses, like SARS or MERS. 

In the music we created, we analyzed the vibrational structure of the spike protein that infects the host. Understanding these vibrational patterns is critical for drug design and much more. Vibrations may change as temperatures warm, for example, and they may also tell us why the SARS-CoV-2 spike gravitates toward human cells more than other viruses. We’re exploring these questions in current, ongoing research with my graduate students. 

We might also use a compositional approach to design drugs to attack the virus. We could search for a new protein that matches the melody and rhythm of an antibody capable of binding to the spike protein, interfering with its ability to infect.

Q: How can music aid protein design?

A: You can think of music as an algorithmic reflection of structure. Bach’s Goldberg Variations, for example, are a brilliant realization of counterpoint, a principle we’ve also found in proteins. We can now hear this concept as nature composed it, and compare it to ideas in our imagination, or use AI to speak the language of protein design and let it imagine new structures. We believe that the analysis of sound and music can help us understand the material world better. Artistic expression is, after all, just a model of the world within us and around us.  

Co-authors of the study in Extreme Mechanics Letters are: Zhao Qin, Hui Sun, Eugene Lim and Benedetto Marelli at MIT; and Lingfei Wu, Siyu Huo, Tengfei Ma and Pin-Yu Chen at IBM Research. Co-author of the study in APL Bioengineering is Chi-Hua Yu. Buehler’s sonification work is supported by MIT’s Center for Art, Science and Technology (CAST) and the Mellon Foundation. 
