Jim Collins receives funding to harness AI for drug discovery

Housed at TED and supported by leading social impact advisor The Bridgespan Group, The Audacious Project is a collaborative funding initiative that’s catalyzing social impact on a grand scale by convening funders and social entrepreneurs, with the goal of supporting bold solutions to the world’s most urgent challenges.

Among this year’s carefully selected change-makers are Jim Collins and a team at MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), including co-principal investigator Regina Barzilay. The funding provided through The Audacious Project will support the response to the antibiotic resistance crisis through the development of new classes of antibiotics to protect patients against some of the world’s deadliest bacterial pathogens.

“The work of Jim Collins and his colleagues is more relevant now than ever before,” says Anantha P. Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “We are grateful for the commitment from The Audacious Project and its contributors, to both support and foster the research around AI and drug discovery, and to join our efforts in the School of Engineering to realize the potential global impact of this incredible work.” 

Collins and Barzilay’s Antibiotics-AI Project seeks to produce the first new classes of antibiotics society has seen in three decades by bringing together an interdisciplinary team of world-class bioengineers, microbiologists, computer scientists, and chemists.

Collins is the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and the Department of Biological Engineering, faculty co-lead of Jameel Clinic, faculty lead of the MIT-Takeda Program, and a member of the Harvard-MIT Health Sciences and Technology faculty. He is also a core founding faculty member of the Wyss Institute for Biologically Inspired Engineering at Harvard University and an Institute member of the Broad Institute of MIT and Harvard.

Barzilay is the Delta Electronics Professor in MIT’s Department of Electrical Engineering and Computer Science, faculty co-lead of Jameel Clinic, and a member of the Computer Science and Artificial Intelligence Laboratory at MIT.

Earlier this year, Collins and Barzilay, along with Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society, and postdoc Jonathan Stokes, were part of a research team that successfully used a deep-learning model to identify a new antibiotic. Over the next seven years, The Audacious Project’s commitment will support Collins and Barzilay as they continue to use the same process to rapidly explore more than a billion molecules to identify and design novel antibiotics.

Read More

With lidar and artificial intelligence, road status clears up after a disaster

Consider the days after a hurricane strikes. Trees and debris are blocking roads, bridges are destroyed, and sections of roadway are washed out. Emergency managers soon face a bevy of questions: How can supplies get delivered to certain areas? What’s the best route for evacuating survivors? Which roads are too damaged to remain open?

Without concrete data on the state of the road network, emergency managers often have to base their answers on incomplete information. The Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory hopes to use its airborne lidar platform, paired with artificial intelligence (AI) algorithms, to fill this information gap.  

“For a truly large-scale catastrophe, understanding the state of the transportation system as early as possible is critical,” says Chad Council, a researcher in the group. “With our particular approach, you can determine road viability, do optimal routing, and also get quantified road damage. You fly it, you run it, you’ve got everything.”

Since the 2017 hurricane season, the team has been flying its advanced lidar platform over stricken cities and towns. Lidar works by pulsing photons down over an area and measuring the time it takes for each photon to bounce back to the sensor. These time-of-arrival data points paint a 3D “point cloud” map of the landscape — every road, tree, and building — to within about a foot of accuracy.

To date, they’ve mapped huge swaths of the Carolinas, Florida, Texas, and all of Puerto Rico. In the immediate aftermath of hurricanes in those areas, the team manually sifted through the data to help the Federal Emergency Management Agency (FEMA) find and quantify damage to roads, among other tasks. The team’s focus now is on developing AI algorithms that can automate these processes and find ways to route around damage.

What’s the road status?

Information about the road network after a disaster comes to emergency managers in a “mosaic of different information streams,” Council says, namely satellite images, aerial photographs taken by the Civil Air Patrol, and crowdsourcing from vetted sources.

“These various efforts for acquiring data are important because every situation is different. There might be cases when crowdsourcing is fastest, and it’s good to have redundancy. But when you consider the scale of disasters like Hurricane Maria on Puerto Rico, these various streams can be overwhelming, incomplete, and difficult to coalesce,” he says.

During these times, lidar can act as an all-seeing eye, providing a big-picture map of an area and also granular details on road features. The laboratory’s platform is especially advanced because it uses Geiger-mode lidar, which is sensitive to a single photon. As such, its sensor can collect each of the millions of photons that trickle through openings in foliage as the system is flown overhead. This foliage can then be filtered out of the lidar map, revealing roads that would otherwise be hidden from aerial view.

To provide the status of the road network, the lidar map is first run through a neural network trained to find and extract the roads and to determine their widths. Then, AI algorithms search these roads and flag anomalies that indicate a road is impassable. For example, a cluster of lidar points extending up and across a road is likely a downed tree, while a sudden drop in elevation is likely a hole or a washed-out section of road.
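
As a rough illustration of this kind of rule-based flagging (this is not the laboratory's code; the grid size and height thresholds are made-up values), consider lidar points that have already been labeled as road:

import numpy as np

def flag_road_anomalies(road_points, cell_size=1.0, rise_thresh=0.5, drop_thresh=0.5):
    """Flag grid cells over the road surface whose height deviates sharply.

    road_points: (N, 3) array of x, y, z lidar returns already labeled "road".
    Returns a list of (cell_origin, kind) for cells that look like debris or washouts.
    """
    xy = road_points[:, :2]
    z = road_points[:, 2]
    keys = np.floor(xy / cell_size).astype(int)   # bin points into a coarse grid
    baseline = np.median(z)                       # crude reference elevation for this tile
    flags = []
    for cell in np.unique(keys, axis=0):
        cell_z = z[np.all(keys == cell, axis=1)]
        if cell_z.max() - baseline > rise_thresh:     # e.g., a downed tree across the road
            flags.append((tuple(cell * cell_size), "obstruction"))
        elif baseline - cell_z.min() > drop_thresh:   # e.g., a hole or washed-out section
            flags.append((tuple(cell * cell_size), "washout"))
    return flags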

The extracted road network, with its flagged anomalies, is then merged with an OpenStreetMap of the area (an open-access map similar to Google Maps). Emergency managers can use this system to plan routes, or in other cases to identify isolated communities — those that are cut off from the road network. The system will show them the most efficient route between two specified locations, finding detours around impassable roads. Users can also specify how important it is to stay on the road; on the basis of that input, the system provides routes through parking lots or fields.  
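
The routing step can be illustrated with a small sketch; the laboratory has not published its implementation, so this simply assumes the extracted road network is available as a graph and uses the open-source networkx library:

import networkx as nx

def best_route(road_graph, flagged_edges, source, target):
    """Shortest passable route after removing edges flagged as impassable.

    road_graph: networkx.Graph with an edge attribute "length" in meters.
    flagged_edges: iterable of (u, v) edges the damage detector marked impassable.
    """
    passable = road_graph.copy()
    passable.remove_edges_from(flagged_edges)
    try:
        return nx.shortest_path(passable, source, target, weight="length")
    except nx.NetworkXNoPath:
        return None  # the target may sit in an isolated, cut-off community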

This process, from extracting roads to finding damage to planning routes, can be applied to the data at the scale of a single neighborhood or across an entire city.

How fast and how accurate?

To gain an idea of how fast this system works, consider that in a recent test, the team flew the lidar platform, processed the data, and got AI-based analytics in 36 hours. That sortie covered 250 square miles, an area about the size of Chicago, Illinois.

But accuracy is equally as important as speed. “As we incorporate AI techniques into decision support, we’re developing metrics to characterize an algorithm’s performance,” Council says.

For finding roads, the algorithm determines if a point in the lidar point cloud is “road” or “not road.” The team ran a performance evaluation of the algorithm against 50,000 square meters of suburban data, and the resulting ROC curve indicated that the current algorithm provided an 87 percent true positive rate (that is, correctly labeled a point as “road”), with a 20 percent false positive rate (that is, labeling a point as “road” that may not be road). The false positives are typically areas that geometrically look like a road but aren’t.
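
For reference, the two rates quoted here can be computed from per-point labels as follows (a generic sketch, not the laboratory's evaluation code):

import numpy as np

def road_rates(y_true, y_pred):
    """True/false positive rates for binary "road" vs. "not road" labels.

    y_true, y_pred: boolean arrays with one entry per lidar point (True = "road").
    """
    tp = np.sum(y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    tpr = tp / np.sum(y_true)    # fraction of true road points labeled "road"
    fpr = fp / np.sum(~y_true)   # fraction of non-road points labeled "road"
    return tpr, fpr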

“Because we have another data source for identifying the general location of roads, OpenStreetMap, these false positives can be excluded, resulting in a highly accurate 3D point cloud representation of the road network,” says Dieter Schuldt, who has been leading the algorithm-testing efforts.

For the algorithm that detects road damage, the team is in the process of aggregating more ground-truth data to evaluate its performance. In the meantime, preliminary results have been promising. Their damage-finding algorithm recently flagged for review a potentially blocked road in Bedford, Massachusetts, where it detected what appeared to be a hole measuring 10 meters wide by 7 meters long by 1 meter deep. The town’s public works department and a site visit confirmed that construction blocked the road.

“We actually didn’t go in expecting that this particular sortie would capture examples of blocked roads, and it was an interesting find,” says Bhavani Ananthabhotla, a contributor to this work. “With additional ground truth annotations, we hope to not only evaluate and improve performance, but also to better tailor future models to regional emergency management needs, including informing route planning and repair cost estimation.”

The team is continuing to test, train, and tweak their algorithms to improve accuracy. Their hope is that these techniques may soon be deployed to help answer important questions during disaster recovery.

“We picture lidar as a 3D scaffold that other data can be draped over and that can be trusted,” Council says. “The more trust, the more likely an emergency manager, and a community in general, will use it to make the best decisions they can.”

Read More

Professor Daniela Rus named to White House science council

This week the White House announced that MIT Professor Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), has been selected to serve on the President’s Council of Advisors on Science and Technology (PCAST).

The council provides advice to the White House on topics critical to U.S. security and the economy, including policy recommendations on the future of work, American leadership in science and technology, and the support of U.S. research and development. 

PCAST operates under the aegis of the White House Office of Science and Technology Policy (OSTP), which was established in law in 1976. However, the council has existed more informally going back to Franklin Roosevelt’s Science Advisory Board in 1933.

“I’m grateful to be able to add my perspective as a computer scientist to this group at a time when so many issues involving AI and other aspects of computing raise important scientific and policy questions for the nation and the world,” says Rus.
 
Rus is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and the deputy dean of research for the MIT Stephen A. Schwarzman College of Computing. Her research in robotics, artificial intelligence, and data science focuses primarily on developing the science and engineering of autonomy, with the long-term objective of enabling a future where machines are integrated into daily life to support both cognitive and physical tasks. The applications of her work are broad and include transportation, manufacturing, medicine, and urban planning. 
 
More than a dozen MIT faculty and alumni have served on PCAST during past presidential administrations. These include former MIT president Charles Vest; Institute Professors Phillip Sharp and John Deutch; Ernest Moniz, professor of physics and former U.S. Secretary of Energy; and Eric Lander, director of the Broad Institute of MIT and Harvard and professor of biology, who co-chaired PCAST during the Obama administration. Previous councils have offered advice on topics ranging from data privacy and nanotechnology to job training and STEM education.

Read More

AI for Medicine Specialization featuring TensorFlow

Posted by Laurence Moroney, AI Advocate

I’m excited to share an important aspect of the TensorFlow community: educators and domain experts teaching developers how to use machine learning to solve important problems in a variety of fields, including health care. To this end, deeplearning.ai and Coursera have launched an “AI for Medicine” specialization using TensorFlow.
Nothing excites our team more than seeing how others are using TensorFlow to solve real-world problems. With this three-course specialization, introduced by Andrew Ng and taught by Pranav Rajpurkar, we hope to widen access so that more people can understand the needs of medical machine learning problems.

Deeplearning.ai and Coursera have designed a specialization that is divided into three courses. The first, Machine Learning for Medical Diagnosis, takes you through hypothetical machine learning scenarios for diagnosing medical issues. In the first week, you’ll explore scenarios like detecting skin cancer, eye disease, and histopathology. You’ll get hands-on experience writing TensorFlow code that uses convolutional neural networks to examine images, which, for example, can be used to identify different conditions in an X-ray.
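
To give a sense of what that code looks like, here is a minimal transfer-learning image classifier in tf.keras. It is not taken from the course; the DenseNet121 backbone, image size, and condition count are placeholder choices.

import tensorflow as tf

NUM_CONDITIONS = 14  # placeholder: number of conditions the model predicts

# Pretrained backbone with a new multi-label classification head.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(320, 320, 3))

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    # Sigmoid outputs: a single X-ray can show several conditions at once.
    tf.keras.layers.Dense(NUM_CONDITIONS, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
# model.fit(train_dataset, validation_data=val_dataset, epochs=10)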

The course does require some knowledge of TensorFlow, using techniques such as convolutional neural networks, transfer learning, natural language processing, and more. I recommend taking the TensorFlow: In Practice specialization to pick up the coding skills behind it, and the Deep Learning Specialization to go deeper into how the underlying technology works. Another great resource for learning the techniques used in this course is the book “Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow” by Aurélien Géron.

One of the things I really enjoyed about the course is the balance between medical terminology and common machine learning techniques from TensorFlow, such as data augmentation, that you use to improve your models. Note: all of the data used in the course is de-identified.

Exercises from Rajpurkar’s and Ng’s course: Using image augmentation to extend the effective size of a dataset.
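
For illustration, augmentation of this kind might be set up in tf.keras roughly like this (my own sketch, not the course notebook; the directory path and image size are placeholders):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Randomly shift, zoom, and flip training images so the model sees more
# variation than the raw dataset contains, and normalize each image.
train_gen = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
    samplewise_center=True,
    samplewise_std_normalization=True)

# train_flow = train_gen.flow_from_directory(
#     "train/", target_size=(320, 320), class_mode="binary", batch_size=32)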

The course continues with evaluation metrics, covering how to isolate the key ones and how to interpret confidence intervals accurately.

The first course wraps up with another deep dive into image processing, this time using segmentation on MRI images, and ends with a programming assignment on brain tumor auto-segmentation in MRIs.
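
Segmentation models of this kind are commonly trained with a soft Dice loss rather than plain cross-entropy. A generic TensorFlow version (a sketch, not the assignment's code) looks like this:

import tensorflow as tf

def soft_dice_loss(y_true, y_pred, epsilon=1e-6):
    """1 minus the soft Dice coefficient, averaged over classes.

    y_true, y_pred: tensors of shape (batch, height, width, depth, classes),
    with y_pred holding per-voxel probabilities.
    """
    axes = (1, 2, 3)  # spatial axes of the MRI volume
    numerator = 2.0 * tf.reduce_sum(y_true * y_pred, axis=axes)
    denominator = tf.reduce_sum(y_true, axis=axes) + tf.reduce_sum(y_pred, axis=axes)
    dice = (numerator + epsilon) / (denominator + epsilon)
    return 1.0 - tf.reduce_mean(dice)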

The second course in the specialization, Machine Learning for Medical Prognosis, teaches you to build models that predict future patient health. You’ll learn techniques to extract data from reports, such as a patient’s health metrics, history, and demographics, to predict their risk of a major event such as a heart attack.

The third and final course, Machine Learning for Medical Treatment, covers models that may be used to assist in medical care by predicting the potential effect of a medical treatment on a patient. It also covers machine learning for text, so that you can use NLP techniques to extract information from radiography reports, for example to derive labels or to build the basis of a bot that answers medical questions.

In the words of Andrew Ng, “Even if your current work is not in medicine, I think you will find the application scenarios and the practice of these scenarios to be really useful, and maybe this specialization will inspire you to get more interested in medicine”.

The specialization is available at Coursera and, like all courses, can be audited for free. You can learn more about deeplearning.ai at their website, and about TensorFlow at tensorflow.org.
Read More

PyTorch library updates including new model serving library

Along with the PyTorch 1.5 release, we are announcing new libraries for high-performance PyTorch model serving and tight integration with TorchElastic and Kubernetes. Additionally, we are releasing updated packages for torch_xla (Google Cloud TPUs), torchaudio, torchvision, and torchtext. All of these new libraries and enhanced capabilities are available today and accompany all of the core features released in PyTorch 1.5.

TorchServe (Experimental)

TorchServe is a flexible and easy to use library for serving PyTorch models in production performantly at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration. TorchServe was jointly developed by engineers from Facebook and AWS with feedback and engagement from the broader PyTorch community. The experimental release of TorchServe is available today. Some of the highlights include:

  • Support for both Python-based and TorchScript-based models
  • Default handlers for common use cases (e.g., image segmentation, text classification), as well as the ability to write custom handlers for other use cases (see the sketch after this list)
  • Model versioning, the ability to run multiple versions of a model at the same time, and the ability to roll back to an earlier version
  • The ability to package a model, learning weights, and supporting files (e.g., class mappings, vocabularies) into a single, persistent artifact (a.k.a. the “model archive”)
  • Robust management capability, allowing full configuration of models, versions, and individual worker threads via command line, config file, or run-time API
  • Automatic batching of individual inferences across HTTP requests
  • Logging including common metrics, and the ability to incorporate custom metrics
  • Ready-made Dockerfile for easy deployment
  • HTTPS support for secure deployment
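
As an illustration of the custom handler hook mentioned above, a minimal handler module might look like the following. This is a sketch only; the request and context fields used here are assumptions that should be checked against the TorchServe custom-service documentation.

# my_handler.py -- hypothetical custom TorchServe handler (sketch only).
import os
import torch

model = None

def handle(data, context):
    # Entry point TorchServe calls with a batch of inference requests.
    global model
    if model is None:
        # Load the TorchScript model packaged inside the model archive
        # (assumes the archive contains a file named model.pt).
        model_dir = context.system_properties.get("model_dir")
        model = torch.jit.load(os.path.join(model_dir, "model.pt"))
        model.eval()
    if data is None:
        return None
    # Assumes each request body is a JSON list of numbers under "data" or "body".
    inputs = [torch.tensor(row.get("data") or row.get("body")) for row in data]
    with torch.no_grad():
        outputs = [model(x.unsqueeze(0)).squeeze(0).tolist() for x in inputs]
    return outputs  # one response per request in the batch

The handler file is then passed to torch-model-archiver via its --handler flag when the model archive is built.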

To learn more about the APIs and the design of this feature, see the links below:

  • See here for a full multi-node deployment reference architecture.
  • The full documentation can be found here.

TorchElastic integration with Kubernetes (Experimental)

TorchElastic is a proven library for training large-scale deep neural networks within companies like Facebook, where the ability to dynamically adapt to server availability and to scale as new compute resources come online is critical. Kubernetes enables customers using machine learning frameworks like PyTorch to run training jobs distributed across fleets of powerful GPU instances like the Amazon EC2 P3. Distributed training jobs, however, are not fault-tolerant: a job cannot continue if a node failure or reclamation interrupts training. Further, jobs cannot start without acquiring all required resources, or scale up and down without being restarted. This lack of resiliency and flexibility results in increased training time and costs from idle resources. TorchElastic addresses these limitations by enabling distributed training jobs to be executed in a fault-tolerant and elastic manner. Until today, Kubernetes users needed to manually manage the Pods and Services required for TorchElastic training jobs.

Through a joint collaboration between engineers at Facebook and AWS, TorchElastic, which adds elasticity and fault tolerance, is now supported both on vanilla Kubernetes and through the managed EKS service from AWS.

To learn more see the TorchElastic repo for the controller implementation and docs on how to use it.

torch_xla 1.5 now available

torch_xla is a Python package that uses the XLA linear algebra compiler to accelerate the PyTorch deep learning framework on Cloud TPUs and Cloud TPU Pods. torch_xla aims to give PyTorch users the ability to do everything they can do on GPUs on Cloud TPUs as well while minimizing changes to the user experience. The project began with a conversation at NeurIPS 2017 and gathered momentum in 2018 when teams from Facebook and Google came together to create a proof of concept. We announced this collaboration at PTDC 2018 and made the PyTorch/XLA integration broadly available at PTDC 2019. The project already has 28 contributors, nearly 2k commits, and a repo that has been forked more than 100 times.

This release of torch_xla is aligned and tested with PyTorch 1.5 to reduce friction for developers and to provide a stable and mature PyTorch/XLA stack for training models using Cloud TPU hardware. You can try it for free in your browser on an 8-core Cloud TPU device with Google Colab, and you can use it at a much larger scale on Google Cloud.
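
The basic single-device usage pattern will look familiar to GPU users. A minimal sketch (the linear model and random batch are stand-ins for a real workload):

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                     # a Cloud TPU core, analogous to torch.device("cuda")
model = torch.nn.Linear(10, 2).to(device)    # stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(8, 10).to(device)       # stand-in batch
targets = torch.randint(0, 2, (8,)).to(device)

optimizer.zero_grad()
loss = torch.nn.functional.cross_entropy(model(inputs), targets)
loss.backward()
xm.optimizer_step(optimizer, barrier=True)   # steps the optimizer and triggers XLA execution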

See the full torch_xla release notes here. Full docs and tutorials can be found here and here.

PyTorch Domain Libraries

torchaudio, torchvision, and torchtext complement PyTorch with common datasets, models, and transforms in each domain area. We’re excited to share new releases for all three domain libraries alongside PyTorch 1.5 and the rest of the library updates. For this release, all three domain libraries are removing support for Python 2 and will support Python 3 only.

torchaudio 0.5

The torchaudio 0.5 release includes new transforms, functionals, and datasets. Highlights for the release include:

  • Added the Griffin-Lim functional and transform, the InverseMelScale and Vol transforms, and DB_to_amplitude (see the short sketch after this list).
  • Added support for allpass, fade, bandpass, bandreject, band, treble, deemph, and riaa filters and transformations.
  • Added new datasets, including LJSpeech and SpeechCommands.
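
A quick look at two of the new transforms (a sketch; the audio file path is a placeholder):

import torchaudio

waveform, sample_rate = torchaudio.load("speech.wav")   # placeholder path

# Vol scales the waveform's volume (new transform in 0.5).
louder = torchaudio.transforms.Vol(gain=2.0, gain_type="amplitude")(waveform)

# GriffinLim reconstructs a waveform from a power spectrogram (new in 0.5).
spec = torchaudio.transforms.Spectrogram(n_fft=400)(waveform)
reconstructed = torchaudio.transforms.GriffinLim(n_fft=400)(spec)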

See the full release notes here, and full docs can be found here.

torchvision 0.6

The torchvision 0.6 release includes updates to datasets and models, and a significant number of bug fixes. Highlights include:

  • Faster R-CNN now supports negative samples, which allows feeding images without annotations at training time (see the sketch after this list).
  • Added aligned flag to RoIAlign to match Detectron2.
  • Refactored abstractions for C++ video decoder
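
A minimal sketch of the negative-samples feature, using random image data purely for illustration:

import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=False, num_classes=2)
model.train()

images = [torch.rand(3, 300, 400)]
# A "negative" image: no objects, so the boxes and labels tensors are empty.
targets = [{
    "boxes": torch.zeros((0, 4), dtype=torch.float32),
    "labels": torch.zeros((0,), dtype=torch.int64),
}]
loss_dict = model(images, targets)   # trains on the image without annotations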

See the full release notes here, and full docs can be found here.

torchtext 0.6

The torchtext 0.6 release includes a number of bug fixes and improvements to documentation. Based on user feedback, the dataset abstractions are also currently being redesigned. Highlights for the release include:

  • Fixed an issue related to the SentencePiece dependency in conda package.
  • Added support for the experimental IMDB dataset to allow a custom vocab.
  • A number of documentation updates including adding a code of conduct and a deduplication of the docs on the torchtext site.

Your feedback and discussions on the experimental datasets API are welcome; you can send them to issue #664. We would also like to highlight the pull request here, where the latest dataset abstraction is applied to the text classification datasets. Your feedback will help finalize this abstraction.

See the full release notes here, and full docs can be found here.

We’d like to thank the entire PyTorch team, the Amazon team and the community for all their contributions to this work.

Cheers!

Team PyTorch

Read More

PyTorch 1.5 released, new and updated APIs including C++ frontend API parity with Python

Today, we’re announcing the availability of PyTorch 1.5, along with new and updated libraries. This release includes several major new API additions and improvements. PyTorch now includes a significant update to the C++ frontend, ‘channels last’ memory format for computer vision models, and a stable release of the distributed RPC framework used for model-parallel training. The release also has new autograd APIs for Hessians and Jacobians, and an API for creating custom C++ classes that was inspired by pybind11.

You can find the detailed release notes here.

C++ Frontend API (Stable)

The C++ frontend API is now at parity with Python, and the features overall have been moved to ‘stable’ (previously tagged as experimental). Some of the major highlights include:

  • Now with ~100% coverage and docs for C++ torch::nn module/functional, users can easily translate their model from Python API to C++ API, making the model authoring experience much smoother.
  • Optimizers in C++ had deviated from the Python equivalent: C++ optimizers can’t take parameter groups as input while the Python ones can. Additionally, step function implementations were not exactly the same. With the 1.5 release, C++ optimizers will always behave the same as the Python equivalent.
  • The lack of tensor multi-dim indexing API in C++ is a well-known issue and had resulted in many posts in PyTorch Github issue tracker and forum. The previous workaround was to use a combination of narrow / select / index_select / masked_select, which was clunky and error-prone compared to the Python API’s elegant tensor[:, 0, ..., mask] syntax. With the 1.5 release, users can use tensor.index({Slice(), 0, "...", mask}) to achieve the same purpose.

‘Channels last’ memory format for Computer Vision models (Experimental)

‘Channels last’ memory layout unlocks the ability to use performance-efficient convolution algorithms and hardware (NVIDIA’s Tensor Cores, FBGEMM, QNNPACK). Additionally, it is designed to automatically propagate through the operators, which allows easy switching between memory layouts.
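
Opting a model and its inputs into the new layout is a small change. A minimal sketch:

import torch
import torchvision

model = torchvision.models.resnet50()
model = model.to(memory_format=torch.channels_last)   # convert the model's weights

x = torch.randn(8, 3, 224, 224)
x = x.to(memory_format=torch.channels_last)           # NCHW sizes, NHWC strides
out = model(x)                                        # layout propagates through conv ops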

Learn more here on how to write memory format aware operators.

Custom C++ Classes (Experimental)

This release adds a new API, torch::class_, for binding custom C++ classes into TorchScript and Python simultaneously. This API is almost identical in syntax to pybind11. It allows users to expose their C++ class and its methods to the TorchScript type system and runtime system such that they can instantiate and manipulate arbitrary C++ objects from TorchScript and Python. An example C++ binding:

template <class T>
struct MyStackClass : torch::CustomClassHolder {
  std::vector<T> stack_;
  MyStackClass(std::vector<T> init) : stack_(std::move(init)) {}

  void push(T x) {
    stack_.push_back(x);
  }
  T pop() {
    auto val = stack_.back();
    stack_.pop_back();
    return val;
  }
};

static auto testStack =
  torch::class_<MyStackClass<std::string>>("myclasses", "MyStackClass")
      .def(torch::init<std::vector<std::string>>())
      .def("push", &MyStackClass<std::string>::push)
      .def("pop", &MyStackClass<std::string>::pop)
      .def("size", [](const c10::intrusive_ptr<MyStackClass>& self) {
        return self->stack_.size();
      });

Which exposes a class you can use in Python and TorchScript like so:

@torch.jit.script
def do_stacks(s : torch.classes.myclasses.MyStackClass):
    s2 = torch.classes.myclasses.MyStackClass(["hi", "mom"])
    print(s2.pop()) # "mom"
    s2.push("foobar")
    return s2 # ["hi", "foobar"]

You can try it out in the tutorial here.

Distributed RPC framework APIs (Now Stable)

The Distributed RPC framework was launched as experimental in the 1.4 release and is now marked stable. This work involved many enhancements and bug fixes to make the distributed RPC framework more reliable and robust overall, as well as a couple of new features, including profiling support and the ability to use TorchScript functions in RPC, along with several ease-of-use improvements. Below is an overview of the various APIs within the framework:

RPC API

The RPC API allows users to specify functions to run and objects to be instantiated on remote nodes. These functions are transparently recorded so that gradients can backpropagate through remote nodes using Distributed Autograd.
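
A minimal two-process sketch of the API, closely following the documented usage; worker names and ranks are illustrative, and MASTER_ADDR/MASTER_PORT are assumed to be set in the environment:

import torch
import torch.distributed.rpc as rpc

# On the process acting as "worker0":
rpc.init_rpc("worker0", rank=0, world_size=2)

# Run torch.add remotely on "worker1" and get the result back synchronously.
result = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), torch.ones(2)))

# remote() instead returns an RRef (remote reference) to the result.
rref = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
print(rref.to_here())

rpc.shutdown()

# The process acting as "worker1" simply calls:
#   rpc.init_rpc("worker1", rank=1, world_size=2)
#   rpc.shutdown()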

Distributed Autograd

Distributed Autograd connects the autograd graph across several nodes and allows gradients to flow through during the backwards pass. Gradients are accumulated into a context (as opposed to the .grad field as with Autograd) and users must specify their model’s forward pass under a with dist_autograd.context() manager in order to ensure that all RPC communication is recorded properly. Currently, only FAST mode is implemented (see here for the difference between FAST and SMART modes).
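
In code, the pattern looks like this (a sketch that assumes an RPC group has already been initialized as above):

import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc

with dist_autograd.context() as context_id:
    # Forward pass: part of the computation runs on a remote worker over RPC.
    t1 = torch.rand((3, 3), requires_grad=True)
    t2 = rpc.rpc_sync("worker1", torch.add, args=(t1, t1))
    loss = t2.sum()

    # Backward pass across workers; gradients accumulate in the context,
    # not in t1.grad.
    dist_autograd.backward(context_id, [loss])
    grads = dist_autograd.get_gradients(context_id)
    print(grads[t1])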

Distributed Optimizer

The distributed optimizer creates RRefs to optimizers on each worker with parameters that require gradients, and then uses the RPC API to run the optimizer remotely. The user must collect all remote parameters and wrap them in an RRef, as this is required input to the distributed optimizer. The user must also specify the distributed autograd context_id so that the optimizer knows in which context to look for gradients.
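
Continuing the sketch, the optimizer is driven from inside a distributed autograd context, closely following the documented example:

import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
from torch import optim
from torch.distributed.optim import DistributedOptimizer

with dist_autograd.context() as context_id:
    # Forward pass: RRefs point at tensors owned by the remote worker.
    rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
    rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))
    loss = rref1.to_here() + rref2.to_here()

    # Backward pass across workers.
    dist_autograd.backward(context_id, [loss.sum()])

    # One local optimizer is created on each worker that owns parameters.
    dist_optim = DistributedOptimizer(optim.SGD, [rref1, rref2], lr=0.05)
    dist_optim.step(context_id)   # gradients are looked up via the context id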

Learn more about distributed RPC framework APIs here.

New High level autograd API (Experimental)

PyTorch 1.5 brings new functions, including jacobian, hessian, jvp, vjp, hvp, and vhp, to the torch.autograd.functional submodule. This feature builds on the current API and lets users easily compute these quantities.
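
For example, computing a Jacobian or a Hessian of a simple function no longer requires hand-rolled loops over grad calls:

import torch
from torch.autograd.functional import jacobian, hessian

def f(x):
    return (x ** 3).sum()

x = torch.randn(4)
J = jacobian(f, x)   # shape (4,): equal to 3 * x**2
H = hessian(f, x)    # shape (4, 4): diagonal matrix with 6 * x on the diagonal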

Detailed design discussion on GitHub can be found here.

Python 2 no longer supported

Starting with PyTorch 1.5.0, we will no longer support Python 2, specifically version 2.7. Going forward, support for Python will be limited to Python 3, specifically Python 3.5, 3.6, 3.7, and 3.8 (first enabled in PyTorch 1.4.0).

We’d like to thank the entire PyTorch team and the community for all their contributions to this work.

Cheers!

Team PyTorch

Read More

Specification gaming: the flip side of AI ingenuity

Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome. We have all had experiences with specification gaming, even if not by this name. Readers may have heard the myth of King Midas and the golden touch, in which the king asks that anything he touches be turned to gold – but soon finds that even food and drink turn to metal in his hands. In the real world, when rewarded for doing well on a homework assignment, a student might copy another student to get the right answers, rather than learning the material – and thus exploit a loophole in the task specification.

Read More

MIT team races to fill Covid-19-related ventilator shortage

It was clear early on in the unfolding Covid-19 pandemic that a critical need in the coming weeks and months would be for ventilators, the potentially life-saving devices that keep air flowing into a patient whose ability to breathe is failing.

Seeing a potential shortfall of hundreds of thousands of such units, professor of mechanical engineering Alex Slocum Sr. and other engineers at MIT swung into action, rapidly pulling together a team of volunteers with expertise in mechanical design, electronics, and controls, and a team of doctors with clinical experience in treating respiratory conditions. They started working together nonstop to develop an inexpensive alternative and share what they learned along the way. The goal was a design that could be produced quickly enough, potentially worldwide, to make a real difference in the immediate crisis.

In a very short time, they succeeded.

Just four weeks since the team convened, production of the first devices based directly on its work has begun in New York City. A group including 10XBeta, Boyce Technologies, and Newlab has begun production of a version called Spiro Wave, in close collaboration with the MIT team. The consortium expects to quickly deliver hundreds of units to meet the immediate needs of hospitals in New York and, eventually, other hospitals around the country.

Meanwhile, the team, called MIT Emergency Ventilator, has continued its research to develop the design further. The next iteration will be more compact, have a slightly different drive system, and add a key respiratory function. The overarching goal is to focus on safety and straightforward functionality and fabrication. 10XBeta, in both New York and Johannesburg, along with Vecna Technologies and NN Life Sciences in the Boston area, is participating in this effort. 10XBeta was founded by MIT alumnus Marcel Botha SM ’06.

One version of the MIT Emergency Ventilator team’s design undergoes testing in their lab. Courtesy of the MIT Emergency Ventilator Team

A complex design challenge

Alexander Slocum Jr. SB ’08, SM ’10, PhD ’13, a mechanical engineer who is now a surgical resident at the Medical College of Wisconsin, worked closely with his father, Slocum Sr., and MIT research scientist Nevan Hanumara MS ’06, PhD ’12 to help lead the initial ramp up.

“The numbers are frightening, to put it bluntly,” Slocum Jr. says. “This project started around the time of news reports from Italy describing ventilators being rationed due to shortages, and available data at that time suggested about 10 percent of Covid patients would require an ICU.” One of his first tasks was to estimate the potential ventilator shortage, using resources like the CDC’s Pandemic Response Plan, and literature on critical care resource utilization. “We estimated a shortage of around 100,000 to 200,000 ventilators was possible by April or May,” he says.

Hanumara, who is one of the project leads for the Emergency Ventilator team, says the team intends to offer open-source guidelines, rather than detailed plans or kits, that will serve as resources to enable skilled teams around the country and world — such as hospital-based engineering groups, biomedical device manufacturing companies, and industry groups — to develop their own specific versions, taking into account local supply chains.

“There’s a reason we don’t have a single exact plan [on the website],” Hanumara says. “We have information and reference designs, because this isn’t something a home hobbyist should be making. We want to emphasize that it’s not trivial to create a system that can provide ventilation safely.”

“We saw all these designs being posted online, which is awesome that so many people wanted to help,” says Slocum Jr. “We thought the best first step would be to identify the minimum clinical functional requirements for safe ventilation, compare that to reported methods for managing ventilated patients with Covid, and use that to help us choose a design.”

The principle behind the existing device is certainly simple enough: Take an emergency resuscitator bag (Ambu is a common brand), which hospitals already have in large numbers and which is designed to be squeezed by hand. Automating the squeezing — using a pair of curved paddles driven by a motor — would allow rapid scale-up. But there’s a lot more to it, Hanumara says: “The controls are really tricky, and they have required many iterations as our understanding of the clinical and safety challenge grew.”

Slocum Jr. adds, “Covid patients often require ventilation for a week or more, and in longer cases that would mean about a million breaths. The paddles are specifically designed to encourage rolling contact in order to minimize wear on the bag.”

The starting point was a design developed a decade ago as a student team project in MIT class 2.75 (Medical Device Design), taught by Slocum Sr. and Hanumara. The team’s paper gave the new project a significant head start in tackling the design problem now, as they make rapid progress in close consultation with clinical practitioners.

That integral involvement of clinicians “is one key difference between us and a lot of the others” working on this engineering problem, says Kimberly Jung, an MIT master’s student in mechanical engineering.

Jung — who previously served five years in the U.S. Army, earned an MBA at Harvard University, and started a spice business that is currently the largest employer of women in Afghanistan — has been acting as the team’s executive officer as well as part of the engineering team. She says, “There are a lot of individuals and many small companies trying to make solutions for low-cost ventilators. The problem is that they just haven’t adhered to clinical guidelines, such as the tidal volume, inspiration-to-expiration ratio, breaths-per-minute rate, maximum pressures, and key monitoring for safety. Developing these clinical requirements and translating them into engineering design requirements takes a lot of time and effort. This is a year-long research and development process that has been condensed into several weeks.”

A team assembles

Others got pulled into the team as the project ramped up. Coby Unger, an industrial designer and instructor at the MIT Hobby Shop, started building the first prototypes in the machine shop. Jung recruited her classmate and neighbor, Shakti Shaligram SM ’19, to help with machining, and also brought in Michael Detienne, an electrical engineer and member of the MITERS makerspace. Two students at the MIT Maker Workshop helped with initial fabrication with stock borrowed from the MIT Laboratory for Manufacturing and Productivity shop. Looking for pressure sensors, Hanumara reached out to David Hagan PhD ’20, CEO of an MIT spinoff company called QuantAQ, and he joined the team. The website was rapidly deployed by Eric Norman, a communications expert who had worked with Hanumara on another MIT project.

Realizing that feedback and control systems were crucial to the device’s safe operation, the team early on decided they needed help from specialists in that area. Daniela Rus, head of MIT’s Computer Science and Artificial Intelligence Laboratory, joined the team and took responsibility for the control system along with several members of her research group. Rus also suggested research scientist Murad Abu-Khalaf and graduate students Teddy Ort and Brandon Araki join the volunteer team. They eagerly accepted the invitation. Ort’s roommate, Amado Antonini SM ’18, also joined the team to assist with motor controls.

Meanwhile, alumnus Albert Kwon SB ’08, HST ’13, an anesthesiologist at Westchester Medical Center and assistant professor of anesthesiology at New York Medical College, was recruited by Slocum Jr. to join the project early on. Kwon was granted leave from his job at Westchester to devote time to the project, providing clinical guidance on the kinds of controls and safety systems needed for the device to work safely. “Westchester Medical Center gave him up, which is very special, and he’s been working to translate the technical to clinical, and explain the scenarios that fit with a stripped-down system like this,” Hanumara says. Jay Connor, a surgeon at Mt. Auburn Hospital and part of the Medical Device Design course teaching team, Christoph Nabzdyk, a cardiothoracic anesthesiologist and critical care physician at Mayo Clinic and long-time colleague of Kwon, and Dirk Varelmann, another anesthesiologist from Brigham and Women’s Hospital, and many other clinicians advised the MIT Emergency Ventilator team.

A spark to help others fill the gap

“While our design cannot replace a full featured ventilator,” Hanumara stresses, “it does provide key ventilation functions that will allow health care facilities under pressure to better ration their ICU ventilators and human resources, in a bad scenario.”

In a way, he says, “we’re turning the clock back, going back to the core parameters of ventilation.” Before today’s electronic sensors and controls were available, “doctors were trained to adjust the ventilators based on looking directly at physiological responses of the patient. So, we know that’s doable. … The patient himself is a reasonable sensor.”

While the federal government has now established contracts with large manufacturing companies to start producing ventilators to help meet the urgent need, that process will take time, Jung says, leaving a significant gap for something to meet the need in the meantime. “The fastest these large manufacturers can spin up is about two months,” she says.

“This need will probably be even more pronounced in the emerging markets,” Hanumara adds.

The team doesn’t plan to launch its own production, or even to provide a single, detailed set of plans. “Our goal is to put out a really solid reference design,” Hanumara says, “and to a limited extent help big groups scale it. We have shared great learnings with our local industry collaborators.” It will be up to local teams to adapt the design to the materials and parts they can reliably obtain and to the particular needs of their hospitals.

He says, “Your mechanical and electrical engineering team will have to inquire as to what’s in their supply chain and what fabrication methods they have easily available to them, and adapt the design. The base designs are intended to be really adaptable, but they may require modifications. What motors can they source? What motor drivers and controllers does the electrical team need to look at? What level of controls and safeties do their clinicians require for their patient population, and how should this be reflected in the code? So, we can’t put out an exact kit.”

The hope is to provide a spark to start teams everywhere to further develop and adapt the concept, Hanumara says. “Provided clinical safety is shown, we’ll probably see many of these around the world, with some shared DNA from us, as well as local flavors. And I think that will be beautiful, because it will mean that people all over are working hard to help their communities.”

“I’m super proud of the team,” Jung says, “for how each of us has stepped up to the plate and stuck with it despite the internal and external challenges. All of us have one mission in mind, which is to save lives, and that’s what has kept us together and turned us into a quirky MIT family.”

This article has been updated to reflect a change to the project’s name, from MIT E-Vent to MIT Emergency Ventilator.

Read More