New US postage stamp highlights MIT research

Letter writers across the country will soon have a fun and beautiful new Forever stamp to choose from, featuring novel research from the Media Lab’s Biomechatronics research group. 

The stamp is part of a new U.S. Postal Service (USPS) series on innovation, representing computing, biomedicine, genome sequencing, robotics, and solar technology. For the robotics category, the USPS chose the bionic prosthesis designed and built by Matt Carney PhD ’20 and members of the Biomechatronics group, led by Professor Hugh Herr.

The image used in the stamp was taken by photographer Andy Ryan, whose portfolio spans images from around the world, and who for many years has been capturing the MIT experience — from stunning architectural shots to the research work of labs across campus. Ryan suggested the bionic work of the biomechatronics group to USPS to represent the future of robotics. Ryan also created the images that became the computing and solar technology stamps in the series. 

“I was aware that Hugh Herr and his research team were incorporating robotic elements into the prosthetic legs they were developing and testing,” Ryan notes. “This vision of robotics was, in my mind, a true depiction of how robots and robotics would manifest and impact society in the future.” 

With encouragement from Herr, Ryan submitted high-definition, stylized, and close-up images of Matt Carney working on the group’s latest designs. 

Carney, who recently completed his PhD in media arts and sciences at the Media Lab, views bionic limbs as the ultimate humanoid robot, and an ideal innovation to represent and portray robotics in 2020. He was all-in for sharing that work with the world.

“Robotic prostheses integrate biomechanics, mechanical, electrical, and software engineering, and no piece is off-the-shelf,” Carney says. “To attempt to fit within the confines of the human form, and to match the bandwidth and power density of the human body, we must push the bounds of every discipline: computation, strength of materials, magnetic energy densities, sensors, biological interfaces, and so much more.”

In his childhood, Carney himself collected stamps from different corners of the globe, and so the selection of his research for a U.S. postal stamp has been especially meaningful. 

“It’s a freakin’ honor to have my PhD work featured as a USPS stamp,” Carney says, breaking into a big smile. “I hope this feat is an inspiration to young students everywhere to crush their homework, and to build the skills to make a positive impact on the world. And while I worked insane hours to build this thing — and really tried to inspire with its design as much as its engineering — it’s truly the culmination of powered prosthesis work pioneered by Dr. Hugh Herr and our entire team at the Media Lab’s Biomechatronics group, and it expands on work from a global community over more than a decade of development.”

The new MIT stamp joins a venerable list of other stamps associated with the Institute. Previously issued stamps have featured Apollo 11 astronaut and moonwalker Buzz Aldrin ScD ’63; Nobel Prize winner Richard Feynman ’39; architect Robert Robinson Taylor, who graduated from MIT in 1892 and is considered the nation’s first academically trained African American architect; and Pritzker Prize-winning architect I.M. Pei ’40, whose work includes the Louvre’s glass pyramid, the East Building of the National Gallery of Art in Washington, and numerous buildings on the MIT campus. 

The new robotics stamp, however, is the first to feature MIT research, as well as members of the MIT community.

“I’m deeply honored that a USPS Forever stamp has been created to celebrate technologically advanced robotic prostheses, and along with that, the determination to alleviate human impairment,” Herr says. “Through the marriage of human physiology and robotics, persons with leg amputation can now walk with powered prostheses that closely emulate the biological leg. By integrating synthetic sensors, artificial computation, and muscle-like actuation, these technologies are already improving people’s lives in profound ways, and may one day soon bring about the end of disability.”

The Innovation Stamp series will be available for purchase through the U.S. Postal Service later this month.


An automated health care system that understands when to step in

In recent years, entire industries have popped up that rely on the delicate interplay between human workers and automated software. Companies like Facebook work to keep hateful and violent content off their platforms using a combination of automated filtering and human moderators. In the medical field, researchers at MIT and elsewhere have used machine learning to help radiologists better detect different forms of cancer.

What can be tricky about these hybrid approaches is understanding when to rely on the expertise of people versus programs. This isn’t always merely a question of who does a task “better”; indeed, if a person has limited bandwidth, the system may have to be trained to minimize how often it asks for help.

To tackle this complex issue, researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) have developed a machine learning system that can either make a prediction about a task, or defer the decision to an expert. Most importantly, it can adapt when and how often it defers to its human collaborator, based on factors such as its teammate’s availability and level of experience.

The team trained the system on multiple tasks, including looking at chest X-rays to diagnose specific conditions such as atelectasis (lung collapse) and cardiomegaly (an enlarged heart). In the case of cardiomegaly, they found that their human-AI hybrid model performed 8 percent better than either could on its own (based on AU-ROC scores).

“In medical environments where doctors don’t have many extra cycles, it’s not the best use of their time to have them look at every single data point from a given patient’s file,” says PhD student Hussein Mozannar, lead author with David Sontag, the Von Helmholtz Associate Professor of Medical Engineering in the Department of Electrical Engineering and Computer Science, of a new paper about the system that was recently presented at the International Conference on Machine Learning. “In that sort of scenario, it’s important for the system to be especially sensitive to their time and only ask for their help when absolutely necessary.”

The system has two parts: a “classifier” that can predict a certain subset of tasks, and a “rejector” that decides whether a given task should be handled by either its own classifier or the human expert.
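The two-part design can be sketched in a few lines of code. This is an illustrative toy, not the authors' actual model: the CSAIL system learns the classifier and rejector jointly from data, while the confidence rule, threshold, and `expert_cost` parameter below are hypothetical stand-ins.

```python
# Toy sketch of a classifier/rejector pipeline (illustrative only; the
# real system learns both components jointly, and these rules are
# hypothetical).

def classifier(x):
    # Stand-in model: label 1 if the feature sum is positive, with a
    # crude confidence score capped at 1.0.
    score = sum(x)
    return (1 if score > 0 else 0), min(abs(score), 1.0)

def rejector(confidence, expert_cost):
    # Defer to the human only when model confidence is low relative to
    # how expensive the expert's time is: a busier (costlier) expert
    # gets asked less often.
    return confidence < 1.0 - expert_cost

def predict(x, human_expert, expert_cost=0.3):
    label, confidence = classifier(x)
    if rejector(confidence, expert_cost):
        return human_expert(x)  # hand the hard case to the expert
    return label                # keep the model's own prediction

expert = lambda x: 0            # hypothetical expert who answers 0

print(predict([0.9, 0.4], expert))     # confident model: prints 1
print(predict([0.05, -0.02], expert))  # low confidence: defers, prints 0
```

Raising `expert_cost` makes the rejector keep more borderline cases for the model, which is the paper's point about adapting to a teammate's limited availability.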

Through experiments on tasks in medical diagnosis and text/image classification, the team showed that their approach not only achieves better accuracy than baselines, but does so with a lower computational cost and with far fewer training data samples.

“Our algorithms allow you to optimize for whatever choice you want, whether that’s the specific prediction accuracy or the cost of the expert’s time and effort,” says Sontag, who is also a member of MIT’s Institute for Medical Engineering and Science. “Moreover, by interpreting the learned rejector, the system provides insights into how experts make decisions, and in which settings AI may be more appropriate, or vice-versa.”

The system’s particular ability to help detect offensive text and images could also have interesting implications for content moderation. Mozannar suggests that it could be used at companies like Facebook in conjunction with a team of human moderators. (He is hopeful that such systems could minimize the number of hateful or traumatic posts that human moderators have to review every day.)

Sontag clarified that the team has not yet tested the system with human experts, but instead developed a series of “synthetic experts” so that they could tweak parameters such as experience and availability. In order to work with a new expert it’s never seen before, the system would need some minimal onboarding to get trained on the person’s particular strengths and weaknesses.

In future work, the team plans to test their approach with real human experts, such as radiologists for X-ray diagnosis. They will also explore how to develop systems that can learn from biased expert data, as well as systems that can work with — and defer to — several experts at once. For example, Sontag imagines a hospital scenario where the system could collaborate with different radiologists who are more experienced with different patient populations.

“There are many obstacles that understandably prohibit full automation in clinical settings, including issues of trust and accountability,” says Sontag. “We hope that our method will inspire machine learning practitioners to get more creative in integrating real-time human expertise into their algorithms.” 

Mozannar is affiliated with both CSAIL and the MIT Institute for Data, Systems, and Society (IDSS). The team’s work was supported, in part, by the National Science Foundation.


Algorithm finds hidden connections between paintings at the Met

Art is often heralded as the greatest journey into the past, solidifying a moment in time and space, and as a beautiful vehicle that lets us momentarily escape the present. 

Within the boundless treasure trove of paintings that exists, connections between works of art from different periods and places can easily go overlooked. It’s impossible for even the most knowledgeable of art critics to take in millions of paintings spanning thousands of years and find unexpected parallels in themes, motifs, and visual styles. 

To streamline this process, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Microsoft created an algorithm to discover hidden connections between paintings at the Metropolitan Museum of Art (the Met) and Amsterdam’s Rijksmuseum. 

Inspired by a special exhibit, “Rembrandt and Velázquez,” in the Rijksmuseum, the new “MosAIc” system finds paired or “analogous” works from different cultures, artists, and media by using deep networks to understand how “close” two images are. In that exhibit, the researchers were inspired by an unlikely yet similar pairing: Francisco de Zurbarán’s “The Martyrdom of Saint Serapion” and Jan Asselijn’s “The Threatened Swan,” two works that portray scenes of profound altruism with an eerie visual resemblance.

“These two artists did not have a correspondence or meet each other during their lives, yet their paintings hinted at a rich, latent structure that underlies both of their works,” says CSAIL PhD student Mark Hamilton, the lead author on a paper about “MosAIc.” 

To find two similar paintings, the team used a new algorithm for image search to unearth the closest match by a particular artist or culture. For example, in response to a query about “which musical instrument is closest to this painting of a blue-and-white dress,” the algorithm retrieves an image of a blue-and-white porcelain violin. These works are not only similar in pattern and form, but also draw their roots from a broader cultural exchange of porcelain between the Dutch and Chinese. 

“Image retrieval systems let users find images that are semantically similar to a query image, serving as the backbone of reverse image search engines and many product recommendation engines,” says Hamilton. “Restricting an image retrieval system to particular subsets of images can yield new insights into relationships in the visual world. We aim to encourage a new level of engagement with creative artifacts.” 

How it works 

For many, art and science are irreconcilable: one grounded in logic, reasoning, and proven truths, and the other motivated by emotion, aesthetics, and beauty. But recently, AI and art took on a new flirtation that, over the past 10 years, developed into something more serious. 

A large branch of this work, for example, has previously focused on generating new art using AI. There was the GauGAN project developed by researchers at MIT, NVIDIA, and the University of California at Berkeley; Hamilton and others’ previous GenStudio project; and even an AI-generated artwork that sold at Sotheby’s for $51,000.

MosAIc, however, doesn’t aim to create new art so much as help explore existing art. One similar tool, Google’s “X Degrees of Separation,” finds paths that connect two works of art, but MosAIc differs in that it only requires a single image. Instead of finding paths, it uncovers connections in whatever culture or media the user is interested in, such as finding the shared artistic form of “Anthropoides paradisea” and “Seth Slaying a Serpent, Temple of Amun at Hibis.” 

Hamilton notes that building out their algorithm was a tricky endeavor, because they wanted to find images that were similar not just in color or style, but in meaning and theme. In other words, they’d want dogs to be close to other dogs, people to be close to other people, and so forth. To achieve this, they probe a deep network’s inner “activations” for each image in the combined open access collections of the Met and the Rijksmuseum. Distance between the “activations” of this deep network, which are commonly called “features,” was how they judged image similarity.

To find analogous images between different cultures, the team used a new image-search data structure called a “conditional KNN tree” that groups similar images together in a tree-like structure. To find a close match, they start at the tree’s “trunk” and follow the most promising “branch” until they are sure they’ve found the closest image. The data structure improves on its predecessors by allowing the tree to quickly “prune” itself to a particular culture, artist, or collection, quickly yielding answers to new types of queries.
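The idea of "conditioning" a nearest-neighbor search on a culture or collection can be illustrated with a brute-force sketch. The real system uses deep-network activations as features and a KNN tree that prunes whole branches; the toy 3-D feature vectors and artwork entries below are invented for illustration.

```python
# Minimal sketch of conditional image retrieval: find the nearest
# neighbor by feature distance, restricted ("conditioned") to a chosen
# culture. MosAIc's conditional KNN tree does this efficiently; here we
# simply scan a toy collection with hypothetical feature vectors.

import math

collection = [
    {"title": "Delft plate",   "culture": "Dutch",    "feat": (0.9, 0.1, 0.3)},
    {"title": "Ming vase",     "culture": "Chinese",  "feat": (0.8, 0.2, 0.35)},
    {"title": "Samurai armor", "culture": "Japanese", "feat": (0.1, 0.9, 0.7)},
]

def conditional_nn(query_feat, culture):
    # Prune to the requested culture first, then take the closest match
    # by Euclidean distance between feature vectors.
    candidates = [w for w in collection if w["culture"] == culture]
    return min(candidates, key=lambda w: math.dist(w["feat"], query_feat))

# Query: a Dutch-looking pattern, but ask for its closest *Chinese* match.
match = conditional_nn((0.85, 0.15, 0.3), culture="Chinese")
print(match["title"])  # prints: Ming vase
```

The same query with `culture="Dutch"` would return the Delft plate instead; swapping the condition while keeping the query fixed is what lets the system surface cross-cultural pairings.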

What Hamilton and his colleagues found surprising was that this approach could also be applied to helping find problems with existing deep networks, related to the surge of “deepfakes” that have recently cropped up. They applied this data structure to find areas where probabilistic models, such as the generative adversarial networks (GANs) that are often used to create deepfakes, break down. They coined these problematic areas “blind spots,” and note that they give us insight into how GANs can be biased. Such blind spots further show that GANs struggle to represent particular areas of a dataset, even if most of their fakes can fool a human. 

Testing MosAIc 

The team evaluated MosAIc’s speed, and how closely it aligned with our human intuition about visual analogies.

For the speed tests, they wanted to make sure that their data structure provided value over simply searching through the collection with quick, brute-force search. 

To understand how well the system aligned with human intuitions, they made and released two new datasets for evaluating conditional image retrieval systems. One dataset challenged algorithms to find images with the same content even after they had been “stylized” with a neural style transfer method. The second dataset challenged algorithms to recover English letters across different fonts. A bit less than two-thirds of the time, MosAIc was able to recover the correct image in a single guess from a “haystack” of 5,000 images.

“Going forward, we hope this work inspires others to think about how tools from information retrieval can help other fields like the arts, humanities, social science, and medicine,” says Hamilton. “These fields are rich with information that has never been processed with these techniques and can be a source for great inspiration for both computer scientists and domain experts. This work can be expanded in terms of new datasets, new types of queries, and new ways to understand the connections between works.” 

Hamilton wrote the paper on MosAIc alongside Professor Bill Freeman and MIT undergraduates Stefanie Fu and Mindren Lu. The MosAIc website was built by Fu, Lu, Zhenbang Chen, Felix Tran, Darius Bopp, Margaret Wang, Marina Rogers, and Johnny Bui at the Microsoft Garage winter externship program.


Looking into the black box

Deep learning systems are revolutionizing technology around us, from voice recognition that pairs you with your phone to autonomous vehicles that are increasingly able to see and recognize obstacles ahead. But much of this success involves trial and error when it comes to the deep learning networks themselves. A group of MIT researchers recently reviewed their contributions to a better theoretical understanding of deep learning networks, providing direction for the field moving forward.

“Deep learning was in some ways an accidental discovery,” explains Tomaso Poggio, an investigator at the McGovern Institute for Brain Research, director of the Center for Brains, Minds, and Machines (CBMM), and the Eugene McDermott Professor in Brain and Cognitive Sciences. “We still do not understand why it works. A theoretical framework is taking form, and I believe that we are now close to a satisfactory theory. It is time to stand back and review recent insights.”

Climbing data mountains

Our current era is marked by a superabundance of data — data from inexpensive sensors of all types, text, the internet, and large amounts of genomic data being generated in the life sciences. Computers nowadays ingest these multidimensional datasets, creating a set of problems dubbed the “curse of dimensionality” by the late mathematician Richard Bellman.

One of these problems is that representing a smooth, high-dimensional function requires an astronomically large number of parameters. We know that deep neural networks are particularly good at learning how to represent, or approximate, such complex data, but why? Understanding why could potentially help advance deep learning applications.

“Deep learning is like electricity after Volta discovered the battery, but before Maxwell,” explains Poggio, who is the founding scientific advisor of The Core, MIT Quest for Intelligence, and an investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. “Useful applications were certainly possible after Volta, but it was Maxwell’s theory of electromagnetism, this deeper understanding that then opened the way to the radio, the TV, the radar, the transistor, the computers, and the internet.”

The theoretical treatment by Poggio, Andrzej Banburski, and Qianli Liao points to why deep learning might overcome data problems such as “the curse of dimensionality.” Their approach starts with the observation that many natural structures are hierarchical. To model the growth and development of a tree doesn’t require that we specify the location of every twig. Instead, a model can use local rules to drive branching hierarchically. The primate visual system appears to do something similar when processing complex data. When we look at natural images — including trees, cats, and faces — the brain successively integrates local image patches, then small collections of patches, and then collections of collections of patches. 

“The physical world is compositional — in other words, composed of many local physical interactions,” explains Qianli Liao, an author of the study, and a graduate student in the Department of Electrical Engineering and Computer Science and a member of the CBMM. “This goes beyond images. Language and our thoughts are compositional, and even our nervous system is compositional in terms of how neurons connect with each other. Our review explains theoretically why deep networks are so good at representing this complexity.”

The intuition is that a hierarchical neural network should be better at approximating a compositional function than a single “layer” of neurons, even if the total number of neurons is the same. The technical part of their work identifies what “better at approximating” means and proves that the intuition is correct.
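The compositional claim can be made concrete. The sketch below follows the general form of the results reviewed by Poggio and colleagues; the notation is illustrative, and the exact constants and smoothness conditions are in their papers.

```latex
% A compositional function of 8 variables built from local,
% two-variable constituent functions arranged in a binary tree:
f(x_1,\dots,x_8) = h_3\Bigl(
    h_{21}\bigl(h_{11}(x_1,x_2),\, h_{12}(x_3,x_4)\bigr),\;
    h_{22}\bigl(h_{13}(x_5,x_6),\, h_{14}(x_7,x_8)\bigr)
\Bigr)

% Approximating a generic m-times differentiable function of n
% variables to accuracy \epsilon with a shallow network requires on
% the order of
N_{\text{shallow}} = O\!\bigl(\epsilon^{-n/m}\bigr)
% parameters (the curse of dimensionality), whereas a deep network
% whose architecture matches the compositional structure needs only
N_{\text{deep}} = O\!\bigl((n-1)\,\epsilon^{-2/m}\bigr).
```

The exponent drops from $n$ (the full dimension) to $2$ (the number of inputs to each constituent function), which is the precise sense in which "better at approximating" escapes the curse of dimensionality.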

Generalization puzzle

There is a second puzzle about what is sometimes called the unreasonable effectiveness of deep networks. Deep network models often have far more parameters than data to fit them, despite the mountains of data we produce these days. This situation ought to lead to what is called “overfitting,” where your current data fit the model well, but any new data fit the model terribly. This is dubbed poor generalization in conventional models. The conventional solution is to constrain some aspect of the fitting procedure. However, deep networks do not seem to require this constraint. Poggio and his colleagues prove that, in many cases, the process of training a deep network implicitly “regularizes” the solution, providing constraints.
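The implicit-regularization idea can be seen in a miniature experiment. This is a hand-rolled illustration, not the authors' proof: with two parameters and one data point there are infinitely many exact fits, yet plain gradient descent started from zero converges to the minimum-norm solution, a constraint nobody wrote into the loss.

```python
# Over-parameterized least squares: fit w . x = y with two parameters
# and a single example. Gradient descent from zero stays in the span of
# x, so it converges to the minimum-norm interpolant
#   w* = y * x / ||x||^2 = (1.2, 1.6)
# rather than to an arbitrary solution of the underdetermined system.

x, y = (3.0, 4.0), 10.0
w = [0.0, 0.0]
lr = 0.01
for _ in range(2000):
    err = w[0] * x[0] + w[1] * x[1] - y   # residual on the one example
    w[0] -= lr * 2 * err * x[0]           # gradient of squared error
    w[1] -= lr * 2 * err * x[1]

print(round(w[0], 3), round(w[1], 3))     # prints: 1.2 1.6
```

Deep networks are far from this toy, but the reviewed theory argues that the training dynamics play an analogous role, implicitly constraining which of the many data-fitting solutions is reached.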

The work has a number of implications going forward. Though deep learning is being actively applied in the world, this has so far occurred without a comprehensive underlying theory. A theory of deep learning that explains why and how deep networks work, and what their limitations are, will likely allow the development of even more powerful learning approaches.

“In the long term, the ability to develop and build better intelligent machines will be essential to any technology-based economy,” explains Poggio. “After all, even in its current — still highly imperfect — state, deep learning is impacting, or about to impact, just about every aspect of our society and life.”


Commentary: America must invest in its ability to innovate

In July of 1945, in an America just beginning to establish a postwar identity, former MIT vice president Vannevar Bush set forth a vision that guided the country to decades of scientific dominance and economic prosperity. Bush’s report to the president of the United States, “Science: The Endless Frontier,” called on the government to support basic research in university labs. Its ideas, including the creation of the National Science Foundation (NSF), are credited with helping to make U.S. scientific and technological innovation the envy of the world.

Today, America’s lead in science and technology is being challenged as never before, write MIT President L. Rafael Reif and Indiana University President Michael A. McRobbie in an op-ed published today by The Chicago Tribune. They describe a “triple challenge” of bolder foreign competitors, faster technological change, and a merciless race to get from lab to market.

The government’s decision to adopt Bush’s ideas was bold and controversial at the time, and similarly bold action is needed now, they write.

“The U.S. has the fundamental building blocks for success, including many of the world’s top research universities that are at the forefront of the fight against COVID-19,” reads the op-ed. “But without a major, sustained funding commitment, a focus on key technologies and a faster system for transforming discoveries into new businesses, products and quality jobs, in today’s arena, America will not prevail.”

McRobbie and Reif believe a bipartisan bill recently introduced in both chambers of Congress can help America’s innovation ecosystem meet the challenges of the day. Named the “Endless Frontier Act,” the bill would support research focused on advancing key technologies like artificial intelligence and quantum computing. It does not seek to alter or replace the NSF, but to “create new strength in parallel,” they write. 

The bill would also create scholarships, fellowships, and other forms of assistance to help build an American workforce ready to develop and deploy the latest technologies. And, it would facilitate experiments to help commercialize new ideas more quickly.

“Today’s leaders have the opportunity to display the far-sighted vision their predecessors showed after World War II — to expand and shape our institutions, and to make the investments to adapt to a changing world,” Reif and McRobbie write.

Both university presidents acknowledge that measures such as the Endless Frontier Act require audacious choices. But if leaders take the right steps now, they write, those choices will seem, in retrospect, obvious and wise.

“Now as then, our national prosperity hinges on the next generation of technical triumphs,” Reif and McRobbie write. “Now as then, that success is not inevitable, and it will not come by chance. But with focused funding and imaginative policy, we believe it remains in reach.”


Neural vulnerability in Huntington’s disease tied to release of mitochondrial RNA

In the first study to comprehensively track how different types of brain cells respond to the mutation that causes Huntington’s disease (HD), MIT neuroscientists found that a significant cause of death for an especially afflicted kind of neuron might be an immune response to genetic material errantly released by mitochondria, the cellular components that provide cells with energy.

In different cell types at different stages of disease progression, the researchers measured how levels of RNA differed from normal in brain samples from people who died with Huntington’s disease and in mice engineered with various degrees of the genetic mutation. Among several novel observations in both species, one that particularly stood out is that RNA from mitochondria was misplaced within the brain cells, called spiny projection neurons (SPNs), that are ravaged in the disease, contributing to its fatal neurological symptoms. The scientists observed that these stray RNAs, which look different to cells than RNA derived from the cell nucleus, triggered a problematic immune reaction.

“When these RNAs are released from the mitochondria, to the cell they can look just like viral RNAs, and this triggers innate immunity and can lead to cell death,” says study senior author Myriam Heiman, associate professor in MIT’s Department of Brain and Cognitive Sciences, the Picower Institute for Learning and Memory, and the Broad Institute of MIT and Harvard. “We believe this to be part of the pathway that triggers inflammatory signaling, which has been seen in HD before.”

Picower Fellow Hyeseung Lee and former visiting scientist Robert Fenster are co-lead authors of the study published in Neuron.

Mitochondrial mishap

The team used two different screening methods: “TRAP,” which can be used in mice, and single-nucleus RNA sequencing, which can be used in both mice and humans. These methods not only picked up the presence of mitochondrial RNAs most specifically in the SPNs, but also showed a deficit in the expression of genes for oxidative phosphorylation, a process that fuel-hungry neurons employ to make energy. The mouse experiments showed that this downregulation of oxidative phosphorylation and the increase in mitochondrial RNA release both occurred very early in the disease, before most other gene expression differences were manifest.

Moreover, the researchers found increased expression of an immune system protein called PKR, which has been shown to be a sensor of the released mitochondrial RNA. In fact, the team found that PKR was not only elevated in the neurons, but also activated and bound to mitochondrial RNAs.

The new findings appear to converge with other clinical conditions that, like Huntington’s disease, lead to damage in a brain region called the striatum, Heiman says. In a condition called Aicardi-Goutières syndrome, the same brain region can be damaged because of a misregulated innate immune response. In addition, children with thiamine deficiency suffer mitochondrial dysfunction, and a prior study has shown that mice with thiamine deficiency show PKR activation, much like Heiman’s team found.

“These non-HD human disorders that are characterized by striatal cell death extend the significance of our findings by linking both the oxidative metabolism deficits and autoinflammatory activation phenomena described here directly to human striatal cell death absent the [Huntington’s mutation] context,” they wrote in Neuron.

Other observations

Though the mitochondrial RNA release discovery was the most striking, the study produced several other potentially valuable findings, Heiman says.

One is that the study produced a sweeping catalog of substantial differences in gene expression, including ones related to important neural functions such as synapse circuit connections and circadian clock function. Another, based on the team’s analysis of the results, is that a master regulator of these alterations to gene transcription in neurons may be the retinoic acid receptor beta (“Rarb”) transcription factor. Heiman says this could be a clinically useful finding, because there are drugs that can activate Rarb.

“If we can inhibit transcriptional misregulation, we might be able to alter the outcome of the disease,” Heiman speculates. “It’s an important hypothesis to test.”

Another, more basic, finding in the study is that many of the gene expression differences the researchers saw in neurons in the human brain samples matched well with the changes they saw in mouse neurons, providing additional assurance that mouse models are indeed useful for studying this disease, Heiman says. The question has dogged the field somewhat because mice typically don’t show as much neuron death as people do.

“What we see is that actually the mouse models recapitulate the gene-expression changes that are occurring in these HD human neurons very well,” she says. “Interestingly, some of the other, non-neuronal, cell types did not show as much conservation between the human disease and mouse models, information that our team believes will be helpful to other investigators in future studies.”

The single-nucleus RNA sequencing study was part of a longstanding collaboration with Manolis Kellis’s group in MIT’s Computer Science and Artificial Intelligence Laboratory. Together, the two labs hope to expand these studies in the near future to further understand Huntington’s disease mechanisms.

In addition to Heiman, Lee, and Fenster, the paper’s other authors are Sebastian Pineda, Whitney Gibbs, Shahin Mohammadi, Jose Davila-Velderrain, Francisco Garcia, Martine Therrien, Hailey Novis, Hilary Wilkinson, Thomas Vogt, Manolis Kellis, and Matthew LaVoie.

The CHDI Foundation, the U.S. National Institutes of Health, Broderick Fund for Phytocannabinoid Research at MIT, and the JPB Foundation funded the study.


MIT Schwarzman College of Computing announces first named professorships

The MIT Stephen A. Schwarzman College of Computing has awarded its first two named professorships, effective July 1, to Frédo Durand and Samuel Madden of the Department of Electrical Engineering and Computer Science (EECS). These named positions recognize the outstanding achievements and future potential of their academic careers.

“I’m thrilled to acknowledge Frédo and Sam for their outstanding contributions in research and education. These named professorships recognize them for their extraordinary achievements,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing.

Frédo Durand, a professor of computer science and engineering in EECS, has been named the inaugural Amar Bose Professor of Computing. The professorship, named after Amar Bose, a former longtime member of the MIT faculty and the founder of Bose Corporation, is granted in recognition of the recipient’s excellence in teaching, research, and mentorship in the field of computing. A member of the Computer Science and Artificial Intelligence Laboratory, Durand has research interests that span most aspects of picture generation and creation, including rendering and computational photography. His recent focus includes video magnification for revealing the invisible, differentiable rendering, and compilers for productive high-performance imaging.

He received an inaugural Eurographics Young Researcher Award in 2004; an NSF CAREER Award in 2005; an inaugural Microsoft Research New Faculty Fellowship in 2005; a Sloan Foundation Fellowship in 2006; a Spira Award for distinguished teaching in 2007; and the ACM SIGGRAPH Computer Graphics Achievement Award in 2016.

Samuel Madden has been named the inaugural College of Computing Distinguished Professor of Computing. A professor of electrical engineering and computer science in EECS, Madden is being honored as an outstanding faculty member who is recognized as a leader and innovator. His research is in the area of database systems, focusing on database analytics and query processing, ranging from clouds to sensors to modern high-performance server architectures. He co-directs the Data Systems for AI Lab initiative and the Data Systems Group, which investigate systems and algorithms for data processing, including applying machine learning methods to data systems and engineering data systems to run machine learning at scale.

Madden was named one of MIT Technology Review’s “35 Innovators Under 35” in 2005, and received an NSF CAREER Award in 2004 and a Sloan Foundation Fellowship in 2007. He has also received best-paper awards at VLDB in 2004 and 2007 and at MobiCom in 2006. In addition, he was recognized with a “test of time” award at SIGMOD 2013 for his work on acquisitional query processing and a 10-year best-paper award at VLDB 2015 for his work on the C-Store system.

Better simulation meshes well for design software (and more)

The digital age has spurred the rise of entire industries aimed at simulating our world and the objects in it. Simulation is what helps movies have realistic effects, automakers test cars virtually, and scientists analyze geophysical data.

To simulate physical systems in 3D, researchers often program computers to divide objects into sets of smaller elements, a procedure known as “meshing.” Most meshing approaches tile 2D objects with patterns of triangles or quadrilaterals (quads), and tile 3D objects with patterns of triangular pyramids (tetrahedra) or bent cubes (hexahedra, or “hexes”).
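As a toy illustration of this idea (a hypothetical sketch, not code from any tool mentioned in the article), the snippet below meshes the 2D unit square by splitting each cell of a regular grid into two triangles:

```python
import numpy as np

def triangle_mesh_unit_square(n):
    """Mesh the unit square with an n x n grid, splitting each
    cell into two triangles (a minimal 2D 'meshing' example)."""
    # Vertex positions on a regular (n+1) x (n+1) grid.
    xs = np.linspace(0.0, 1.0, n + 1)
    verts = np.array([(x, y) for y in xs for x in xs])
    tris = []
    for j in range(n):
        for i in range(n):
            v00 = j * (n + 1) + i            # lower-left corner of the cell
            v10, v01, v11 = v00 + 1, v00 + n + 1, v00 + n + 2
            tris.append((v00, v10, v11))     # lower-right triangle
            tris.append((v00, v11, v01))     # upper-left triangle
    return verts, np.array(tris)

verts, tris = triangle_mesh_unit_square(4)
print(len(verts), len(tris))  # 25 vertices, 32 triangles
```

Real meshers handle arbitrary curved domains and optimize element quality, but the underlying data structure is the same: a list of vertices plus a list of elements indexing into it.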

While much progress has been made in the fields of computational geometry and geometry processing, scientists surprisingly still don’t fully understand the math of stacking together cubes when they are allowed to bend or stretch a bit. Many questions remain about the patterns that can be formed by gluing cube-shaped elements together, which relates to an area of math called topology.

New work out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to explore several of these questions. Researchers have published a series of papers that address shortcomings of existing meshing tools by seeking out mathematical structure in the problem. In collaboration with scientists at the University of Bern and the University of Texas at Austin, their work shows how areas of math like algebraic geometry, topology, and differential geometry could improve physical simulations used in computer-aided design (CAD), architecture, gaming, and other sectors.

“Simulation tools that are being deployed ‘in the wild’ don’t always fail gracefully,” says MIT Associate Professor Justin Solomon, senior author on the three new meshing-related papers. “If one thing is wrong with the mesh, the simulation might not agree with real-world physics, and you might have to throw the whole thing out.” 

In one paper, a team led by MIT undergraduate Zoë Marschner developed an algorithm to repair issues that can often trip up existing approaches for hex meshing, specifically.

For example, some meshes contain elements that are partially inside-out or that self-intersect in ways that can’t be detected from their outer surfaces. The team’s algorithm works in iterations to repair those meshes in a way that untangles any such inversions while remaining faithful to the original shape.

“Thorny unsolved topology problems show up all over the hex-meshing universe,” says Marschner. “Until we figure them out, our algorithms will often fail in subtle ways.”

Marschner’s algorithm uses a technique called “sum-of-squares (SOS) relaxation” to pinpoint exactly where hex elements are inverted (which researchers describe as being “invalid”). It then moves the vertices of the hex element so that the hex is valid at the point where it was previously most invalid. The algorithm repeats this procedure to repair the hex.
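The sum-of-squares machinery itself is beyond a short snippet, but the notion of an “invalid” hex can be illustrated with a simpler, standard proxy: checking the sign of the Jacobian determinant of the trilinear map at the hex’s eight corners. This sketch is hypothetical and much weaker than the paper’s SOS certificate, which can certify validity everywhere inside the element, not just at the corners:

```python
import numpy as np

# Corner order: binary (i, j, k) coordinates of the unit cube.
CORNERS = [(i, j, k) for k in (0, 1) for j in (0, 1) for i in (0, 1)]

def corner_jacobians(hex_verts):
    """Jacobian determinant of the trilinear map at each of the eight
    corners of a hex element (a negative value means the element is
    locally inverted, i.e. invalid, at that corner)."""
    V = {c: np.asarray(hex_verts[n], float) for n, c in enumerate(CORNERS)}
    dets = []
    for (i, j, k) in CORNERS:
        # Edge vectors leaving this corner, oriented along +x, +y, +z.
        ex = (V[(1 - i, j, k)] - V[(i, j, k)]) * (1 if i == 0 else -1)
        ey = (V[(i, 1 - j, k)] - V[(i, j, k)]) * (1 if j == 0 else -1)
        ez = (V[(i, j, 1 - k)] - V[(i, j, k)]) * (1 if k == 0 else -1)
        dets.append(np.linalg.det(np.column_stack([ex, ey, ez])))
    return dets

unit_cube = [np.array(c, float) for c in CORNERS]
print(min(corner_jacobians(unit_cube)) > 0)  # True: the cube is valid
```

A repair loop in the spirit described above would locate the most negative determinant, nudge the offending vertices until that test point becomes valid, and iterate.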

In addition to being published at this week’s Symposium on Geometry Processing, Marschner’s work earned her MIT’s 2020 Anna Pogosyants UROP Award.

A second paper spearheaded by PhD student Paul Zhang improves meshing by incorporating curves, edges, and other features that provide important cues for the human visual system and pattern recognition algorithms. 

It can be difficult for computers to find these features reliably, let alone incorporate them into meshes. By using an existing construction called an “octahedral frame field” that is traditionally used for meshing 3D volumes, Zhang and his team have been able to develop 2D surface meshes without depending on unreliable methods that try to trace out features ahead of time. 

Zhang says that they’ve shown that these so-called “feature-aligned” constructions automatically create visually accurate quad meshes, which are widely used in computer graphics and virtual reality applications.

“As the goal of meshing is to simultaneously simplify the object and maintain accuracy to the original domain, this tool enables a new standard in feature-aligned quad meshing,” says Zhang. 

A third paper led by PhD student David Palmer links Zhang and Marschner’s work, advancing the theory of octahedral fields and showing how better math provides serious practical improvement for hex meshing. 

In physics and geometry, velocities and flows are represented as “vector fields,” which attach an arrow to every point in a region of space. In 3D, these fields can twist, knot around, and cross each other in remarkably complicated ways. Further complicating matters, Palmer’s research studies the structure of “frame fields,” in which more than one arrow appears at each point.
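To make the distinction concrete, here is a small hypothetical sketch (not code from the paper): a vector field attaches a single arrow to each point, while an octahedral frame attaches the six signed coordinate axes rotated by some rotation R, so many different rotations describe the same frame:

```python
import numpy as np

def vector_at(p):
    """A simple vector field: exactly one arrow per point (a swirl about z)."""
    x, y, z = p
    return np.array([-y, x, 0.0])

def octahedral_frame(R):
    """An octahedral frame: the six signed axes +-x, +-y, +-z rotated by R,
    stored as a set. Many distinct rotations yield the same frame, which is
    part of what makes frame fields harder to handle than vector fields."""
    axes = np.vstack([np.eye(3), -np.eye(3)])
    return {tuple(np.round(R @ a, 9)) for a in axes}

# A 90-degree rotation about z maps the frame to itself.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
print(octahedral_frame(Rz) == octahedral_frame(np.eye(3)))  # True
```

That symmetry is the crux: a quarter-turn about any axis leaves an octahedral frame unchanged, so there is no single well-defined arrow to follow smoothly through space.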

Palmer’s work gives new insight into the ways frames can be described and uses them to design methods for placing frames in 3D space. Building off of existing work, his methods produce smooth, stable fields that can guide the design of high-quality meshes.

Solomon says that his team aims to eventually characterize all the ways that octahedral frames twist and knot around each other to create structures in space. 

“This is a cool area of computational geometry where theory has a real impact on the quality of simulation tools,” says Solomon. 

Palmer cites organizations like Sandia National Labs that conduct complicated physical simulations involving phenomena like nonlinear elasticity and object deformation. He says that, even today, engineering teams often build or repair hex meshes almost completely by hand. 

“Existing software for automatic meshing often fails to produce a complete mesh, even if the frame field guidance ensures that the mesh pieces that are there look good,” Palmer says. “Our approach helps complete the picture.”

Marschner’s paper was co-written by Solomon, Zhang, and Palmer. Zhang’s paper was co-written by Solomon, Josh Vekhter and Etienne Vouga at the University of Texas at Austin, Professor David Bommes of the University of Bern in Switzerland, and CSAIL postdoc Edward Chien. Palmer’s paper was co-written by Solomon and Bommes. Zhang and Palmer’s papers will be presented at the SIGGRAPH computer graphics conference later this month.

The projects were supported, in part, by Adobe Systems, the U.S. Air Force Office of Scientific Research, the U.S. Army Research Office, the U.S. Department of Energy, the Fannie and John Hertz Foundation, MathWorks, the MIT-IBM Watson AI Laboratory, the National Science Foundation, the Skoltech-MIT Next Generation program, and the Toyota-CSAIL Joint Research Center.

Tackling the misinformation epidemic with “In Event of Moon Disaster”

Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “In Event of Moon Disaster.”

This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be. 

“Media misinformation is a longstanding phenomenon, but, exacerbated by deepfake technologies and the ease of disseminating content online, it’s become a crucial issue of our time,” says D. Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the MIT Center for Advanced Virtuality. “With this project — and a course curriculum on misinformation being built around it — our powerfully talented XR Creative Director Francesca Panetta is pushing forward one of the center’s broad aims: using AI and technologies of virtuality to support creative expression and truth.”

Alongside the film, moondisaster.org features an array of interactive and educational resources on deepfakes. Led by Francesca Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources. 

“This alternative history shows how new technologies can obfuscate the truth around us, encouraging our audience to think carefully about the media they encounter daily,” says Panetta, XR creative director at the Center for Advanced Virtuality, part of MIT Open Learning. 

Also part of the launch is a new documentary, “To Make a Deepfake,” a 30-minute film by Scientific American that uses “In Event of Moon Disaster” as a jumping-off point to explain the technology behind AI-generated media. The documentary features prominent scholars and thinkers on the state of deepfakes, on the stakes for the spread of misinformation and the twisting of our digital reality, and on the future of truth.

The project is supported by the MIT Open Documentary Lab and the Mozilla Foundation, which awarded “In Event of Moon Disaster” a Creative Media Award last year. These awards are part of Mozilla’s mission to realize more trustworthy AI in consumer technology. The latest cohort of awardees uses art and advocacy to examine AI’s effect on media and truth.

Says J. Bob Alotta, Mozilla’s vice president of global programs: “AI plays a central role in consumer technology today — it curates our news, it recommends who we date, and it targets us with ads. Such a powerful technology should be demonstrably worthy of trust, but often it is not. Mozilla’s Creative Media Awards draw attention to this, and also advocate for more privacy, transparency, and human well-being in AI.” 

“In Event of Moon Disaster” previewed last fall as a physical art installation at the International Documentary Film Festival Amsterdam, where it won the Special Jury Prize for Digital Storytelling; it was selected for the 2020 Tribeca Film Festival and Cannes XR. The new website is the project’s global digital launch, making the film and associated materials available for free to all audiences.

The past few months have seen the world move almost entirely online: schools, talk shows, museums, election campaigns, doctor’s appointments — all have made a rapid transition to virtual. When every interaction we have with the world is seen through a digital filter, it becomes more important than ever to learn how to distinguish between authentic and manipulated media. 

“It’s our hope that this project will encourage the public to understand that manipulated media plays a significant role in our media landscape,” says co-director Burgund, “and that, with further understanding and diligence, we can all reduce the likelihood of being unduly influenced by it.”

Faculty receive funding to develop artificial intelligence techniques to combat Covid-19

Artificial intelligence has the power to help put an end to the Covid-19 pandemic. Not only can techniques of machine learning and natural language processing be used to track and report Covid-19 infection rates, but other AI techniques can also be used to make smarter decisions about everything from when states should reopen to how vaccines are designed. Now, MIT researchers working on seven groundbreaking projects on Covid-19 will be funded to more rapidly develop and apply novel AI techniques to improve medical response and slow the pandemic spread.

Earlier this year, the C3.ai Digital Transformation Institute (C3.ai DTI) was formed with the goal of attracting the world’s leading scientists to join in a coordinated and innovative effort to advance the digital transformation of businesses, governments, and society. The consortium is dedicated to accelerating advances in research that combine machine learning, artificial intelligence, the internet of things, ethics, and public policy to enhance societal outcomes. MIT, under the auspices of the School of Engineering, joined the C3.ai DTI consortium, along with C3.ai, Microsoft Corporation, the University of Illinois at Urbana-Champaign, the University of California at Berkeley, Princeton University, the University of Chicago, Carnegie Mellon University, and, most recently, Stanford University.

The initial call for proposals asked researchers to take on the challenge of abating the spread of Covid-19 and to advance the knowledge, science, and technologies for mitigating the impact of pandemics using AI. Out of 200 research proposals, 26 projects were selected and awarded $5.4 million to continue AI research to mitigate the impact of Covid-19 in the areas of medicine, urban planning, and public policy.

The first round of grant recipients was recently announced, and among them are five projects led by MIT researchers from across the Institute: Saurabh Amin, associate professor of civil and environmental engineering; Dimitris Bertsimas, the Boeing Leaders for Global Operations Professor of Management; Munther Dahleh, the William A. Coolidge Professor of Electrical Engineering and Computer Science and director of the MIT Institute for Data, Systems, and Society; David Gifford, professor of biological engineering and of electrical engineering and computer science; and Asu Ozdaglar, the MathWorks Professor of Electrical Engineering and Computer Science, head of the Department of Electrical Engineering and Computer Science, and deputy dean of academics for the MIT Schwarzman College of Computing.

“We are proud to be a part of this consortium, and to collaborate with peers across higher education, industry, and health care to collectively combat the current pandemic, and to mitigate risk associated with future pandemics,” says Anantha P. Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “We are so honored to have the opportunity to accelerate critical Covid-19 research through resources and expertise provided by the C3.ai DTI.”

Additionally, three MIT researchers will collaborate with principal investigators from other institutions on projects blending health and machine learning. Regina Barzilay, the Delta Electronics Professor in the Department of Electrical Engineering and Computer Science, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science, join Ziv Bar-Joseph from Carnegie Mellon University for a project using machine learning to seek treatment for Covid-19. Aleksander Mądry, professor of computer science in the Department of Electrical Engineering and Computer Science, joins Sendhil Mullainathan of the University of Chicago for a project using machine learning to support emergency triage of pulmonary collapse due to Covid-19 on the basis of X-rays.

Bertsimas’s project develops automated, interpretable, and scalable decision-making systems based on machine learning and artificial intelligence to support clinical practices and public policies as they respond to the Covid-19 pandemic. When it comes to reopening the economy while containing the spread of the pandemic, Ozdaglar’s research provides quantitative analyses of targeted interventions for different groups that will guide policies calibrated to different risk levels and interaction patterns. Amin is investigating the design of actionable information and effective intervention strategies to support safe mobilization of economic activity and reopening of mobility services in urban systems. Dahleh’s research innovatively uses machine learning to determine how to safeguard schools and universities against the outbreak. Gifford was awarded funding for his project that uses machine learning to develop more informed vaccine designs with improved population coverage, and to develop models of Covid-19 disease severity using individual genotypes.

“The enthusiastic support of the distinguished MIT research community is making a huge contribution to the rapid start and significant progress of the C3.ai Digital Transformation Institute,” says Thomas Siebel, chair and CEO of C3.ai. “It is a privilege to be working with such an accomplished team.”

The following projects are the MIT recipients of the inaugural C3.ai DTI Awards: 

“Pandemic Resilient Urban Mobility: Learning Spatiotemporal Models for Testing, Contact Tracing, and Reopening Decisions” — Saurabh Amin, associate professor of civil and environmental engineering; and Patrick Jaillet, the Dugald C. Jackson Professor of Electrical Engineering and Computer Science

“Effective Cocktail Treatments for SARS-CoV-2 Based on Modeling Lung Single Cell Response Data” — Regina Barzilay, the Delta Electronics Professor in the Department of Electrical Engineering and Computer Science, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science (Principal investigator: Ziv Bar-Joseph of Carnegie Mellon University)

“Toward Analytics-Based Clinical and Policy Decision Support to Respond to the Covid-19 Pandemic” — Dimitris Bertsimas, the Boeing Leaders for Global Operations Professor of Management and associate dean for business analytics; and Alexandre Jacquillat, assistant professor of operations research and statistics

“Reinforcement Learning to Safeguard Schools and Universities Against the Covid-19 Outbreak” — Munther Dahleh, the William A. Coolidge Professor of Electrical Engineering and Computer Science and director of the MIT Institute for Data, Systems, and Society; and Peko Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering and associate dean of engineering

“Machine Learning-Based Vaccine Design and HLA Based Risk Prediction for Viral Infections” — David Gifford, professor of biological engineering and of electrical engineering and computer science

“Machine Learning Support for Emergency Triage of Pulmonary Collapse in Covid-19” — Aleksander Mądry, professor of computer science in the Department of Electrical Engineering and Computer Science (Principal investigator: Sendhil Mullainathan of the University of Chicago)

“Targeted Interventions in Networked and Multi-Risk SIR Models: How to Unlock the Economy During a Pandemic” — Asu Ozdaglar, the MathWorks Professor of Electrical Engineering and Computer Science, head of the Department of Electrical Engineering and Computer Science, and deputy dean of academics for the MIT Schwarzman College of Computing; and Daron Acemoglu, Institute Professor