Anticipating heart failure with machine learning

Every year, roughly one out of eight U.S. deaths is caused at least in part by heart failure. One of acute heart failure’s most common warning signs is excess fluid in the lungs, a condition known as “pulmonary edema.” 

A patient’s exact level of excess fluid often dictates the doctor’s course of action, but making such determinations is difficult and requires clinicians to rely on subtle features in X-rays that sometimes lead to inconsistent diagnoses and treatment plans.

To better handle that kind of nuance, a group led by researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has developed a machine learning model that can look at an X-ray to quantify how severe the edema is, on a four-level scale ranging from 0 (healthy) to 3 (very, very bad). The system determined the right level more than half of the time, and correctly diagnosed level 3 cases 90 percent of the time.

Working with Beth Israel Deaconess Medical Center (BIDMC) and Philips, the team plans to integrate the model into BIDMC’s emergency-room workflow this fall.

“This project is meant to augment doctors’ workflow by providing additional information that can be used to inform their diagnoses as well as enable retrospective analyses,” says PhD student Ruizhi Liao, who was the co-lead author of a related paper with fellow PhD student Geeticka Chauhan and MIT professors Polina Golland and Peter Szolovits. 

The team says that better edema diagnosis would help doctors manage not only acute heart issues, but other conditions like sepsis and kidney failure that are strongly associated with edema. 

As part of a separate journal article, Liao and colleagues also took an existing public dataset of X-ray images and developed new annotations of severity labels that were agreed upon by a team of four radiologists. Liao’s hope is that these consensus labels can serve as a universal standard to benchmark future machine learning development.

An important aspect of the system is that it was trained not just on more than 300,000 X-ray images, but also on the corresponding text of reports about the X-rays that were written by radiologists. The team was pleasantly surprised that their system found such success using these reports, most of which didn’t have labels explaining the exact severity level of the edema.

“By learning the association between images and their corresponding reports, the method has the potential for a new way of automatic report generation from the detection of image-driven findings,” says Tanveer Syeda-Mahmood, a researcher not involved in the project who serves as chief scientist for IBM’s Medical Sieve Radiology Grand Challenge. “Of course, further experiments would have to be done for this to be broadly applicable to other findings and their fine-grained descriptors.”

Chauhan’s efforts focused on helping the system make sense of the text of the reports, which could often be as short as a sentence or two. Different radiologists write with varying tones and use a range of terminology, so the researchers had to develop a set of linguistic rules and substitutions to ensure that data could be analyzed consistently across reports. This was in addition to the technical challenge of designing a model that can jointly train the image and text representations in a meaningful manner.

“Our model can turn both images and text into compact numerical abstractions from which an interpretation can be derived,” says Chauhan. “We trained it to minimize the difference between the representations of the X-ray images and the text of the radiology reports, using the reports to improve the image interpretation.”
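To make that idea concrete, here is a minimal PyTorch-style sketch of the kind of joint image-text objective Chauhan describes. The encoders, feature dimensions, and cosine-distance loss are illustrative assumptions, not the team’s actual architecture:

```python
# Minimal sketch (not the authors' model): project X-ray image features and
# report-text features into one shared space, then train so that an image's
# embedding moves toward the embedding of its own report.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedder(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, shared_dim=256):
        super().__init__()
        # Hypothetical projection heads; in practice an image encoder (CNN)
        # and a text encoder would produce the image_dim / text_dim features.
        self.image_proj = nn.Linear(image_dim, shared_dim)
        self.text_proj = nn.Linear(text_dim, shared_dim)

    def forward(self, image_feats, text_feats):
        z_img = F.normalize(self.image_proj(image_feats), dim=-1)
        z_txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return z_img, z_txt

def alignment_loss(z_img, z_txt):
    # Cosine distance between paired image and report embeddings:
    # "minimize the difference between the representations."
    return (1 - (z_img * z_txt).sum(dim=-1)).mean()
```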

On top of that, the team’s system was also able to “explain” itself, by showing which parts of the reports and areas of X-ray images correspond to the model prediction. Chauhan is hopeful that future work in this area will provide more detailed lower-level image-text correlations, so that clinicians can build a taxonomy of images, reports, disease labels and relevant correlated regions. 

“These correlations will be valuable for improving search through a large database of X-ray images and reports, to make retrospective analysis even more effective,” Chauhan says.

Chauhan, Golland, Liao and Szolovits co-wrote the paper with MIT Assistant Professor Jacob Andreas, Professor William Wells of Brigham and Women’s Hospital, Xin Wang of Philips, and Seth Berkowitz and Steven Horng of BIDMC. The paper will be presented Oct. 5 (virtually) at the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). 

The work was supported in part by the MIT Deshpande Center for Technological Innovation, MIT Lincoln Laboratory, the National Institutes of Health, Philips, Takeda, and the Wistron Corporation.

Milo Phillips-Brown receives inaugural MAC3 Society and Ethics in Computing Research Award

Milo Phillips-Brown, a postdoc in MIT Philosophy, was recently named the inaugural recipient of the MAC3 Society and Ethics in Computing Research Award, which provides support to promising PhD candidates or postdocs conducting interdisciplinary research on the societal and ethical dimensions of computing.

Phillips-Brown, the Distinguished Postdoctoral Scholar in Ethics and Technology within the MIT Stephen A. Schwarzman College of Computing — a position that is supported, in part, by the MIT Quest for Intelligence — is being recognized for his work teaching responsible engineering practices to computer scientists. He teaches two courses, 24.131 (Ethics of Technology) and 24.133 (Experiential Ethics), and has been an active participant in the activities of the Social and Ethical Responsibilities of Computing (SERC), a new cross-cutting area in the MIT Stephen A. Schwarzman College of Computing that aims to actively weave social, ethical, and policy considerations into the teaching, research, and implementation of computing.

“We are delighted to be able to work so closely with Milo,” says Julie Shah, an associate professor in the Department of Aeronautics and Astronautics, who along with David Kaiser, the Germeshausen Professor of the History of Science and professor of physics, serves as associate dean of SERC. “Over this past spring semester, Milo was a great thought partner in the design of SERC-related materials, including original homework assignments and in-class demonstrations for instructors to embed into a wide variety of courses at MIT,” says Shah.

“We knew we had an exceptional colleague when we selected Milo as our inaugural postdoc. We look forward to collaborating with him and his continued contributions to SERC,” adds Kaiser.

In addition to active learning projects, Phillips-Brown has been working with Shah and Kaiser on preparing the first set of original case studies on social and ethical responsibilities of computing for release in the coming months. Commissioned and curated by SERC, each case study will be brief and appropriate for use in undergraduate instruction and will also be available to the public via MIT’s open access channels.

“I’m thrilled to be the inaugural recipient of the MAC3 Society and Ethics in Computing Research Award. This is a time when we need to be exploring all possible avenues for how to teach MIT students to build technologies ethically, and the award is enabling me to help do just that: work with professors and students across the Institute to develop new models for ethical engineering pedagogy,” says Phillips-Brown.

Phillips-Brown PhD ’19 received his doctorate in philosophy from MIT and his bachelor’s in philosophy from Reed College. He is a research fellow in digital ethics and governance at the Jain Family Institute and a member of the Society for Philosophy and Disability. From 2015 to 2018, he directed Philosophy in an Inclusive Key (PIKSI) Boston, a summer program for undergraduates from underrepresented groups. In January 2021, he will begin an appointment at Oxford University as an associate professor of philosophy in the Faculty of Philosophy and the Department of Computer Science.

The MAC3 Society and Ethics in Computing Research Award was established through the MAC3 Impact Philanthropies, which provides targeted support to organizations and initiatives that impact early childhood, health and education, as well as the environment and the oceans.

Provably exact artificial intelligence for nuclear and particle physics

The Standard Model of particle physics describes all the known elementary particles and three of the four fundamental forces governing the universe: everything except gravity. These three forces — electromagnetic, strong, and weak — govern how particles are formed, how they interact, and how they decay.

Studying particle and nuclear physics within this framework, however, is difficult, and relies on large-scale numerical studies. For example, many aspects of the strong force require numerically simulating the dynamics at the scale of 1/10th to 1/100th the size of a proton to answer fundamental questions about the properties of protons, neutrons, and nuclei.

“Ultimately, we are computationally limited in the study of proton and nuclear structure using lattice field theory,” says assistant professor of physics Phiala Shanahan. “There are a lot of interesting problems that we know how to address in principle, but we just don’t have enough compute, even though we run on the largest supercomputers in the world.”

To push past these limitations, Shanahan leads a group that combines theoretical physics with machine learning models. In their paper “Equivariant flow-based sampling for lattice gauge theory,” published this month in Physical Review Letters, they show how incorporating the symmetries of physics theories into machine learning and artificial intelligence architectures can provide much faster algorithms for theoretical physics. 

“We are using machine learning not to analyze large amounts of data, but to accelerate first-principles theory in a way which doesn’t compromise the rigor of the approach,” Shanahan says. “This particular work demonstrated that we can build machine learning architectures with some of the symmetries of the Standard Model of particle and nuclear physics built in, and accelerate the sampling problem we are targeting by orders of magnitude.” 

Shanahan launched the project with MIT graduate student Gurtej Kanwar and with Michael Albergo, who is now at NYU. The project expanded to include Center for Theoretical Physics postdocs Daniel Hackett and Denis Boyda, NYU Professor Kyle Cranmer, and physics-savvy machine-learning scientists at Google DeepMind, Sébastien Racanière and Danilo Jimenez Rezende.

This month’s paper is one in a series aimed at enabling studies in theoretical physics that are currently computationally intractable. “Our aim is to develop new algorithms for a key component of numerical calculations in theoretical physics,” says Kanwar. “These calculations inform us about the inner workings of the Standard Model of particle physics, our most fundamental theory of matter. Such calculations are of vital importance to compare against results from particle physics experiments, such as the Large Hadron Collider at CERN, both to constrain the model more precisely and to discover where the model breaks down and must be extended to something even more fundamental.”

The only known systematically controllable method of studying the Standard Model of particle physics in the nonperturbative regime is based on a sampling of snapshots of quantum fluctuations in the vacuum. By measuring properties of these fluctuations, one can infer properties of the particles and collisions of interest.

This technique comes with challenges, Kanwar explains. “This sampling is expensive, and we are looking to use physics-inspired machine learning techniques to draw samples far more efficiently,” he says. “Machine learning has already made great strides on generating images, including, for example, recent work by NVIDIA to generate images of faces ‘dreamed up’ by neural networks. Thinking of these snapshots of the vacuum as images, we think it’s quite natural to turn to similar methods for our problem.”

Adds Shanahan, “In our approach to sampling these quantum snapshots, we optimize a model that takes us from a space that is easy to sample to the target space: given a trained model, sampling is then efficient since you just need to take independent samples in the easy-to-sample space, and transform them via the learned model.”
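As a deliberately simplified sketch of that recipe (the actual work uses gauge-equivariant flows on lattice configurations; the toy affine map below is an assumption for illustration only):

```python
# Toy flow-based sampler: draw from an easy base distribution, then push
# samples through a learned invertible map toward the target distribution.
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """Trivial invertible map x = z * exp(s) + t, standing in for the
    gauge-equivariant flows used in the actual paper."""
    def __init__(self, dim):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(dim))
        self.t = nn.Parameter(torch.zeros(dim))

    def forward(self, z):
        x = z * torch.exp(self.s) + self.t
        log_det = self.s.sum().expand(z.shape[0])  # log|det dx/dz| per sample
        return x, log_det

def draw_samples(flow, n, dim):
    z = torch.randn(n, dim)                        # easy-to-sample base space
    x, log_det = flow(z)                           # transform to target space
    base_logp = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
    model_logp = base_logp - log_det               # density of x under the flow
    # A reweighting or accept/reject step against the true target density
    # restores exactness, so first-principles rigor is not compromised.
    return x, model_logp
```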

In particular, the group has introduced a framework for building machine-learning models that exactly respect a class of symmetries, called “gauge symmetries,” crucial for studying high-energy physics.

As a proof of principle, Shanahan and colleagues used their framework to train machine-learning models to simulate a theory in two dimensions, resulting in orders-of-magnitude efficiency gains over state-of-the-art techniques and more precise predictions from the theory. This paves the way for significantly accelerated research into the fundamental forces of nature using physics-informed machine learning.

The group’s first few papers as a collaboration discussed applying the machine-learning technique to a simple lattice field theory, and developed this class of approaches on compact, connected manifolds which describe the more complicated field theories of the Standard Model. Now they are working to scale the techniques to state-of-the-art calculations.

“I think we have shown over the past year that there is a lot of promise in combining physics knowledge with machine learning techniques,” says Kanwar. “We are actively thinking about how to tackle the remaining barriers in the way of performing full-scale simulations using our approach. I hope to see the first application of these methods to calculations at scale in the next couple of years. If we are able to overcome the last few obstacles, this promises to extend what we can do with limited resources, and I dream of performing calculations soon that give us novel insights into what lies beyond our best understanding of physics today.”

This idea of physics-informed machine learning is also known by the team as “ab-initio AI,” a key theme of the recently launched MIT-based National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions (IAIFI), where Shanahan is research coordinator for physics theory.

Led by the Laboratory for Nuclear Science, the IAIFI comprises both physics and AI researchers at MIT and at Harvard, Northeastern, and Tufts universities.

“Our collaboration is a great example of the spirit of IAIFI, with a team with diverse backgrounds coming together to advance AI and physics simultaneously,” says Shanahan. As well as research like Shanahan’s targeting physics theory, IAIFI researchers are also working to use AI to enhance the scientific potential of various facilities, including the Large Hadron Collider and the Laser Interferometer Gravitational-Wave Observatory, and to advance AI itself.

MIT undergraduates pursue research opportunities through the pandemic

Even in ordinary times, the scientific process is stressful, with its demand for open-ended exploration and persistence in the face of failure. But the pandemic has added to the strain. In this new world of physical isolation, there are fewer opportunities for spontaneity and connection, and fewer distractions and events to mark the passage of time. Days pass in a numbing blur of sameness.

Working from home this summer, students participating in MIT’s Undergraduate Research Opportunities Program (UROP) did their best to overcome these challenges. Checking in with their advisors over Zoom and Slack, from as far west as Los Angeles, California, and as far east as Skopje, North Macedonia, they completed two dozen projects sponsored by the MIT Quest for Intelligence. Four student projects are highlighted here.

Defending code-processing AI models against adversarial attacks 

Computer vision models have famously been fooled into classifying turtles as rifles, and planes as pigs, simply by making subtle changes to the objects and images the models are asked to interpret. But models that analyze computer code, which are a part of recent efforts to build automated tools to design programs efficiently, are also susceptible to so-called adversarial examples. 

The lab of Una-May O’Reilly, a principal research scientist at MIT, is focused on finding and fixing the weaknesses in code-processing models that can cause them to misbehave. As automated programming methods become more common, researchers are looking for ways to make this class of deep learning model more secure.

“Even small changes like giving a different name to a variable in a computer program can completely change how the model interprets the program,” says Tamara Mitrovska, a third-year student who worked on a UROP project this summer with Shashank Srikant, a graduate student in O’Reilly’s lab.

The lab is investigating two types of models used to summarize bits of a program as part of a broader effort to use machine learning to write new programs. One such model is Google’s seq2seq, originally developed for machine translation. A second is code2seq, which creates abstract representations of programs. Both are vulnerable to attacks due to a simple programming quirk: captions that let humans know what the code is doing, like assigning names to variables, give attackers an opening to exploit the model. By simply changing a variable name in a program or adding a print statement, the program may function normally, yet force the model processing it to give an incorrect answer.
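A small sketch of the kind of semantics-preserving perturbation described above (an illustration, not the lab’s actual attack code; `ast.unparse` requires Python 3.9+):

```python
# Rename a variable or insert a no-op print: the program still behaves the
# same, but a brittle code-summarization model may now mislabel it.
import ast

def rename_variable(source: str, old: str, new: str) -> str:
    """Rename every occurrence of a variable name in a Python snippet."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id == old:
            node.id = new
    return ast.unparse(tree)  # Python 3.9+

def insert_print(source: str) -> str:
    """Prepend a print statement that does not change the program's result."""
    return 'print("")\n' + source

original = "def area(w, h):\n    result = w * h\n    return result"
perturbed = insert_print(rename_variable(original, "result", "tmp0"))
```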

This summer, from her home near Skopje, in North Macedonia, Mitrovska learned how to sift through a database of more than 100,000 programs in Java and Python and modify them algorithmically to try to fool seq2seq and code2seq. “These systems are challenging to implement,” she says. “Finding even the smallest bug can take a significant amount of time. But overall, I’ve been having fun and the project has been a very good learning experience for me.”

One exploit that she uncovered: Both models could be tricked by inserting “print” commands in the programs they process. That exploit, and others discovered by the lab, will be used to update the models to make them more robust.

What everyday adjectives can tell us about human reasoning

Embedded in the simplest of words are assumptions about the world that vary even among closely related languages. Take the word “biggest.” Like other superlatives in English, this adjective has no equivalent in French or Spanish. Speakers simply use the comparative form, “bigger” — plus grand in French or más grande in Spanish — to differentiate among objects of various sizes.

To understand what these words mean and how they are actually used, Helena Aparicio, formerly a postdoc at MIT and now a professor at Cornell University, devised a set of psychology experiments with MIT Associate Professor Roger Levy and Boston University Professor Elizabeth Coppock. Curtis Chen, a second-year student at MIT interested in the four topics that converge in Levy’s lab — computer science, psychology, linguistics, and cognitive science — joined on as a UROP student.

From his home in Hillsborough, New Jersey, Chen orchestrated experiments to identify why English speakers prefer superlatives in some cases and comparatives in others. He found that the more similarly sized the objects in a scene were, the more likely his human subjects were to prefer the word “biggest” to describe the largest object in the set. When objects appeared to fall within two clearly defined groups, subjects preferred the less-precise “bigger.” Chen also built an AI model to simulate the inferences made by his human subjects and found that it showed a similar preference for the superlative in ambiguous situations.

Designing a successful experiment can take several tries. To ensure consistency among the shapes that subjects were asked to describe, Chen generated them on the computer using HTML Canvas and JavaScript. “This way, the size differentials were exact, and we could simply report the formula used to make them,” he says.

After discovering that some subjects seemed confused by rectangle and line shapes, he replaced them with circles. He also removed the default option on his reporting scale after realizing that some subjects were using it to breeze through the tasks. Finally, he switched to the crowdsourcing platform Prolific after a number of participants on Amazon’s Mechanical Turk failed at tasks designed to ensure they were taking the experiments seriously.
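As a rough analogue of that stimulus-generation step (Chen worked in HTML Canvas and JavaScript; this Python/PIL sketch, with made-up diameters, is an illustration rather than his code):

```python
# Draw a row of circles at exactly specified diameters, so the size
# differentials follow a formula that can be reported precisely.
from PIL import Image, ImageDraw

def draw_circles(diameters, path, pad=10):
    width = sum(diameters) + pad * (len(diameters) + 1)
    height = max(diameters) + 2 * pad
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    x = pad
    for d in diameters:
        y = (height - d) // 2
        draw.ellipse([x, y, x + d, y + d], fill="gray")
        x += d + pad
    img.save(path)

# Hypothetical conditions mirroring the finding described above:
draw_circles([40, 44, 48, 90, 94, 98], "two_groups.png")  # "bigger" preferred
draw_circles([60, 64, 68, 72, 76, 80], "similar.png")     # "biggest" preferred
```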

“It was discouraging, but Curtis went through the process of exploring the data and figuring out what was going wrong,” says his mentor, Aparicio. 

In the end, he wound up with strong results and promising ideas for follow-up experiments this fall. “There’s still a lot to be done,” he says. “I had a lot of fun cooking up and tweaking the model, designing the experiment, and learning about this deceptively simple puzzle.”

Levy says he looks forward to the results. “Ultimately, this line of inquiry helps us understand how different vocabularies and grammatical resources of English and thousands of other languages support flexible communication by their native speakers,” he says.

Reconstructing real-world scenes from sensor data

AI systems that have become expert at sizing up scenes in photos and video may soon be able to do the same for real-world scenes. It’s a process that involves stitching together snapshots of a scene from varying viewpoints into a coherent picture. The brain performs these calculations effortlessly as we move through the world, but computers require sophisticated algorithms and extensive training. 

MIT Associate Professor Justin Solomon focuses on developing methods to help computers understand 3D environments. He and his lab look for new ways to take point cloud data gathered by sensors — essentially, reflections of infrared light bounced off the surfaces of objects — to create a holistic representation of a real-world scene. Three-dimensional scene analysis has many applications in computer graphics, but the one that drove second-year student Kevin Shao to join Solomon’s lab was its potential as a navigation tool for self-driving cars.

“Working on autonomous cars has been a childhood dream for me,” says Shao.

In the first phase of his UROP project, Shao downloaded the most important papers on 3D scene reconstruction and tried to reproduce their results. This improved his knowledge of PyTorch, the Python library that provides tools for training, testing, and evaluating models. It also gave him a deep understanding of the literature. In the second phase of the project, Shao worked with his mentor, PhD student Yue Wang, to improve on existing methods.

“Kevin implemented most of the ideas, and explained in detail why they would or wouldn’t work,” says Wang. “He didn’t give up on an idea until we had a comprehensive analysis of the problem.”

One idea they explored was the use of computer-drawn scenes to train a multi-view registration model. So far, the method works in simulation, but not on real-world scenes. Shao is now trying to incorporate real-world data to bridge the gap, and will continue the work this fall.
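For a sense of what “registration” involves, here is one classic building block: the SVD-based Procrustes (Kabsch) alignment of corresponding points from two views. This is a generic, illustrative routine, not the method Shao and Wang developed:

```python
# Estimate the rigid transform (rotation R, translation t) that best aligns
# corresponding points from two views of the same scene.
import numpy as np

def rigid_align(src, dst):
    """Minimize ||R @ src_i + t - dst_i|| over all correspondences (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```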

Wang is excited to see the results. “It sometimes takes PhD students a year to have a reasonable result,” he says. “Although we are still in the exploration phase, I think Kevin has made a successful transition from a smart student to a well-qualified researcher.”

When do infants become attuned to speech and music?

The ability to perceive speech and music has been traced to specialized parts of the brain, with infants as young as four months old showing sensitivity to speech-like sounds. MIT Professor Nancy Kanwisher and her lab are investigating how this special ear for speech and music arises in the infant brain.

Somaia Saba, a second-year student at MIT, was introduced to Kanwisher’s research last year in an intro to neuroscience class and immediately wanted to learn more. “The more I read up about cortical development, the more I realized how little we know about the development of the visual and auditory pathways,” she says. “I became very excited and met with [PhD student] Heather Kosakowski, who explained the details of her projects.”

Signing on for a project, Saba plunged into the “deep end” of cortical development research. Initially overwhelmed, she says she gained confidence through regular Zoom meetings with Kosakowski, who helped her to navigate MATLAB and other software for analyzing brain-imaging data. “Heather really helped motivate me to learn these programs quickly, which has also primed me to learn more easily in the future,” she says.

Before the pandemic shut down campus, Kanwisher’s lab collected functional magnetic resonance imaging (fMRI) data from two- to eight-week-old sleeping infants exposed to different sounds. This summer, from her home on Long Island, New York, Saba helped to analyze the data. She is now learning how to process fMRI data for awake infants, looking toward the study’s next phase. “This is a crucial and very challenging task that’s harder than processing child and adult fMRI data,” says Kosakowski. “Discovering how these specialized regions emerge in infants may be the key to unlocking mysteries about the origin of the mind.”

MIT Quest for Intelligence summer UROP projects were funded, in part, by the MIT-IBM Watson AI Lab and by Eric Schmidt, technical advisor to Alphabet Inc., and his wife, Wendy.

Regina Barzilay wins $1M Association for the Advancement of Artificial Intelligence Squirrel AI award

For more than 100 years Nobel Prizes have been given out annually to recognize breakthrough achievements in chemistry, literature, medicine, peace, and physics. As these disciplines undoubtedly continue to impact society, newer fields like artificial intelligence (AI) and robotics have also begun to profoundly reshape the world.

In recognition of this, the world’s largest AI society — the Association for the Advancement of Artificial Intelligence (AAAI) — announced today the winner of their new Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity, a $1 million award given to honor individuals whose work in the field has had a transformative impact on society.

The recipient, Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science at MIT and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is being recognized for her work developing machine learning models to develop antibiotics and other drugs, and to detect and diagnose breast cancer at early stages.

In February, AAAI will officially present Barzilay with the award, which comes with an associated prize of $1 million provided by the online education company Squirrel AI.

“Only world-renowned recognitions, such as the Association of Computing Machinery’s A.M. Turing Award and the Nobel Prize, carry monetary rewards at the million-dollar level,” says AAAI awards committee chair Yolanda Gil. “This award aims to be unique in recognizing the positive impact of artificial intelligence for humanity.” 

Barzilay has conducted research on a range of topics in computer science, ranging from explainable machine learning to deciphering dead languages. Since surviving breast cancer in 2014, she has increasingly focused her efforts on health care. She created algorithms for early breast cancer diagnosis and risk assessment that have been tested at multiple hospitals around the globe, including in Sweden, Taiwan, and at Boston’s Massachusetts General Hospital. She is now working with breast cancer organizations such as Institute Protea in Brazil to make her diagnostic tools available for underprivileged populations around the world. (She realized from doing her work that, if a system like hers had existed at the time, her doctors actually could have detected her cancer two or three years earlier.) 

In parallel, she has been working on developing machine learning models for drug discovery: with collaborators she’s created models for selecting molecule candidates for therapeutics that have been able to speed up drug development, and last year helped discover a new antibiotic called halicin that was shown to kill many species of antibiotic-resistant, disease-causing bacteria, including Acinetobacter baumannii and Clostridium difficile (“C. diff”).

“Through my own life experience, I came to realize that we can create technology that can alleviate human suffering and change our understanding of diseases,” says Barzilay, who is also a member of the Koch Institute for Integrative Cancer Research. “I feel lucky to have found collaborators who share my passion and who have helped me realize this vision.”

Barzilay also serves as a member of MIT’s Institute for Medical Engineering and Science, and as faculty co-lead for MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health. One of the J-Clinic’s most recent efforts is “AI Cures,” a cross-institutional initiative focused on developing affordable Covid-19 antivirals. 

“Regina has made truly practice-changing breakthroughs in imaging breast cancer and predicting the medicinal activity of novel chemicals,” says MIT professor of biology Phillip Sharp, a Nobel laureate who has served as director of both the McGovern Institute for Brain Research and the MIT Center for Cancer Research, predecessor to the Koch Institute. “I am honored to have as a colleague someone who is such a pioneer in using deeply creative machine learning methods to transform the fields of health care and biological science.”

Barzilay joined the MIT faculty in 2003 after earning her undergraduate degree at Ben-Gurion University of the Negev in Israel and her PhD at Columbia University. She is also the recipient of a MacArthur “genius grant,” the National Science Foundation CAREER Award, a Microsoft Faculty Fellowship, multiple “best paper” awards in her field, and MIT’s Jamieson Award for excellence in teaching.

“We believe AI advances will benefit a great many fields, from health care and education to smart cities and the environment,” says Derek Li, founder and chairman of Squirrel AI. “We believe that Dr. Barzilay and other future awardees will inspire the AI community to continue to contribute to and advance AI’s impact on the world.”

AAAI’s Gil says the organization was very excited to partner with Squirrel AI for this new award to recognize the positive impacts of artificial intelligence “to protect, enhance, and improve human life in meaningful ways.” With more than 300 elected fellows and 6,000 members from 50 countries across the globe, AAAI is the world’s largest scientific society devoted to artificial intelligence. Its officers have included many AI pioneers, including Allen Newell and John McCarthy. AAAI confers several influential AI awards including the Feigenbaum Prize, the Newell Award (jointly with ACM), and the Engelmore Award. 

“Regina has been a trailblazer in the field of health care AI by asking the important questions about how we can use machine learning to treat and diagnose diseases,” says Daniela Rus, director of CSAIL and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science. “She has been both a brilliant researcher and a devoted educator, and all of us at CSAIL are so inspired by her work and proud to have her as a colleague.” 

Examining racial attitudes in virtual spaces through gaming

The national dialogue on race has progressed powerfully and painfully in the past year, and issues of racial bias in the news have become ubiquitous. However, for over a decade, researchers from MIT’s Imagination, Computation, and Expression Laboratory (ICE Lab) have been developing systems to model, simulate, and analyze such issues of identity. 

In recent years there’s been a rise in popularity of video games or virtual reality (VR) experiences addressing racial issues for educational or training purposes, coinciding with the rapid development of the academic field of serious or “impact” games such as “Walk a Mile in Digital Shoes” or “1000 Cut Journey.” 

Now researchers from the ICE Lab, part of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Center for Advanced Virtuality, have updated a 2019 computational model to better understand our behavioral choices, by way of a video game simulation of a discriminatory racial encounter between a Black student and her white teacher. 

A paper on the game will be presented this week at the 2020 Foundations of Digital Games conference.

The system, which was informed by the social science research of collaborators at the University of Michigan’s Engaging, Managing, and Bonding through Race (EMBRace) lab, is supported by the Racial Encounter Coping Appraisal and Socialization Theory (RECAST). RECAST provides a way of understanding how racial socialization, or the way one has been taught to think about race, cushions the influence between racial stress and coping.

The game, called “Passage Home,” is used to help understand the attitudes of PreK-12 educators, with the eventual goal of providing an innovative tool for clinicians to better understand the behavioral choices adolescents make when confronted with racial injustice.

Following user studies conducted with the original version of Passage Home in 2019, the team worked with Riana Elyse Anderson, assistant professor in the Department of Health Behavior and Health Education at the University of Michigan’s School of Public Health, and Nkemka Anyiwo, vice provost and National Science Foundation Postdoctoral Fellow in the Graduate School of Education at the University of Pennsylvania, to iterate on the original prototype and improve it to align more closely with RECAST theory. Since creating the latest version of “Passage Home” VR, they have sought to understand the opportunities and challenges of using it as a tool for capturing insights about how individuals perceive and respond to racialized encounters.

Experiments from “Passage Home” revealed that players’ existing colorblind racial attitudes and their ethnic identity development hindered their ability to accurately interpret racist subtexts.

The interactive game puts the player into the first-person perspective of “Tiffany,” a Black student who is falsely accused of plagiarism by her white female English teacher, “Mrs. Smith.” In the game, Mrs. Smith holds the inherently racist belief that Black students are incapable of producing high-quality work as the basis of her accusation. 

“There has been much focus on understanding the efficacy of these systems as interventions to reduce racial bias, but there’s been less attention on how individuals’ prior physical-world racial attitudes influence their experiences of such games about racial issues,” says MIT CSAIL PhD student Danielle Olson, lead author on the paper being presented this week.

“Danielle Olson is at the forefront of computational modeling of social phenomena, including race and racialized experiences,” says her thesis supervisor D. Fox Harrell, professor of digital media and AI in CSAIL and director of the ICE Lab and MIT Center for Advanced Virtuality. “What is crucial about her dissertation research and system ‘Passage Home’ is that it does not only model race as physical experience; rather, it simulates how people are socialized to think about race, which often has a more profound impact on their racial biases regarding others and themselves than merely what they look like.”

Many mainstream strategies for portraying race in VR experiences are often rooted in negative racial stereotypes, and the questions are often focused on “right” and “wrong” actions. In contrast, with “Passage Home,” the researchers aimed to take into account the nuance and complexity of how people think about race, which involves systemic social structures, history, lived experiences, interpersonal interactions, and discourse.

In the game, prior to the discriminatory interaction, the player is provided with a note that they (Tiffany) are academically high-achieving and did not commit plagiarism. The player is prompted to make a series of choices to capture their thoughts, feelings, and desired actions in response to the allegation.

The player then chooses which internal thoughts are most closely aligned with their own, and the verbal responses, body language, or gesture they want to express. These combinations contribute to how the narrative unfolds. 

One educator, for example, expressed that, “This situation could have happened to any student of any race, but the way [the student] was raised, she took it as being treated unfairly.” 

The game makes it clear that the student did not cheat, and the student never complains of unfairness, so in this case, the educator’s prior racial attitude results in not only misreading the situation, but actually imputing an attitude to the student that was never there. (The team notes that many people failed to recognize the racist nature of the comments because limited racial literacy inhibited them from decoding anti-Black subtexts.)

The results of the game demonstrated statistically significant relationships within the following categories:

  • Competence (players’ feelings of skillfulness and success in the game)
    • Positively associated with unawareness of racial privilege
  • Negative affect (players’ feelings of boredom and monotony in the game)
    • Positively associated with unawareness of blatant racial issues
  • Empathy (players’ feelings of empathy towards Mrs. Smith, who is racially biased towards Tiffany)
    • Negatively associated with ethnic identity search, and positively associated with unawareness of racial privilege, blatant racial issues, and institutional discrimination
  • Perceived competence of Tiffany, the student 
    • How well did the player think she handled the situation? 
  • Perceived unfairness of Mrs. Smith, the teacher
    • Was Mrs. Smith unfair to Tiffany? 

“Even if developers create these games to attempt to encourage white educators to understand how racism negatively impacts their Black students, their prior worldviews may cause them to identify with the teacher who is the perpetrator of racial violence, not the student who is the target,” says Olson. “These results can aid developers in avoiding assumptions about players’ racial literacy by creating systems informed by evidence-based research on racial socialization and coping.” 

While this work demonstrates a promising tool, the team notes that because racism exists at individual, cultural, institutional and systemic levels, there are limitations to which levels and how much impact emergent technologies such as VR can make. 

Future games could be personalized to attend to differences in players’ racial socialization and attitudes, rather than assuming players will interpret racialized content in a similar way. By improving players’ in-game experiences, the hope is that this will increase the possibility for transformative learning with educators, and aid in the pursuit of racial equity for students.

This material is based upon work supported by the following grant programs: National Science Foundation Graduate Research Fellowship Program, the Ford Foundation Predoctoral Fellowship Program, the MIT Abdul Latif Jameel World Education Lab pK-12 Education Innovation Grant, and the International Chapter of the P.E.O. Scholar Award. 

Helping robots avoid collisions

George Konidaris still remembers his disheartening introduction to robotics.

“When you’re a young student and you want to program a robot, the first thing that hits you is this immense disappointment at how much you can’t do with that robot,” he says.

Most new roboticists want to program their robots to solve interesting, complex tasks — but it turns out that just moving them through space without colliding with objects is more difficult than it sounds.

Fortunately, Konidaris is hopeful that future roboticists will have a more exciting start in the field. That’s because roughly four years ago, he co-founded Realtime Robotics, a startup that’s solving the “motion planning problem” for robots.

The company has invented a solution that gives robots the ability to quickly adjust their path to avoid objects as they move to a target. The Realtime controller is a box that can be connected to a variety of robots and deployed in dynamic environments.

“Our box simply runs the robot according to the customer’s program,” explains Konidaris, who currently serves as Realtime’s chief roboticist. “It takes care of the movement, the speed of the robot, detecting obstacles, collision detection. All [our customers] need to say is, ‘I want this robot to move here.’”

Realtime’s key enabling technology is a unique circuit design that, when combined with proprietary software, has the effect of a plug-in motor cortex for robots. In addition to helping to fulfill the expectations of starry-eyed roboticists, the technology also represents a fundamental advance toward robots that can work effectively in changing environments.

Helping robots get around

Konidaris was not the first person to get discouraged about the motion planning problem in robotics. Researchers in the field have been working on it for 40 years. During a four-year postdoc at MIT, Konidaris worked with School of Engineering Professor in Teaching Excellence Tomas Lozano-Perez, a pioneer in the field who was publishing papers on motion planning before Konidaris was born.

Humans take collision avoidance for granted. Konidaris points out that the simple act of grabbing a beer from the fridge actually requires a series of tasks such as opening the fridge, positioning your body to reach in, avoiding other objects in the fridge, and deciding where to grab the beer can.

“You actually need to compute more than one plan,” Konidaris says. “You might need to compute hundreds of plans to get the action you want. … It’s weird how the simplest things humans do hundreds of times a day actually require immense computation.”

In robotics, the motion planning problem revolves around the computational power required to carry out frequent tests as robots move through space. At each stage of a planned path, the tests help determine if various tiny movements will make the robot collide with objects around it. Such tests have inspired researchers to think up ever more complicated algorithms in recent years, but Konidaris believes that’s the wrong approach.
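A stripped-down sketch of those per-step tests (purely illustrative; the `collides` check here is an assumed, caller-supplied function, not Realtime’s implementation):

```python
# Each candidate motion is discretized into tiny steps, and every step is
# checked against the environment -- multiplied across hundreds of candidate
# plans, this is the computational load described above.
import numpy as np

def segment_collides(q_start, q_end, collides, resolution=0.01):
    """Test a straight-line motion by checking many intermediate configurations."""
    q_start, q_end = np.asarray(q_start, float), np.asarray(q_end, float)
    steps = max(2, int(np.linalg.norm(q_end - q_start) / resolution))
    for t in np.linspace(0.0, 1.0, steps):
        if collides(q_start + t * (q_end - q_start)):
            return True
    return False
```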

“People were trying to make algorithms smarter and more complex, but usually that’s a sign that you’re going down the wrong path,” Konidaris says. “It’s actually not that common that super technically sophisticated techniques solve problems like that.”

Konidaris left MIT in 2014 to join the faculty at Duke University, but he continued to collaborate with researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Duke is also where Konidaris met Realtime co-founders Sean Murray, Dan Sorin, and Will Floyd-Jones. In 2015, the co-founders collaborated to make a new type of computer chip with circuits specifically designed to perform the frequent collision tests required to move a robot safely through space. The custom circuits could perform operations in parallel to test short motions for collisions more efficiently.

“When I left MIT for Duke, one thing bugging me was this motion planning thing should really be solved by now,” Konidaris says. “It really did come directly out of a lot of experiences at MIT. I wouldn’t have been able to write a single paper on motion planning before I got to MIT.”

The researchers founded Realtime in 2016 and quickly brought on robotics industry veteran Peter Howard MBA ’87, who currently serves as Realtime’s CEO and is also considered a co-founder.

“I wanted to start the company in Boston because I knew MIT, and a lot of robotics work was happening there,” says Konidaris, who moved to Brown University in 2016. “Boston is a hub for robotics. There’s a ton of local talent, and I think a lot of that is because MIT is here — PhDs from MIT became faculty at local schools, and those people started robotics programs. That network effect is very strong.”

Removing robot restraints

Today the majority of Realtime’s customers are in the automotive, manufacturing, and logistics industries. The robots using Realtime’s solution are doing everything from spot welding to making inspections to picking items from bins.

After customers purchase Realtime’s control box, they load in a file describing the configuration of the robot’s work cell, information about the robot such as its end-of-arm tool, and the task the robot is completing. Realtime can also help optimally place the robot and its accompanying sensors around a work area. Konidaris says Realtime can shorten the process of deploying robots from an average of 15 weeks to one week.
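Realtime’s actual file format is not described in the article; purely as a hypothetical illustration, a work-cell description of the kind mentioned might carry fields like these:

```python
# Hypothetical illustration only -- not Realtime Robotics' actual format.
work_cell = {
    "robot": {
        "model": "generic-6dof-arm",
        "end_of_arm_tool": "two-finger-gripper",
        "mount_pose": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # x, y, z, roll, pitch, yaw
    },
    "obstacles": [
        {"type": "box", "center": [0.8, 0.0, 0.4], "size": [0.4, 0.6, 0.8]},
    ],
    "sensors": [
        {"type": "depth_camera", "pose": [1.5, 0.0, 1.2, 0.0, 0.7, 3.14]},
    ],
    "task": {"goal_pose": [0.5, 0.3, 0.2, 0.0, 1.57, 0.0]},
}
```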

Once the robot is up and running, Realtime’s box controls its movement, giving it instant collision-avoidance capabilities.

“You can use it for any robot,” Konidaris says. “You tell it where it needs to go and we’ll handle the rest.”

Realtime is part of MIT’s Industrial Liaison Program (ILP), which helps companies make connections with larger industrial partners, and it recently joined ILP’s STEX25 startup accelerator.

With a few large rollouts planned for the coming months, the Realtime team’s excitement is driven by the belief that solving a problem as fundamental as motion planning unlocks a slew of new applications for the robotics field.

“What I find most exciting about Realtime is that we are a true technology company,” says Konidaris. “The vast majority of startups are aimed at finding a new application for existing technology; often, there’s no real pushing of the technical boundaries with a new app or website, or even a new robotics ‘vertical.’ But we really did invent something new, and that edge and that energy is what drives us. All of that feels very MIT to me.”

Monitoring sleep positions for a healthy rest

MIT researchers have developed a wireless, private way to monitor a person’s sleep postures — whether snoozing on their back, stomach, or sides — using reflected radio signals from a small device mounted on a bedroom wall.

The device, called BodyCompass, is the first home-ready, radio-frequency-based system to provide accurate sleep data without cameras or sensors attached to the body, according to Shichao Yue, who will introduce the system in a presentation at the UbiComp 2020 conference on Sept. 15. The PhD student has used wireless sensing to study sleep stages and insomnia for several years.

“We thought sleep posture could be another impactful application of our system” for medical monitoring, says Yue, who worked on the project under the supervision of Professor Dina Katabi in the MIT Computer Science and Artificial Intelligence Laboratory. Studies show that stomach sleeping increases the risk of sudden death in people with epilepsy, he notes, and sleep posture could also be used to measure the progression of Parkinson’s disease as the condition robs a person of the ability to turn over in bed.

In the future, people might also use BodyCompass to keep track of their own sleep habits or to monitor infant sleeping, Yue says: “It can be either a medical device or a consumer product, depending on needs.”

Other authors on the conference paper, published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, include graduate students Yuzhe Yang and Hao Wang, and Katabi Lab affiliate Hariharan Rahul. Katabi is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT.

Restful reflections

BodyCompass works by analyzing the reflection of radio signals as they bounce off objects in a room, including the human body. Similar to a Wi-Fi router attached to the bedroom wall, the device sends and collects these signals as they return through multiple paths. The researchers then map the paths of these signals, working backward from the reflections to determine the body’s posture.

For this to work, however, the scientists needed a way to figure out which of the signals were bouncing off the sleeper’s body, and not bouncing off the mattress or a nightstand or an overhead fan. Yue and his colleagues realized that their past work in deciphering breathing patterns from radio signals could solve the problem.

Signals that bounce off a person’s chest and belly are uniquely modulated by breathing, they concluded. Once that breathing signal was identified as a way to “tag” reflections coming from the body, the researchers could analyze those reflections compared to the position of the device to determine how the person was lying in bed. (If a person was lying on her back, for instance, strong radio waves bouncing off her chest would be directed at the ceiling and then to the device on the wall.) “Identifying breathing as coding helped us to separate signals from the body from environmental reflections, allowing us to track where informative reflections are,” Yue says.

Reflections from the body are then analyzed by a customized neural network to infer how the body is angled in sleep. Because the neural network defines sleep postures according to angles, the device can distinguish a sleeper lying on the right side from one who has merely tilted slightly to the right. This kind of fine-grained analysis would be especially important for epilepsy patients, for whom sleeping in a prone position is correlated with sudden unexpected death, Yue says.
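In spirit, the angle-based readout might look like the sketch below. The network shape, feature count, and label bins are illustrative assumptions, not the BodyCompass model:

```python
# A network regresses the body's angle from features of the tagged
# reflections; coarse posture labels are derived from the fine-grained angle.
import math
import torch
import torch.nn as nn

class PostureNet(nn.Module):
    def __init__(self, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 2),          # predict (sin, cos) of the body angle
        )

    def forward(self, reflection_feats):
        s, c = self.net(reflection_feats).unbind(-1)
        return torch.atan2(s, c)        # angle in radians

def posture_label(angle_rad):
    deg = math.degrees(angle_rad)
    # Keeping the raw angle is what lets the system tell "on the right side"
    # from "tilted slightly right"; the bins below are only for reporting.
    if -45 <= deg <= 45:
        return "back"
    if deg >= 135 or deg <= -135:
        return "stomach"
    return "right side" if deg > 0 else "left side"
```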

BodyCompass has some advantages over other ways of monitoring sleep posture, such as installing cameras in a person’s bedroom or attaching sensors directly to the person or their bed. Sensors can be uncomfortable to sleep with, and cameras reduce a person’s privacy, Yue notes. “Since we will only record essential information for detecting sleep posture, such as a person’s breathing signal during sleep,” he says, “it is nearly impossible for someone to infer other activities of the user from this data.”

An accurate compass

The research team tested BodyCompass’ accuracy over 200 hours of sleep data from 26 healthy people sleeping in their own bedrooms. At the start of the study, the subjects wore two accelerometers (sensors that detect movement) taped to their chest and stomach, to train the device’s neural network with “ground truth” data on their sleeping postures.

BodyCompass was most accurate — predicting the correct body posture 94 percent of the time — when the device was trained on a week’s worth of data. One night’s worth of training data yielded accurate results 87 percent of the time. BodyCompass could achieve 84 percent accuracy with just 16 minutes’ worth of data collected, when sleepers were asked to hold a few usual sleeping postures in front of the wireless sensor.

Along with epilepsy and Parkinson’s disease, BodyCompass could prove useful in treating patients vulnerable to bedsores and sleep apnea, since both conditions can be alleviated by changes in sleeping posture. Yue has his own interest as well: He suffers from migraines that seem to be affected by how he sleeps. “I sleep on my right side to avoid headache the next day,” he says, “but I’m not sure if there really is any correlation between sleep posture and migraines. Maybe this can help me find out if there is any relationship.”

For now, BodyCompass is a monitoring tool, but it may be paired someday with an alert that can prod sleepers to change their posture. “Researchers are working on mattresses that can slowly turn a patient to avoid dangerous sleep positions,” Yue says. “Future work may combine our sleep posture detector with such mattresses to move an epilepsy patient to a safer position if needed.”

Helping companies prioritize their cybersecurity investments

One reason that cyberattacks have continued to grow in recent years is that we never actually learn all that much about how they happen. Companies fear that reporting attacks will tarnish their public image, and even those who do report them don’t share many details because they worry that their competitors will gain insight into their security practices. 

“It’s really a nice gift that we’ve given to cyber-criminals,” says Taylor Reynolds, technology policy director at MIT’s Internet Policy Research Initiative (IPRI). “In an ideal world, these attacks wouldn’t happen over and over again, because companies would be able to use data from attacks to develop quantitative measurements of the security risk so that we could prevent such incidents in the future.”

In an economy where most industries are tightening their belts, many organizations don’t know which types of attacks lead to the largest financial losses, and therefore how to best deploy scarce security resources. 

But a new platform from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to change that, quantifying companies’ security risk without requiring them to disclose sensitive data about their systems to the research team, much less their competitors.

Developed by Reynolds alongside economist Andrew Lo and cryptographer Vinod Vaikuntanathan, the platform helps companies do multiple things:

  • quantify how secure they are;
  • understand how their security compares to peers; and
  • evaluate whether they’re spending the right amount of money on security, and if and how they should change their particular security priorities.

The team received internal data from seven large companies that averaged 50,000 employees and annual revenues of $24 billion. By securely aggregating 50 different security incidents that took place at the companies, the researchers were able to analyze which specific steps were not taken that could have prevented them. (Their analysis used a well-established set of nearly 200 security actions referred to as the Center for Internet Security Sub-Controls.) 

“We were able to paint a really thorough picture in terms of which security failures were costing companies the most money,” says Reynolds, who co-authored a related paper with professors Lo and Vaikuntanathan, MIT graduate student Leo de Castro, Principal Research Scientist Daniel J. Weitzner, PhD student Fransisca Susan, and graduate student Nicolas Zhang. “If you’re a chief information security officer at one of these organizations, it can be an overwhelming task to try to defend absolutely everything. They need to know where they should direct their attention.”

The team calls their platform “SCRAM,” for “Secure Cyber Risk Aggregation and Measurement.” Among other findings, they determined that the following three security vulnerabilities had the largest total losses, each in excess of $1 million:

Failures in preventing malware attacks

Malware attacks, like the one last month that reportedly forced the wearables company Garmin to pay a $10 million ransom, are still a tried-and-true method of gaining control of valuable consumer data. Reynolds says that companies continue to struggle to prevent such attacks, relying on regularly backing up their data and reminding their employees not to click on suspicious emails. 

Communication over unauthorized ports 

Curiously, the team found that every firm in their study said they had, in fact, implemented the security measure of blocking access to unauthorized ports — the digital equivalent of companies locking all their doors. Even still, attacks that involved gaining access to these ports accounted for a large number of high-cost losses. 

“Losses can arise even when there are defenses that are well-developed and understood,” says Weitzner, who also serves as director of MIT IPRI. “It’s important to recognize that improving common existing defenses should not be neglected in favor of expanding into new areas of defense.”

Failures in log management for security incidents 

Every day companies amass detailed “logs” denoting activity within their systems. Senior security officers often turn to these logs after an attack to audit the incident and see what happened. Reynolds says that there are many ways that companies could be using machine learning and artificial intelligence more efficiently to help understand what’s happening — including, crucially, during or even before a security attack. 

Two other key areas that warrant further analysis are hardware inventory, which ensures that only authorized devices are given network access, and boundary defenses such as firewalls and proxies, which aim to control the flow of traffic across network borders.

The team developed the data aggregation platform in conjunction with MIT cryptography experts, using an existing method called multi-party computation (MPC), which lets the researchers perform calculations on the companies’ data without being able to read or unlock it. After computing its anonymized findings, the SCRAM system asks each contributing company to use its own secret cryptographic key to help unlock only the final answer.
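SCRAM’s exact protocol isn’t spelled out here, but the core idea behind MPC-style aggregation can be seen in additive secret sharing, one standard building block. In this toy sketch, each company splits its private loss figure into random shares modulo a large prime; any single share is statistically meaningless, yet combining the parties’ partial sums reveals the aggregate total and nothing else. This is a minimal illustration, not SCRAM’s implementation.

```python
# Toy sketch of secure aggregation via additive secret sharing: each firm
# splits its private value into random shares modulo a large prime, so no
# single share reveals anything, yet the shares sum to the true total.
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def make_shares(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each company's (private) incident loss, in dollars
losses = [2_400_000, 1_100_000, 1_700_000]

# Every company distributes one share to each party; parties only ever
# see sums of shares, never an individual company's input.
all_shares = [make_shares(v, len(losses)) for v in losses]
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]

# Combining the partial sums reveals only the aggregate
total = sum(partial_sums) % PRIME
print(total)  # 5200000: the overall sum, with no single input disclosed
```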

“The power of this platform is that it allows firms to contribute locked data that would otherwise be too sensitive or risky to share with a third party,” says Reynolds.

As a next step, the researchers plan to expand the pool of participating companies, with representation from a range of different sectors that include electricity, finance, and biotech. Reynolds says that if the team can gather data from upwards of 70 or 80 companies, they’ll be able to do something unprecedented: put an actual dollar figure on the risk of particular defenses failing.

The project was a cross-campus effort involving affiliates at IPRI, CSAIL’s Theory of Computation group, and the MIT Sloan School of Management. It was funded by the Hewlett Foundation and CSAIL’s Financial Technology industry initiative (“FinTech@CSAIL”). 

MIT hosts seven distinguished MLK Professors and Scholars for 2020-21

In light of the Covid-19 pandemic, MIT has been charged with reimagining its campus, classes, and programs, including the Dr. Martin Luther King, Jr. (MLK) Visiting Professors and Scholars Program (VPSP).

Founded in 1990, MLK VPSP honors the life and legacy of Martin Luther King, Jr. by increasing the presence of and recognizing the contributions of scholars from underrepresented groups at MIT. MLK Visiting Professors and Scholars enhance their scholarship through intellectual engagement with the MIT community and enrich the cultural, academic, and professional experience of students. The program hosts between four and eight scholars each year. But what does a virtual year mean for a visiting scholar?

Even with the challenge of remote learning and limited in-person contact, MLK VPSP faculty hosts have devised innovative ways to engage with the MIT community. Moya Bailey, for instance, will be a content contributor for the Program in Women’s and Gender Studies’ website and social media accounts. Charles Senteio will continue to collaborate with the Office of Minority Education on curriculum development that reflects a diverse student population with a focus on health and well-being, and he will also explore remote learning and its impact on curriculum.

With Provost Martin Schmidt’s steadfast institutional support, and with active oversight from Institute Community and Equity Officer John Dozier and Associate Provost Tim Jamison, the MLK VPSP continues to honor King’s legacy and be an institutional priority on campus and online. For Academic Year 2020-2021, MIT is hosting seven accomplished scholars representing different areas of interest from all over the United States and Canada.

2020-2021 MLK Visiting Professors and Scholars

Moya Bailey is an assistant professor at Northeastern University in the Department of Cultures, Societies, and Global Studies and in the program in Women’s, Gender, and Sexuality Studies. In 2010, Bailey coined the term “misogynoir,” which describes the anti-Black racist misogyny that Black women experience; the term has since been widely adopted by scholars. In the spring, she will teach a course in the MIT Program in Women’s and Gender Studies called Black Feminist Health Science Studies. In April 2021, she will organize and host a daylong Black Feminist Health Science symposium.

Jamie Macbeth joins the program for another year in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) as a valued member of the Genesis group, a research team focused on building computer systems and computational models of human intelligence grounded in the human capacity for understanding natural language. One of Macbeth’s research collaborations applies natural-language-understanding systems to detecting aggressive language on social media, with the eventual goal of violence prevention. He will continue to mentor and collaborate with women and members of underrepresented groups at the undergraduate, MS, and PhD levels.

Ben McDonald is returning for a second year as a postdoc in the Department of Chemistry. His research focuses on developing designer polymers for chemical warfare-responsive membranes and surfactants to control the function of dynamic, complex soft colloids. His role as a mentor will expand to include both undergraduate and graduate students in the Swager Lab. McDonald will continue to collaborate with Chemistry Alliance for Diversity and Inclusion at MIT to organize and host virtual seminars showcasing the work of underrepresented scholars of color in the fields of chemistry and chemical engineering.

Luis Gilberto Murillo-Urrutia, a research fellow hosted by the Environmental Solutions Initiative (ESI), joins us from the Center for Latin America and Latino Studies at American University. His research focuses on the intersection of peace and security with environmental conservation, particularly in Afro-Colombian territories. During his visit, Murillo-Urrutia will hold mentorship sessions at ESI for students conducting research on environmental planning and policy or with a minor in environment and sustainability.

Thomas Searles, recently promoted to associate professor with tenure, is visiting from the Department of Physics at Howard University. While at MIT, he will pursue numerical studies of topological materials for photonic and quantum technological applications. He will mentor students from his lab, the Black Students Union, National Society of Black Engineers, and the Black Graduate Student Association. Searles plans to meet with the MIT physics graduate admissions committee to formulate recruitment strategies with his home and other historically Black colleges and universities.

Charles Senteio joins the program from Rutgers University School of Communication and Information, where he is an assistant professor in library and information science. As a visiting scholar at the MIT Sloan School of Management, he will collaborate with the Operations Management Group to expand on his community health informatics research and investigate health equity barriers. He recently facilitated a workshop, “Healthcare, Technology, and Social Justice Converge — Applied Equity Research and Why It Matters to All of Us” at the MIT Day of Dialogue event in August.

Patricia Saulis is Wolastoqey (Maliseet) from Wolastoq Negotkuk (Tobique First Nation in New Brunswick, Canada). As an MLK Visiting Scholar, Saulis will collaborate with her faculty host, Professor James Paradis from Comparative Media Studies/Writing, on a course titled “Transmedia Art, Extraction and Environmental Justice,” and engage with the MIT Center for Environmental Health Sciences on its EPA Superfund-related work in the northeastern United States. She will work closely with the American Indian Science and Engineering Society (AISES) and the Native American Students Association to raise awareness of the challenges facing our Indigenous students. Through dialogue and presentations, she will help promote understanding of Indigenous Peoples’ culture and help identify strategies to create a more inclusive campus for our Indigenous community.

Community engagement

This year’s scholars are eager to join our community and embark on a mutually rewarding journey of learning and engagement — wherever in the world we may be.  

MIT community members are invited to join the Institute Community and Equity Office in engaging the MLK Professors and Scholars through a signature monthly speaker series, where each scholar will present their research and hold discussions via Zoom. The first welcome event will be held on Sept. 16 from 12 to 1 p.m. Contact Rachel Ornitz at rornitz@mit.edu for event details.

For more information about this year’s and previous scholars and the program, visit the newly redesigned MLK Visiting Professors and Scholars website.
