Four Novel Approaches to Manipulating Fabric using Model-Free and Model-Based Deep Learning in Simulation

Humans manipulate 2D deformable structures such as fabric on a daily basis,
from putting on clothes to making beds. Can robots learn to perform similar
tasks? Successful approaches can advance applications such as dressing
assistance for senior care, folding of laundry, fabric upholstery, bed-making,
manufacturing, and other tasks. Fabric manipulation is challenging, however,
because of the difficulty in modeling system states and dynamics, meaning that
when a robot manipulates fabric, it is hard to predict the fabric’s resulting
state or visual appearance.

In this blog post, we review four recent papers from two research labs (Pieter Abbeel’s and Ken Goldberg’s) at Berkeley AI Research (BAIR) that investigate the following question: can learning-based approaches be applied effectively to the problem of fabric manipulation?

We demonstrate promising results in support of this hypothesis by using a
variety of learning-based methods with fabric simulators to train smoothing
(and even folding) policies in simulation. We then perform sim-to-real transfer
to deploy the policies on physical robots. Examples of the learned policies in
action are shown in the GIFs above.

We show that deep model-free methods trained from exploration or from
demonstrations work reasonably well for specific tasks like smoothing, but it
is unclear how well they generalize to related tasks such as folding. On the
other hand, we show that deep model-based methods have more potential for
generalization to a variety of tasks, provided that the learned models are
sufficiently accurate. In the rest of this post, we summarize the papers,
emphasizing the techniques and tradeoffs in each approach.
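
To make the model-based idea more concrete, here is a minimal, hypothetical sketch of a one-step greedy planner (our illustration, not an excerpt from any of the four papers): a learned visual dynamics model predicts the fabric image that would result from a candidate pick-and-place action, and the planner keeps the candidate whose predicted outcome maximizes fabric coverage. The network architecture, coverage proxy, and action parameterization below are all illustrative assumptions.

```python
# Hypothetical sketch of a model-based pick-and-place planner for fabric
# smoothing; the dynamics model, coverage metric, and action format are
# stand-ins, not the implementations from the papers summarized above.
import torch


class VisualDynamicsModel(torch.nn.Module):
    """Predicts the next fabric image given the current image and an action."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(64 * 64 + 4, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 64 * 64),
        )

    def forward(self, image, action):
        x = torch.cat([image.flatten(1), action], dim=1)
        return self.net(x).view(-1, 64, 64)


def coverage(image):
    """Toy proxy for fabric coverage: fraction of bright pixels."""
    return (image > 0.5).float().mean(dim=(1, 2))


def plan_action(model, image, num_candidates=128):
    """Sample random pick-and-place actions and keep the one whose
    predicted next state maximizes coverage (one-step greedy planning)."""
    candidates = torch.rand(num_candidates, 4)  # (pick_x, pick_y, place_x, place_y)
    images = image.expand(num_candidates, -1, -1)
    with torch.no_grad():
        scores = coverage(model(images, candidates))
    return candidates[scores.argmax()]


model = VisualDynamicsModel()          # in practice, trained on simulator rollouts
current_image = torch.rand(1, 64, 64)  # placeholder observation
print(plan_action(model, current_image))
```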

Updates & Improvements to PyTorch Tutorials

PyTorch.org provides researchers and developers with documentation, installation instructions, latest news, community projects, tutorials, and more. Today, we are introducing usability and content improvements including tutorials in additional categories, a new recipe format for quickly referencing common topics, sorting using tags, and an updated homepage.

Let’s take a look at them in detail.

TUTORIALS HOME PAGE UPDATE

The tutorials home page now provides clear actions that developers can take. For new PyTorch users, there is an easy-to-discover button that takes them directly to “A 60 Minute Blitz”. Right next to it is a button to view all recipes, which are designed to teach specific features quickly with examples.

In addition to the existing left navigation bar, tutorials can now be quickly filtered by multi-select tags. Let’s say you want to view all tutorials related to “Production” and “Quantization”. You can select the “Production” and “Quantization” filters as shown in the image below:

The following additional resources can also be found at the bottom of the Tutorials homepage:

PYTORCH RECIPES

Recipes are new bite-sized, actionable examples designed to teach researchers and developers how to use specific PyTorch features. Some notable new recipes include:

View the full recipes here.
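
To give a flavor of the recipe format, here is a bite-sized, illustrative snippet on a topic the recipes cover (our own example, not one of the linked recipes): saving and loading a training checkpoint with state_dict.

```python
# An illustrative, recipe-style snippet (not one of the linked recipes):
# saving and loading a model checkpoint with state_dict so training can resume.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Save model and optimizer state together.
torch.save({
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}, "checkpoint.pt")

# Restore into freshly constructed objects.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
model.eval()  # switch dropout/batch-norm layers to inference behavior
```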

LEARNING PYTORCH

This section includes tutorials designed for users new to PyTorch. Based on community feedback, we have made updates to the current Deep Learning with PyTorch: A 60 Minute Blitz tutorial, one of our most popular tutorials for beginners. Upon completion, readers will understand what PyTorch and neural networks are and be able to build and train a simple image classification network. Updates include adding explanations to clarify output meanings and linking back to where users can read more in the docs, cleaning up confusing syntax errors, and restructuring and explaining new concepts for easier readability.
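
For a sense of where the Blitz ends up, here is a minimal sketch in its spirit (not an excerpt from the tutorial): a small convolutional classifier and a single training step, with random tensors standing in for the CIFAR-10 batches the tutorial actually uses.

```python
# A minimal classifier and one training step, with random stand-in data.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)


net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

images = torch.randn(4, 3, 32, 32)     # stand-in for a CIFAR-10 batch
labels = torch.randint(0, 10, (4,))

optimizer.zero_grad()
loss = criterion(net(images), labels)  # forward pass and loss
loss.backward()                        # backpropagation
optimizer.step()                       # parameter update
print(loss.item())
```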

DEPLOYING MODELS IN PRODUCTION

This section includes tutorials for developers looking to take their PyTorch models to production. The tutorials include:
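
As one hedged illustration of a production workflow covered in this section (our sketch, not an excerpt from the tutorials), the snippet below converts a model to TorchScript by tracing, so it can be loaded from C++ or served without a Python dependency; the ResNet-18 model is just a placeholder.

```python
# Convert a placeholder model to TorchScript via tracing.
import torch
import torchvision

model = torchvision.models.resnet18()  # placeholder model
model.eval()

example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)  # record ops on an example input
traced.save("resnet18_traced.pt")               # loadable with torch.jit.load in Python or C++
```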

FRONTEND APIS

PyTorch provides a number of frontend API features that can help developers to code, debug, and validate their models more efficiently. This section includes tutorials that teach what these features are and how to use them. Some tutorials to highlight:
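
As a small illustration of the kind of feature this section covers (chosen by us, not taken from a specific tutorial), defining a custom autograd Function gives full control over an operation’s forward and backward passes:

```python
# A custom autograd Function: explicit forward and backward definitions.
import torch


class Exp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        result = x.exp()
        ctx.save_for_backward(result)  # stash what backward needs
        return result

    @staticmethod
    def backward(ctx, grad_output):
        result, = ctx.saved_tensors
        return grad_output * result    # d/dx exp(x) = exp(x)


x = torch.randn(3, requires_grad=True)
y = Exp.apply(x).sum()
y.backward()
print(torch.allclose(x.grad, x.exp()))  # True: matches the analytic gradient
```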

MODEL OPTIMIZATION

Deep learning models often consume large amounts of memory, power, and compute due to their complexity. This section provides tutorials for model optimization:
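
As one hedged example of the kind of optimization covered here (our sketch, not an excerpt from a tutorial), post-training dynamic quantization stores the weights of Linear layers in int8, cutting model size and speeding up CPU inference; the model below is a placeholder.

```python
# Post-training dynamic quantization of Linear layers in a placeholder model.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized)  # Linear layers are replaced by dynamically quantized versions
```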

PARALLEL AND DISTRIBUTED TRAINING

PyTorch provides features that can accelerate performance in research and production such as native support for asynchronous execution of collective operations and peer-to-peer communication that is accessible from Python and C++. This section includes tutorials on parallel and distributed training:
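
As a minimal, hedged sketch of the DistributedDataParallel workflow these tutorials cover (the two-process CPU setup with the gloo backend is an illustrative choice, not a recommended production configuration):

```python
# Minimal DistributedDataParallel example: two CPU processes on one machine.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(torch.nn.Linear(10, 1))  # gradients are synchronized across ranks
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    loss = model(torch.randn(8, 10)).sum()
    loss.backward()                      # gradient all-reduce happens here
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)
```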

Making these improvements is just the first step in improving PyTorch.org for the community. Please submit your suggestions here.

Cheers,

Team PyTorch

Robots help some firms, even while workers across industries struggle

This is part 2 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu. 

Overall, adding robots to manufacturing reduces jobs — by more than three per robot, in fact. But a new study co-authored by an MIT professor reveals an important pattern: Firms that move quickly to use robots tend to add workers to their payroll, while industry job losses are more concentrated in firms that make this change more slowly.

The study, by MIT economist Daron Acemoglu, examines the introduction of robots to French manufacturing in recent decades, illuminating the business dynamics and labor implications in granular detail.

“When you look at use of robots at the firm level, it is really interesting because there is an additional dimension,” says Acemoglu. “We know firms are adopting robots in order to reduce their costs, so it is quite plausible that firms adopting robots early are going to expand at the expense of their competitors whose costs are not going down. And that’s exactly what we find.”

Indeed, as the study shows, a 20 percentage point increase in robot use in manufacturing from 2010 to 2015 led to a 3.2 percent decline in industry-wide employment. And yet, for firms adopting robots during that timespan, employee hours worked rose by 10.9 percent, and wages rose modestly as well.

A new paper detailing the study, “Competing with Robots: Firm-Level Evidence from France,” will appear in the May issue of the American Economic Association Papers and Proceedings. The authors are Acemoglu, who is an Institute Professor at MIT; Clair Lelarge, a senior research economist at the Banque de France and the Centre for Economic Policy Research; and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

A French robot census

To conduct the study, the scholars examined 55,390 French manufacturing firms, of which 598 purchased robots during the period from 2010 to 2015. The study uses data provided by France’s Ministry of Industry, client data from French robot suppliers, customs data about imported robots, and firm-level financial data concerning sales, employment, and wages, among other things.

The 598 firms that did purchase robots, while comprising just 1 percent of manufacturing firms, accounted for about 20 percent of manufacturing production during that five-year period.

“Our paper is unique in that we have an almost comprehensive [view] of robot adoption,” Acemoglu says.

The manufacturing industries most heavily adding robots to their production lines in France were pharmaceutical companies, chemicals and plastic manufacturers, food and beverage producers, metal and machinery manufacturers, and automakers.

The industries investing least in robots from 2010 to 2015 included paper and printing, textiles and apparel manufacturing, appliance manufacturers, furniture makers, and minerals companies.

The firms that did add robots to their manufacturing processes became more productive and profitable, and the use of automation lowered their labor share — the part of their income going to workers — between roughly 4 and 6 percentage points. However, because their investments in technology fueled more growth and more market share, they added more workers overall.

By contrast, the firms that did not add robots saw no change in the labor share, and for every 10 percentage point increase in robot adoption by their competitors, these firms saw their own employment drop 2.5 percent. Essentially, the firms not investing in technology were losing ground to their competitors.

This dynamic — job growth at robot-adopting firms, but job losses overall — fits with another finding Acemoglu and Restrepo made in a separate paper about the effects of robots on employment in the U.S. There, the economists found that each robot added to the work force essentially eliminated 3.3 jobs nationally.

“Looking at the result, you might think [at first] it’s the opposite of the U.S. result, where the robot adoption goes hand in hand with destruction of jobs, whereas in France, robot-adopting firms are expanding their employment,” Acemoglu says. “But that’s only because they’re expanding at the expense of their competitors. What we show is that when we add the indirect effect on those competitors, the overall effect is negative and comparable to what we find in the U.S.”

Superstar firms and the labor share issue

The competitive dynamics the researchers found in France resemble those in another high-profile piece of economics research recently published by MIT professors. In a recent paper, MIT economists David Autor and John Van Reenen, along with three co-authors, published evidence indicating the decline in the labor share in the U.S. as a whole was driven by gains made by “superstar firms,” which find ways to lower their labor share and gain market power.

While those elite firms may hire more workers and even pay relatively well as they grow, labor share declines in their industries, overall.

“It’s very complementary,” Acemoglu observes about the work of Autor and Van Reenen. However, he notes, “A slight difference is that superstar firms [in the work of Autor and Van Reenen, in the U.S.] could come from many different sources. By having this individual firm-level technology data, we are able to show that a lot of this is about automation.”

So, while economists have offered many possible explanations for the decline of the labor share generally — including technology, tax policy, changes in labor market institutions, and more — Acemoglu suspects technology, and automation specifically, is the prime candidate, certainly in France.

“A big part of the [economic] literature now on technology, globalization, labor market institutions, is turning to the question of what explains the decline in the labor share,” Acemoglu says. “Many of those are reasonably interesting hypotheses, but in France it’s only the firms that adopt robots — and they are very large firms — that are reducing their labor share, and that’s what accounts for the entirety of the decline in the labor share in French manufacturing. This really emphasizes that automation, and in particular robots, is a critical part in understanding what’s going on.”

How many jobs do robots really replace?

This is part 1 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu.  

In many parts of the U.S., robots have been replacing workers over the last few decades. But to what extent, really? Some technologists have forecast that automation will lead to a future without work, while other observers have been more skeptical about such scenarios.

Now a study co-authored by an MIT professor puts firm numbers on the trend, finding a very real impact — although one that falls well short of a robot takeover. The study also finds that in the U.S., the impact of robots varies widely by industry and region, and may play a notable role in exacerbating income inequality.

“We find fairly major negative employment effects,” MIT economist Daron Acemoglu says, although he notes that the impact of the trend can be overstated.

From 1990 to 2007, the study shows, adding one additional robot per 1,000 workers reduced the national employment-to-population ratio by about 0.2 percent, with some areas of the U.S. affected far more than others.

This means each additional robot added in manufacturing replaced about 3.3 workers nationally, on average.

That increased use of robots in the workplace also lowered wages by roughly 0.4 percent during the same time period.

“We find negative wage effects, that workers are losing in terms of real wages in more affected areas, because robots are pretty good at competing against them,” Acemoglu says.

The paper, “Robots and Jobs: Evidence from U.S. Labor Markets,” appears in advance online form in the Journal of Political Economy. The authors are Acemoglu and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

Displaced in Detroit

To conduct the study, Acemoglu and Restrepo used data on 19 industries, compiled by the International Federation of Robotics (IFR), a Frankfurt-based industry group that keeps detailed statistics on robot deployments worldwide. The scholars combined that with U.S.-based data on population, employment, business, and wages, from the U.S. Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics, among other sources.

The researchers also compared robot deployment in the U.S. to that of other countries, finding it lags behind that of Europe. From 1993 to 2007, U.S. firms actually did introduce almost exactly one new robot per 1,000 workers; in Europe, firms introduced 1.6 new robots per 1,000 workers.

“Even though the U.S. is a technologically very advanced economy, in terms of industrial robots’ production and usage and innovation, it’s behind many other advanced economies,” Acemoglu says.

In the U.S., four manufacturing industries account for 70 percent of robots: automakers (38 percent of robots in use), electronics (15 percent), the plastics and chemical industry (10 percent), and metals manufacturers (7 percent).

Across the U.S., the study analyzed the impact of robots in 722 commuting zones in the continental U.S. — essentially metropolitan areas — and found considerable geographic variation in how intensively robots are utilized.

Given industry trends in robot deployment, the area of the country most affected is the seat of the automobile industry. Michigan has the highest concentration of robots in the workplace, with employment in Detroit, Lansing, and Saginaw affected more than anywhere else in the country.

“Different industries have different footprints in different places in the U.S.,” Acemoglu observes. “The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere].”

In commuting zones where robots were added to the workforce, each robot replaced about 6.6 jobs locally, the researchers found. However, in a subtle twist, adding robots in manufacturing benefits people in other industries and other areas of the country — by lowering the cost of goods, among other things. These national economic benefits are the reason the researchers calculated that adding one robot replaces 3.3 jobs for the country as a whole.

The inequality issue

In conducting the study, Acemoglu and Restrepo went to considerable lengths to see if the employment trends in robot-heavy areas might have been caused by other factors, such as trade policy, but they found no complicating empirical effects.

The study does suggest, however, that robots have a direct influence on income inequality. The manufacturing jobs they replace come from parts of the workforce without many other good employment options; as a result, there is a direct connection between automation in robot-using industries and sagging incomes among blue-collar workers.

“There are major distributional implications,” Acemoglu says. When robots are added to manufacturing plants, “The burden falls on the low-skill and especially middle-skill workers. That’s really an important part of our overall research [on robots], that automation actually is a much bigger part of the technological factors that have contributed to rising inequality over the last 30 years.”

So while claims about machines wiping out human work entirely may be overstated, the research by Acemoglu and Restrepo shows that the robot effect is a very real one in manufacturing, with significant social implications.

“It certainly won’t give any support to those who think robots are going to take all of our jobs,” Acemoglu says. “But it does imply that automation is a real force to be grappled with.”

Three from MIT elected to the National Academy of Sciences for 2020

On April 27, the National Academy of Sciences elected 120 new members and 26 international associates, including three professors from MIT — Abhijit Banerjee, Bonnie Berger, and Roger Summons — recognizing their “distinguished and continuing achievements in original research.” Current membership totals 2,403 active members and 501 international associates, including 190 Nobel Prize recipients.

The National Academy of Sciences is a private, nonprofit institution for scientific advancement established in 1863 by congressional charter and signed into law by President Abraham Lincoln. Together with the National Academy of Engineering and the National Academy of Medicine, the 157-year-old society provides science, engineering, and health policy advice to the federal government and other organizations.

Abhijit Banerjee is the Ford Foundation International Professor of Economics, and in 2003 cofounded, with Esther Duflo and Sendhil Mullainathan, the Abdul Latif Jameel Poverty Action Lab (J-PAL). Banerjee’s groundbreaking research focuses on development economics and the alleviation of global poverty, work for which he shared the 2019 Nobel Prize in Economic Sciences.

He continues to serve as a director of J-PAL; he is also a past president of the Bureau for Research and Economic Analysis of Development, a research associate of the National Bureau of Economic Research, a Center for Economic and Policy Research research fellow, an international research fellow of the Kiel Institute, and a fellow of the American Academy of Arts and Sciences and the Econometric Society. He has been a Guggenheim fellow, an Alfred P. Sloan fellow, and a winner of the Infosys Prize.   

Banerjee’s scholarship, in collaboration with fellow NAS member and MIT Professor Esther Duflo, emphasizes the importance of field work in antipoverty initiatives, in order to recreate the precision of randomized controlled trials (RCTs) and laboratory-style data within the complexity of ever-evolving social realities. The resulting RCT evidence reveals which poverty interventions really work, enabling governments, non-governmental organizations, donors, and the private sector to plan effective programs and policies for poverty alleviation. When Banerjee began his career, development economics was considered marginal in economic studies, a view that Banerjee’s work and high-profile achievements have helped to correct.

Bonnie Berger is the Simons Professor of Mathematics and holds a joint appointment in the Department of Electrical Engineering and Computer Science. She is the head of the Computation and Biology group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). She is also a faculty member of the Harvard-MIT Program in Health Sciences and Technology and an associate member of the Broad Institute of MIT and Harvard.

After beginning her career working in algorithms at MIT, Berger was one of the pioneering researchers in the area of computational molecular biology and, together with the many students she has mentored, has been instrumental in defining the field. Her work addresses biological and biomedical questions by using computation in support of or in place of laboratory procedures, with the goal of obtaining more accurate answers at greatly reduced cost. Combining genomic and health-related data from millions of patients will empower unprecedented insights into human health and disease risk. Berger transforms and creates techniques from algorithmic thinking to provide novel computational methods and software to enable biomedical data sharing and analysis at scale.

Berger is an elected fellow of the American Academy of Arts and Sciences, Association for Computing Machinery, International Society for Computational Biology (ISCB), American Institute of Medical and Biological Engineering, and American Mathematical Society. Recently she was recognized by ISCB with their Accomplishments by a Senior Scientist Award. She received the NIH Margaret Pittman Director’s Award, the SIAM Sonya Kovalevsky Lecture Prize, and an honorary doctorate from EPFL. Earlier in her career, she received an NSF Career Award, the Biophysical Society’s Dayhoff Award, and recognition as MIT Technology Review magazine’s inaugural TR100 top young innovators. She serves as vice president of ISCB, head of the steering committee for Research in Computational Molecular Biology, and member-at-large of the Section on Mathematics at American Association for the Advancement of Science (AAAS), as well as on multiple advisory committees and editorial boards.

Roger Summons is the Schlumberger Professor of Geobiology in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT.  

Working at the intersection of biogeochemistry, geobiology, and astrobiology, Summons examines the origins and co-evolution of Earth’s early life and the environment, beginning with the first geological and geochemical records and microbially dominated ecosystems. As an investigator in the Simons Collaboration on the Origins of Life, he’s particularly focused on the lipid chemistry of microbes important to understanding Earth through deep time, organic and isotopic indicators of climate change, and biomarkers in sediments and petroleum.

Summons applies findings from this research to understanding life on Earth and the search for it elsewhere in the universe, recently on Mars. As such, he has served on three committees of the National Research Council: Committee on Origin and Evolution of Life, the Committee on Limits of Life, and the Committee on Mars Astrobiology. As an emeritus member of the NASA Astrobiology Institute (NAI) Executive Council and the head of the MIT team of NAI called the Foundations of Complex Life: Evolution, Preservation, and Detection on Earth and Beyond, Summons helped integrate this research with international science communities. Here, his group investigated factors that led to the evolution of complex life by examining processes and conditions that preserve biological signatures. More recently, Summons has contributed to Mars rover missions Curiosity and Perseverance, providing expertise on the preservation of organic matter from different environments on Earth and the red planet.

Unsupervised Meta-Learning: Learning to Learn without Supervision

This post is cross-listed on the CMU ML blog.

The history of machine learning has largely been a story of increasing
abstraction. In the dawn of ML, researchers spent considerable effort
engineering features. As deep learning gained popularity, researchers then
shifted towards tuning the update rules and learning rates for their
optimizers. Recent research in meta-learning has climbed one level of
abstraction higher: many researchers now spend their days manually constructing
task distributions, from which they can automatically learn good optimizers.
What might be the next rung on this ladder? In this post we introduce theory
and algorithms for unsupervised meta-learning, where machine learning
algorithms themselves propose their own task distributions. Unsupervised
meta-learning further reduces the amount of human supervision required to solve
tasks, potentially inserting a new rung on this ladder of abstraction.
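
As a rough, hypothetical sketch of how an algorithm might propose its own tasks (loosely in the spirit of clustering-based task construction, and not necessarily the method described in the post), one can cluster unlabeled embeddings and treat the cluster assignments as labels for few-shot classification tasks:

```python
# Hypothetical task proposal from unlabeled data: cluster embeddings, then
# build N-way few-shot tasks from the cluster assignments.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 32))  # stand-in for learned unsupervised embeddings

# Propose pseudo-labels by clustering; each cluster acts as a "class".
pseudo_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(embeddings)


def sample_task(n_way=5, k_shot=1, k_query=5):
    """Build one N-way few-shot task from the pseudo-labels."""
    classes = rng.choice(np.unique(pseudo_labels), size=n_way, replace=False)
    support, query = [], []
    for task_label, c in enumerate(classes):
        idx = rng.choice(np.where(pseudo_labels == c)[0], size=k_shot + k_query, replace=False)
        support += [(embeddings[i], task_label) for i in idx[:k_shot]]
        query += [(embeddings[i], task_label) for i in idx[k_shot:]]
    return support, query


support, query = sample_task()   # tasks like this can feed any standard meta-learner
print(len(support), len(query))  # 5 support and 25 query examples
```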

A foolproof way to shrink deep learning models

As more artificial intelligence applications move to smartphones, deep learning models are getting smaller to allow apps to run faster and save battery power. Now, MIT researchers have a new and better way to compress models. 

It’s so simple that they unveiled it in a tweet last month: Train the model, prune its weakest connections, retrain the model at its fast, early training rate, and repeat, until the model is as tiny as you want. 
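
As a rough sketch of that loop (the model, data, and learning-rate schedule below are placeholders rather than the paper’s setup), magnitude pruning with torch.nn.utils.prune combined with retraining from the start of the original schedule looks roughly like this:

```python
# Sketch of train -> prune -> retrain-at-the-early-learning-rate, repeated.
import torch
import torch.nn.utils.prune as prune


def train(model, steps, lr_schedule, data):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr_schedule(0))
    for step in range(steps):
        for group in optimizer.param_groups:
            group["lr"] = lr_schedule(step)  # always follow the (early) schedule
        x, y = data()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


def data():
    return torch.randn(32, 100), torch.randint(0, 10, (32,))  # placeholder batch


lr_schedule = lambda step: 0.1 * (0.97 ** step)  # stand-in learning-rate schedule
model = torch.nn.Sequential(torch.nn.Linear(100, 300), torch.nn.ReLU(),
                            torch.nn.Linear(300, 10))

train(model, steps=100, lr_schedule=lr_schedule, data=data)      # 1. train
for _ in range(3):                                               # 4. repeat until small enough
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, "weight", amount=0.2)  # 2. prune weakest 20%
    train(model, steps=100, lr_schedule=lr_schedule, data=data)  # 3. retrain, rewinding the LR
```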

“That’s it,” says Alex Renda, a PhD student at MIT. “The standard things people do to prune their models are crazy complicated.” 

Renda discussed the technique when the International Conference on Learning Representations (ICLR) convened remotely this month. Renda is a co-author of the work with Jonathan Frankle, a fellow PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS), and Michael Carbin, an assistant professor of electrical engineering and computer science — all members of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

The search for a better compression technique grew out of Frankle and Carbin’s award-winning Lottery Ticket Hypothesis paper at ICLR last year. They showed that a deep neural network could perform with only one-tenth the number of connections if the right subnetwork was found early in training. Their revelation came as demand for computing power and energy to train ever larger deep learning models was increasing exponentially, a trend that continues to this day. Costs of that growth include a rise in planet-warming carbon emissions and a potential drop in innovation as researchers not affiliated with big tech companies compete for scarce computing resources. Everyday users are affected, too. Big AI models eat up mobile-phone bandwidth and battery power.

But at a colleague’s suggestion, Frankle decided to see what lessons the lottery ticket hypothesis might hold for pruning, a set of techniques for reducing the size of a neural network by removing unnecessary connections or neurons. Pruning algorithms had been around for decades, but the field saw a resurgence after the breakout success of neural networks at classifying images in the ImageNet competition. As models got bigger, with researchers adding on layers of artificial neurons to boost performance, others proposed techniques for whittling them down.

Song Han, now an assistant professor at MIT, was one pioneer. Building on a series of influential papers, Han unveiled a pruning algorithm he called AMC, or AutoML for model compression, that’s still the industry standard. Under Han’s technique, redundant neurons and connections are automatically removed, and the model is retrained to restore its initial accuracy. 

In response to Han’s work, Frankle recently suggested in an unpublished paper that results could be further improved by rewinding the smaller, pruned model to its initial parameters, or weights, and retraining the smaller model at its faster, initial rate. 

In the current ICLR study, the researchers realized that the model could simply be rewound to its early training rate without fiddling with any parameters. In any pruning regimen, the tinier a model gets, the less accurate it becomes. But when the researchers compared this new method to Han’s AMC or Frankle’s weight-rewinding methods, it performed better no matter how much the model shrank. 

It’s unclear why the pruning technique works as well as it does. The researchers say they will leave that question for others to answer. As for those who wish to try it, the algorithm is as easy to implement as other pruning methods, without time-consuming tuning, the researchers say. 

“It’s the pruning algorithm from the ‘Book,’” says Frankle. “It’s clear, generic, and drop-dead simple.”

Han, for his part, has now partly shifted focus from compressing AI models to channeling AI to design small, efficient models from the start. His newest method, Once for All, also debuts at ICLR. Of the new learning rate method, he says: “I’m happy to see new pruning and retraining techniques evolve, giving more people access to high-performing AI applications.”

Support for the study came from the Defense Advanced Research Projects Agency, Google, MIT-IBM Watson AI Lab, MIT Quest for Intelligence, and the U.S. Office of Naval Research.
