Optimizing Peptides in TensorFlow 2

A guest post by Somesh Mohapatra, Rafael Gómez-Bombarelli of MIT

Introduction

A polymer is a material made up of long repeating chains of molecules, like plastic or rubber. Polymers are made up of subunits (monomers) that are chemically bound to one another. The chemical composition and arrangement of monomers dictate the properties of the polymer. A few examples of polymers in everyday use are water bottles, non-stick teflon coatings, and adhesives.

Figure 1. Conceptually, you can think of Peptimizer as generating a sequence of amino acids, then predicting a property of the peptide, then optimizing the sequence.

Peptides are short polymer chains made up of amino acids, analogous to words composed of letters. They are widely used for therapeutic applications, such as the delivery of gene therapy by cell-penetrating peptides. Thanks to their modular chemistry, which is amenable to automated synthesis, and their expansive design space, peptides are increasingly preferred over more conventional small-molecule drugs, which are harder to synthesize. However, the vast sequence space (in terms of the amino acid arrangement) acts as an impediment to the design of functional peptides.

Synthetic accessibility, apart from functionality optimization, is a challenge. Peptides and other functional polymers with a precise arrangement of monomers are synthesized using methods such as flow chemistry. The synthesis involves monomer-by-monomer addition to a growing polymer chain. This process necessitates a high reaction yield for every step, thus making the accessibility of longer chains challenging.

Conventional approaches for optimizing functional polymers such as peptides in a lab environment involve heuristic, trial-and-error exploration of chemical space. However, the number of possible polymers rises exponentially as m^n, where m is the number of possible monomers and n is the polymer length. For example, with the 20 canonical amino acids and a 10-residue peptide, that is already 20^10, or roughly 10^13, candidate sequences.

As an alternative to doing an experiment in a lab, you can design functional polymers using machine learning. In our work on optimizing cell-penetrating activity and synthetic accessibility, we design peptides using Peptimizer, a machine learning framework based on TensorFlow. Conceptually, you can think of Peptimizer as generating a sequence of amino acids, then predicting a property of the peptide, then optimizing the sequence.

Peptimizer can be used to optimize both functionality (cell-penetrating activity as well as other properties) and the synthetic accessibility of polymers. We use topological representations of monomers (amino acids) and matrix representations of polymer chains (peptide sequences) to develop interpretable machine learning models, in which the gain in a predicted property can be attributed to a specific monomer and/or chemical substructure. The choice of representation and model architecture enables inference of biochemical design principles, such as monomer composition, sequence length, or net charge of the polymer, by using gradient-based attribution methods.
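
To make the representation concrete, here is a minimal sketch of how a peptide sequence could be turned into a matrix of per-residue topological fingerprints with RDKit. The amino-acid SMILES strings, fingerprint size, and radius below are illustrative assumptions rather than the exact settings used in Peptimizer.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Illustrative SMILES for a few amino acids (assumed; not Peptimizer's actual monomer set).
AMINO_ACID_SMILES = {
    "G": "NCC(=O)O",                   # glycine
    "A": "N[C@@H](C)C(=O)O",           # alanine
    "R": "N[C@@H](CCCNC(=N)N)C(=O)O",  # arginine
}

def monomer_fingerprint(smiles, n_bits=1024, radius=2):
    """Topological (Morgan) fingerprint of a single monomer as a numpy vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.float32)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

def sequence_to_matrix(sequence, n_bits=1024):
    """Stack per-residue fingerprints into a (sequence_length, n_bits) matrix."""
    return np.stack([monomer_fingerprint(AMINO_ACID_SMILES[aa], n_bits) for aa in sequence])

x = sequence_to_matrix("GRA")  # shape: (3, 1024)
```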

Key challenges for applying machine learning to advance functional peptide design include limited dataset size (usually less than 100 data points), choosing effective representations, and the ability to explain and interpret models.

Here, we use a dataset of peptides received from our experimental collaborators to demonstrate the utility of the codebase.

Optimization of functionality

Based on our work on designing novel and highly efficient cell-penetrating peptides, we present a framework for the discovery of functional polymers (Figure 1). The framework consists of a recurrent neural network generator, convolutional neural network predictor, and genetic algorithm optimizer.

The generator is trained on a dataset of peptide sequences using Teacher Forcing, and enables sampling of novel sequences similar to those in the training dataset. The predictor is trained on matrix representations of sequences and experimentally determined biological activity. The optimizer is seeded with sequences sampled from the generator, and it optimizes an objective function that combines the predicted activity with other parameters such as length and arginine content. The outcome is a list of optimized sequences with high predicted activity, which can then be validated in wet-lab experiments.
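
As a rough illustration of the predictor component, the sketch below trains a small 1-D convolutional network in TensorFlow 2 on sequence matrices (as built above) against measured activity values. The layer sizes and hyperparameters are placeholders, not the architecture used in Peptimizer.

```python
import tensorflow as tf

SEQ_LEN, N_BITS = 20, 1024  # assumed fixed sequence length and fingerprint size

predictor = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu",
                           input_shape=(SEQ_LEN, N_BITS)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted activity (regression target)
])
predictor.compile(optimizer="adam", loss="mse")

# x_train: (n_samples, SEQ_LEN, N_BITS) sequence matrices
# y_train: (n_samples,) experimentally measured activities
# predictor.fit(x_train, y_train, epochs=50, batch_size=16, validation_split=0.2)
```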

Each of these components can be accessed from the tutorial notebook to train on a custom dataset. The scripts for the individual components have been designed in a modular fashion and can be modified with relative ease.

Optimization of synthetic accessibility

Apart from functionality optimization, Peptimizer allows for the optimization of synthetic accessibility of a wild-type sequence (Figure 2). The framework consists of a multi-modal convolutional neural network predictor and a brute force optimizer. The predictor is trained over experimental synthesis parameters such as pre-synthesized chain, incoming monomer, temperature, flow rate, and catalysts. The optimizer evaluates single-point mutants of the wild-type sequence for higher theoretical yield.

The choice of a brute-force optimizer for synthetic accessibility is based on the linearly growing sequence space (m × n) spanned by single-point variations of the wild-type sequence. This space is relatively small compared with the exponentially growing sequence space (m^n) encountered when optimizing functionality.
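
A minimal sketch of such a brute-force search: enumerate every single-point mutant of the wild-type sequence (roughly m × n candidates) and rank them with a scoring function. The predict_yield function below is a hypothetical placeholder for the trained synthesis predictor.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 canonical amino acids

def single_point_mutants(wild_type):
    """Yield every sequence that differs from the wild type at exactly one position."""
    for i, original in enumerate(wild_type):
        for aa in AMINO_ACIDS:
            if aa != original:
                yield wild_type[:i] + aa + wild_type[i + 1:]

def predict_yield(sequence):
    """Hypothetical stand-in for the multi-modal predictor's theoretical yield."""
    return 0.0  # replace with a call to the trained model

def best_mutants(wild_type, top_k=10):
    """Rank all single-point mutants by predicted yield and keep the top_k."""
    mutants = list(single_point_mutants(wild_type))
    return sorted(mutants, key=predict_yield, reverse=True)[:top_k]
```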

This framework may be adapted for other stepwise chemical reaction platforms with in-line monitoring by specifying the different input and output variables and respective data types. It can be accessed using a tutorial notebook.

Figure 2. Outline of synthetic accessibility optimization.

Interpretability of models

A key feature of Peptimizer is gradient-based attribution for interpreting model predictions (Figure 3). Taking the gradient of the predicted activity with respect to the input sequence representation, we visualize both positive and negative activations for each input feature. Fingerprint indices corresponding to substructures that contribute positively to the activity have higher activation in the heatmap. Averaging this activation heatmap over sequence positions yields a profile along the topological-fingerprint axis, highlighting key substructures or chemical motifs that contribute positively or negatively to the predicted activity. Averaging over the fingerprint axis instead, we obtain the relative contribution of each monomer position to the predicted functionality of the polymer. These visualizations provide in-depth insight into sequence-activity relationships and add to the contemporary understanding of biochemical design principles.
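
The attribution itself can be computed with a few lines of TensorFlow. The sketch below assumes the predictor model and the sequence matrix built in the earlier snippets, and uses a plain input-gradient saliency, which may differ from the exact attribution variant used in Peptimizer.

```python
import tensorflow as tf

def attribution_maps(predictor, x):
    """Gradient of the predicted activity w.r.t. a (seq_len, n_bits) input matrix."""
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)  # add a batch dimension
    with tf.GradientTape() as tape:
        tape.watch(x)
        activity = predictor(x)
    grads = tape.gradient(activity, x)[0]             # (seq_len, n_bits) heatmap
    per_substructure = tf.reduce_mean(grads, axis=0)  # collapse positions -> fingerprint profile
    per_position = tf.reduce_mean(grads, axis=1)      # collapse fingerprint bits -> monomer profile
    return grads, per_substructure, per_position
```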

Figure 3. (left) Positive gradient activation heatmap, and (right) activated chemical substructure, for functional peptide sequence.

Outlook

Optimization of functional polymers using Peptimizer can inform experimental strategies and lead to significant savings in time and cost. We believe that the tutorial notebooks will help bench scientists in chemistry, materials science, and the broader field of sequence design to run machine learning models over custom datasets, such as Khazana. In addition, the attribution methods will provide insight into high-dimensional sequence-activity relationships and help elucidate design principles.

Experimental collaboration

This work was done in collaboration with the lab of Bradley Pentelute (Department of Chemistry, MIT). The collaborators for the optimization of functionality and synthetic accessibility were Carly Schissel and Dr. Nina Hartrampf, respectively. We thank them for providing the dataset, experimental validation, and the discussion during the development of the models.

Acknowledgment

We would like to acknowledge the support of Thiru Palanisamy and Josh Gordon at Google for their help with the blog post collaboration and for providing active feedback.

The sound of India’s AI potential

On August 15, India’s Independence Day, it’s customary to sing Jana Gana Mana: the Indian national anthem, originally composed by the poet Rabindranath Tagore and adopted as the anthem after India gained full independence.  

This year, together with Prasar Bharati and Virtual Bharat, we offered Indians a new take on the familiar with Sounds of India, an AI-powered web app. Using the app, you sing Jana Gana Mana into your phone, karaoke-style, and it transforms your voice into one of three traditional Indian instruments. The day culminated in a rendition of the national anthem, combining many of the voices that Indians submitted through the app.

Sounds of India GIF

The Sounds of India experiment was made possible by machine learning models built with Google’s TensorFlow platform to convert sounds into musical instruments (in this case, the Bansuri, the Shehnai, and the Sarangi). 

It was a fun, fresh way for Indians to express their national pride, and showcase the traditions of Indian classical music. But it’s also an opportunity to think about AI’s bigger potential for India’s future—something Google is increasingly focused on. 

Last year, we started Google Research India, an AI lab based in Bangalore, to advance AI research and apply AI to solving some of India’s biggest challenges. We reinforced that commitment last month, announcing that leveraging technology and AI for social good would be one of the four focus areas for our $10 billion Google for India Digitization Fund.

Supporting Indians’ health and wellbeing

In healthcare, we’re using AI to help people manage their health, focusing on wellbeing and a mobile app for cardio-vascular disease prevention. We’re also building on our efforts to apply AI in screening for the eye disease diabetic retinopathy, working with partners like Aravind Eye Hospital and Sankara Nethralaya. 

Improving environmental protection and forecasting

Our flood forecasting tools are already being used to send alerts to hundreds of millions of people, and we’re working on computer vision techniques that can analyze satellite imagery to assist with restoring water bodies and protecting forest cover.  

Harnessing AI for social good

As part of our commitment to the broader Indian research community, we’re supporting researchers and NGOs using AI to make further progress on health and environmental problems. Nonprofit ARMMAN and a team from the Indian Institute of Technology Madras are collaborating on a project to predict the risk of expectant mothers dropping out of healthcare programs, while other projects aim to reduce the risk of HIV/AIDS, minimize human-wildlife conflict, and improve water release from dams.

One promising initiative is NGO Wadhwani AI’s work using AI to provide timely, local pest management advice to farmers. With a grant from Google.org’s AI Impact Challenge—and support from our Launchpad Accelerator— Wadhwani AI has started to roll out their solution to detect bollworm, helping farmers monitor pests, take action, and improve crop yield. 

Independence Day is always a time to reflect on both India’s past and its future. We’re looking forward to building on our progress so far, and working with our partners to bring the benefits of AI to many more Indians in years to come.


Fellowship 101: Facebook Fellow Daricia Wilkinson outlines the basics for PhDs

The Facebook Fellowship Program supports talented PhD students engaged in innovative research in any year of their PhD study. Applications for the 2021 Fellowship cohort recently opened on August 10, and they will close on October 1.

Apply
“Each year, the program gets more and more competitive. Last year, we received around 1,875 applications — double the number we received the year before,” says Sharon Ayalde, Program Manager for the Facebook Fellowship Program. “We’re looking forward to the high quality of applications that we see every year.”

To prepare for this year’s Fellowship applications, we connected with Daricia Wilkinson, 2019 Fellow in UX/Instagram, to discuss the fellowship basics. Wilkinson is a PhD student in the Human-Centered Computing program at Clemson University, advised by Dr. Bart Knijnenburg. Her research interests are at the intersection of people and technology, and she is passionate about solving problems from a user-centered perspective.

Inspired by Wilkinson’s Medium post on how to put together a successful PhD fellowship application, this Q&A outlines the most common questions Wilkinson receives about fellowships, research statements, and the application process.

Q: How do fellowships work?

Daricia Wilkinson: If you are an incoming PhD student, you will learn about assistantships from your university (either teaching or research assistantship) that support your tuition and stipend. In contrast, fellowships are a source of external funding that could be offered by a governmental organization or a company. However, not all fellowships are the same. There are some key differences that can help guide you when deciding which fellowships to apply to:

  • Fellowship amount: You will probably recognize this fairly early, but the amount being offered could vary significantly. The typical range for PhD fellowships is $10,000 to $40,000.
  • Type of support: The support given could contribute toward covering tuition, your stipend, or travel. Some fellowships may only offer one type of support. It should also be noted that the money is sometimes paid directly to the school and not to you. That might make it easier or more difficult depending on your situation.
  • Duration: You may be offered the amount in a one-time payment, or it may be offered over a set number of years.
  • Additional offers: Some fellowship programs are more robust and hands-on than others. Some offer the opportunity to interview for internships, and programs that could lead to a hiring opportunity usually want you to know this. I’d recommend taking some time to comb through each program’s FAQ to see whether this is an option. Beyond internships, some organizations allow you to collaborate and network with their research teams, which could be an invaluable experience.

Q: How is the Facebook Fellowship different from others?

DW: First, the Facebook Fellowship is very prestigious. Unlike many other fellowship opportunities, the Facebook Fellowship offers a very generous level of support. Facebook pays your tuition, and you are provided with a very competitive stipend of $42,000, meant for living costs and travel support.

Second, the Fellowship offers incredibly valuable networking opportunities. The Facebook Fellowship Summit, hosted virtually this year, is one of these opportunities. At the summit, Fellows are invited to a paid trip to Facebook headquarters in Menlo Park, where they can present their research and meet other Fellows as well as top Facebook researchers.

Third, and it seems not many people know this, the program is open to PhD students from all around the world, with no limit per university.

Q: What kind of research is Facebook interested in supporting?

DW: Research at Facebook is typically grounded in real-world problems. Research teams work on cutting-edge topics with a practical focus, which ultimately means that focus areas could include multiple disciplines. Consider that the Facebook family includes Instagram and WhatsApp (as well as others), which could result in various products within human-computer interaction, computer vision, privacy, or data science. For a more detailed list, I would recommend looking at the list of available fellowships on the Fellowship page.

Q: How do you write a research statement?

DW: Start by taking some time to really think about the topic you are proposing. This involves reading up on the latest publications, but also drawing inspiration from real-world problems around you. In your first draft, do not focus on the word limit. Rather, try to effectively communicate the problem and why it matters. Afterward, you can work on reframing and then editing to adhere to the word limit. Generally, I recommend following the structure below:

Paragraph 1: Introduction

  • Present the problem
  • Identify who this impacts and why this is relevant in general and more specifically relevant to the company
  • One sentence summarizing your idea/approach

Paragraph 2: Body

  • What you plan to do
  • How you plan to do it
  • What you’ve done to show you can do this (optional)

Paragraph 3: Conclusion

  • Contribution to the community (academic and public)
  • Relevance to the mission/values of the company

Q: What advice would you provide with regard to the application process?

DW: Having ample time always works in your favor. Therefore, starting to plan earlier rather than later would be in your best interest. However, don’t let this discourage you if you find out about an opportunity close to its deadline. My high-level advice would be the following:

  • Ensure that your research statement is on a topic you are passionate about and that you clearly communicate that passion. When I applied for fellowships in 2018, I had two complete sets of applications prepared for submission. Both were well-motivated and important work. In the end, my adviser recommended that I choose the one that I was without a doubt most passionate about. Ultimately, that application was successful. Being able to communicate your passion could help to convince others why your research direction is worthy of a fellowship award.
  • Apply to multiple fellowships. I could insert multiple cliches to stress that “it’s a numbers game” and that “you shouldn’t place all your eggs in one basket.” Fellowships are very competitive. I applied twice before being awarded the Facebook Fellowship, and I received Google’s Women TechMakers Scholarship on the third try. I recommend creating a document or spreadsheet with possible options to help you manage.
  • Feel free to reach out to past fellows. I’ve had numerous students reach out to me for advice, and I try to provide as much help as I can. You could also look at the type of research that is normally conducted by past fellows to get a sense of what that organization might be interested in. However, keep in mind that some companies like Facebook are rapidly evolving and interests might change year to year.

To learn more about Wilkinson’s background, research interests, publications, and speaking experiences, visit her Fellowship profile.

The post Fellowship 101: Facebook Fellow Daricia Wilkinson outlines the basics for PhDs appeared first on Facebook Research.


Language-Agnostic BERT Sentence Embedding

Posted by Yinfei Yang and Fangxiaoyu Feng, Software Engineers, Google Research

A multilingual embedding model is a powerful tool that encodes text from different languages into a shared embedding space, enabling it to be applied to a range of downstream tasks, like text classification, clustering, and others, while also leveraging semantic information for language understanding. Existing approaches for generating such embeddings, like LASER or m~USE, rely on parallel data, mapping a sentence from one language directly to another language in order to encourage consistency between the sentence embeddings. While these existing multilingual approaches yield good overall performance across a number of languages, they often underperform on high-resource languages compared to dedicated bilingual models, which can leverage approaches like translation ranking tasks with translation pairs as training data to obtain more closely aligned representations. Further, due to limited model capacity and the often poor quality of training data for low-resource languages, it can be difficult to extend multilingual models to support a larger number of languages while maintaining good performance.

Illustration of a multilingual embedding space.

Recent efforts to improve language models include the development of masked language model (MLM) pre-training, such as that used by BERT, ALBERT and RoBERTa. This approach has led to exceptional gains across a wide range of languages and a variety of natural language processing tasks since it only requires monolingual text. In addition, MLM pre-training has been extended to the multilingual setting by modifying MLM training to include concatenated translation pairs, known as translation language modeling (TLM), or by simply introducing pre-training data from multiple languages. However, while the internal model representations learned during MLM and TLM training are helpful when fine-tuning on downstream tasks, without a sentence level objective, they do not directly produce sentence embeddings, which are critical for translation tasks.

In “Language-agnostic BERT Sentence Embedding”, we present a multilingual BERT embedding model, called LaBSE, that produces language-agnostic cross-lingual sentence embeddings for 109 languages. The model is trained on 17 billion monolingual sentences and 6 billion bilingual sentence pairs using MLM and TLM pre-training, resulting in a model that is effective even on low-resource languages for which there is no data available during training. Further, the model establishes a new state of the art on multiple parallel text (a.k.a. bitext) retrieval tasks. We have released the pre-trained model to the community through tfhub, which includes modules that can be used as-is or can be fine-tuned using domain-specific data.
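
As a quick illustration, the snippet below shows one way the released model could be loaded from TF Hub and used to embed sentences in any supported language. The module handles and the "default" output key follow the published TF Hub documentation and should be treated as assumptions; check the model page for the exact interface of the version you use.

```python
import tensorflow as tf
import tensorflow_hub as hub

# TF Hub handles (assumed; see the LaBSE page on tfhub.dev for current versions).
preprocessor = hub.KerasLayer(
    "https://tfhub.dev/google/universal-sentence-encoder-cmlm/multilingual-preprocess/2")
encoder = hub.KerasLayer("https://tfhub.dev/google/LaBSE/2")

def embed(sentences):
    """Return L2-normalized LaBSE sentence embeddings for a list of strings."""
    embeddings = encoder(preprocessor(tf.constant(sentences)))["default"]
    return tf.nn.l2_normalize(embeddings, axis=-1)

english = embed(["dog", "Puppies are nice."])
italian = embed(["cane", "I cuccioli sono carini."])
similarity = tf.matmul(english, italian, transpose_b=True)  # cosine similarities
```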

The collection of the training data for 109 supported languages

The Model
In previous work, we proposed the use of a translation ranking task to learn a multilingual sentence embedding space. This approach tasks the model with ranking the true translation over a collection of sentences in the target language, given a sentence in the source language. The translation ranking task is trained using a dual encoder architecture with a shared transformer encoder. The resulting bilingual models achieved state-of-the-art performance on multiple parallel text retrieval tasks (including United Nations and BUCC). However, performance suffered when the bilingual models were extended to support multiple languages (16 languages, in our test case) due to limitations in model capacity, vocabulary coverage, training data quality, and more.

Translation ranking task. Given a sentence in a given source language, the task is to find the true translation over a collection of sentences in the target language.

For LaBSE, we leverage recent advances in language model pre-training, including MLM and TLM, on a BERT-like architecture and follow this with fine-tuning on a translation ranking task. A 12-layer transformer with a 500k-token vocabulary, pre-trained using MLM and TLM on 109 languages, is used to increase model capacity and vocabulary coverage. The resulting LaBSE model offers extended support for 109 languages in a single model.

The dual encoder architecture, in which the source and target text are encoded separately using a shared transformer embedding network. The translation ranking task is applied, forcing texts that paraphrase each other to have similar representations. The transformer embedding network is initialized from a BERT checkpoint trained on MLM and TLM tasks.
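
The ranking objective can be written compactly as an in-batch softmax over the similarity matrix of paired source and target embeddings. The sketch below is a simplified version of such a loss (omitting refinements such as the additive margin used in some translation ranking models) and assumes both batches of embeddings come from the shared encoder.

```python
import tensorflow as tf

def translation_ranking_loss(source_emb, target_emb, scale=10.0):
    """In-batch softmax ranking loss for a dual encoder.

    source_emb, target_emb: (batch, dim) embeddings of translation pairs,
    where row i of each tensor is the translation of row i of the other.
    """
    source_emb = tf.nn.l2_normalize(source_emb, axis=-1)
    target_emb = tf.nn.l2_normalize(target_emb, axis=-1)
    scores = scale * tf.matmul(source_emb, target_emb, transpose_b=True)  # (batch, batch)
    labels = tf.range(tf.shape(scores)[0])  # the true translation sits on the diagonal
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=scores)
    return tf.reduce_mean(loss)
```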

Performance on Cross-lingual Text Retrieval
We evaluate the proposed model using the Tatoeba corpus, a dataset consisting of up to 1,000 English-aligned sentence pairs for 112 languages. For more than 30 of the languages in the dataset, the model has no training data. The model is tasked with finding the nearest neighbor translation for a given sentence, which it calculates using the cosine distance.
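
Evaluation on Tatoeba then reduces to nearest-neighbor retrieval: for each non-English sentence, find the English sentence with the highest cosine similarity and check whether it is the aligned translation. A minimal sketch, assuming pre-computed embedding matrices:

```python
import numpy as np

def tatoeba_accuracy(source_emb, english_emb):
    """Fraction of sentences whose nearest English neighbor is the aligned translation.

    source_emb, english_emb: (n, dim) arrays where row i of source_emb is aligned
    with row i of english_emb.
    """
    src = source_emb / np.linalg.norm(source_emb, axis=1, keepdims=True)
    eng = english_emb / np.linalg.norm(english_emb, axis=1, keepdims=True)
    nearest = np.argmax(src @ eng.T, axis=1)  # cosine similarity = dot product of unit vectors
    return float(np.mean(nearest == np.arange(len(src))))
```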

To understand the performance of the model for languages at the head or tail of the training data distribution, we divide the set of languages into several groups and compute the average accuracy for each set. The first 14-language group is selected from the languages supported by m~USE, which cover the languages from the head of the distribution (head languages). We also evaluate a second language group composed of 36 languages from the XTREME benchmark. The third 82-language group, selected from the languages covered by the LASER training data, includes many languages from the tail of the distribution (tail languages). Finally, we compute the average accuracy for all languages.

The table below presents the average accuracy achieved by LaBSE, compared to the m~USE and LASER models, for each language group. As expected, all models perform strongly on the 14-language group that covers most head languages. With more languages included, the average accuracy for both LASER and LaBSE declines. However, the decline for LaBSE with an increasing number of languages is much smaller, and it significantly outperforms LASER, particularly when the full distribution of 112 languages is included (83.7% vs. 65.5% accuracy).

Model    14 Langs   36 Langs   82 Langs   All Langs
m~USE*   93.9       -          -          -
LASER    95.3       84.4       75.9       65.5
LaBSE    95.3       95.0       87.3       83.7
Average Accuracy (%) on Tatoeba Datasets. The “14 Langs” group consists of languages supported by m~USE; the “36 Langs” group includes languages selected by XTREME; and the “82 Langs” group represents languages covered by the LASER model. The “All Langs” group includes all languages supported by Tatoeba.
* The m~USE model comes in two varieties, one built on a convolutional neural network architecture and the other a Transformer-like architecture. Here, we compare only to the Transformer version.

Support to Unsupported Languages
The average performance of all languages included in Tatoeba is very promising. Interestingly, LaBSE even performs relatively well for many of the 30+ Tatoeba languages for which it has no training data (see below). For one third of these languages the LaBSE accuracy is higher than 75% and only 8 have accuracy lower than 25%, indicating very strong transfer performance to languages without training data. Such positive language transfer is only possible due to the massively multilingual nature of LaBSE.

LaBSE accuracy for the subset of Tatoeba languages (represented with ISO 639-1/639-2 codes) for which there was no training data.

Mining Parallel Text from the Web
LaBSE can be used for mining parallel text (bitext) from web-scale data. For example, we applied LaBSE to CommonCrawl, a large-scale monolingual corpus, to process 560 million Chinese and 330 million German sentences for the extraction of parallel text. Each Chinese and German sentence is encoded using the LaBSE model, and the resulting embedding is used to find a potential translation from a pool of 7.7 billion English sentences pre-processed and encoded by the model. An approximate nearest-neighbor search is employed to quickly search through the high-dimensional sentence embeddings. After simple filtering, the model returns 261M and 104M potential parallel pairs for English-Chinese and English-German, respectively. An NMT model trained on the mined data reaches BLEU scores of 35.7 and 27.2 on the WMT translation tasks (wmt17 for English-to-Chinese and wmt14 for English-to-German). The performance is only a few points away from current state-of-the-art models trained on high-quality parallel data.
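
Conceptually, the mining step is a large-scale nearest-neighbor search over sentence embeddings. The sketch below uses FAISS as one possible search backend with a flat (exact) inner-product index; at the scale described above an approximate index would be used instead, and the similarity threshold is an illustrative assumption rather than the value used in the paper.

```python
import faiss
import numpy as np

def mine_candidate_pairs(source_emb, english_emb, threshold=0.6):
    """Return (source_index, english_index, score) candidates above a similarity threshold.

    Both embedding matrices are assumed to be L2-normalized, so inner product = cosine.
    """
    index = faiss.IndexFlatIP(english_emb.shape[1])  # exact search; swap in IVF/HNSW at web scale
    index.add(english_emb.astype("float32"))
    scores, ids = index.search(source_emb.astype("float32"), 1)  # top-1 English neighbor
    return [(i, int(ids[i, 0]), float(scores[i, 0]))
            for i in range(len(source_emb)) if scores[i, 0] >= threshold]
```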

Conclusion
We’re excited to share this research, and the model, with the community. The pre-trained model is released on tfhub to support further research in this direction and possible downstream applications. We also believe that what we’re showing here is just the beginning, and there are more important research problems to be addressed, such as building better models to support all languages.

Acknowledgements
The core team includes Wei Wang, Naveen Arivazhagan, and Daniel Cer. We would like to thank the Google Research Language team, along with our partners in other Google groups, for their feedback and suggestions. Special thanks go to Sidharth Mudgal and Jax Law for help with data processing, as well as Jialu Liu, Tianqi Liu, Chen Chen, and Anosh Raj for help with BERT pre-training.


Rewriting the rules of machine-generated art

Horses don’t normally wear hats, and deep generative models, or GANs, don’t normally follow rules laid out by human programmers. But a new tool developed at MIT lets anyone go into a GAN and tell the model, like a coder, to put hats on the heads of the horses it draws. 

In a new study appearing at the European Conference on Computer Vision this month, researchers show that the deep layers of neural networks can be edited, like so many lines of code, to generate surprising images no one has seen before.

“GANs are incredible artists, but they’re confined to imitating the data they see,” says the study’s lead author, David Bau, a PhD student at MIT. “If we can rewrite the rules of a GAN directly, the only limit is human imagination.”

Generative adversarial networks, or GANs, pit two neural networks against each other to create hyper-realistic images and sounds. One neural network, the generator, learns to mimic the faces it sees in photos, or the words it hears spoken. A second network, the discriminator, compares the generator’s outputs to the original. The generator then iteratively builds on the discriminator’s feedback until its fabricated images and sounds are convincing enough to pass for real.

GANs have captivated artificial intelligence researchers for their ability to create representations that are stunningly lifelike and, at times, deeply bizarre, from a receding cat that melts into a pile of fur to a wedding dress standing in a church door as if abandoned by the bride. Like most deep learning models, GANs depend on massive datasets to learn from. The more examples they see, the better they get at mimicking them. 

But the new study suggests that big datasets are not essential. If you understand how a model is wired, says Bau, you can edit the numerical weights in its layers to get the behavior you desire, even if no literal example exists. No dataset? No problem. Just create your own.

“We’re like prisoners to our training data,” he says. “GANs only learn patterns that are already in our data. But here I can manipulate a condition in the model to create horses with hats. It’s like editing a genetic sequence to create something entirely new, like inserting the DNA of a firefly into a plant to make it glow in the dark.”

Bau was a software engineer at Google, and had led the development of Google Hangouts and Google Image Search, when he decided to go back to school. The field of deep learning was exploding and he wanted to pursue foundational questions in computer science. Hoping to learn how to build transparent systems that would empower users, he joined the lab of MIT Professor Antonio Torralba. There, he began probing deep nets and their millions of mathematical operations to understand how they represent the world.

Bau showed that you could slice into a GAN, like layer cake, to isolate the artificial neurons that had learned to draw a particular feature, like a tree, and switch them off to make the tree disappear. With this insight, Bau helped create GANPaint, a tool that lets users add and remove features like doors and clouds from a picture. In the process, he discovered that GANs have a stubborn streak: they wouldn’t let you draw doors in the sky.
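
The “switch them off” step amounts to zeroing the feature-map channels (units) associated with a concept at one layer of the generator and letting the rest of the forward pass continue. A minimal sketch, assuming a simple tf.keras.Sequential generator and an already-identified set of channel indices (the original dissection work used its own tooling, so this is only illustrative):

```python
import numpy as np
import tensorflow as tf

def generate_with_ablation(generator, layer_index, channels, z):
    """Run the generator, zeroing selected channels at one intermediate layer.

    channels: indices of feature-map channels (e.g., the units that draw trees).
    """
    h = z
    for i, layer in enumerate(generator.layers):
        h = layer(h)
        if i == layer_index:
            mask = np.ones(h.shape[-1], dtype="float32")
            mask[list(channels)] = 0.0
            h = h * mask  # broadcasts over spatial dimensions, silencing the chosen units
    return h

# Usage sketch (hypothetical layer and channel indices):
# images = generate_with_ablation(generator, layer_index=4, channels=[12, 87],
#                                 z=tf.random.normal([1, 128]))
```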

“It had some rule that seemed to say, ‘doors don’t go there,’” he says. “That’s fascinating, we thought. It’s like an ‘if’ statement in a program. To me, it was a clear signal that the network had some kind of inner logic.”

Over several sleepless nights, Bau ran experiments and picked through the layers of his models for the equivalent of a conditional statement. Finally, it dawned on him. “The neural network has different memory banks that function as a set of general rules, relating one set of learned patterns to another,” he says. “I realized that if you could identify one line of memory, you could write a new memory into it.” 

In a short version of his ECCV talk, Bau demonstrates how to edit the model and rewrite memories using an intuitive interface he designed. He copies a tree from one image and pastes it into another, placing it, improbably, on a building tower. The model then churns out enough pictures of tree-sprouting towers to fill a family photo album. With a few more clicks, Bau transfers hats from human riders to their horses, and wipes away a reflection of light from a kitchen countertop.

The researchers hypothesize that each layer of a deep net acts as an associative memory, formed after repeated exposure to similar examples. Fed enough pictures of doors and clouds, for example, the model learns that doors are entryways to buildings, and clouds float in the sky. The model effectively memorizes a set of rules for understanding the world.

The effect is especially striking when GANs manipulate light. When GANPaint added windows to a room, for example, the model automatically added nearby reflections. It’s as if the model had an intuitive grasp of physics and how light should behave on object surfaces. “Even this relationship suggests that associations learned from data can be stored as lines of memory, and not only located but reversed,” says Torralba, the study’s senior author. 

GAN editing has its limitations. It’s not easy to identify all of the neurons corresponding to objects and animals the model renders, the researchers say. Some rules also appear edit-proof; some changes the researchers tried to make failed to execute.

Still, the tool has immediate applications in computer graphics, where GANs are widely studied, and in training expert AI systems to recognize rare features and events through data augmentation. The tool also brings researchers closer to understanding how GANs learn visual concepts with minimal human guidance. If the models learn by imitating what they see, forming associations in the process, they may be a springboard for new kinds of machine learning applications. 

The study’s other authors are Steven Liu, Tongzhou Wang, and Jun-Yan Zhu.


Wherever You Go with Chromebook, GeForce NOW Lets You Bring Your Games with You 

Chromebooks, like GeForce NOW, are ready when you are.

With today’s beta launch on ChromeOS, Chromebooks now wield the power to play PC games using GeForce NOW.

Chromebook users join the millions on PC, Mac, SHIELD and Android mobile devices already playing their favorite games on our cloud gaming service with GeForce performance.

Getting started is simple. Head to play.geforcenow.com and log in with your GeForce NOW account. Signing up is easy: just choose either a paid Founders membership or a free account.

Right now is a great time to join. We just launched a six-month Founders membership that includes a Hyper Scape Season One Battle Pass token and exclusive Hyper Scape in-game content for $24.95. That’s a $64.94 value.

Once logged in, you’re only a couple clicks away from streaming a massive catalog of games. For the best experience, you’ll want to make those clicks with a USB mouse.

Distance Learning by Day, Distance Gaming by Night

Some students are heading back to school. Others are distance learning from home. However they’re learning, more students than ever rely on Chromebooks.

That’s because Chromebooks are great computers for studying. They’re fast, simple and secure devices that help you stay productive and connected.

Now, those same Chromebooks transform, instantly, into GeForce-powered distance gaming rigs, thanks to GeForce NOW.

Your Games on All Your Devices

Millions of GeForce NOW members play with and against their friends — no matter which platform they’re streaming on, whether that’s PC, Mac, Android or, now, Chromebooks.

That’s because when you stream games using GeForce NOW, you’re playing the PC version from digital stores like Steam, Epic Games Store and Ubisoft Uplay.

This is great for developers, who can bring their games to the cloud at launch, without adding development cycles.

And it’s great for the millions of GeForce NOW members. They’re tapping into an existing ecosystem anytime they stream one of more than 650 games instantly. That includes over 70 of the most-played free-to-play games.

When games like CD Projekt Red’s Cyberpunk 2077 come out later this year, members will be able to play using GeForce NOW servers the same day on their Chromebook.

Anywhere You Go

Chromebooks, of course, are lightweight devices that go where you do. From home to work to school. Or from your bedroom to the living room.

GeForce NOW is the perfect Chromebook companion. Simply plug in a mouse and go. Our beta release gives Chromebook owners the power to play their favorite PC games.

New to GeForce NOW? Check out our GeForce NOW Quick Start Guide to get gaming instantly.

Take game progress or character level-ups from a desktop to a phone and then onto Chromebook. You’re playing the games you own from your digital game store accounts. So your progress goes with you.

More PC Gaming Features Heading to the Cloud 

The heart of GeForce NOW is PC gaming. We continue to tap into the PC ecosystem to bring more PC features to the cloud.

PC gamers are intimately familiar with Steam. Many have massive libraries from the popular PC game store. To support them, we just launched Steam Game Sync so they can sync games from their Steam library with their library in GeForce NOW. It’s quickly become one of our most popular features for members playing on PC and Mac.

Soon, Chromebook owners will be able to take advantage of the feature, too.

Over the past few months, we’ve added two GeForce Experience features. Highlights delivers automatic video capture so you can share your best moments, and Freestyle provides gamers the ability to customize a game’s look. In the weeks ahead, we’ll add support for Ansel — a powerful in-game camera that lets gamers capture professional-grade screenshots. These features are currently only available on PC and Mac. Look for them to come to Chromebooks in future updates.

More games. More platforms. Legendary GeForce performance. And now on Chromebooks. That’s the power to play that only GeForce NOW can deliver.

The post Wherever You Go with Chromebook, GeForce NOW Lets You Bring Your Games with You  appeared first on The Official NVIDIA Blog.
