Up Your Creative Game: GeForce RTX 30 Series GPUs Amp Up Performance

Creative workflows are riddled with hurry up and wait.

GeForce RTX 30 Series GPUs, powered by our second-generation RTX architecture, aim to reduce the wait, giving creators more time to focus on what matters: creating amazing content.

These new graphics cards deliver faster ray tracing and the next generation of AI-powered tools, turning the tedious tasks in creative workflows into things of the past.

With up to 24GB of new, blazing-fast GDDR6X memory, they’re capable of powering the most demanding multi-app workflows, 8K HDR video editing and working with extra-large 3D models.

Plus, two new apps, available to all NVIDIA RTX users, are joining NVIDIA Studio. NVIDIA Broadcast turns any room into a home broadcast studio with AI-enhanced video and voice comms. NVIDIA Omniverse Machinima enables creators to tell amazing stories with video game assets, animated by AI.

Ray Tracing at the Speed of Light

The next generation of dedicated ray tracing cores and improved CUDA performance on GeForce RTX 30 Series GPUs speeds up 3D rendering times by up to 2x across top renderers.

Chart showing the relative performance of GeForce RTX 30 Series GPUs in creative apps.

The RT Cores also feature new hardware acceleration for ray-traced motion blur rendering, a common but computationally intensive technique used to give 3D visuals a cinematic flair. Until now, it required either an inaccurate motion vector-based post-process or an accurate but time-consuming rendering step. With RTX 30 Series GPUs and RT Core-accelerated apps like Blender Cycles, creators can enjoy up to 5x faster motion blur rendering than on prior-generation RTX GPUs.

Motion blur effect rendered in Blender Cycles.

Next-Gen AI Means Less Wait and More Create

GeForce RTX 30 Series GPUs are enabling the next wave of AI-powered creative features, reducing or even eliminating repetitive creative tasks such as image denoising, reframing and retiming of video, and creation of textures and materials.

Along with the release of our next-generation RTX GPUs, NVIDIA is bringing DLSS — real-time super resolution that uses the power of AI to boost frame rates — to creative apps. D5 Render and SheenCity Mars are the first design apps to add DLSS support, enabling crisp, real-time exploration of designs.

Render of a living space created in D5 Render on GeForce RTX 30 Series GPUs. Image courtesy of D5 Render.

Hardware That Zooms

Increasingly complex digital content creation requires hardware that can run multiple apps concurrently, which in turn demands a large frame buffer on the GPU. Without sufficient memory, systems start to chug, wasting precious time as they swap geometry and textures in and out of each app.

The new GeForce RTX 3090 GPU houses a massive 24GB of video memory. This lets animators and 3D artists work with the largest 3D models. Video editors can tackle the toughest 8K scenes. And creators of all types can stay hyper-productive in multi-app workflows.

Model, edit and export larger scenes faster with GeForce RTX 30 Series GPUs.

The new GPUs also use PCIe 4.0, doubling the connection speed between the GPU and the rest of the PC. This improves performance when working with ultra-high-resolution and HDR video.

GeForce RTX 30 Series graphics cards are also the first discrete GPUs with decode support for the AV1 codec, enabling playback of high-resolution video streams up to 8K HDR using significantly less bandwidth.

AI-Accelerated Studio Apps

Two new Studio apps are making their way into creatives’ arsenals this fall. Best of all, they’re free for NVIDIA RTX users.

NVIDIA Broadcast upgrades any room into an AI-powered home broadcast studio. It transforms standard webcams and microphones into smart devices, offering audio noise removal, virtual background effects and webcam auto framing compatible with most popular live streaming, video conferencing and voice chat applications.

Access AI-powered features and download the new NVIDIA Broadcast app later this month.

NVIDIA Omniverse Machinima enables creators to tell amazing stories with video game assets, animated by NVIDIA AI technologies. Through NVIDIA Omniverse, creators can import assets from supported games or most third-party asset libraries, then automatically animate characters using an AI-based pose estimator and footage from their webcam. Characters’ faces can come to life with only a voice recording using NVIDIA’s new Audio2Face technology.

Master the art of storytelling using 3D objects with NVIDIA Omniverse Machinima, powered by AI.

In September, NVIDIA is also updating GeForce Experience, our companion app for GeForce GPUs, to support desktop and application capture at up to 8K and in HDR, enabling creators to record video at incredibly high resolution and dynamic range.

These apps, like most of the world’s top creative apps, are supported by NVIDIA Studio Drivers, which provide optimal levels of performance and reliability.

GeForce RTX 30 Series: Get Creating Soon

GeForce RTX 30 Series graphics cards are available starting September 17.

While you wait for the next generation of creative performance, perfect your creative skillset by visiting the NVIDIA Studio YouTube channel to watch tutorials and tips and tricks from industry-leading artists.

‘Giant Step into the Future’: NVIDIA CEO Unveils GeForce RTX 30 Series GPUs

A decade ago, GPUs were judged on whether they could power through Crysis. How quaint.

The latest NVIDIA Ampere GPU architecture, unleashed in May to power the world’s supercomputers and hyperscale data centers, has come to gaming.

And with NVIDIA CEO Jensen Huang Tuesday unveiling the new GeForce RTX 30 Series GPUs, it’s delivering NVIDIA’s “greatest generational leap in company history.”

The GeForce RTX 30 Series, NVIDIA’s second-generation RTX GPUs, deliver up to 2x the performance and 1.9x the power efficiency over previous-generation GPUs.

NVIDIA CEO Jensen Huang spoke from the kitchen of his Silicon Valley home.

“If the last 20 years was amazing, the next 20 will seem like nothing short of science fiction,” Huang said, speaking from the kitchen of his Silicon Valley home. Today’s NVIDIA Ampere launch is “a giant step into the future,” he added.

In addition to the trio of new GPUs — the flagship GeForce RTX 3080, the GeForce RTX 3070 and the “ferocious” GeForce RTX 3090 — Huang introduced a slate of new tools for GeForce gamers.

They include NVIDIA Reflex, which makes competitive gamers quicker; NVIDIA Omniverse Machinima, for those using real-time computer graphics engines to create movies; and NVIDIA Broadcast, which harnesses AI to build virtual broadcast studios for streamers.

Up Close and Personal

A pair of demos tell the tale. In May, we released a demo called Marbles — featuring the world’s first fully path-traced photorealistic real-time graphics.

Chock-full of reflections, textures and light sources, the demo is basically a movie that’s rendered in real time as a marble rolls through an elaborate workshop rich with different materials and textures. It’s a stunningly realistic environment.

In May, Huang showed Marbles running on our top-end Turing architecture-based Quadro RTX 8000 graphics card at 25 frames per second at 720p resolution. An enhanced version of the demo, Marbles at Night, running on NVIDIA Ampere, runs at 30 fps at 1440p.

More telling is a demo that can’t be shown remotely — gameplay running at 60 fps on an 8K LG OLED TV — because video-streaming services don’t support that level of quality.

The GeForce RTX 3090 is the world’s first GPU able to play blockbuster games at 60 fps in 8K resolution, which is 4x the pixels of 4K and 16x the pixels of 1080p.
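That pixel math checks out under the standard consumer resolutions (7680x4320 for 8K, 3840x2160 for 4K, 1920x1080 for 1080p); a quick sketch:

```python
# Quick check of the pixel math, assuming the standard resolutions.
resolutions = {"1080p": (1920, 1080), "4K": (3840, 2160), "8K": (7680, 4320)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["8K"] / pixels["4K"])     # 4.0  -> 4x the pixels of 4K
print(pixels["8K"] / pixels["1080p"])  # 16.0 -> 16x the pixels of 1080p
```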

We showed it to a group of veteran streamers in person to get their reactions as they played through some of the latest games.

“You can see wear and tear on the treads,” one said, tilting his head to the side and eyeing the screen in amazement.

“This feels like a Disneyland experience,” another added.

Gamers Battle COVID-19

Huang started his news-packed talk by thanking the more than one million gamers who pooled their GPUs through Folding@Home to fight the COVID-19 coronavirus.

The result was 2.8 exaflops of computing power, 5x the processing power of the world’s largest supercomputer, used to capture the moment the virus infects a human cell, Huang said.

“Thank you all for joining this historic fight,” Huang said.

RTX a “Home Run”

For 40 years, since NVIDIA researcher Turner Whitted published his groundbreaking paper on ray tracing, computer science researchers have chased the dream of creating super-realistic virtual worlds with real-time ray tracing, Huang said.

NVIDIA focused intense effort over the past 10 years to realize real-time ray tracing on a large scale. At the SIGGRAPH graphics conference two years ago, NVIDIA unveiled the first NVIDIA RTX GPU.

Based on NVIDIA’s Turing architecture, it combined programmable shaders, RT Cores to accelerate ray-triangle and ray-bounding-box intersections, and the Tensor Core AI processing pipeline.

“Now, two years later, it is clear we have reinvented computer graphics,” Huang said, citing support from all major 3D APIs, tools and game engines, noting that hundreds of RTX-enabled games are now in development. “RTX is a home run,” he said.

Just ask your kids, if you can tear them away from their favorite games for a moment.

Fortnite, from Epic Games, is the latest global sensation to turn on NVIDIA RTX real-time ray-tracing technology, Huang announced.

Now Minecraft and Fortnite, two of the most popular games in the world, have RTX On.

No kid will miss the significance of these announcements.

A Giant Leap in Performance 

The NVIDIA Ampere architecture, the second generation of RTX GPUs, “is a giant leap in performance,” Huang said.

Built on a custom 8N manufacturing process, the flagship GeForce RTX 3080 has 28 billion transistors. It connects to Micron’s new GDDR6X memory — the fastest graphics memory ever made.

“The days of just relying on transistor performance scaling is over,” Huang said.

GeForce RTX 3080: The New Flagship

Starting at $699, the RTX 3080 is the perfect mix of fast performance and cutting-edge capabilities, leading Huang to declare it NVIDIA’s “new flagship GPU.”

Designed for 4K gaming, the RTX 3080 features high-speed GDDR6X memory running at 19Gbps, resulting in performance that outpaces the RTX 2080 Ti by a wide margin.

It’s up to 2X faster than the original RTX 2080. It consistently delivers more than 60 fps at 4K resolution — with RTX ON.

GeForce RTX 3070: The Sweet Spot

NVIDIA CEO Jensen Huang introducing the GeForce RTX 3070.

Making more power available to more people is the RTX 3070, starting at $499.

And it’s faster than the $1,200 GeForce RTX 2080 Ti — at less than half the price.

The RTX 3070 hits the sweet spot of performance for games running with eye candy turned up.

GeForce RTX 3090: A Big, Ferocious GPU

At the apex of the lineup is the RTX 3090. It’s the fastest GPU ever built for gaming and creative types and is designed to power next-generation content at 8K resolution.

“There is clearly a need for a giant GPU that is available all over the world,” Huang said. “So, we made a giant Ampere.”

And the RTX 3090 is a giant of a GPU. Its Herculean 24GB of GDDR6X memory running at 19.5Gbps can tackle the most challenging AI algorithms and feed massive data-hungry workloads for true 8K gaming.

“RTX 3090 is a beast — a big ferocious GPU,” Huang said. “A BFGPU.”

At 4K it’s up to 50 percent faster than the TITAN RTX before it.

It even comes with a silencer — a three-slot, dual-axial, flow-through design — that is up to 10x quieter and keeps the GPU up to 30 degrees C cooler than the TITAN RTX.

“The 3090 is so big that for the very first time, we can play games at 60 frames per second in 8K,” Huang said. “This is insane.”

Faster Reflexes with NVIDIA Reflex

For the 75 percent of GeForce gamers who play esports, Huang announced the release this month of NVIDIA Reflex with our Game Ready Driver.

In Valorant, a fast-paced action game, for example, Huang showed a scenario where an opponent, traveling at 1,500 pixels per second, is only visible for 180 milliseconds.

But a typical gamer has a reaction time of 150 ms — from photon to action. “You can only hit the opponent if your PC adds less than 30 ms,” Huang explained.
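The 30 ms budget follows directly from those two numbers; a back-of-the-envelope sketch:

```python
# Back-of-the-envelope latency budget from the Valorant example above.
visibility_ms = 180   # how long the fast-moving opponent stays on screen
reaction_ms = 150     # typical photon-to-action human reaction time
pc_budget_ms = visibility_ms - reaction_ms

print(f"The PC pipeline must add less than {pc_budget_ms} ms of latency")  # 30 ms
```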

Yet right now, most gamers have latencies greater than 30 ms — many up to 100 ms, Huang noted.

NVIDIA Reflex optimizes the rendering pipeline across CPU and GPU to reduce latency by up to 50 ms, he said.

“Over 100 million GeForce gamers will instantly become more competitive,” Huang said.

Turn Any Room into a Broadcast Studio

For live streamers, Huang announced NVIDIA Broadcast. It transforms standard webcams and microphones into smart devices to turn “any room into a broadcast studio,” Huang said.

It does this with effects like Audio Noise Removal, Virtual Background Effects — whether for graphics or video — and Webcam Auto Framing, giving you a virtual cameraperson.

NVIDIA Broadcast runs AI algorithms trained by deep learning on NVIDIA’s DGX supercomputer — one of the world’s most potent.

“These AI effects are amazing,” Huang said.

NVIDIA Broadcast will be available for download in September and runs on any RTX GPU, Huang said.

Omniverse Machinima

For those now using video games to create movies and shorts — an art form known as Machinima — Huang introduced NVIDIA Omniverse Machinima, based on the Omniverse 3D workflow collaboration platform.

With Omniverse Machinima, creators can use their webcam to drive an AI-based pose-estimator to animate characters. Drive face animation AI with your voice. Add high-fidelity physics like particles and fluids. Make materials physically accurate.

When done with your composition and mixing, you can even render film-quality cinematics with your RTX GPU, Huang said.

The beta will be available in October. Sign up at nvidia.com/machinima.

Nothing Short of Science Fiction

Wrapping up, Huang noted that it’s been 20 years since the NVIDIA GPU introduced programmable shading. “The GPU revolutionized modern computer graphics,” Huang said.

Now the second-generation NVIDIA RTX — fusing programmable shading, ray tracing and AI — gives us photorealistic graphics and the highest frame rates simultaneously, Huang said.

“I can’t wait to go forward 20 years to see what RTX started,” Huang said.

“In this future, GeForce is your holodeck, your lightspeed starship, your time machine,” Huang said. “In this future, we will look back and realize that it started here.”

Fast Supernovae Detection using Neural Networks

A guest post by Rodrigo Carrasco-Davis & The ALeRCE Collaboration, Millennium Institute of Astrophysics, Chile

Introduction

Astronomy is the study of celestial objects, such as stars, galaxies or black holes. Studying celestial objects is a bit like having a natural physics laboratory – where the most extreme processes in nature occur – and most of them cannot be reproduced here on Earth. Observing extreme events in the universe allows us to test and improve our understanding by comparing what we know about physics to what we observe in the universe.

There is a particular type of event that is very interesting for astronomers that occurs at the end of the life of massive stars. Stars are made by the concentration of hydrogen that is pulled together by gravity, and when the density is high enough, the fusion of hydrogen atoms begins, generating light and creating elements such as helium, carbon, oxygen, neon, etc. The fusion process generates an outwards pressure while gravity causes an inward pressure, maintaining the star stable while it’s burning its fuel. This changes when the star tries to fuse iron atoms, which instead of generating energy must extract energy from the star, causing the core of the star to collapse and a supernova explosion to happen.

Crab Nebula, remnant of a supernova. Space Telescope Science Institute/NASA/ESA/J. Hester/A. Loll (Arizona State University). This image is from hubblesite.org.

This process is very important for astronomers. Due to the extreme conditions during the explosion, astronomers can observe the synthesis of heavy elements, test the behavior of matter under intense pressure and temperature, and also observe the product of the explosion, which could be a neutron star or a black hole.

Supernovae can also be used as standard candles. A typical problem in astronomy is measuring distances to celestial objects. Because stars are very far from the Earth, it is difficult to know whether a star is faint and close to us, or far away and very bright. Most supernova explosions in the universe occur in a similar fashion; therefore, astronomers use supernovae to measure distances, which is important for cosmologists studying, for instance, the expansion of the universe and dark energy.

Even though supernova explosions are very bright (compared to the brightness of their own host galaxy), these events are hard to find due to their distance from the Earth, their low occurrence rate (roughly one supernova per galaxy per century), and the transient nature of the explosion, which can last from a few days to a couple of weeks. Also, to obtain useful information from a supernova, it is necessary to perform follow-up: observing the supernova with an instrument called a spectrograph to measure the energy emitted during the explosion at multiple frequencies. Early follow-up is desired because many of the interesting physical processes occur within a few hours of the beginning of the explosion. So how can we find these supernova explosions fast, among all the other observed astronomical objects in the universe?

Astronomy Today

A few decades ago, astronomers had to choose specific objects in the sky and point their telescopes at them to study them. Now, modern telescopes such as the Zwicky Transient Facility (ZTF), which is currently operating, or the upcoming Vera C. Rubin Observatory, take large images of the sky at a very high rate, observing the visible sky every three days and creating a movie of the southern hemisphere sky. Today, the ZTF telescope generates 1.4TB of data per night, identifying and sending information about interesting changing objects in the sky in real time.

When something changes its brightness, these telescopes are able to detect the change and generate an alert. These alerts are sent through a data stream, where each alert contains three cropped images of 63 by 63 pixels, called the science, reference and difference images. The science image is the most recent observation of that particular location; the reference (or template) is usually taken at the beginning of the survey and is used for comparison against the science image. Everything that changed between the science and reference images should appear in the difference image, which is computed by subtracting the reference from the science image after some image processing. The ZTF telescope is currently streaming up to one million alerts per night, roughly one hundred thousand on average. Suppose a human wanted to check each alert manually: at 3 seconds per alert, it would take approximately 3.5 days to inspect all of the alerts from a single regular night.
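A rough version of that estimate, assuming an average night of about 100,000 alerts and 3 seconds of inspection per alert:

```python
# Rough estimate of how long a human would need to inspect one night of alerts.
alerts_per_night = 100_000       # average night; peak nights reach ~1,000,000
seconds_per_alert = 3
total_seconds = alerts_per_night * seconds_per_alert

print(total_seconds / 86_400)    # ~3.5 days of non-stop inspection
```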

Science, reference and difference images, from left to right. Each alert contains these three images, plus extra important data such as observation conditions and information about the object. The fourth image is a colored version from PanSTARRS using Aladin Sky Atlas. You can see the full evolution of the supernova’s brightness over time in the ALeRCE frontend.

Organizing all the incoming alerts in the stream is a massive task. When a new alert arrives, the type of astronomical objects that generated the alert is not necessarily known. Therefore, we need to check if we already know this object from other observations (cross-match). We also need to figure out which kind of astronomical object generated the alert (classification), and lastly, we need to organize the data and make it available to the community. This is the duty of astronomical broker systems, such as ALeRCE, Lasair, Antares.

Since these alerts capture essentially everything that changes in the sky, we should be able to find supernovae among all the alerts sent by the ZTF telescope. The problem is that other astronomical objects also produce alerts, such as stars that change their brightness (variable stars), active galactic nuclei (AGNs), asteroids and errors in the measurement (bogus alerts). Fortunately, there are some distinguishable features in the science, reference and difference images that can help us identify which alerts are supernovae and which come from other objects. We would like to effectively discriminate among these five classes of objects.

Five classes of astronomical objects that can be separated using only the first alert. These are five examples per class, with science, reference and difference image respectively.

In summary, active galactic nuclei tend to occur at the centers of galaxies. Supernovae usually occur close to a host galaxy. Asteroids are observed near the solar system plane, and they do not appear in the reference image. Variable stars are found in images crowded with other stars, since these are found mostly within the Milky Way. Bogus alerts have various causes: bad pixels in the camera, bad subtractions when generating the difference image, cosmic rays (very bright, concentrated and sharp regions at the center of the alert image), etc. As I mentioned before, there is no way a human could possibly check every alert by hand, so we need an automatic way to classify them so astronomers can check the sources most likely to be supernovae.

Finding Supernovae using Neural Networks

Since we roughly understand the differences between images among the five mentioned classes, in principle we could compute specific features to correctly classify them. However, handcrafting features is usually very hard and takes a long period of trial and error. This is why we decided to train a convolutional neural network (CNN) to solve the classification problem (Carrasco-Davis et al. 2020). In this work, we used only the first alert to quickly find supernovae.

Our architecture provides rotational invariance by making 90° rotated copies of each image in the training set and then applying average pooling to the dense representation of each rotated version of the image. Imposing rotational invariance in this problem is very helpful, since there is no particular orientation in which structures may appear in the images of the alert (Cabrera-Vives et al. 2017, E. Reyes et al. 2018). We also added part of the metadata contained in the alert, such as the position in sky coordinates, the distance to other known objects, and atmospheric condition metrics. After training the model using cross-entropy, the probabilities were highly concentrated around values of 0 or 1, even in cases when the classifier was wrong in its predicted class. This is not so convenient when an expert further filters supernova candidates after the model makes a prediction: saturated values of 0 or 1 give no insight into the chance of a wrong classification, or into the model’s second or third class guesses.
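As a rough illustration of this rotation-averaging idea, here is a minimal TensorFlow sketch; the layer sizes, metadata dimension and other details are illustrative assumptions, not the architecture published in Carrasco-Davis et al. 2020.

```python
import tensorflow as tf

def build_stamp_classifier(img_size=63, n_channels=3, n_metadata=20, n_classes=5):
    """Minimal sketch of a rotation-averaged stamp classifier.
    Layer sizes and the metadata dimension are illustrative, not the published ones."""
    stamps = tf.keras.Input(shape=(img_size, img_size, n_channels), name="stamps")
    metadata = tf.keras.Input(shape=(n_metadata,), name="metadata")

    # Shared convolutional trunk applied to every rotated copy of the stamps.
    trunk = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
    ])

    # Dense representation for each 90-degree rotation, then averaged so the
    # result does not depend on the orientation of the input stamps.
    features = []
    for k in range(4):
        rotated = tf.keras.layers.Lambda(lambda t, k=k: tf.image.rot90(t, k))(stamps)
        features.append(trunk(rotated))
    pooled = tf.keras.layers.Average()(features)

    # Concatenate with the alert metadata and classify into the five classes.
    x = tf.keras.layers.Concatenate()([pooled, metadata])
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inputs=[stamps, metadata], outputs=outputs)
```

Because the four rotated copies share the same trunk and their dense representations are averaged, rotating the input stamps by any multiple of 90° leaves the prediction unchanged.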

Therefore, in addition to the cross-entropy term in the loss function, we added an extra term that maximizes the entropy of the predictions, in order to spread the values of the output probabilities (Pereyra et al. 2017). This improves the granularity or definition of the predictions, yielding probabilities across the whole range from 0 to 1 instead of concentrated at the extremes, and producing much more interpretable predictions that help the astronomer choose good supernova candidates to report for follow-up.
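A minimal sketch of such a loss, assuming one-hot labels and softmax outputs; the regularization weight beta is an illustrative choice, not the value used by the authors.

```python
import tensorflow as tf

def entropy_regularized_loss(beta=0.5):
    """Categorical cross-entropy minus a scaled prediction-entropy bonus.
    beta is an illustrative weight, not the value used by the authors."""
    cce = tf.keras.losses.CategoricalCrossentropy()

    def loss(y_true, y_pred):
        ce = cce(y_true, y_pred)
        # Entropy of the predicted class distribution; subtracting it from the
        # loss (i.e. maximizing it) pushes probabilities away from saturated 0/1.
        entropy = -tf.reduce_sum(y_pred * tf.math.log(y_pred + 1e-8), axis=-1)
        return ce - beta * tf.reduce_mean(entropy)

    return loss

# Usage sketch: model.compile(optimizer="adam", loss=entropy_regularized_loss(0.5))
```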

Convolutional neural network with enhanced rotational invariance. Rotated copies of each input are created and fed to the same CNN architecture, then average pooling is applied to the dense layer before concatenating with the metadata. Finally, two fully connected layers and a softmax are applied to obtain the predictions.

We performed inference on 400,000 objects uniformly distributed in space over the full coverage of ZTF, as a sanity check of the model predictions. It turns out that each class predicted by the CNN is spatially distributed as expected given the nature of each astronomical object. For instance, AGNs and supernovae (SNe) are mostly found outside the Milky Way plane (extragalactic objects), since it is less likely that distant objects can be seen through the Milky Way plane due to occlusion. The model correctly predicts fewer objects close to the Milky Way plane (galactic latitudes closer to 0). Variable stars are correctly found with higher density within the galactic plane. Asteroids are found near the solar system plane, also called the ecliptic (marked as a yellow line), as expected, and bogus alerts are spread everywhere. Running inference on a large unlabeled set gave us very important clues about biases in our training set and helped us identify important metadata used by the CNN.
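A sketch of how such a spatial sanity check could be set up, assuming the inference run yields arrays of right ascension, declination and predicted class for the unlabeled alerts (the array names below are hypothetical):

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def galactic_coordinates(ra_deg, dec_deg):
    """Convert sky positions (ICRS ra/dec, in degrees) to galactic l and b."""
    coords = SkyCoord(ra=np.asarray(ra_deg) * u.deg,
                      dec=np.asarray(dec_deg) * u.deg, frame="icrs")
    return coords.galactic.l.deg, coords.galactic.b.deg

# Hypothetical arrays from the inference run: ra, dec, predicted_class.
# Extragalactic classes (SNe, AGNs) should cluster at large |b|, variable stars
# near b = 0 (the Milky Way plane), and bogus alerts should show no preference.
# l, b = galactic_coordinates(ra, dec)
# for cls in ("SN", "AGN", "variable star", "asteroid", "bogus"):
#     print(cls, np.median(np.abs(b[predicted_class == cls])))
```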

We found that the information within the images (science, reference and difference) is enough to obtain a good classification in the training set, but integrating the information from the metadata was critical to obtain the right spatial distribution of the predictions.

Spatial distribution of unlabeled set of astronomical objects. Each plot is in galactic coordinates. The galactic latitude is centered in the Milky Way, so latitudes closer to 0 are also closer to the Milky Way plane. The galactic longitude indicates which part of the disk we are seeing within the Milky Way plane. The yellow line represents the solar system plane (ecliptic).

Supernova Hunter

A vital part of this project is the web interface that allows astronomers to explore the candidates, sorted by our neural network model’s certainty of being a supernova. The Supernova Hunter is a visualization tool that shows important information about each alert so that the astronomer can choose which objects to report as supernovae. It also has a button for reporting wrong classifications made by our model, so we can add those examples to the training set and later improve the model using these hand-labeled examples.

Supernova Hunter: user interface for the exploration of supernova candidates. It shows a list of alerts with a high probability of being supernovae. For each alert, the images of the alert, the position of the object and the metadata are displayed on the web page.

Using the neural network classifier and the Supernova Hunter, we have been able to confirm 394 supernovae spectroscopically and report 3,060 supernova candidates to the Transient Name Server from June 26, 2019 to July 21, 2020, at a rate of 9.2 supernova candidates reported per day. This rate of discovery is drastically increasing the number of supernovae available at early stages of the explosion.

The Future

We are currently working on improving the classification performance of our model, to produce better supernova candidates and require less expert assistance to report them. Ideally, we would like to have a system that is good enough to automatically report each possible supernova candidate with high confidence.

We would also like to extend our model so it can use more than a single stamp. We developed a neural network model that receives a sequence of images instead of a single stamp, so that every time a new image becomes available for a specific object, the model integrates the newly arriving information and improves the certainty of its prediction for each class.

Another key point of our effort is focused on finding rare objects using outlier detection techniques. This is a crucial task since these new telescopes will possibly reveal new kinds of astronomical objects due to the unprecedented sampling rate and the spatial depth of each observation.

We think this new way of analyzing massive amounts of astronomical data will be not only helpful but necessary. The organization, classification and redistribution of the data for the scientific community is an important part of doing science with astronomical data. This task requires expertise from different fields, such as computer science, astronomy, engineering and mathematics. The construction of new modern telescopes such as The Vera C. Rubin Observatory will drastically change the way astronomers study celestial objects, and as the ALeRCE broker we will be ready to make this possible. For more information, please visit our website, or take a look at our papers: the ALeRCE presentation paper, which describes the complete processing pipeline; the stamp classifier (the work described in this blog post); and the light curve classifier, which provides a more complex classification with a larger taxonomy of classes by using a time series called a light curve.

A big step for flood forecasts in India and Bangladesh

For several years, the Google Flood Forecasting Initiative has been working with governments to develop systems that predict when and where flooding will occur—and keep people safe and informed. 

Much of this work is centered on India, where floods are a serious risk for hundreds of millions of people. Today, we’re providing an update on how we’re expanding and improving these efforts, as well as a new partnership we’ve formed with the International Federation of the Red Cross and Red Crescent Societies.

Expanding our forecasting reach

In recent months, we’ve been expanding our forecasting models and services in partnership with the Indian Central Water Commission. In June, just in time for the monsoon season, we reached an important milestone: our systems now extend to the whole of India, with Google technology being used to improve the targeting of every alert the government sends. This means we can help better protect more than 200 million people across more than 250,000 square kilometers—more than 20 times our coverage last year. To date, we’ve sent out around 30 million notifications to people in flood-affected areas. 

In addition to expanding in India, we’ve partnered with the Bangladesh Water Development Board to bring our warnings and services to Bangladesh, which experiences more flooding than any other country in the world. We currently cover more than 40 million people in Bangladesh, and we’re working to extend this to the whole country. 

Coverage areas of our current operational flood forecasting systems. In these areas we use our models to help government alerts reach the right people. In some areas we have also increased lead time and spatial accuracy.

Better protection for vulnerable communities

In collaboration with Yale, we’ve been visiting flood-affected areas and doing research to better understand what information people need, how they use it to protect themselves, and what we can do to make that information more accessible. One survey we conducted found that 65 percent of people who receive flood warnings before the flooding begins take action to protect themselves or their assets (such as evacuating or moving their belongings). But we’ve also found there’s a lot more we could be doing to help—including getting alerts to people faster, and providing additional information about the severity of floods.

Checking how our flood warnings match conditions on the ground. This photo was taken during a field survey in Bihar during monsoon 2019.

This year, we’ve launched a new forecasting model that will allow us to double the lead time of many of our alerts—providing more notice to governments and giving tens of millions of people an extra day or so to prepare. 

We’re providing people with information about flood depth: when and how much flood waters are likely to rise. And in areas where we can produce depth maps throughout the floodplain, we’re sharing information about depth in the user’s village or area.

We’ve also overhauled the way our alerts look and function to make sure they’re useful and accessible for everyone. We now provide the information in different formats, so that people can both read their alerts and see them presented visually; we’ve added support for Hindi, Bengali and seven other local languages; we’ve made the alert more localized and accurate; and we now allow for easy changes to language or location.

The technology behind some of these improvements is explained in more detail at the Google AI Blog.  

Alerts for flood forecasting

Partnering for greater impact 

In addition to improving our alerts, Google.org has started a collaboration with the International Federation of Red Cross and Red Crescent Societies. This partnership aims to build local networks that can get disaster alert information to people who wouldn’t otherwise receive smartphone alerts directly. 

Of course, for all the progress we’ve made with alert technology, there are still a lot of challenges to overcome. With the flood season still in full swing in India and Bangladesh, COVID-19 has delayed critical infrastructure work, added to the immense pressure on first responders and medical authorities, and disrupted the in-person networks that many people still rely on for advance notice when a flood is on the way.

There’s much more work ahead to strengthen the systems that so many vulnerable people rely on—and expand them to reach more people in flood-affected areas. Along with our partners around the world, we will continue developing, maintaining and improving technologies and digital tools to help protect communities and save lives.

Making health care more personal

The health care system today largely focuses on helping people after they have problems. When they do receive treatment, it’s based on what has worked best on average across a huge, diverse group of patients.

Now the company Health at Scale is making health care more proactive and personalized — and, true to its name, it’s doing so for millions of people.

Health at Scale uses a new approach for making care recommendations based on new classes of machine-learning models that work even when only small amounts of data on individual patients, providers, and treatments are available.

The company is already working with health plans, insurers, and employers to match patients with doctors. It’s also helping to identify people at rising risk of visiting the emergency department or being hospitalized in the future, and to predict the progression of chronic diseases. Recently, Health at Scale showed its models can identify people at risk of severe respiratory infections like influenza or pneumonia, or, potentially, Covid-19.

“From the beginning, we decided all of our predictions would be related to achieving better outcomes for patients,” says John Guttag, chief technology officer of Health at Scale and the Dugald C. Jackson Professor of Computer Science and Electrical Engineering at MIT. “We’re trying to predict what treatment or physician or intervention would lead to better outcomes for people.”

A new approach to improving health

Health at Scale co-founder and CEO Zeeshan Syed met Guttag while studying electrical engineering and computer science at MIT. Guttag served as Syed’s advisor for his bachelor’s and master’s degrees. When Syed decided to pursue his PhD, he only applied to one school, and his advisor was easy to choose.

Syed did his PhD through the Harvard-MIT Program in Health Sciences and Technology (HST). During that time, he looked at how patients who’d had heart attacks could be better managed. The work was personal for Syed: His father had recently suffered a serious heart attack.

Through the work, Syed met Mohammed Saeed SM ’97, PhD ’07, who was also in the HST program. Syed, Guttag, and Saeed founded Health at Scale in 2015 along with  David Guttag ’05, focusing on using core advances in machine learning to solve some of health care’s hardest problems.

“It started with the burning itch to address real challenges in health care about personalization and prediction,” Syed says.

From the beginning, the founders knew their solutions needed to work with widely available data like health care claims, which include information on diagnoses, tests, prescriptions, and more. They also sought to build tools for cleaning up and processing raw data sets, so that their models would be part of what Guttag refers to as a “full machine-learning stack for health care.”

Finally, to deliver effective, personalized solutions, the founders knew their models needed to work with small numbers of encounters for individual physicians, clinics, and patients, which posed severe challenges for conventional AI and machine learning.

“The large companies getting into [the health care AI] space had it wrong in that they viewed it as a big data problem,” Guttag says. “They thought, ‘We’re the experts. No one’s better at crunching large amounts of data than us.’ We thought if you want to make the right decision for individuals, the problem was a small data problem: Each patient is different, and we didn’t want to recommend to patients what was best on average. We wanted what was best for each individual.”

The company’s first models helped recommend skilled nursing facilities for post-acute care patients. Many such patients experience further health problems and return to the hospital. Health at Scale’s models showed that some facilities were better at helping specific kinds of people with specific health problems. For example, a 64-year-old man with a history of cardiovascular disease may fare better at one facility compared to another.

Today the company’s recommendations help guide patients to the primary care physicians, surgeons, and specialists that are best suited for them. Guttag even used the service when he got his hip replaced last year.

Health at Scale also helps organizations identify people at rising risk of specific adverse health events, like heart attacks, in the future.

“We’ve gone beyond the notion of identifying people who have frequently visited emergency departments or hospitals in the past, to get to the much more actionable problem of finding those people at an inflection point, where they are likely to experience worse outcomes and higher costs,” Syed says.

The company’s other solutions help determine the best treatment options for patients and help reduce health care fraud, waste, and abuse. Each use case is designed to improve patient health outcomes by giving health care organizations decision-support for action.

“Broadly speaking, we are interested in building models that can be used to help avoid problems, rather than simply predict them,” says Guttag. “For example, identifying those individuals at highest risk for serious complications of a respiratory infection [enables care providers] to target them for interventions that reduce their chance of developing such an infection.”

Impact at scale

Earlier this year, as the scope of the Covid-19 pandemic was becoming clear, Health at Scale began considering ways its models could help.

“The lack of data in the beginning of the pandemic motivated us to look at the experiences we have gained from combatting other respiratory infections like influenza and pneumonia,” says Saeed, who serves as Health at Scale’s chief medical officer.

The idea led to a peer-reviewed paper where researchers affiliated with the company, the University of Michigan, and MIT showed Health at Scale’s models could accurately predict hospitalizations and visits to the emergency department related to respiratory infections.

“We did the work on the paper using the tech we’d already built,” Guttag says. “We had interception products deployed for predicting patients at-risk of emergent hospitalizations for a variety of causes, and we saw that we could extend that approach. We had customers that we gave the solution to for free.”

The paper proved out another use case for a technology that is already being used by some of the largest health plans in the U.S. That’s an impressive customer base for a five-year-old company of only 20 people — about half of whom have MIT affiliations.

“The culture MIT creates to solve problems that are worth solving, to go after impact, I think that’s been reflected in the way the company got together and has operated,” Syed says. “I’m deeply proud that we’ve maintained that MIT spirit.”

And, Syed believes, there’s much more to come.

“We set out with the goal of driving impact,” Syed says. “We currently run some of the largest production deployments of machine learning at scale, affecting millions, if not tens of millions, of patients, and we are only just getting started.”
