Beyond ‘Data-Driven’: How Energy-Efficient Computing for AI Is Propelling Innovation and Savings Across Industries

With advances in computing, sophisticated AI models and machine learning are having a profound impact on business and society. Industries can use AI to quickly analyze vast bodies of data, allowing them to derive meaningful insights, make predictions and automate processes for greater efficiency.

In the public sector, government agencies are achieving superior disaster preparedness. Biomedical researchers are bringing novel drugs to market faster. Telecommunications providers are building more energy-efficient networks. Manufacturers are trimming emissions from product design, development and manufacturing processes. Hollywood studios are creating impressive visual effects at a fraction of the cost and time. Robots are being deployed on important missions to help preserve the Earth. And investment advisors are running more trade scenarios to optimize portfolios.

Eighty-two percent of companies surveyed are already using or exploring AI, and 84% report that they’re increasing investments in data and AI initiatives. Any organization that delays AI implementation risks missing out on new efficiency gains and becoming obsolete.

However, AI workloads are computationally demanding, and legacy computing systems are ill-equipped for the development and deployment of AI. CPU-based compute requires linear growth in power input to meet the increased processing needs of AI and data-heavy workloads. If data centers are using carbon-based energy, it’s impossible for enterprises to innovate using AI while controlling greenhouse gas emissions and meeting sustainability commitments. Plus, many countries are introducing tougher regulations to enforce data center carbon reporting.

Accelerated computing — the use of GPUs and special hardware, software and parallel computing techniques — has exponentially improved the performance and energy efficiency of data centers.

Below, read more on how industries are using energy-efficient computing to scale AI, improve products and services, and reduce emissions and operational costs.

The Public Sector Drives Research, Delivers Improved Citizen Services 

Data is playing an increasingly important role in government services, including for public health and disease surveillance, scientific research, social security administration, and extreme-weather monitoring and management. These operations require platforms and systems that can handle large volumes of data, provide real-time data access, and ensure data quality and accuracy.

But many government agencies rely on legacy systems that are difficult to maintain, don’t efficiently integrate with modern technologies and consume excessive energy. To handle increasingly demanding workloads while sticking to sustainability goals, government agencies and public organizations must adopt more efficient computing solutions.

The U.S. Department of Energy is making inroads in this endeavor. The department runs the National Energy Research Scientific Computing Center for open science. NERSC develops simulations, data analytics and machine learning solutions to accelerate scientific discovery through computation. Seeking new computing efficiencies, the center measured results across four of its key high performance computing and AI applications. It clocked how fast the applications ran, as well as how much energy they consumed using CPU-only versus GPU-accelerated nodes on Perlmutter, one of the world’s largest supercomputers.

At performance parity, a GPU-accelerated cluster consumes 588 fewer megawatt-hours per month, representing a 5x improvement in energy efficiency. By running the same workload on GPUs rather than CPU-only instances, researchers could save millions of dollars per month. These gains mean that the 8,000+ researchers using NERSC computing infrastructure can perform more experiments on important use cases, like studying subatomic interactions to uncover new green energy sources, developing 3D maps of the universe and bolstering a broad range of innovations in materials science and quantum physics.

Governments help protect citizens from adverse weather events, such as hurricanes, floods, blizzards and heat waves. With GPU deployments, climate models, like the IFS model from the European Centre for Medium-Range Weather Forecasts, can run up to 24x faster while reducing annual energy usage by up to 127 gigawatt hours compared to CPU-only systems. As extreme-weather events occur with greater frequency and, often, with little warning, meteorology centers can use accelerated computing to generate more accurate, timely forecasts that improve readiness and response.

By adopting more efficient computing systems, governments can save costs while equipping researchers with the tools they need for scientific discoveries to improve climate modeling and forecasting, as well as deliver superior services in public health, disaster relief and more.

Drug Discovery Researchers Conduct Virtual Screenings, Generate New Proteins at Light Speed

Drug development has always been a time-consuming process that involves innumerable calculations and thousands of experiments to screen new compounds. To develop novel medications, the binding properties of small molecules must be tested against protein targets, a cumbersome task required for up to billions of compounds — which translates to billions of CPU hours and hundreds of millions of dollars each year.

Highly accurate AI models can now predict protein structures, generate small molecules, predict protein-ligand binding and perform virtual screening.

Researchers at Oak Ridge National Laboratory (ORNL) and Scripps Research have shown that screening a dataset of billions of compounds against a protein, which has traditionally taken years, can now be completed in just hours with accelerated computing. By running AutoDock, a molecular-modeling simulation software, on a supercomputer with more than 27,000 NVIDIA GPUs, ORNL screened more than 25,000 molecules per second and evaluated the docking of 1 billion compounds in less than 12 hours. This is a speedup of more than 50x compared with running AutoDock on CPUs.
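A quick back-of-the-envelope check, using only the figures quoted above, shows how the throughput and total-time numbers fit together:

```python
# Sanity-check the ORNL screening figures with simple arithmetic:
# at 25,000 molecules screened per second, how long does 1 billion take?
molecules_per_second = 25_000
compounds = 1_000_000_000

hours = compounds / molecules_per_second / 3600
print(f"{hours:.1f} hours")  # about 11.1 hours, consistent with "less than 12 hours"
```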

Iambic, an AI platform for drug discovery, has developed an approach combining quantum chemistry and AI that calculates quantum-accurate molecular-binding energies and forces at a fraction of the computational expense of traditional methods. These energies and forces can power molecular-dynamics simulations at unprecedented speed and accuracy. With its OrbNet model, Iambic uses a graph transformer to power quantum-mechanical operators that represent chemical structures. The company is using the technology to identify drug molecules that could deactivate proteins linked to certain cancer types.

As the number of new drug approvals declines and research and development and computing costs rise, optimizing drug discovery with accelerated computing can help control energy expenditures while creating a far-reaching impact on medical research, treatments and patient outcomes.

Telcos Scale Network Capacity

To connect their subscribers, telecommunications companies send data across sprawling networks of cell towers, fiber-optic cables and wireless signals. In the U.S., AT&T’s network connects more than 100 million users from the Aleutian Islands in Alaska to the Florida Keys, processing 500 petabytes of data per day. As telcos add compute-intensive workloads like AI and user plane function (UPF) to process and route data over 5G networks, power consumption costs are skyrocketing.

AT&T processes trillions of data rows to support field technician dispatch operations, generate performance reports and power mobile connectivity. To process data faster, AT&T tested the NVIDIA RAPIDS Accelerator for Apache Spark. By spreading work across nodes in a cluster, the software processed 2.8 trillion rows of information — a month’s worth of mobile data — in just five hours. That’s 3.3x faster at 60% lower cost than any prior test.
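For readers curious what running Spark on GPUs involves operationally: the RAPIDS Accelerator is enabled through Spark configuration rather than code changes. The sketch below shows representative settings drawn from the RAPIDS Accelerator documentation; the GPU counts are illustrative, and this is not AT&T's actual cluster configuration:

```python
# Minimal sketch: representative Spark settings that enable the NVIDIA RAPIDS
# Accelerator for Apache Spark. Values (GPU counts, task shares) are
# illustrative only.
rapids_conf = {
    # Load the RAPIDS SQL plugin so supported operators run on GPUs.
    "spark.plugins": "com.nvidia.spark.SQLPlugin",
    "spark.rapids.sql.enabled": "true",
    # One GPU per executor; a fractional per-task share lets tasks share it.
    "spark.executor.resource.gpu.amount": "1",
    "spark.task.resource.gpu.amount": "0.25",
}

def spark_submit_args(conf):
    """Render the settings as spark-submit --conf flags."""
    return [f"--conf {k}={v}" for k, v in sorted(conf.items())]

for arg in spark_submit_args(rapids_conf):
    print(arg)
```

Because acceleration is a plugin, existing Spark SQL and DataFrame jobs can often run unmodified, which is what makes this kind of migration attractive for large analytics estates.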

Other telcos are saving energy by offloading networking and security tasks to SmartNICs and data processing units (DPUs) to reduce server power consumption. Ericsson, a leading telecommunications equipment manufacturer, tested a 5G UPF on servers with and without network offload to an NVIDIA ConnectX-6 Dx NIC. At maximum network traffic, the network offloading provided 23% power savings. The study also found that CPU micro-sleeps and frequency scaling — allowing CPUs to sleep and slow their clock frequencies during low workload levels — saved more than 10% of power per CPU.

Hardware-accelerated networking offloads like these allow telco operators to increase network capacity without a proportional increase in energy consumption, ensuring that networks can scale to handle increased demand and conserve energy during times of low use. By adopting energy-efficient accelerated computing, telco operators can reduce their carbon footprint, improve scalability and lower operational costs.

Manufacturing and Product Design Teams Achieve Faster, Cleaner Simulations

Many industries rely on computational fluid dynamics during design and engineering processes to model fluid flows, combustion, heat transfer and aeroacoustics. The aerospace and automotive industries use CFD to model vehicle aerodynamics, and the energy and environmental industries use it to optimize fluid-particle refining systems and model reactions, wind-farm air flow and hydro-plant water flow.

Traditional CFD methods are compute-intensive, using nearly 25 billion CPU core hours annually, and consume massive amounts of energy. This is a major obstacle for industrial companies looking to reduce carbon emissions and achieve net zero. Parallel computing with GPUs is making a difference.

Ansys, an engineering simulation company, is speeding up CFD physics models with GPUs to help customers drastically reduce emissions while improving the aerodynamics of vehicles. To measure computing efficiency, the company ran the benchmark DrivAer model, used for optimizing vehicle geometry, on different CPU and GPU configurations using its Fluent fluid-simulation software. Results showed that a single GPU achieved more than 5x greater performance than a cluster with 80 CPU cores. With eight GPUs, the simulation experienced more than a 30x speedup. And a server with six GPUs reduced power consumption 4x compared with a high performance computing CPU cluster delivering the same performance.

CPFD offers GPU parallelization for Barracuda Virtual Reactor, a physics-based engineering software package capable of predicting fluid, particulate-solid, thermal and chemically reacting behavior in fluidized bed reactors and other fluid-particle systems.

Using CPFD’s Barracuda software, green energy supplier ThermoChem Recovery International (TRI) developed technology that converts municipal solid waste and woody biomass into jet fuel. Since its partnership with CPFD began 14 years ago, TRI has benefitted from 1,500x model speedups as CPFD moved its code from CPU hardware to full GPU parallelization. With these exponential speedups, models that would’ve previously taken years to run can now be completed in a day or less, saving millions of dollars in data center infrastructure and energy costs.

With GPU parallelization and energy-efficient architectures, industrial design processes that rely on CFD can benefit from dramatically faster simulations while achieving significant energy savings.

Media and Entertainment Boost Rendering

Rendering visual effects (VFX) and stylized animations consumes nearly 10 billion CPU core hours per year in the media and entertainment industry. A single animated film can require over 50,000 CPU cores working for more than 300 million hours. Supporting this requires extensive data center space, climate control and computing, all of which result in substantial expenditures and a sizable carbon footprint.

Accelerated computing offers a more energy-efficient way to produce VFX and animation, enabling studios to iterate faster and compress production times.

Studios like Wylie Co., known for visuals in the Oscar-winning film Dune and in HBO and Netflix features, are adopting GPU-powered rendering to improve performance and save energy. After migrating to GPU rendering, Wylie Co. realized a 24x performance boost over CPUs.

Image Engine, a VFX company involved in creating Marvel Entertainment movies and Star Wars-based television shows, observed a 25x performance improvement by using GPUs for rendering.

GPUs can increase performance up to 46x while reducing energy consumption by 10x and capital expenses by 6x. With accelerated computing, the media and entertainment industry has the potential to save a staggering $900 million in hardware acquisition costs worldwide and conserve 215 gigawatt hours of energy that would have been consumed by CPU-based render farms. Such a shift would lead to substantial cost savings and significant reductions in the industry’s environmental impact.

Robotics Developers Extend Battery Life for Important Missions 

With edge AI and supercomputing now available using compact modules, demand for robots is surging for use in factory logistics, sales showrooms, urban delivery services and even ocean exploration. Mobile robot shipments are expected to climb from 549,000 units last year to 3 million by 2030, with revenue forecast to jump from more than $24 billion to $111 billion in the same period, according to ABI Research.

Most robots are battery-operated and rely on an array of lidar sensors and cameras for navigation. Robots communicate with edge servers or clouds for mission dispatch and require high throughput due to diverse sets of camera sensors as well as low latency for real-time decision-making. These factors necessitate energy-efficient onboard computing.

Accelerated edge computing can be optimized to decode images, process video and analyze lidar data to enable robot navigation of unstructured environments. This allows developers to build and deploy more energy-efficient machines that can remain in service for longer without needing to charge.

The Woods Hole Oceanographic Institution Autonomous Robotics and Perception Laboratory (WARPLab) and MIT are using the NVIDIA Jetson Orin platform for energy-efficient edge AI and robotics to power an autonomous underwater vehicle to study coral reefs.

The AUV, named CUREE, for Curious Underwater Robot for Ecosystem Exploration, gathers visual, audio and other environmental data to help understand the human impact on reefs and sea life. With 25% of the vehicle’s power needed for data collection, energy efficiency is a must. With Jetson Orin, CUREE constructs 3D models of reefs, tracks marine organisms and plant life, and autonomously navigates and gathers data. The AUV’s onboard energy-efficient computing also powers convolutional neural networks that enhance underwater vision by reducing backscatter and correcting colors. This enables CUREE to transmit clear images to scientists, facilitating fish detection and reef analysis.

Driverless smart tractors with energy-efficient edge computing are now available to help farmers with automation and data analysis. The Founder Series MK-V tractors, designed by NVIDIA Inception member Monarch Tractor, combine electrification, automation and data analysis to help farmers reduce their carbon footprint, improve field safety and streamline farming operations. Using onboard AI video analytics, the tractor can traverse rows of crops, enabling it to navigate even in remote areas without connectivity or GPS.

The MK-V tractor produces zero emissions and is estimated to save farmers $2,600 annually compared to diesel tractors. The tractor’s AI data analysis advises farmers on how to reduce the use of expensive, harmful herbicides that deplete the soil. Decreasing the volume of chemicals is a win all around, empowering farmers to protect the quality of soil, reduce herbicide expenditures and deliver more naturally cultivated produce to consumers.

As energy-efficient edge computing becomes more accessible to enable AI, expect to see growing use cases for mobile robots that can navigate complex environments, make split-second decisions, interact with humans and safely perform difficult tasks with precision.

Financial Services Use Data to Inform Investment Decisions 

Financial services is an incredibly data-intensive industry. Bankers and asset managers pursuing the best results for investors rely on AI algorithms to churn through terabytes of unstructured data from economic indicators, earnings reports, news articles, and disparate environmental, social and governance metrics to generate market insights that inform investments. Plus, financial services companies must comb through network data and transactions to prevent fraud and protect accounts.

NVIDIA and Dell Technologies are optimizing computing for financial workloads to achieve higher throughput, speed and capacity with greater energy efficiency. The Strategic Technology Analysis Center, an organization dedicated to technology discovery and assessment in the finance industry, recently ran the STAC-A2 benchmark on several computing stacks, comparing CPU-only and GPU-based infrastructure. The STAC-A2 benchmark is designed by quants and technologists to measure the performance, scalability, quality and resource efficiency of technology stacks running market-risk analysis for derivatives.

When testing the STAC-A2 options pricing benchmark, the Dell PowerEdge server with NVIDIA GPUs performed 16x faster and 3x more energy efficiently than a CPU-only system for the same workload. This enables investment advisors to integrate larger bodies of data into derivatives risk-analysis calculations, enabling more data-driven decisions without increasing computing time or energy requirements.
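The workload class STAC-A2 measures, Monte Carlo risk analysis for derivatives, is naturally data-parallel, which is why it maps so well to GPUs. Below is a deliberately simplified NumPy sketch of that style of computation: a single European call option with made-up parameters. The actual benchmark covers American options, Greeks and multi-asset portfolios:

```python
import numpy as np

# Illustrative sketch of the data-parallel Monte Carlo pricing at the heart
# of derivatives risk analysis. Every path is independent, so the work
# vectorizes (and parallelizes on GPUs) trivially. Parameters are made up.
def mc_european_call(s0, k, r, sigma, t, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)                       # one draw per path
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)                       # call payoff
    return np.exp(-r * t) * payoff.mean()                  # discounted average

price = mc_european_call(s0=100, k=100, r=0.05, sigma=0.2, t=1.0, n_paths=1_000_000)
print(round(price, 2))  # close to the Black-Scholes value of about 10.45
```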

PayPal, which was looking to deploy a new fraud-detection system to operate 24/7, worldwide and in real time to protect customer transactions, realized CPU-only servers couldn’t meet such computing requirements. Using NVIDIA GPUs for inference, PayPal improved real-time fraud detection by 10% and lowered server energy consumption by nearly 8x.

With accelerated computing, financial services organizations can run more iterations of investment scenarios, improve risk assessments and make more informed decisions for better investment results. Accelerated computing is the foundation for improving data throughput, reducing latency and optimizing energy usage to lower operating costs and achieve emissions goals.

An AI Future With Energy-Efficient Computing

With energy-efficient computing, enterprises can reduce data center costs and their carbon footprint while scaling AI initiatives and data workloads to stay competitive.

The NVIDIA accelerated computing platform offers a comprehensive suite of energy-efficient hardware and software to help enterprises use AI to drive innovation and efficiency without the need for equivalent growth in energy consumption.

With more than 100 frameworks, pretrained models and development tools optimized for GPUs, NVIDIA AI Enterprise accelerates the entire AI journey, from data preparation and model training to inference and scalable deployment. By getting their AI into production faster, businesses can significantly reduce overall power consumption.

With the NVIDIA RAPIDS Accelerator for Apache Spark, which is included with NVIDIA AI Enterprise, data analytics workloads can be completed 6x faster, translating to 5x savings on infrastructure and 6x less power used for the same amount of work. For a typical enterprise, this means 10 gigawatt hours less energy consumed compared with running jobs without GPU acceleration.

NVIDIA BlueField DPUs bring greater energy efficiency to data centers by offloading and accelerating data processing, networking and security tasks from the main CPU infrastructure. By maximizing performance per watt, they can help enterprises slash server power consumption by up to 30%, saving millions in data center costs.

As businesses shift to a new paradigm of AI-driven results, energy-efficient accelerated computing is helping organizations deliver on the promise of AI while controlling costs, maintaining sustainable practices and ensuring they can keep up with the pace of innovation.

Learn how accelerated computing can help organizations achieve both AI goals and carbon-footprint objectives.

Read More

Accenture creates a regulatory document authoring solution using AWS generative AI services

This post is co-written with Ilan Geller, Shuyu Yang and Richa Gupta from Accenture.

Bringing innovative new pharmaceutical drugs to market is a long and stringent process. Companies face complex regulations and extensive approval requirements from governing bodies like the US Food and Drug Administration (FDA). A key part of the submission process is authoring regulatory documents like the Common Technical Document (CTD), a comprehensive document in a standardized format for submitting applications, amendments, supplements, and reports to the FDA. This document contains over 100 highly detailed technical reports created during the process of drug research and testing. Manually creating CTDs is incredibly labor-intensive, requiring up to 100,000 hours per year for a typical large pharma company. The tedious process of compiling hundreds of documents is also prone to errors.

Accenture built a regulatory document authoring solution using automated generative AI that enables researchers and testers to produce CTDs efficiently. By extracting key data from testing reports, the system uses Amazon SageMaker JumpStart and other AWS AI services to generate CTDs in the proper format. This revolutionary approach compresses the time and effort spent on CTD authoring. Users can quickly review and adjust the computer-generated reports before submission.

Because of the sensitive nature of the data and effort involved, pharmaceutical companies need a higher level of control, security, and auditability. This solution relies on the AWS Well-Architected principles and guidelines to enable the control, security, and auditability requirements. The user-friendly system also employs encryption for security.

By harnessing AWS generative AI, Accenture aims to transform efficiency for regulated industries like pharmaceuticals. Automating the frustrating CTD document process accelerates new product approvals so innovative treatments can get to patients faster. AI delivers a major leap forward.

This post provides an overview of an end-to-end generative AI solution developed by Accenture for regulatory document authoring using SageMaker JumpStart and other AWS services.

Solution overview

Accenture built an AI-based solution that automatically generates a CTD document in the required format, while giving users the flexibility to review and edit the generated content. The preliminary value is estimated at a 40–45% reduction in authoring time.

This generative AI-based solution extracts information from the technical reports produced as part of the testing process and delivers a detailed dossier in the common format required by the central governing bodies. Users then review and edit the documents where necessary and submit them to the central governing bodies. This solution uses the SageMaker JumpStart AI21 Jurassic Jumbo Instruct and AI21 Summarize models to extract information and create the documents.

The following diagram illustrates the solution architecture.

The workflow consists of the following steps:

  1. A user accesses the regulatory document authoring tool from their computer browser.
  2. A React application is hosted on AWS Amplify and is accessed from the user’s computer, with Amazon Route 53 providing DNS.
  3. The React application uses the Amplify authentication library to detect whether the user is authenticated.
  4. Amazon Cognito provides a local user pool or can be federated with the user’s active directory.
  5. The application uses the Amplify libraries for Amazon Simple Storage Service (Amazon S3) and uploads documents provided by users to Amazon S3.
  6. The application writes the job details (app-generated job ID and Amazon S3 source file location) to an Amazon Simple Queue Service (Amazon SQS) queue. It captures the message ID returned by Amazon SQS. Amazon SQS enables a fault-tolerant decoupled architecture. Even if there are some backend errors while processing a job, having a job record inside Amazon SQS will ensure successful retries.
  7. Using the job ID and message ID returned by the previous request, the client connects to the WebSocket API and sends the job ID and message ID to the WebSocket connection.
  8. The WebSocket triggers an AWS Lambda function, which creates a record in Amazon DynamoDB. The record is a key-value mapping of the job ID (WebSocket) with the connection ID and message ID.
  9. Another Lambda function gets triggered with a new message in the SQS queue. The Lambda function reads the job ID and invokes an AWS Step Functions workflow for processing data files.
  10. The Step Functions state machine invokes a Lambda function to process the source documents. The function code invokes Amazon Textract to analyze the documents. The response data is stored in DynamoDB. Based on specific requirements with processing data, it can also be stored in Amazon S3 or Amazon DocumentDB (with MongoDB compatibility).
  11. A Lambda function invokes the Amazon Textract AnalyzeDocument API to parse tabular data from source documents and stores the extracted data in DynamoDB.
  12. A Lambda function processes the data based on mapping rules stored in a DynamoDB table.
  13. A Lambda function invokes the prompt libraries and a series of actions using generative AI with a large language model hosted through Amazon SageMaker for data summarization.
  14. The document writer Lambda function writes a consolidated document in an S3 processed folder.
  15. The job callback Lambda function retrieves the callback connection details from the DynamoDB table, passing the job ID. Then the Lambda function makes a callback to the WebSocket endpoint and provides the processed document link from Amazon S3.
  16. A Lambda function deletes the message from the SQS queue so that it’s not reprocessed.
  17. A document generator web module converts the JSON data into a Microsoft Word document, saves it, and renders the processed document on the web browser.
  18. The user can view, edit, and save the documents back to the S3 bucket from the web module. This helps in reviews and corrections needed, if any.
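To make the hand-off in steps 6–8 concrete, it can be sketched as plain data structures. The field names and helper functions below are hypothetical illustrations, not the solution's actual schema:

```python
import json
import uuid

# Hypothetical sketch of the job hand-off in steps 6-8. Field names and
# helpers are illustrative; the actual schema is not published.
def make_sqs_message(job_id, s3_source):
    """Body the app would enqueue in SQS (step 6)."""
    return json.dumps({"jobId": job_id, "s3SourceFile": s3_source})

def make_connection_record(job_id, connection_id, message_id):
    """DynamoDB item mapping the job ID to its WebSocket connection (step 8)."""
    return {"jobId": job_id, "connectionId": connection_id, "messageId": message_id}

job_id = str(uuid.uuid4())
body = make_sqs_message(job_id, "s3://uploads/report-001.pdf")
record = make_connection_record(job_id, "conn-abc123", "msg-0001")
print(body)
```

Keeping the job record in SQS and the connection mapping in DynamoDB is what lets the callback in step 15 find the right WebSocket even after retries.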

The solution also uses SageMaker notebooks (labeled T in the preceding architecture) to perform domain adaption, fine-tune the models, and deploy the SageMaker endpoints.

Conclusion

In this post, we showcased how Accenture is using AWS generative AI services to implement an end-to-end regulatory document authoring solution. In early testing, the solution has demonstrated a 60–65% reduction in the time required for authoring CTDs. We identified the gaps in traditional regulatory authoring platforms, augmented the framework with generative AI for faster response times, and are continuously improving the system while engaging with users across the globe. Reach out to the Accenture Center of Excellence team to dive deeper into the solution and deploy it for your clients.

This joint program focused on generative AI will help shorten time-to-value for joint customers of Accenture and AWS. The effort builds on the 15-year strategic relationship between the companies and uses the same proven mechanisms and accelerators built by the Accenture AWS Business Group (AABG).

Connect with the AABG team at accentureaws@amazon.com to drive business outcomes by transforming to an intelligent data enterprise on AWS.

For further information about generative AI on AWS using Amazon Bedrock or SageMaker, refer to Generative AI on AWS: Technology and Get started with generative AI on AWS using Amazon SageMaker JumpStart.

You can also sign up for the AWS generative AI newsletter, which includes educational resources, blogs, and service updates.


About the Authors

Ilan Geller is a Managing Director in the Data and AI practice at Accenture.  He is the Global AWS Partner Lead for Data and AI and the Center for Advanced AI.  His roles at Accenture have primarily been focused on the design, development, and delivery of complex data, AI/ML, and most recently Generative AI solutions.

Shuyu Yang is the Generative AI and Large Language Model Delivery Lead at Accenture and also leads the Accenture AI Center of Excellence (CoE) teams. He is an AWS DevOps professional.

Richa Gupta is a Technology Architect at Accenture, leading various AI projects. She has 18+ years of experience architecting scalable AI and generative AI solutions. Her areas of expertise are AI architecture, cloud solutions and generative AI. She plays an instrumental role in various presales activities.

Shikhar Kwatra is an AI/ML Specialist Solutions Architect at Amazon Web Services, working with a leading Global System Integrator. He has earned the title of one of the Youngest Indian Master Inventors with over 500 patents in the AI/ML and IoT domains. Shikhar aids in architecting, building, and maintaining cost-efficient, scalable cloud environments for the organization, and supports the GSI partner in building strategic industry solutions on AWS. Shikhar enjoys playing guitar, composing music, and practicing mindfulness in his spare time.

Sachin Thakkar is a Senior Solutions Architect at Amazon Web Services, working with a leading Global System Integrator (GSI). He brings over 23 years of experience as an IT Architect and as Technology Consultant for large institutions. His focus area is on Data, Analytics and Generative AI. Sachin provides architectural guidance and supports the GSI partner in building strategic industry solutions on AWS.

Read More

Graph neural networks in TensorFlow

Objects and their relationships are ubiquitous in the world around us, and relationships can be as important to understanding an object as its own attributes viewed in isolation — take for example transportation networks, production networks, knowledge graphs, or social networks. Discrete mathematics and computer science have a long history of formalizing such networks as graphs, consisting of nodes connected by edges in various irregular ways. Yet most machine learning (ML) algorithms allow only for regular and uniform relations between input objects, such as a grid of pixels, a sequence of words, or no relation at all.

Graph neural networks, or GNNs for short, have emerged as a powerful technique to leverage both the graph’s connectivity (as in the older algorithms DeepWalk and Node2Vec) and the input features on the various nodes and edges. GNNs can make predictions for graphs as a whole (Does this molecule react in a certain way?), for individual nodes (What’s the topic of this document, given its citations?) or for potential edges (Is this product likely to be purchased together with that product?). Apart from making predictions about graphs, GNNs are a powerful tool used to bridge the chasm to more typical neural network use cases. They encode a graph’s discrete, relational information in a continuous way so that it can be included naturally in another deep learning system.

We are excited to announce the release of TensorFlow GNN 1.0 (TF-GNN), a production-tested library for building GNNs at large scales. It supports both modeling and training in TensorFlow and the extraction of input graphs from huge data stores. TF-GNN is built from the ground up for heterogeneous graphs, where types of objects and relations are represented by distinct sets of nodes and edges. Real-world objects and their relations occur in distinct types, and TF-GNN’s heterogeneous focus makes it natural to represent them.

Inside TensorFlow, such graphs are represented by objects of type tfgnn.GraphTensor. This is a composite tensor type (a collection of tensors in one Python class) accepted as a first-class citizen in tf.data.Dataset, tf.function, etc. It stores both the graph structure and its features attached to nodes, edges and the graph as a whole. Trainable transformations of GraphTensors can be defined as Layers objects in the high-level Keras API, or directly using the tfgnn.GraphTensor primitive.

GNNs: Making predictions for an object in context

For illustration, let’s look at one typical application of TF-GNN: predicting a property of a certain type of node in a graph defined by cross-referencing tables of a huge database. For example, a citation database of Computer Science (CS) arXiv papers with one-to-many cites and many-to-one cited relationships where we would like to predict the subject area of each paper.

Like most neural networks, a GNN is trained on a dataset of many labeled examples (~millions), but each training step consists only of a much smaller batch of training examples (say, hundreds). To scale to millions, the GNN gets trained on a stream of reasonably small subgraphs from the underlying graph. Each subgraph contains enough of the original data to compute the GNN result for the labeled node at its center and train the model. This process — typically referred to as subgraph sampling — is extremely consequential for GNN training. Most existing tooling accomplishes sampling in a batch way, producing static subgraphs for training. TF-GNN provides tooling to improve on this by sampling dynamically and interactively.

Pictured, the process of subgraph sampling where small, tractable subgraphs are sampled from a larger graph to create input examples for GNN training.

TF-GNN 1.0 debuts a flexible Python API to configure dynamic or batch subgraph sampling at all relevant scales: interactively in a Colab notebook (like this one), for efficient sampling of a small dataset stored in the main memory of a single training host, or distributed by Apache Beam for huge datasets stored on a network filesystem (up to hundreds of millions of nodes and billions of edges). For details, please refer to our user guides for in-memory and beam-based sampling, respectively.

On those same sampled subgraphs, the GNN’s task is to compute a hidden (or latent) state at the root node; the hidden state aggregates and encodes the relevant information of the root node’s neighborhood. One classical approach is message-passing neural networks. In each round of message passing, nodes receive messages from their neighbors along incoming edges and update their own hidden state from them. After n rounds, the hidden state of the root node reflects the aggregate information from all nodes within n edges (pictured below for n = 2). The messages and the new hidden states are computed by hidden layers of the neural network. In a heterogeneous graph, it often makes sense to use separately trained hidden layers for the different types of nodes and edges

Pictured, a simple message-passing neural network where, at each step, the node state is propagated from outer to inner nodes where it is pooled to compute new node states. Once the root node is reached, a final prediction can be made.

The training setup is completed by placing an output layer on top of the GNN’s hidden state for the labeled nodes, computing the loss (to measure the prediction error), and updating model weights by backpropagation, as usual in any neural network training.

Beyond supervised training (i.e., minimizing a loss defined by labels), GNNs can also be trained in an unsupervised way (i.e., without labels). This lets us compute a continuous representation (or embedding) of the discrete graph structure of nodes and their features. These representations are then typically utilized in other ML systems. In this way, the discrete, relational information encoded by a graph can be included in more typical neural network use cases. TF-GNN supports a fine-grained specification of unsupervised objectives for heterogeneous graphs.

Building GNN architectures

The TF-GNN library supports building and training GNNs at various levels of abstraction.

At the highest level, users can take any of the predefined models bundled with the library that are expressed in Keras layers. Besides a small collection of models from the research literature, TF-GNN comes with a highly configurable model template that provides a curated selection of modeling choices that we have found to provide strong baselines on many of our in-house problems. The templates implement GNN layers; users need only to initialize the Keras layers.

At the lowest level, users can write a GNN model from scratch in terms of primitives for passing data around the graph, such as broadcasting data from a node to all its outgoing edges or pooling data into a node from all its incoming edges (e.g., computing the sum of incoming messages). TF-GNN’s graph data model treats nodes, edges and whole input graphs equally when it comes to features or hidden states, making it straightforward to express not only node-centric models like the MPNN discussed above but also more general forms of GraphNets. This can, but need not, be done with Keras as a modeling framework on the top of core TensorFlow. For more details, and intermediate levels of modeling, see the TF-GNN user guide and model collection.

Training orchestration

While advanced users are free to do custom model training, the TF-GNN Runner also provides a succinct way to orchestrate the training of Keras models in the common cases. A simple invocation may look like this:

The Runner provides ready-to-use solutions for ML pains like distributed training and tfgnn.GraphTensor padding for fixed shapes on Cloud TPUs. Beyond training on a single task (as shown above), it supports joint training on multiple (two or more) tasks in concert. For example, unsupervised tasks can be mixed with supervised ones to inform a final continuous representation (or embedding) with application specific inductive biases. Callers only need substitute the task argument with a mapping of tasks:

Additionally, the TF-GNN Runner also includes an implementation of integrated gradients for use in model attribution. Integrated gradients output is a GraphTensor with the same connectivity as the observed GraphTensor but its features replaced with gradient values where larger values contribute more than smaller values in the GNN prediction. Users can inspect gradient values to see which features their GNN uses the most.

Conclusion

In short, we hope TF-GNN will be useful to advance the application of GNNs in TensorFlow at scale and fuel further innovation in the field. If you’re curious to find out more, please try our Colab demo with the popular OGBN-MAG benchmark (in your browser, no installation required), browse the rest of our user guides and Colabs, or take a look at our paper.

Acknowledgements

The TF-GNN release 1.0 was developed by a collaboration between Google Research: Sami Abu-El-Haija, Neslihan Bulut, Bahar Fatemi, Johannes Gasteiger, Pedro Gonnet, Jonathan Halcrow, Liangze Jiang, Silvio Lattanzi, Brandon Mayer, Vahab Mirrokni, Bryan Perozzi, Anton Tsitsulin, Dustin Zelle, Google Core ML: Arno Eigenwillig, Oleksandr Ferludin, Parth Kothari, Mihir Paradkar, Jan Pfeifer, Rachael Tamakloe, and Google DeepMind: Alvaro Sanchez-Gonzalez and Lisa Wang.

Read More

Graph neural networks in TensorFlow

Graph neural networks in TensorFlow

Posted by Dustin Zelle – Software Engineer, Research and Arno Eigenwillig – Software Engineer, CoreML

This article is also shared on the Google Research Blog


Objects and their relationships are ubiquitous in the world around us, and relationships can be as important to understanding an object as its own attributes viewed in isolation — for example: transportation networks, production networks, knowledge graphs, or social networks. Discrete mathematics and computer science have a long history of formalizing such networks as graphs, consisting of nodes connected by edges in various irregular ways. Yet most machine learning (ML) algorithms allow only for regular and uniform relations between input objects, such as a grid of pixels, a sequence of words, or no relation at all.

Graph neural networks, or GNNs for short, have emerged as a powerful technique to leverage both the graph’s connectivity (as in the older algorithms DeepWalk and Node2Vec) and the input features on the various nodes and edges. GNNs can make predictions for graphs as a whole (Does this molecule react in a certain way?), for individual nodes (What’s the topic of this document, given its citations?) or for potential edges (Is this product likely to be purchased together with that product?). Apart from making predictions about graphs, GNNs are a powerful tool used to bridge the chasm to more typical neural network use cases. They encode a graph’s discrete, relational information in a continuous way so that it can be included naturally in another deep learning system.

We are excited to announce the release of TensorFlow GNN 1.0 (TF-GNN), a production-tested library for building GNNs at large scale. It supports both modeling and training in TensorFlow as well as the extraction of input graphs from huge data stores. TF-GNN is built from the ground up for heterogeneous graphs, where types of objects and relations are represented by distinct sets of nodes and edges. Real-world objects and their relations occur in distinct types, and TF-GNN’s heterogeneous focus makes it natural to represent them.

Inside TensorFlow, such graphs are represented by objects of type tfgnn.GraphTensor. This is a composite tensor type (a collection of tensors in one Python class) accepted as a first-class citizen in tf.data.Dataset, tf.function, etc. It stores both the graph structure and its features attached to nodes, edges and the graph as a whole. Trainable transformations of GraphTensors can be defined as Layers objects in the high-level Keras API, or directly using the tfgnn.GraphTensor primitive.

GNNs: Making predictions for an object in context

For illustration, let’s look at one typical application of TF-GNN: predicting a property of a certain type of node in a graph defined by the cross-referencing tables of a huge database. For example, consider a citation database of Computer Science (CS) arXiv papers with one-to-many cites and many-to-one cited relationships, in which we would like to predict the subject area of each paper.

Like most neural networks, a GNN is trained on a dataset of many labeled examples (~millions), but each training step consists only of a much smaller batch of training examples (say, hundreds). To scale to millions, the GNN gets trained on a stream of reasonably small subgraphs from the underlying graph. Each subgraph contains enough of the original data to compute the GNN result for the labeled node at its center and train the model. This process — typically referred to as subgraph sampling — is extremely consequential for GNN training. Most existing tooling accomplishes sampling in a batch way, producing static subgraphs for training. TF-GNN provides tooling to improve on this by sampling dynamically and interactively.

Pictured, the process of subgraph sampling where small, tractable subgraphs are sampled from a larger graph to create input examples for GNN training.
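The sampling idea can be sketched in plain Python (a toy breadth-first sampler over an adjacency list, not the TF-GNN API; the graph, hop count, and fan-out below are made up for illustration):

```python
import random

def sample_subgraph(adj, root, num_hops=2, fanout=3, seed=0):
    """Sample a small subgraph around `root` by keeping at most
    `fanout` randomly chosen neighbors per node, for `num_hops` hops."""
    rng = random.Random(seed)
    nodes = {root}
    frontier = [root]
    edges = []
    for _ in range(num_hops):
        next_frontier = []
        for u in frontier:
            neighbors = adj.get(u, [])
            for v in rng.sample(neighbors, min(fanout, len(neighbors))):
                edges.append((u, v))
                if v not in nodes:
                    nodes.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return nodes, edges

# A toy citation graph: paper -> papers it cites.
adj = {"p1": ["p2", "p3", "p4"], "p2": ["p5"], "p3": ["p5", "p6"]}
nodes, edges = sample_subgraph(adj, "p1", num_hops=2, fanout=2)
```

In TF-GNN, the equivalent work is done by its sampling tooling, which additionally tracks node and edge types and carries features along with the sampled structure.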

TF-GNN 1.0 debuts a flexible Python API to configure dynamic or batch subgraph sampling at all relevant scales: interactively in a Colab notebook (like this one), for efficient sampling of a small dataset stored in the main memory of a single training host, or distributed by Apache Beam for huge datasets stored on a network filesystem (up to hundreds of millions of nodes and billions of edges). For details, please refer to our user guides for in-memory and beam-based sampling, respectively.

On those same sampled subgraphs, the GNN’s task is to compute a hidden (or latent) state at the root node; the hidden state aggregates and encodes the relevant information of the root node’s neighborhood. One classical approach is message-passing neural networks. In each round of message passing, nodes receive messages from their neighbors along incoming edges and update their own hidden state from them. After n rounds, the hidden state of the root node reflects the aggregate information from all nodes within n edges (pictured below for n = 2). The messages and the new hidden states are computed by hidden layers of the neural network. In a heterogeneous graph, it often makes sense to use separately trained hidden layers for the different types of nodes and edges.

Pictured, a simple message-passing neural network where, at each step, the node state is propagated from outer to inner nodes where it is pooled to compute new node states. Once the root node is reached, a final prediction can be made.
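The rounds of message passing described above can be sketched in plain Python, with scalar node states and hand-written message/update functions standing in for the trained hidden layers:

```python
def message_passing_round(adj_in, states, message_fn, update_fn):
    """One round: each node pools the messages arriving on its
    incoming edges and updates its own hidden state."""
    new_states = {}
    for node, state in states.items():
        msgs = [message_fn(states[src]) for src in adj_in.get(node, [])]
        pooled = sum(msgs)  # sum pooling; mean or max are also common
        new_states[node] = update_fn(state, pooled)
    return new_states

# Toy graph: adj_in maps each node to the sources of its incoming edges.
adj_in = {"root": ["a", "b"], "a": ["c"], "b": [], "c": []}
states = {"root": 0.0, "a": 1.0, "b": 2.0, "c": 4.0}
for _ in range(2):  # n = 2 rounds, as in the figure
    states = message_passing_round(
        adj_in, states,
        message_fn=lambda s: s,              # identity message
        update_fn=lambda s, m: 0.5 * s + m)  # toy update rule
```

After two rounds, the root’s state already depends on every node within two edges of it, including "c", which has no direct edge to the root.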

The training setup is completed by placing an output layer on top of the GNN’s hidden state for the labeled nodes, computing the loss (to measure the prediction error), and updating model weights by backpropagation, as usual in any neural network training.

Beyond supervised training (i.e., minimizing a loss defined by labels), GNNs can also be trained in an unsupervised way (i.e., without labels). This lets us compute a continuous representation (or embedding) of the discrete graph structure of nodes and their features. These representations are then typically utilized in other ML systems. In this way, the discrete, relational information encoded by a graph can be included in more typical neural network use cases. TF-GNN supports a fine-grained specification of unsupervised objectives for heterogeneous graphs.

Building GNN architectures

The TF-GNN library supports building and training GNNs at various levels of abstraction.

At the highest level, users can take any of the predefined models bundled with the library that are expressed in Keras layers. Besides a small collection of models from the research literature, TF-GNN comes with a highly configurable model template that provides a curated selection of modeling choices that we have found to provide strong baselines on many of our in-house problems. The templates implement GNN layers; users need only initialize the Keras layers.

import tensorflow as tf
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import mt_albis

def model_fn(graph_tensor_spec: tfgnn.GraphTensorSpec):
  """Builds a GNN as a Keras model."""
  graph = inputs = tf.keras.Input(type_spec=graph_tensor_spec)

  # Encode input features (callback omitted for brevity).
  graph = tfgnn.keras.layers.MapFeatures(
      node_sets_fn=set_initial_node_states)(graph)

  # For each round of message passing...
  for _ in range(2):
    # ... create and apply a Keras layer.
    graph = mt_albis.MtAlbisGraphUpdate(
        units=128, message_dim=64,
        attention_type="none", simple_conv_reduce_type="mean",
        normalization_type="layer", next_state_type="residual",
        state_dropout_rate=0.2, l2_regularization=1e-5,
    )(graph)

  return tf.keras.Model(inputs, graph)

At the lowest level, users can write a GNN model from scratch in terms of primitives for passing data around the graph, such as broadcasting data from a node to all its outgoing edges or pooling data into a node from all its incoming edges (e.g., computing the sum of incoming messages). TF-GNN’s graph data model treats nodes, edges and whole input graphs equally when it comes to features or hidden states, making it straightforward to express not only node-centric models like the MPNN discussed above but also more general forms of GraphNets. This can, but need not, be done with Keras as a modeling framework on top of core TensorFlow. For more details, and intermediate levels of modeling, see the TF-GNN user guide and model collection.
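The two primitives can be sketched in plain Python (parallel source/target lists stand in for an edge set; the graph and values are made up, and the real library operates on tensors):

```python
def broadcast_node_to_edges(node_values, edge_sources):
    """Copy each source node's value onto its outgoing edges."""
    return [node_values[src] for src in edge_sources]

def pool_edges_to_nodes(edge_values, edge_targets, num_nodes):
    """Sum incoming edge values into each target node."""
    pooled = [0.0] * num_nodes
    for value, tgt in zip(edge_values, edge_targets):
        pooled[tgt] += value
    return pooled

# A toy graph with three nodes and edges 0->1, 0->2, 1->2.
sources = [0, 0, 1]
targets = [1, 2, 2]
node_values = [10.0, 20.0, 30.0]
messages = broadcast_node_to_edges(node_values, sources)
incoming = pool_edges_to_nodes(messages, targets, num_nodes=3)
```

Chaining a broadcast with a pool is exactly one sum-pooled message-passing step, which is why these two operations are enough to build more elaborate GNN layers on top.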

Training orchestration

While advanced users are free to do custom model training, the TF-GNN Runner also provides a succinct way to orchestrate the training of Keras models in the common cases. A simple invocation may look like this:

from tensorflow_gnn import runner

runner.run(
    task=runner.RootNodeBinaryClassification("papers", ...),
    model_fn=model_fn,
    trainer=runner.KerasTrainer(
        tf.distribute.MirroredStrategy(), model_dir="/tmp/model"),
    optimizer_fn=tf.keras.optimizers.Adam,
    epochs=10,
    global_batch_size=128,
    train_ds_provider=runner.TFRecordDatasetProvider("/tmp/train*"),
    valid_ds_provider=runner.TFRecordDatasetProvider("/tmp/validation*"),
    gtspec=...,
)

The Runner provides ready-to-use solutions for ML pains like distributed training and tfgnn.GraphTensor padding for fixed shapes on Cloud TPUs. Beyond training on a single task (as shown above), it supports joint training on multiple (two or more) tasks in concert. For example, unsupervised tasks can be mixed with supervised ones to inform a final continuous representation (or embedding) with application-specific inductive biases. Callers need only substitute the task argument with a mapping of tasks:

from tensorflow_gnn import runner
from tensorflow_gnn.models import contrastive_losses

runner.run(
    task={
        "classification": runner.RootNodeBinaryClassification("papers", ...),
        "dgi": contrastive_losses.DeepGraphInfomaxTask("papers"),
    },
    ...
)

Additionally, the TF-GNN Runner includes an implementation of integrated gradients for use in model attribution. The integrated gradients output is a GraphTensor with the same connectivity as the observed GraphTensor, but with its features replaced by gradient values, where larger values contribute more than smaller values to the GNN prediction. Users can inspect gradient values to see which features their GNN uses the most.
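The underlying idea can be sketched in plain Python with a numeric gradient; a toy linear scoring function stands in for the GNN here (for a linear model, each feature’s attribution reduces to weight times input):

```python
def integrated_gradients(f, baseline, inputs, steps=50, eps=1e-5):
    """Approximate integrated gradients of the scalar function f,
    accumulating numeric gradients along the straight-line path
    from `baseline` to `inputs`."""
    n = len(inputs)
    total = [0.0] * n
    for k in range(1, steps + 1):
        alpha = k / steps
        point = [b + alpha * (x - b) for b, x in zip(baseline, inputs)]
        for i in range(n):  # numeric partial derivative at `point`
            bumped = list(point)
            bumped[i] += eps
            total[i] += (f(bumped) - f(point)) / eps
    return [(x - b) * t / steps for b, x, t in zip(baseline, inputs, total)]

# Toy "model": a linear score over two input features.
score = lambda v: 3.0 * v[0] + 1.0 * v[1]
attributions = integrated_gradients(score, baseline=[0.0, 0.0], inputs=[1.0, 2.0])
```

The Runner’s implementation does the analogous computation with real gradients over the features of a GraphTensor, which is why its output has the same connectivity as the observed graph.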

Conclusion

In short, we hope TF-GNN will be useful to advance the application of GNNs in TensorFlow at scale and fuel further innovation in the field. If you’re curious to find out more, please try our Colab demo with the popular OGBN-MAG benchmark (in your browser, no installation required), browse the rest of our user guides and Colabs, or take a look at our paper.

Acknowledgements

The TF-GNN release 1.0 was developed by a collaboration between Google Research (Sami Abu-El-Haija, Neslihan Bulut, Bahar Fatemi, Johannes Gasteiger, Pedro Gonnet, Jonathan Halcrow, Liangze Jiang, Silvio Lattanzi, Brandon Mayer, Vahab Mirrokni, Bryan Perozzi, Anton Tsitsulin, Dustin Zelle), Google Core ML (Arno Eigenwillig, Oleksandr Ferludin, Parth Kothari, Mihir Paradkar, Jan Pfeifer, Rachael Tamakloe), and Google DeepMind (Alvaro Sanchez-Gonzalez and Lisa Wang).

Integrate QnABot on AWS with ServiceNow

Do your employees wait for hours on the telephone to open an IT ticket? Do they wait for an agent to triage an issue, which sometimes only requires restarting the computer? Providing excellent IT support is crucial for any organization, but legacy systems have relied heavily on human agents being available to intake reports and triage issues. Conversational AI (or chatbots) can help triage some of these common IT problems and create a ticket for the tasks when human assistance is needed. Chatbots quickly resolve common business issues, improve employee experiences, and free up agents’ time to handle more complex problems.

QnABot on AWS is an open source solution built using AWS native services like Amazon Lex, Amazon OpenSearch Service, AWS Lambda, Amazon Transcribe, and Amazon Polly. QnABot version 5.4+ is also enhanced with generative AI capabilities.

According to Gartner Magic Quadrant 2023, ServiceNow is one of the leading IT Service Management (ITSM) providers on the market. ServiceNow’s Incident Management uses workflows to identify, track, and resolve high‑impact IT service incidents.

In this post, we demonstrate how to integrate the QnABot on AWS chatbot solution with ServiceNow. With this integration, users can chat with QnABot to triage their IT service issues and open an incident ticket in ServiceNow in real time by providing details to QnABot.

Watch the following video to see how users can ask questions to an IT service desk chatbot and get answers. For most frequently asked questions, chatbot answers can help resolve the issue. When a user determines that the answers provided are not useful, they can request the creation of a ticket in ServiceNow.

Solution overview

QnABot on AWS is a multi-channel, multi-language chatbot that responds to your customer’s questions, answers, and feedback. QnABot on AWS is a complete solution and can be deployed as part of your IT Service Desk ticketing workflow. Its distributed architecture allows for integrations with other systems like ServiceNow. If you wish to build your own chatbot using Amazon Lex or add only Amazon Lex as part of your application, refer to Integrate ServiceNow with Amazon Lex chatbot for ticket processing.

The following diagram illustrates the solution architecture.

The workflow includes the following steps:

  1. A QnABot administrator can configure the questions using the Content Designer UI delivered by Amazon API Gateway and Amazon Simple Storage Service (Amazon S3).
  2. The Content Designer Lambda function saves the input in OpenSearch Service in a question’s bank index.
  3. When QnABot users ask questions prompting ServiceNow integration, Amazon Lex fetches the questions and requests the user to provide a description of the issue. When the description is provided, it invokes a Lambda function.
  4. The Lambda function fetches secrets from AWS Secrets Manager, where environment variables are stored, and makes an HTTP call to create a ticket in ServiceNow. The ticket number is then returned to the user.

When building a diagnostic workflow, you may require inputs to different questions before you can create a ticket in ServiceNow. You can use response bots and the document chaining capabilities of QnABot to achieve this capability.

Response bots are bots created to elicit responses from users and store them as session variables or slot values. You can use built-in response bots or create a custom response bot. Response chatbot names must start with the letters “QNA.”

This solution provides a set of built-in response bots. Refer to Configuring the chatbot to ask the questions and use response bots for implementation details.

You can use document chaining to elicit the response and invoke Lambda functions. The chaining rule is a JavaScript programming expression used to test the value of the session attribute set to elicit a response and either route to another bot or invoke Lambda functions. You can identify the next question in the document by specifying the question ID (QID) in the Document Chaining: Chaining Rule field as “QID::” followed by the QID value of the document. For example, a rule that evaluates to “QID::Admin.001” will chain to item Admin.001.

When using a chaining rule for Lambda, the function name must start with the letters “QNA” and is specified in the Document Chaining: Chaining Rule field as “Lambda::FunctionNameOrARN”. All chaining rules must be enclosed in single quotes.

Deploy the QnABot solution

Complete the following steps to deploy the solution:

  1. Choose Launch Solution on the QnABot implementation guide to deploy the latest QnABot template via AWS CloudFormation.
  2. Provide a name for the bot.
  3. Provide an email address where you will receive a message to reset your password.
  4. Make sure that EnableCognitoLogin is set to true.
  5. For all other parameters, accept the defaults (see the implementation guide for parameter definitions), and launch the QnABot stack.

This post uses a static webpage hosted on Amazon CloudFront, and the QnABot chatbot is embedded in the page using the Amazon Lex web UI sample plugin. We also provide instructions for testing this solution using the QnABot client page.

Create a ServiceNow account

This section walks through the steps to create a ServiceNow account and ServiceNow developer instance:

  1. Sign up for a ServiceNow account.

  2. Go to your email and confirm this email address for your ServiceNow ID.
  3. As part of the verification, you’ll be asked to provide the six-digit verification code sent to your email.
  4. You can skip the page that asks you to set up two-factor authentication. You’re redirected to the landing page for the ServiceNow Developer program.
  5. In the Getting Started steps, choose Yes, I need a developer oriented IDE.

  6. Choose Start Building to set up an instance.

When the build is complete, which may take a few seconds to a few minutes, you will be provided with the instance URL, user name, and password details. Save this information to use in later steps.

  7. Log in to the site using the following URL (provide your instance): https://devXXXXXX.service-now.com/now/nav/ui/classic/params/target/change_request_list.do.

Be sure to stay logged in to the ServiceNow developer instance throughout the process.

If you are logged out, use your email and password to log back in and wake up the instance to prevent hibernation.

  8. Choose All in the navigation bar, then choose Incidents.

  9. Select All to remove all of the filters.

All incidents will be shown on this page.

Create users in ServiceNow and an Amazon Cognito pool

You can create an incident using the userId of the chatbot user. For that, we need to confirm that the userId of the chatbot user exists in ServiceNow. First, we create the ServiceNow user, then we create a user with the same ID in an Amazon Cognito user pool. Amazon Cognito is an AWS service to authenticate clients and provide temporary AWS credentials.

  1. Create a ServiceNow user. Be sure to include a first name, last name, and email.

Note down the user ID of the newly created user. You will need this when creating an Amazon Cognito user in a user pool.

  2. On the Amazon Cognito console, choose User pools in the navigation pane.

If you have deployed the Amazon Lex web UI plugin, you will see two user pool names; if you did not, you’ll see only one user pool name.

  3. Select the user pool that has your QnABot name and create a new user. Use the same userId as that of the ServiceNow user.
  4. If you are using the Amazon Lex web UI, create a user in the appropriate Amazon Cognito user pool by following the preceding steps.

Note that the userId you created will be used for the QnABot client and Amazon Lex Web UI client.

Create a Lambda function for invoking ServiceNow

In this step, you create a Lambda function that invokes the ServiceNow API to create a ticket.

  1. On the Lambda console, choose Functions in the navigation pane.
  2. Choose Create function.

  3. Select Author from scratch.
  4. For Function name, enter a name, such as qna-ChatBotLambda. (Remember that QnABot requires the prefix qna- in the name.)
  5. For Runtime, choose Node.js 18.x.

This Lambda function creates a new role. If you want to use an existing role, you can change the default AWS Identity and Access Management (IAM) execution role by selecting Use existing role.

  6. Choose Create function.
  7. After you create the function, use the inline editor to edit the code for index.js.
  8. Right-click on index.js and rename it to index.mjs.
  9. Enter the following code, which is sample code for the function that you’re using as the compute layer for our logic:
import { SecretsManager } from '@aws-sdk/client-secrets-manager';

const incident = "incident";
const secret_name = "servicenow/password";

export const handler = async (event, context) => {
    console.log('Received event:', JSON.stringify(event, null, 2));
    // Create the ServiceNow ticket and return the updated event.
    event = await createTicket(event);
    return event;
};

// Async function to create a ServiceNow ticket.
async function createTicket(event) {
    const password = await getSecretValue();

    // Fetch the description and user ID from the event.
    const shortDesc = event.req._event.inputTranscript;
    console.log("received slots value", shortDesc);
    // userName of the logged-in user
    const userName = event.req._userInfo.UserName;
    console.log("userId", userName);

    // The description provided by the user becomes the short_description.
    const requestData = {
        "short_description": shortDesc,
        "caller_id": userName
    };
    const postData = JSON.stringify(requestData);

    // Build the URL from the hostname stored in environment variables. The remaining path is constant.
    const url = "https://" + process.env.SERVICENOW_HOST + ":443/api/now/table/" + incident;

    // Create the incident in ServiceNow and return the event with the ticket information.
    try {
        await fetch(url, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Accept': 'application/json',
                'Authorization': 'Basic ' + Buffer.from(process.env.SERVICENOW_USERNAME + ":" + password).toString('base64'),
                'Content-Length': Buffer.byteLength(postData),
            },
            body: postData
        }).then(response => response.json())
          .then(data => {
              console.log(data);
              const ticketNumber = data.result.number;
              const ticketType = data.result.sys_class_name;
              event.res.message = "Done! I've opened an " + ticketType + " ticket for you in ServiceNow. Your ticket number is: " + ticketNumber + ".";
          });
        return event;
    }
    catch (e) {
        console.error(e);
        return 500;
    }
}

// Retrieve the ServiceNow password from Secrets Manager. The password itself is never logged.
async function getSecretValue() {
    const client = new SecretsManager({
        region: process.env.AWS_REGION
    });
    try {
        const secret = await client.getSecretValue({ SecretId: secret_name });
        return JSON.parse(secret.SecretString).password;
    }
    catch (err) {
        console.log("error", err);
    }
}

This function uses the ServiceNow Incident API. For more information, refer to Create an incident.
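For reference, the request the Lambda function sends can be sketched in Python. The helper below only builds the URL, headers, and JSON body of the Table API call (the host and credentials are placeholders); it does not send anything:

```python
import base64
import json

def build_incident_request(host, username, password, short_description, caller_id):
    """Build the pieces of a ServiceNow Table API POST that creates an incident."""
    url = f"https://{host}/api/now/table/incident"
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": f"Basic {token}",
    }
    body = json.dumps({"short_description": short_description,
                       "caller_id": caller_id})
    return url, headers, body

url, headers, body = build_incident_request(
    "devXXXXXX.service-now.com", "admin", "example-password",
    "Laptop will not boot", "jdoe")
```

On success, ServiceNow responds with a JSON document whose result.number and result.sys_class_name fields supply the ticket number and type that the chatbot reads back to the user.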

  1. Choose Deploy to deploy this code to the $LATEST version of the Lambda function.
  2. On the Configuration tab, in the Environment variables section, add the following:
      • Add SERVICENOW_HOST with the value devXXXXXX.service-now.com.
      • Add SERVICENOW_USERNAME with the value admin.

  3. Copy the Lambda function ARN. You will need it at a later stage.

The next step is to store your ServiceNow user name and password in Secrets Manager.

  1. On the Secrets Manager console, create a new secret.
  2. Select Other type of secret.
  3. Add your key-value pairs as shown and choose Next.

  4. For Secret name, enter a descriptive name (for this post, servicenow/password). If you choose a different name, update the value of const secret_name in the Lambda function code.
  5. Choose Next.
  6. Leave Configure rotation on default and choose Next.
  7. Review the secret information and choose Store.
  8. Copy the ARN of the newly created secret.

Now let’s give Lambda permissions to Secrets Manager.

  1. On the Lambda function page, go to the Configuration tab and navigate to the Permissions section.

  2. Choose the execution role name to open the IAM page for the role.
  3. In the following inline policy, provide the ARN of the secret you created earlier:
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "SecretsManagerRead",
			"Effect": "Allow",
			"Action": [
				"secretsmanager:GetResourcePolicy",
				"secretsmanager:GetSecretValue",
				"secretsmanager:DescribeSecret",
				"secretsmanager:ListSecrets",
				"secretsmanager:ListSecretVersionIds"
			],
			"Resource": "<ARN>"
		}
	]
}
  4. Add the inline policy to the role.

Configure QnABot

In this section, we first create some knowledge questions using the Questions feature of QnABot. We then create a response bot that elicits a response from a user when they ask for help. This bot uses document chaining to call another bot, and triggers Lambda to create a ServiceNow ticket.

For more information about using QnABot with generative AI, refer to Deploy generative AI self-service question answering using the QnABot on AWS solution powered by Amazon Lex with Amazon Kendra, and Amazon Bedrock.

Create knowledge question 1

Create a knowledge question for installing software:

  1. On the AWS CloudFormation console, navigate to the QnABot stack.
  2. On the Outputs tab, open the link for ContentDesignerURL.
  3. Log in to the QnABot Content Designer using admin credentials.
  4. Choose Add to add a new question.
  5. Select qna.
  6. For Item ID, enter software.001.
  7. Under Questions/Utterances, enter the following:
    a. How to install a software
    b. How to install developer tools
    c. can you give me instructions to install software

  8. Under Answer, enter the following answer:
Installing from Self Service does not require any kind of permissions or admin credentials. It will show you software that is available for you, without any additional requests.
1. Click the search icon in the menu at the top. Type Self Service and press Enter.
2. Sign in with your security key credentials.
3. Search for your desired software in the top right corner.
4. Click the Install button.

  9. Expand the Advanced section and enter the same text in Markdown Answer.
  10. Leave the rest as default, and choose Create to save the question.

Create knowledge question 2

Now you create the second knowledge question.

  1. Choose Add to add a new question.
  2. Select qna.
  3. For Item ID, enter knowledge.001.
  4. Under Questions/Utterances, enter Want to learn more about Amazon Lex.
  5. Under Answer, enter the following answer:
### Amazon Lex
Here is a video of Amazon Lex Introduction <iframe width="580" height="327" src="https://www.youtube.com/embed/Q2yJf4bn5fQ" title="Conversational AI powered by Amazon Lex | Amazon Web Services" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
Do you want to learn more about it?<br>
Here are some resources<br>
1. [Introduction to Amazon Lex](https://explore.skillbuilder.aws/learn/course/external/view/elearning/249/introduction-to-amazon-lex)
2. [Building better bots using Amazon Connect](https://explore.skillbuilder.aws/learn/course/external/view/elearning/481/building-better-bots-using-amazon-connect)
3. [Amazon Lex V2 getting started- Streaming APIs](https://aws.amazon.com/blogs/machine-learning/delivering-natural-conversational-experiences-using-amazon-lex-streaming-apis/)

  6. Expand the Advanced section and enter the same answer under Markdown Answer.
  7. Leave the rest as default, and choose Create to save the question.

Create knowledge question 3

Complete the following steps to add another knowledge question:

  1. Choose Add to add a new question.
  2. Select qna.
  3. For Item ID, enter password.reset.
  4. Under Questions/Utterances, enter I need to reset my password.
  5. Under Answer, enter the following answer:
#### Password Reset Instructions
Please follow below instructions to reset your password
1. Please go to AnyTech's IT web page. 
2. Use the Password Reset Tool on the left hand navigation. 
3. In the Password Reset Tool, provide your new password and save. 
4. Once you change your password, please log out of your laptop and login.
<br><br>
**Note**: If you are logged out of your computer, you can ask your manager to reset the password.

  6. Expand the Advanced section and enter the same text for Markdown Answer.
  7. Choose Create to save the question.

Create a response bot

Complete the following steps to create the first response bot, which elicits a response:

  1. Choose Add to add a new question.
  2. Select qna.
  3. For Item ID, enter ElicitResponse.001.
  4. Under Questions/Utterances, enter Please create a ticket.
  5. Under Answer, enter the following answer:
Sure, I can help you with that!! Please give a short description of your problem.

  6. Expand the Advanced section and navigate to the Elicit Response section.
  7. For Elicit Response: ResponseBot Hook, enter QNAFreeText.
  8. For Elicit Response: Response Session Attribute Namespace, enter short_description.

This creates a slot named short_description that captures the user's description of the incident. The slot uses the built-in QNAFreeText response bot, which captures free-form text.

  9. For Document Chaining: Chaining Rule, enter 'QID::item.002'. The value must be in single quotes. Remember this chaining rule; you will use it when creating your document chain.
  10. Leave the rest as default.
  11. Choose Create to save the question.

Create a document chain

Now we create a document chain in QnABot that will trigger the Lambda function to create a ticket and respond with a ticket number. Document chaining allows you to chain two bots based on the rule you configured. Complete the following steps:

  1. Choose Add to add a new question.
  2. Select qna.
  3. For Item ID, enter item.002. This should match the QID value given in the document chain rule earlier.
  4. Under Questions/Utterances, enter servicenow integration.
  5. Under Answer, enter the following answer:
There was an error, please contact system administrator
  6. In the Advanced section, add the Lambda function ARN for Lambda Hook.
  7. Choose Create to save the question.

Test the QnABot

To test the QnABot default client, complete the following steps:

  1. Choose the options menu in the Content Designer and choose QnABot Client.

The QnABot client will open in a new browser tab.

  2. Log in using the newly created user credentials to begin the test.

If you plan to use the Amazon Lex Web UI on a static page, follow these instructions.

  1. Choose the chat icon at the bottom of the page to start the chat.
  2. To log in, choose Login on the menu.

You will be routed to the login page.

  3. Provide the userId created earlier.
  4. For first-time logins, you will be prompted to reset your password.
  5. Now we can test the chatbot with example use cases. For our first use case, we want to learn about Amazon Lex, so we enter the question “I want to learn about Amazon Lex, can you give me some information about it?” QnABot provides a video and some links to resources.
  6. In our next example, we need to install software on our laptop, so we ask “Can you give me instructions to install software?” QnABot understands that the user is requesting help installing software and provides answers from the knowledge bank. You can follow those instructions and install the software you need.
  7. While installing the software, what if you locked your password due to multiple failed login attempts? To request a password reset, you can ask “I need to reset my password.”
  8. You might need additional assistance resetting the password and want to create a ticket. In this case, enter “Please create a ticket.” QnABot asks for a description of the problem; you can enter “reset password.” QnABot creates a ticket with the description provided and returns the ticket number as part of the response.
  9. You can verify the incident ticket was created on the ServiceNow console under Incidents. If the ticket is not shown on the first page, search for the ticket number using the search toolbar.

Clean up

To avoid incurring future charges, delete the resources you created. For instructions to uninstall the QnABot solution plugin, refer to Uninstall the solution.

Conclusion

Integrating QnABot on AWS with ServiceNow provides an end-to-end solution for automated customer support. With QnABot’s conversational AI capabilities to understand customer questions and ServiceNow’s robust incident management features, companies can streamline ticket creation and resolution. You can also extend this solution to show a list of tickets created by the user. For more information about incorporating these techniques into your bots, see QnABot on AWS.


About the Authors

Sujatha Dantuluri is a Senior Solutions Architect in the US federal civilian team at AWS. She has over 20 years of experience supporting commercial and federal government. She works closely with customers in building and architecting mission-critical solutions. She has also contributed to IEEE standards.

Maia Haile is a Solutions Architect at Amazon Web Services based in the Washington, D.C. area. In that role, she helps public sector customers achieve their mission objectives with well-architected solutions on AWS. She has 5 years of experience spanning nonprofit healthcare, media and entertainment, and retail. Her passion is using AI and ML to help public sector customers achieve their business and technical goals.

Read More

Deploy large language models for a healthtech use case on Amazon SageMaker

Deploy large language models for a healthtech use case on Amazon SageMaker

In 2021, the pharmaceutical industry generated $550 billion in US revenue. Pharmaceutical companies sell a variety of different, often novel, drugs on the market, where sometimes unintended but serious adverse events can occur.

These events can be reported anywhere, from hospitals or at home, and must be responsibly and efficiently monitored. Traditional manual processing of adverse events is made challenging by the increasing volume of health data and rising costs. Overall, the cost of pharmacovigilance activities to the healthcare industry is projected to reach $384 billion by 2022. To support overarching pharmacovigilance activities, our pharmaceutical customers want to use the power of machine learning (ML) to automate adverse event detection from various data sources, such as social media feeds, phone calls, emails, and handwritten notes, and trigger appropriate actions.

In this post, we show how to develop an ML-driven solution using Amazon SageMaker for detecting adverse events using the publicly available Adverse Drug Reaction Dataset on Hugging Face. In this solution, we fine-tune a variety of Hugging Face models that were pre-trained on medical data and use the BioBERT model, which was pre-trained on the PubMed dataset and performed the best of those tried.

We implemented the solution using the AWS Cloud Development Kit (AWS CDK). However, we don’t cover the specifics of building the solution in this post. For more information on the implementation of this solution, refer to Build a system for catching adverse events in real-time using Amazon SageMaker and Amazon QuickSight.

This post delves into several key areas, providing a comprehensive exploration of the following topics:

  • The data challenges encountered by AWS Professional Services
  • The landscape and application of large language models (LLMs):
    • Transformers, BERT, and GPT
    • Hugging Face
  • The fine-tuned LLM solution and its components:
    • Data preparation
    • Model training

Data challenge

Data skew is a common problem in classification tasks. You would ideally like a balanced dataset, and this use case is no exception.

We address this skew with generative AI models (Falcon-7B and Falcon-40B), which were prompted to generate event samples based on five examples from the training set in order to increase the semantic diversity and the sample size of labeled adverse events. It's advantageous to use the Falcon models here because, unlike some LLMs on Hugging Face, Falcon publishes the training dataset it used, so you can verify that none of your test set examples are contained in the Falcon training set and avoid data contamination.
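The few-shot prompting described above can be sketched as follows. This is a minimal illustration: the example reports and the prompt wording are invented, not the exact prompt used in the solution.

```python
# Hypothetical five-shot prompt for a Falcon model to generate additional
# Adverse_Event samples. Example texts below are invented for illustration.
few_shot_examples = [
    "Patient developed a severe rash after starting the medication.",
    "He reported dizziness and nausea within hours of the first dose.",
    "The drug caused elevated liver enzymes in week two.",
    "She experienced persistent headaches while on the treatment.",
    "Injection site swelling and fever were observed the next day.",
]

def build_augmentation_prompt(examples):
    """Assemble a few-shot prompt asking the model for one new sample."""
    lines = ["Here are examples of adverse drug event reports:"]
    for i, text in enumerate(examples, start=1):
        lines.append(f"{i}. {text}")
    lines.append("Write one new, different adverse drug event report:")
    return "\n".join(lines)

prompt = build_augmentation_prompt(few_shot_examples)
```

The resulting string would be sent to the Falcon endpoint once per synthetic sample; varying which five training examples are sampled increases the diversity of the generated events.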

The other data challenge for healthcare customers is HIPAA compliance. Encryption at rest and in transit must be incorporated into the solution to meet these requirements.

Transformers, BERT, and GPT

The transformer architecture is a neural network architecture used for natural language processing (NLP) tasks. It was first introduced in the paper “Attention Is All You Need” by Vaswani et al. (2017). The transformer architecture is based on the attention mechanism, which allows the model to learn long-range dependencies between words. Transformers, as laid out in the original paper, consist of two main components: the encoder and the decoder. The encoder takes the input sequence and produces a sequence of hidden states. The decoder then takes these hidden states as input and produces the output sequence. The attention mechanism, used in both the encoder and the decoder, allows the model to attend to specific words in the input sequence when generating the output sequence. This ability to learn long-range dependencies between words is essential for many NLP tasks, such as machine translation and text summarization.
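The scaled dot-product attention at the heart of this architecture can be illustrated with a toy, single-query sketch in plain Python. The dimensions are deliberately tiny and this is not the batched, multi-head form used in real models.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Scores each key against the query, normalizes the scores with
    softmax, and returns the weighted sum of the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# One query attending over three positions; positions 1 and 3 match the
# query equally well, so the output is pulled symmetrically toward both.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [[1.0], [2.0], [3.0]])
```

Because the first and third keys score identically against the query, their attention weights are equal and the output lands exactly midway between their values.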

One of the more popular and useful of the transformer architectures, Bidirectional Encoder Representations from Transformers (BERT), is a language representation model that was introduced in 2018. BERT is trained on sequences where some of the words in a sentence are masked, and it has to fill in those words taking into account both the words before and after the masked words. BERT can be fine-tuned for a variety of NLP tasks, including question answering, natural language inference, and sentiment analysis.

The other popular transformer architecture that has taken the world by storm is Generative Pre-trained Transformer (GPT). The first GPT model was introduced in 2018 by OpenAI. It works by being trained to strictly predict the next word in a sequence, only aware of the context before the word. GPT models are trained on a massive dataset of text and code, and they can be fine-tuned for a range of NLP tasks, including text generation, question answering, and summarization.

In general, BERT is better at tasks that require deeper understanding of the context of words, whereas GPT is better suited for tasks that require generating text.

Hugging Face

Hugging Face is an artificial intelligence company that specializes in NLP. It provides a platform with tools and resources that enable developers to build, train, and deploy ML models focused on NLP tasks. One of the key offerings of Hugging Face is its library, Transformers, which includes pre-trained models that can be fine-tuned for various language tasks such as text classification, translation, summarization, and question answering.

Hugging Face integrates seamlessly with SageMaker, which is a fully managed service that enables developers and data scientists to build, train, and deploy ML models at scale. This synergy benefits users by providing a robust and scalable infrastructure to handle NLP tasks with the state-of-the-art models that Hugging Face offers, combined with the powerful and flexible ML services from AWS. You can also access Hugging Face models directly from Amazon SageMaker JumpStart, making it convenient to start with pre-built solutions.

Solution overview

We used the Hugging Face Transformers library to fine-tune transformer models on SageMaker for the task of adverse event classification. The training job is built using the SageMaker PyTorch estimator. SageMaker JumpStart also has some complementary integrations with Hugging Face that make it straightforward to implement. In this section, we describe the major steps involved in data preparation and model training.

Data preparation

We used the Adverse Drug Reaction Data (ade_corpus_v2) within the Hugging Face dataset with an 80/20 training/test split. The required data structure for our model training and inference has two columns:

  • One column for text content as model input data.
  • Another column for the label class. We have two possible classes for a text: Not_AE and Adverse_Event.
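A minimal sketch of that two-column structure and the 80/20 split, using invented stand-in sentences rather than the real ade_corpus_v2 rows:

```python
import random

# Toy stand-in for the dataset: (text, label) pairs with the two classes
# described above. The sentences are invented for illustration.
rows = [
    ("Patient developed hives after the dose.", "Adverse_Event"),
    ("The tablet dissolves quickly in water.", "Not_AE"),
] * 50  # 100 rows total

random.seed(42)
random.shuffle(rows)

# 80/20 training/test split
split = int(0.8 * len(rows))
train, test = rows[:split], rows[split:]
```

In practice the split is done on the Hugging Face dataset object itself, but the resulting structure handed to the model is the same: one text column and one label column.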

Model training and experimentation

In order to efficiently explore the space of possible Hugging Face models to fine-tune on our combined data of adverse events, we constructed a SageMaker hyperparameter optimization (HPO) job and passed in different Hugging Face models as a hyperparameter, along with other important hyperparameters such as training batch size, sequence length, and learning rate. The training jobs used an ml.p3dn.24xlarge instance and took an average of 30 minutes per job with that instance type. Training metrics were captured through the Amazon SageMaker Experiments tool, and each training job ran through 10 epochs.

We specify the following in our code:

  • Training batch size – Number of samples that are processed together before the model weights are updated
  • Sequence length – Maximum length of the input sequence that BERT can process
  • Learning rate – How quickly the model updates its weights during training
  • Models – Hugging Face pretrained models
# we use the Hyperparameter Tuner
import sagemaker
from sagemaker.tuner import ContinuousParameter, CategoricalParameter

tuning_job_name = 'ade-hpo'

# Define exploration boundaries
hyperparameter_ranges = {
    'learning_rate': ContinuousParameter(5e-6, 5e-4),
    'max_seq_length': CategoricalParameter(['16', '32', '64', '128', '256']),
    'train_batch_size': CategoricalParameter(['16', '32', '64', '128', '256']),
    'model_name': CategoricalParameter([
        "emilyalsentzer/Bio_ClinicalBERT",
        "dmis-lab/biobert-base-cased-v1.2",
        "monologg/biobert_v1.1_pubmed",
        "pritamdeka/BioBert-PubMed200kRCT",
        "saidhr20/pubmed-biobert-text-classification"
    ])
}

# create Optimizer
Optimizer = sagemaker.tuner.HyperparameterTuner(
    estimator=bert_estimator,
    hyperparameter_ranges=hyperparameter_ranges,
    base_tuning_job_name=tuning_job_name,
    objective_type='Maximize',
    objective_metric_name='f1',
    metric_definitions=[
        {'Name': 'f1',
         'Regex': "f1: ([0-9.]+).*$"}],
    max_jobs=40,
    max_parallel_jobs=4,
)

Optimizer.fit({'training': inputs_data}, wait=False)
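The tuner discovers the objective metric by applying the configured Regex to each training job's logs. A local sketch of how that pattern behaves (the log line here is made up for illustration):

```python
import re

# Same pattern as the tuner's metric definition; SageMaker applies it to
# every line of the training job's logs and keeps the captured group.
pattern = re.compile(r"f1: ([0-9.]+).*$")

log_line = "epoch 10/10 - loss: 0.0312 - f1: 0.929 - took 180s"
match = pattern.search(log_line)
f1 = float(match.group(1))  # -> 0.929
```

If the training script never prints a line matching this pattern, the tuner has no objective value for the job, so it is worth testing the regex against real log output before launching 40 jobs.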

Results

The model that performed the best in our use case was the monologg/biobert_v1.1_pubmed model hosted on Hugging Face, a version of the BERT architecture that has been pre-trained on the PubMed dataset, which consists of 19,717 scientific publications. Pre-training BERT on this dataset gives the model extra expertise in identifying context around medically related scientific terms. This boosts the model's performance on the adverse event detection task because it has been pre-trained on the medically specific syntax that shows up often in our dataset.

The following table summarizes our evaluation metrics.

| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Base BERT | 0.87 | 0.95 | 0.91 |
| BioBERT | 0.89 | 0.95 | 0.92 |
| BioBERT with HPO | 0.89 | 0.96 | 0.929 |
| BioBERT with HPO and synthetically generated adverse events | 0.90 | 0.96 | 0.933 |
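The F1 column is the harmonic mean of precision and recall; a quick sanity check against the Base BERT row of the table:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduce the Base BERT row: precision 0.87, recall 0.95
base_bert_f1 = round(f1_score(0.87, 0.95), 2)  # -> 0.91
```

The same formula reproduces the other rows, which is why a small recall gain (0.95 to 0.96) moves F1 only by a few thousandths.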

Although these are relatively small, incremental improvements over the base BERT model, they nevertheless demonstrate some viable strategies for improving model performance. Synthetic data generation with Falcon holds particular promise for further gains, especially as these generative AI models get better over time.

Clean up

To avoid incurring future charges, delete the resources you created, such as the model and its endpoint, with the following code:

# Delete resources
model_predictor.delete_model()
model_predictor.delete_endpoint()

Conclusion

Many pharmaceutical companies today would like to automate the process of identifying adverse events from their customer interactions in a systematic way in order to help improve customer safety and outcomes. As we showed in this post, the fine-tuned LLM BioBERT with synthetically generated adverse events added to the data classifies the adverse events with high F1 scores and can be used to build a HIPAA-compliant solution for our customers.

As always, AWS welcomes your feedback. Please leave your thoughts and questions in the comments section.


About the authors

Zack Peterson is a data scientist in AWS Professional Services. He has been hands on delivering machine learning solutions to customers for many years and has a master’s degree in Economics.

Dr. Adewale Akinfaderin is a senior data scientist in Healthcare and Life Sciences at AWS. His expertise is in reproducible and end-to-end AI/ML methods, practical implementations, and helping global healthcare customers formulate and develop scalable solutions to interdisciplinary problems. He has two graduate degrees in Physics and a doctorate degree in Engineering.

Ekta Walia Bhullar, PhD, is a senior AI/ML consultant with the AWS Healthcare and Life Sciences (HCLS) Professional Services business unit. She has extensive experience in the application of AI/ML within the healthcare domain, especially in radiology. Outside of work, when not discussing AI in radiology, she likes to run and hike.

Han Man is a Senior Data Science & Machine Learning Manager with AWS Professional Services based in San Diego, CA. He has a PhD in Engineering from Northwestern University and has several years of experience as a management consultant advising clients in manufacturing, financial services, and energy. Today, he is passionately working with key customers from a variety of industry verticals to develop and implement ML and generative AI solutions on AWS.

Read More

Twitch Streamer Mr_Vudoo Supercharges Gaming, Entertaining and Video Editing With RTX This Week ‘In the NVIDIA Studio’

Twitch Streamer Mr_Vudoo Supercharges Gaming, Entertaining and Video Editing With RTX This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Mr_Vudoo is a digital renaissance man — a livestreamer, video editor, gamer and entertainer skilled in producing an array of content for his audience.

This week’s featured artist In the NVIDIA Studio, Mr_Vudoo recently acquired a GeForce RTX 4080 SUPER graphics card, which helps creators like him take their content to the next level. (Read more about the 4080 SUPER below.)

There’s no better place for creative types to connect with others and explore what’s next in AI and accelerated computing than GTC, which is back in-person, from March 18-21 in San Jose.

From the keynote by NVIDIA founder and CEO Jensen Huang to hundreds of sessions, exhibits and networking events, GTC delivers something for every technical level and interest area, including sessions on how to power content creation using OpenUSD and generative AI. GTC registration is open for virtual or in-person attendance.

Join sessions like “What’s Next in Generative AI” in person or virtually.

In other NVIDIA Studio news, Topaz Labs, a company that delivers AI-powered photo and video enhancement software, recently adopted NVIDIA TensorRT acceleration for its new Remove Tool. It uses AI to replace unwanted objects in an image with a context-aware background. The tool expedites photo editing workflows, delivering 2.4x faster processing on a GeForce RTX 4090 GPU compared with an Apple MacBook M3 Max.

Topaz Labs’ TensorRT-powered Remove Tool removes unwanted objects with just a click.

Mr_Vudoo Taps RTX for a Livestreaming Upgrade

Mr_Vudoo’s Twitch channel is known for its variety of unique content. In his Trading Card Games series, Mr_Vudoo opens trading card packs and competes at one of the highest levels live on stream. In his Gameplay series, he goes back to his original love for gaming, streaming both multiplayer online games and third-person shooters. And in his In Real Life series, he breaks the norm of traditional streaming, bringing his viewers outside to share his everyday life experiences.

It takes significant computing power to bring these series to life. Mr_Vudoo’s GeForce RTX 4080 SUPER features the eighth-generation NVIDIA NVENC — an independent component for encoding video that frees up the system to run games or tackle other compute-intensive tasks. Using it, Mr_Vudoo can achieve a more seamless streaming experience.

“It’s a revelation to stream with high-quality settings and get almost no performance loss in games,” he said.

Mr_Vudoo can also join in the new Twitch Enhanced Broadcasting beta, powered by GeForce RTX GPUs and NVENC, to broadcast up to three resolutions simultaneously at up to 1080p. This eliminates the need to trade off resolution for stream reliability.

In the coming months, Enhanced Broadcasting beta testers will be able to experiment with higher input bit rates, up to 4K resolutions, support for up to five concurrent streams and new codecs.

Mr_Vudoo uses a number of AI-enabled features in the video and photo editing part of his creative workflow. With RTX acceleration, he can add a video camera effect with a timed zoom and a film strip transition in real time without having to render the entire project.


“Adding multiple effects on a clip without affecting the preview of the video is a massive time-saver,” he said.

DaVinci Resolve has several RTX-accelerated AI features that can boost content creation, offering tools to smooth slow-motion effects or provide seamless video super resolution. These features are available to all RTX GPU owners.

“GeForce RTX graphics cards are the best GPUs for video editors to use, as they can render tasks much faster, allowing us to become more efficient with our work.” — Mr_Vudoo

Mr_Vudoo can quickly export files with the RTX 4080 SUPER’s dual encoders, which work in tandem to slash export times nearly in half.

In post-production, Mr_Vudoo uses Adobe Photoshop’s AI-powered subject selection tool to quickly isolate objects in an image, instead of having to manually crop them out, speeding his work.


Mr_Vudoo also taps the free NVIDIA Broadcast app to boost his productivity.

“I’ve utilized the video noise removal and background replacement the most,” he said. “The eye contact feature was very interesting and quite honestly took me by surprise at how well it worked.”

AI has become an irreplaceable part of Mr_Vudoo’s content creation process, helping him quickly and effortlessly produce his best work. Catch him on Twitch.


RTX 4080 SUPER Brings Super Performance

GeForce RTX 4080 SUPER graphics cards are changing the content creation game.

Generative AI apps like Adobe Photoshop can take advantage of the GPU’s Tensor Cores to speed productivity and creative workflows. With the 4080 SUPER, 3D apps like Blender can run up to 70% faster than on previous-generation graphics cards. And video editing apps like Blackmagic Design’s DaVinci Resolve can accelerate AI effects over 30% faster than with the GeForce RTX 3080 Ti.

For gamers, the RTX 4080 SUPER enables greater immersion, with fully ray-traced visuals and the ability to run all settings at max. It delivers twice the speed of the RTX 3080 Ti, up to 836 trillion operations per second, in the most graphically intensive games with DLSS Frame Generation.

Get creative and AI superpowers with the GeForce RTX 4080 SUPER.

Since its release, GeForce RTX 4080 SUPER Series GPUs have been put to the test in creating, gaming and other AI-powered tasks. Here’s what some reviewers had to say:

  • “Jumping over to creative professionals and content creators, the 4080 Super also provides nice performance gains over the standard GeForce RTX 4080 in applications like Blender, Maya, After Effects, DaVinci Resolve and more. This means users can take full advantage of what the NVIDIA 4080 SUPER offers in much more than just gaming and can push the software they use for streaming, video creation, audio and 3D creation to get the most out of their PC.” – CG Magazine
  • “Features like NVIDIA Broadcast and Reflex hold deep practical appeal; RTX Video Super Resolution uses AI to make ugly videos beautiful. And NVIDIA maintains a strong lead in most creative and machine learning/AI workloads if you like to put your GPU to work when you’re not playing — witness the dual AV1 encoders in the 4080 SUPER” — PC World  
  • “Blender can make use of the RT cores on NVIDIA’s GPUs through the OptiX ray tracing rendering engine, and as a result, performance is much higher than any competing GPU in a similar class. The GeForce RTX 4080 SUPER notches another victory over its namesake, and dang the RTX 4090 is a beast.” – Hot Hardware
  • “In terms of creative performance, the RTX 4080 SUPER walks away the winner against the RX 7900 XTX, even if you don’t factor in the fact that Blender Benchmark 4.0.0 workloads wouldn’t even run on the RX 7900 XTX (though the RX 7900 XT was able to run them, just not nearly as well).” – Tech Radar
  • “And when you throw in RTX technologies like DLSS into the mix, NVIDIA’s superior AV1 encoding quality, content-creator-friendly features, and performance, plus generative AI capabilities – and there’s a lot more to the story here than pure 4K gaming EXPERT-ise.” — TweakTown

Discover what RTX 4080 SUPER Series graphics cards and systems are available.

Read More

PyTorch 2 paper and tutorial @ ASPLOS 2024

The PyTorch team is excited to share that our paper on PyTorch 2 has been accepted for presentation at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), scheduled to take place from April 27 to May 1, 2024, in San Diego, CA, USA.

The paper delves into the implementation of torch.compile and highlights the key technologies driving it, including TorchDynamo (graph capture), TorchInductor (backend compiler), and Dynamic Shape support.

During the ASPLOS conference, we’ll be conducting a tutorial on Saturday, April 27, focusing on the inner workings of PyTorch 2 and how systems researchers can leverage and build upon it. Stay tuned for more details as the event approaches – we look forward to your participation!

A preview of the paper is attached below:

Title: PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation. Full Paper PDF

Abstract

This paper introduces two extensions to the popular PyTorch machine learning framework, TorchDynamo and TorchInductor, which implement the torch.compile feature released in PyTorch 2. TorchDynamo is a Python-level just-in-time (JIT) compiler that enables graph compilation in PyTorch programs without sacrificing the flexibility of Python. It achieves this by dynamically modifying Python bytecode before execution and extracting sequences of PyTorch operations into an FX graph, which is then JIT compiled using one of many extensible backends. TorchInductor is the default compiler backend for TorchDynamo, which translates PyTorch programs into OpenAI’s Triton for GPUs and C++ for CPUs. Results show that TorchDynamo is able to capture graphs more robustly than prior approaches while adding minimal overhead, and TorchInductor is able to provide a 2.27x inference and 1.41x training geometric mean speedup on an NVIDIA A100 GPU across 180+ real-world models, which outperforms six other compilers. These extensions provide a new way to apply optimizations through compilers in eager mode frameworks like PyTorch.

Authors

Jason Ansel (Meta); Edward Yang (Meta); Horace He (Meta); Natalia Gimelshein (OpenAI); Animesh Jain (Meta); Michael Voznesensky (Meta); Bin Bao (Meta); David Berard (Meta); Geeta Chauhan (Meta); Anjali Chourdia (Meta); Will Constable (Meta); Alban Desmaison (Meta); Zachary DeVito (Meta); Elias Ellison (Meta); Will Feng (Meta); Jiong Gong (Intel); Michael Gschwind (Meta); Brian Hirsh (Meta); Sherlock Huang (Meta); Laurent Kirsch (Meta); Michael Lazos (Meta); Yanbo Liang (Meta); Jason Liang (Meta); Yinghai Lu (Meta); CK Luk (Meta); Bert Maher (Meta); Yunjie Pan (University of Michigan); Christian Puhrsch (Meta); Matthias Reso (Meta); Mark Saroufim (Meta); Helen Suk (Meta); Michael Suo (Meta); Phil Tillet (OpenAI); Eikan Wang (Intel); Xiaodong Wang (Meta); William Wen (Meta); Shunting Zhang (Meta); Xu Zhao (Meta); Keren Zhou (OpenAI & George Mason University); Richard Zou (Meta); Ajit Mathews (Meta); Gregory Chanan (Meta); Peng Wu (Meta); Soumith Chintala (Meta)

Announcing support for Llama 2 and Mistral models and streaming responses in Amazon SageMaker Canvas

Launched in 2021, Amazon SageMaker Canvas is a visual, point-and-click service for building and deploying machine learning (ML) models without the need to write any code. Ready-to-use Foundation Models (FMs) available in SageMaker Canvas enable customers to use generative AI for tasks such as content generation and summarization.

We are thrilled to announce the latest updates to Amazon SageMaker Canvas, which bring exciting new generative AI capabilities to the platform. With support for Meta Llama 2 and Mistral.AI models and the launch of streaming responses, SageMaker Canvas continues to empower everyone who wants to get started with generative AI without writing a single line of code. In this post, we discuss these updates and their benefits.

Introducing Meta Llama 2 and Mistral models

Llama 2 is a cutting-edge foundation model by Meta that offers improved scalability and versatility for a wide range of generative AI tasks. Users have reported that Llama 2 is capable of engaging in meaningful and coherent conversations, generating new content, and extracting answers from existing notes. Llama 2 is among the state-of-the-art large language models (LLMs) available today for the open source community to build their own AI-powered applications.

Mistral.AI, a leading French AI start-up, has developed Mistral 7B, a powerful language model with 7.3 billion parameters. Mistral models have been very well received by the open-source community thanks to their use of grouped-query attention (GQA) for faster inference, making them highly efficient and able to perform comparably to models with two or three times the number of parameters.

Today, we are excited to announce that SageMaker Canvas now supports three Llama 2 model variants and two Mistral 7B variants:

To test these models, navigate to the SageMaker Canvas Ready-to-use models page, then choose Generate, extract and summarize content. This is where you’ll find the SageMaker Canvas GenAI chat experience. Here, you can use any model from Amazon Bedrock or SageMaker JumpStart by selecting it from the model drop-down menu.

In our case, we choose one of the Llama 2 models. You can now provide your input or query; when you send it, SageMaker Canvas forwards the input to the model.

Choosing which of the models available in SageMaker Canvas best fits your use case requires you to consider information about the models themselves: the Llama-2-70B-chat model is larger (70 billion parameters, compared to 13 billion for Llama-2-13B-chat), which means its quality is generally higher than the smaller model’s, at the cost of slightly higher latency and an increased cost per token. Mistral-7B performs comparably to Llama-2-7B or Llama-2-13B, but it is hosted on Amazon SageMaker. This means the pricing model is different, moving from a dollar-per-token model to a dollar-per-hour model. The per-hour model can be more cost-effective with a significant number of requests per hour and consistent usage at scale. All of the models above can perform well on a variety of use cases, so our suggestion is to evaluate which model best solves your problem, considering output quality, throughput, and cost trade-offs.
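The per-token versus per-hour trade-off comes down to a break-even point. The sketch below works that out with made-up placeholder prices (these are not actual AWS prices, and the numbers are purely illustrative):

```python
# Hypothetical break-even sketch for per-token vs. per-hour pricing.
# All prices below are made-up placeholders, not actual AWS pricing.

price_per_1k_tokens = 0.002      # hypothetical dollar-per-token pricing
price_per_hour = 1.50            # hypothetical dollar-per-hour hosted endpoint
avg_tokens_per_request = 500

def hourly_cost_per_token_model(requests_per_hour):
    """Hourly cost when billed per token."""
    tokens = requests_per_hour * avg_tokens_per_request
    return tokens / 1000 * price_per_1k_tokens

def break_even_requests_per_hour():
    """Requests/hour above which per-hour hosting becomes cheaper."""
    cost_per_request = avg_tokens_per_request / 1000 * price_per_1k_tokens
    return price_per_hour / cost_per_request
```

With these placeholder numbers, per-hour hosting wins above 1,500 requests per hour; with your own workload, plug in the actual published prices and your observed request sizes.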

If you’re looking for a straightforward way to compare how models behave, SageMaker Canvas natively provides this capability in the form of model comparisons. You can select up to three different models and send the same query to all of them at once. SageMaker Canvas then gets the responses from each model and shows them in a side-by-side chat UI. To do this, choose Compare and select the other models to compare against, as shown below:

Introducing response streaming: Real-time interactions and enhanced performance

One of the key advancements in this release is the introduction of streamed responses. Streaming provides a richer, more chat-like experience: users receive instant feedback as the response is generated, which creates a more natural conversation flow and improves responsiveness and overall user satisfaction in chatbot applications.

With this feature, you can now interact with your AI models in real time, receiving instant responses and enabling seamless integration into a variety of applications and workflows. All models that can be queried in SageMaker Canvas—from Amazon Bedrock and SageMaker JumpStart—can stream responses to the user.
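The client-side pattern for streaming is to render tokens as they arrive instead of waiting for the full completion. The sketch below illustrates that pattern with a stand-in generator; it is not the actual SageMaker Canvas or Amazon Bedrock client interface.

```python
# Illustrative sketch of consuming a streamed response. The generator below
# stands in for a real streaming endpoint; names here are hypothetical.

def fake_streaming_model(prompt):
    """Yields the response one token at a time, as a streaming endpoint would."""
    for token in ["Streaming", " keeps", " the", " chat", " responsive."]:
        yield token

def render_stream(prompt):
    """Accumulate tokens as they arrive; a real UI would re-render per token."""
    partial = ""
    for token in fake_streaming_model(prompt):
        partial += token
    return partial

print(render_stream("Why stream responses?"))
# -> Streaming keeps the chat responsive.
```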

Get started today

Whether you’re building a chatbot, recommendation system, or virtual assistant, the Llama 2 and Mistral models combined with streamed responses bring enhanced performance and interactivity to your projects.

To use the latest features of SageMaker Canvas, make sure to delete and recreate the app. To do that, log out from the app by choosing Log out, then open SageMaker Canvas again. You should see the new models and enjoy the latest releases. Logging out of the SageMaker Canvas application will release all resources used by the workspace instance, therefore avoiding incurring additional unintended charges.

Conclusion

To get started with the new streamed responses for the Llama 2 and Mistral models in SageMaker Canvas, visit the SageMaker console and explore the intuitive interface. To learn more about how SageMaker Canvas and generative AI can help you achieve your business goals, refer to Empower your business users to extract insights from company documents using Amazon SageMaker Canvas and Generative AI and Overcoming common contact center challenges with generative AI and Amazon SageMaker Canvas.

If you want to learn more about SageMaker Canvas features and deep dive on other ML use cases, check out the other posts available in the SageMaker Canvas category of the AWS ML Blog. We can’t wait to see the amazing AI applications you will create with these new capabilities!


About the authors

Davide Gallitelli is a Senior Specialist Solutions Architect for AI/ML. He is based in Brussels and works closely with customers around the globe who are looking to adopt low-code/no-code machine learning technologies and generative AI. He has been a developer since he was very young, starting to code at the age of 7. He started learning AI/ML at university and has been in love with it ever since.

Dan Sinnreich is a Senior Product Manager at AWS, helping to democratize low-code/no-code machine learning. Previous to AWS, Dan built and commercialized enterprise SaaS platforms and time-series models used by institutional investors to manage risk and construct optimal portfolios. Outside of work, he can be found playing hockey, scuba diving, and reading science fiction.

How HSR.health is limiting risks of disease spillover from animals to humans using Amazon SageMaker geospatial capabilities

This is a guest post co-authored by Ajay K Gupta, Jean Felipe Teotonio and Paul A Churchyard from HSR.health.

HSR.health is a geospatial health risk analytics firm whose vision is that global health challenges are solvable through human ingenuity and the focused and accurate application of data analytics. In this post, we present one approach for zoonotic disease prevention that uses Amazon SageMaker geospatial capabilities to create a tool that provides more accurate disease spread information to health scientists to help them save more lives, quicker.

Zoonotic diseases affect both animals and humans. The transition of a disease from animal to human, known as spillover, is a phenomenon that continually occurs on our planet. According to health organizations such as the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), a spillover event at a wet market in Wuhan, China most likely caused the coronavirus disease 2019 (COVID-19). Studies suggest that a virus found in fruit bats underwent significant mutations, allowing it to infect humans. The initial patient, or ‘patient zero’, for COVID-19 probably started a local outbreak that eventually spread internationally. HSR.health’s Zoonotic Spillover Risk Index aims to help identify such early outbreaks before they cross international borders and lead to widespread global impact.

The main weapon public health has against the propagation of regional outbreaks is disease surveillance: an entire interlocking system of disease reporting, investigation, and data communication between different levels of a public health system. This system is dependent not only on human factors, but also on technology and resources to collect disease data, analyze patterns, and create a consistent and continuous stream of data transfer from local to regional to central health authorities.

The speed at which COVID-19 went from a local outbreak to a global disease present on every continent is a sobering reminder of the dire need to harness innovative technology to build more efficient and accurate disease surveillance systems.

The risk of zoonotic disease spillover is sharply correlated with multiple social, environmental, and geographic factors that influence how often human beings interact with wildlife. HSR.health’s Zoonotic Disease Spillover Risk Index uses over 20 distinct geographic, social, and environmental factors historically known to affect the risk of human-wildlife interaction and therefore zoonotic disease spillover risk. Many of these factors can be mapped through a combination of satellite imagery and remote sensing.

In this post, we explore how HSR.health uses SageMaker geospatial capabilities to retrieve relevant features from satellite imagery and remote sensing for developing the risk index. SageMaker geospatial capabilities make it easy for data scientists and machine learning (ML) engineers to build, train, and deploy models using geospatial data. With SageMaker geospatial capabilities, you can efficiently transform or enrich large-scale geospatial datasets, accelerate model building with pre-trained ML models, and explore model predictions and geospatial data on an interactive map using 3D accelerated graphics and built-in visualization tools.

Using ML and geospatial data for risk mitigation

ML is highly effective for anomaly detection on spatial or temporal data due to its ability to learn from data without being explicitly programmed to identify specific types of anomalies. Spatial data, which relates to the physical position and shape of objects, often contains complex patterns and relationships that may be difficult for traditional algorithms to analyze.

Incorporating ML with geospatial data enhances the capability to detect anomalies and unusual patterns systematically, which is essential for early warning systems. These systems are crucial in fields such as environmental monitoring, disaster management, and security. Predictive modeling using historical geospatial data allows organizations to identify and prepare for potential future events. These events range from natural disasters and traffic disruptions to, as this post discusses, disease outbreaks.
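As a minimal illustration of anomaly detection on gridded geospatial data, the sketch below flags grid cells whose values deviate sharply from the rest using a simple z-score threshold. Real early-warning systems use far richer models; this is only a toy example with synthetic data.

```python
# Minimal z-score anomaly detection on a gridded geospatial layer.
# Illustrative only; synthetic data, not HSR.health's actual method.
import numpy as np

def find_anomalies(grid, threshold=3.0):
    """Return (row, col) cells deviating more than `threshold`
    standard deviations from the grid mean."""
    mean, std = grid.mean(), grid.std()
    z = (grid - mean) / std
    return [(int(r), int(c)) for r, c in zip(*np.where(np.abs(z) > threshold))]

# A mostly uniform 'vegetation' grid with one sharply degraded cell.
grid = np.full((10, 10), 0.8)
grid[4, 7] = 0.1   # anomalous drop, e.g. sudden loss of vegetation
print(find_anomalies(grid))  # -> [(4, 7)]
```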

Detecting zoonotic spillover risks

To predict zoonotic spillover risks, HSR.health has adopted a multimodal approach. By using a blend of data types—including environmental, biogeographical, and epidemiological information—this method enables a comprehensive assessment of disease dynamics. Such a multifaceted perspective is critical for developing proactive measures and enabling a rapid response to outbreaks.

The approach includes the following components:

  • Disease and outbreak data – HSR.health uses the extensive disease and outbreak data provided by Gideon and the World Health Organization (WHO), two trusted sources of global epidemiological information. This data serves as a fundamental pillar in the analytics framework. For Gideon, the data can be accessed through an API, and for the WHO, HSR.health has built a large language model (LLM) to mine outbreak data from past disease outbreak reports.
  • Earth observation data – Environmental factors, land use analysis and detection of habitat changes are integral components to assessing zoonotic risk. These insights can be derived from satellite-based earth observation data. HSR.health is able to streamline the use of earth observation data by using SageMaker geospatial capabilities to access and manipulate large-scale geospatial datasets. SageMaker geospatial offers a rich data catalog, including datasets from USGS Landsat-8, Sentinel-1, Sentinel-2, and others. It is also possible to bring in other datasets, such as high-resolution imagery from Planet Labs.
  • Social determinants of risk – Beyond biological and environmental factors, the team at HSR.health also considered social determinants, which encompass various socioeconomic and demographic indicators, and play a pivotal role in shaping zoonotic spillover dynamics.

From these components, HSR.health evaluated a range of different factors, and the following features have been identified as influential for identifying zoonotic spillover risks:

  • Animal habitats and habitable zones – Understanding the habitats of potential zoonotic hosts and their habitable zones is fundamental to assessing transmission risk.
  • Population centers – Proximity to densely populated areas is a key consideration because it influences the likelihood of human-animal interactions.
  • Loss of habitat – The degradation of natural habitats, particularly through deforestation, can accelerate zoonotic spillover events.
  • Human-wildland interface – Areas where human settlements intersect with wildlife habitats are potential hotspots for zoonotic transmission.
  • Social characteristics – Socioeconomic and cultural factors can significantly impact zoonotic risk, and HSR.health examines these as well.
  • Human health characteristics – The health status of local human populations is an essential variable because it affects susceptibility and transmission dynamics.

Solution overview

HSR.health’s workflow encompasses data preprocessing, feature extraction, and the creation of informative visualizations using ML techniques. This allows for a clear understanding of the data’s evolution from its raw form to actionable insights.

The following is a visual representation of the workflow, starting with input data from Gideon, earth observation data, and social determinant of risk data.

Solution overview

Retrieve and process satellite imagery using SageMaker geospatial capabilities

Satellite data forms a cornerstone of the analysis performed to build the risk index, providing critical information on environmental changes. To generate insights from satellite imagery, HSR.health uses Earth Observation Jobs (EOJs). EOJs enable the acquisition and transformation of raster data gathered from the Earth’s surface. An EOJ obtains satellite imagery from a designated data source—for instance, a satellite constellation—over a specific area and time period. It then applies one or more models to the retrieved images.

Additionally, Amazon SageMaker Studio offers a geospatial notebook pre-installed with commonly-used geospatial libraries. This notebook enables direct visualization and processing of geospatial data within a Python notebook environment. EOJs can be created in the geospatial notebook environment.

To configure an EOJ, the following parameters are used:

  • InputConfig – The input configuration specifies the data sources and the filtering criteria to be used during data acquisition:
    • RasterDataCollectionArn – Specifies the satellite from which to collect data.
    • AreaOfInterest – The geographical area of interest (AOI) defines the polygon boundaries for image collection.
    • TimeRangeFilter – The time range of interest: {StartTime: <string>, EndTime: <string>}.
    • PropertyFilters – Additional property filters, such as acceptable percentage of cloud coverage or desired sun azimuth angles.
  • JobConfig – This configuration defines the type of job to be applied to the retrieved satellite image data. It supports operations such as band math, resampling, geomosaic or cloud removal.

The following example code demonstrates running an EOJ for cloud removal, representative of the steps performed by HSR.health:

eoj_input_config = {
    "RasterDataCollectionQuery": {
        "RasterDataCollectionArn": "arn:aws:sagemaker-geospatial:us-west-2:378778860802:raster-data-collection/public/nmqj48dcu3g7ayw8",
        "AreaOfInterest": {
            "AreaOfInterestGeometry": {
                "PolygonGeometry": {
                    "Coordinates": [
                        [
                            [-76.23240119828894, -6.268815697653608],
                            [-76.23240119828894, -6.339419992332921],
                            [-76.13834453776985, -6.339419992332921],
                            [-76.13834453776985, -6.268815697653608],
                            [-76.23240119828894, -6.268815697653608]
                        ]
                    ]
                }
            }
        },
        "TimeRangeFilter": {
            "StartTime": "2022-03-01T00:00:00Z",
            "EndTime": "2022-06-30T23:59:59Z",
        },
        "PropertyFilters": {
            "Properties": [{"Property": {"EoCloudCover": {"LowerBound": 0.0, "UpperBound": 2.0}}}],
            "LogicalOperator": "AND",
        },
    }
}
eoj_job_config = {
    "CloudRemovalConfig": {
        "AlgorithmName": "INTERPOLATION",
        "InterpolationValue": "-9999",
        "TargetBands": ["red", "green", "blue", "nir", "swir16"],
    }
}

eoj = geospatial_client.start_earth_observation_job(
    Name="eoj-analysis-loreto",
    InputConfig=eoj_input_config,
    JobConfig=eoj_job_config,
    ExecutionRoleArn=execution_role,
)

HSR.health used several operations to preprocess the data and extract relevant features, such as land cover classification, mapping temperature variation, and computing vegetation indices.

One vegetation index relevant for indicating vegetation health is the Normalized Difference Vegetation Index (NDVI). The NDVI quantifies vegetation health by using near-infrared light, which vegetation reflects, and red light, which vegetation absorbs. Monitoring the NDVI over time can reveal changes in vegetation, such as the impact of human activities like deforestation.
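The NDVI formula itself can be sketched directly with NumPy. The band values below are arbitrary sample reflectances, not real satellite data:

```python
# NDVI from near-infrared (nir) and red band reflectances, sketched in NumPy.
# Values range from -1 to 1; healthy vegetation scores high because it
# reflects NIR strongly and absorbs red light. Sample values are arbitrary.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (nir - red) / (nir + red)."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    return (nir - red) / (nir + red + eps)  # eps guards against division by zero

nir = np.array([[0.8, 0.6], [0.3, 0.1]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
print(ndvi(nir, red).round(2))
# dense vegetation -> ~0.78; bare ground or water -> ~0.0
```

The band math EOJ above performs the same per-pixel computation, expressed as the equation string "(nir - red) / (nir + red)".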

The following code snippet demonstrates how to calculate a vegetation index like the NDVI based on the data that has been passed through cloud removal:

eoj_input_config = {
    "PreviousEarthObservationJobArn": eoj["Arn"]
}
eoj_job_config = {
  "BandMathConfig": {
    "CustomIndices": {
        "Operations": [
            {
                "Equation": "(nir - red) / (nir + red)",
                "Name": "ndvi",
                "OutputType": "FLOAT32"
            }
        ]
    }
  }
}
eoj = geospatial_client.start_earth_observation_job(
    Name="eoj-vi-ndvi",
    InputConfig=eoj_input_config,
    JobConfig=eoj_job_config,
    ExecutionRoleArn=execution_role,
)

EOJ visualization

We can visualize the job output using SageMaker geospatial capabilities. SageMaker geospatial capabilities can help you overlay model predictions on a base map and provide layered visualization to make collaboration easier. With the GPU-powered interactive visualizer and Python notebooks, it’s possible to explore millions of data points in one view, facilitating the collaborative exploration of insights and results.

The steps outlined in this post demonstrate just one of the many raster-based features that HSR.health has extracted to create the risk index.

Combining raster-based features with health and social data

After extracting the relevant features in raster format, HSR.health used zonal statistics to aggregate the raster data within the administrative boundary polygons to which the social and health data are assigned. The analysis incorporates a combination of raster and vector geospatial data. This kind of aggregation allows for the management of raster data in a geodataframe, which facilitates its integration with the health and social data to produce the final risk index.

The following code snippet demonstrates how to aggregate raster data to administrative vector boundaries:

import geopandas as gp
import numpy as np
import pandas as pd
import rasterio
from rasterstats import zonal_stats

def get_proportions(inRaster, inVector, classDict, idCols, year):
    # Reading In Vector File
    if '.parquet' in inVector:
        vector = gp.read_parquet(inVector)
    else:
        vector = gp.read_file(inVector)
    raster = rasterio.open(inRaster)
    vector = vector.to_crs(raster.crs)
    # Retrieving the Bounding Box for the Raster Image
    xmin, ymin, xmax, ymax = raster.bounds
    # Selecting the Vector Features that Intersect with the Raster Bounding Box
    vector = vector.cx[xmin:xmax, ymin:ymax]
    vector = vector.reset_index()
    # Calculate the sum of pixels of each class in the vector geometries
    stats = zonal_stats(vector.geometry, raster.read(1), affine=raster.transform, nodata=raster.nodata, categorical=True)
    # Creating a dataframe with the class sum of pixels and the id fields of the vector geometries
    df1 = pd.DataFrame(data=stats)
    df1 = df1.fillna(0)
    df1['totalpixels'] = df1.sum(axis=1)  
    df1['year'] = year 
    if 'year' in vector.columns.tolist():
        vector = vector.drop(columns=['year'])
    # Merging the class sum of pixels dataframe with the vector geodataframe
    df = vector.merge(df1, left_index=True, right_index=True)
    # Renaming Columns
    cdict = pd.read_csv(classDict)
    cdict = cdict.set_index("Value")['Class_name'].to_dict()
    df = df.rename(columns=cdict)
    keptCols = [x for x in df.columns.tolist() if x in idCols + list(cdict.values()) + ['totalpixels', 'year']]
    df = df[keptCols]
    return(df)

def aggregateData(rasterList, inVector, classDict, idCols, years):
    dfList = []
    # Creating aggregated raster to vector geodataframes for all rasters in rasterList
    for tiff in rasterList:
        inRaster = tiff
        year = [x for x in years if x in tiff][0]
        dfList.append(get_proportions(inRaster, inVector, classDict, idCols, year))
    # Concatenating into a single geodataframe
    allDf = pd.concat(dfList, ignore_index=True)
    classDictDf = pd.read_csv(classDict)
    # Renaming the numerical values of the categories to the string version of the category name
    classCols = classDictDf['Class_name'].unique().tolist()
    # Summing the pixel counts by administrative division as a single administrative division might cover more than one raster image
    for col in classCols:
        allDf[col] = allDf[col].fillna(0)
        allDf[col] = allDf.groupby(idCols + ['year'])[col].transform(lambda x: x.sum())
    # Removing Duplicates from the dataframe
    allDf = allDf.groupby(idCols + ['year']).first().reset_index()
    # Reattaching the geometry to the aggregated raster data
    if '.parquet' in inVector:
        vector = gp.read_parquet(inVector)
    else:
        vector = gp.read_file(inVector)
    allDf = vector.merge(allDf, on=idCols)
    return(allDf)

To evaluate the extracted features effectively, ML models are used to estimate the contribution of each factor. One of the models used is a support vector machine (SVM), which helps reveal patterns and associations within the data that inform the risk assessments.

The index represents a quantitative assessment of risk levels, calculated as a weighted average of these factors, to aid in understanding potential spillover events in various regions.

import pandas as pd
import numpy as np
import geopandas as gp

def finalIndicatorCalculation(inputLayer, weightDictionary, outLayer):
    # Creating a dictionary with the weights for each factor in the indicator
    weightsDict = pd.read_csv(weightDictionary).set_index('metric')['weight'].to_dict()
    # Reading in the data from the layer
    layer = gp.read_file(inputLayer)
    # Initializing the Sum of the Weights
    layer['sumweight'] = 0
    # Calculating the sum of the weighted factors
    for col in weightsDict.keys():
        layer[col] = layer[col].fillna(0)
        layer['sumweight'] = layer['sumweight'] + (layer[col] * weightsDict[col])
    # Calculating Raw Zoonotic Spillover Risk Index
    layer['raw_idx'] = np.log(layer['e_pop']) * layer['sumweight']
    # Normalizing the Index between 0 and 100
    layer['zs_idx'] = ((layer['raw_idx'] - layer['raw_idx'].min()) / (layer['raw_idx'].max() - layer['raw_idx'].min()) * 100).round(2)
    return(layer)

The following figure on the left shows the image classification from the test area scene in northern Peru, aggregated to the district administrative level, with the calculated change in forest area between 2018 and 2023. Deforestation is one of the key factors that determine the risk of zoonotic spillover. The figure on the right highlights the zoonotic spillover risk severity levels within the regions covered, ranging from the highest (red) to the lowest (dark green) risk. The area was chosen as one of the training areas for the image classification due to the diversity of land cover captured in the scene, including urban, forest, sand, water, grassland, and agriculture, among others. Additionally, it is one of many areas of interest for potential zoonotic spillover events due to deforestation and the interaction between humans and animals.

Zoonotic spillover risk severity levels in northern Peru

By adopting this multi-modal approach, encompassing historical data on disease outbreak, Earth observation data, social determinants, and ML techniques, we can better understand and predict zoonotic spillover risk, ultimately directing disease surveillance and prevention strategies to areas of greatest outbreak risk. The following screenshot shows a dashboard of the output from a zoonotic spillover risk analysis. This risk analysis highlights where resources and surveillance for new potential zoonotic outbreaks can occur so that the next disease can be contained before it becomes an endemic or a new pandemic.

Zoonotic spillover risk analysis dashboard

A novel approach to pandemic prevention

Along the Nipah River in Malaysia, between the fall of 1998 and the spring of 1999, 265 people were infected with a then-unknown virus that caused acute encephalitis and severe respiratory distress. 105 of them died, a fatality rate of 39.6%; by contrast, COVID-19’s untreated fatality rate is 6.3%. Since then, the Nipah virus, as it is now known, has emerged from its forest habitat and caused over 20 deadly outbreaks, mostly in India and Bangladesh.

Viruses such as Nipah surface each year, posing challenges to our daily lives, particularly in countries where establishing strong, lasting, and robust systems for disease surveillance and detection is more difficult. These detection systems are crucial for reducing the risks associated with such viruses.

Solutions that use ML and geospatial data, such as the Zoonotic Spillover Risk Index, can assist local public health authorities in prioritizing resource allocation to areas of highest risk. By doing so, they can establish targeted and localized surveillance measures to detect and halt regional outbreaks before they extend beyond borders. This approach can significantly limit the impact of a disease outbreak and save lives.

Conclusion

This post demonstrated how HSR.health successfully developed the Zoonotic Spillover Risk Index by integrating geospatial data, health, social determinants, and ML. By using SageMaker, the team created a scalable workflow that can pinpoint the most substantial threats of a potential future pandemic. Effective management of these risks can lead to a reduction in the global disease burden. The substantial economic and social advantages of reducing pandemic risk cannot be overstated, with benefits extending regionally and globally.

HSR.health used SageMaker geospatial capabilities for an initial implementation of the Zoonotic Spillover Risk Index and is now seeking partnerships, as well as support from host countries and funding sources, to develop the index further and extend its application to additional regions around the world. For more information about HSR.health and the Zoonotic Spillover Risk Index, visit www.hsr.health.

Discover the potential of integrating Earth observation data into your healthcare initiatives by exploring SageMaker geospatial features. For more information, refer to Amazon SageMaker geospatial capabilities, or engage with additional examples to get hands-on experience.


About the Authors

Ajay K Gupta is Co-Founder and CEO of HSR.health, a firm that disrupts and innovates health risk analytics through geospatial tech and AI techniques to predict the spread and severity of disease, and provides these insights to industry, governments, and the health sector so they can anticipate, mitigate, and take advantage of future risks. Outside of work, you can find Ajay behind the mic bursting eardrums while belting out his favorite pop music tunes from U2, Sting, George Michael, or Imagine Dragons.

Jean Felipe Teotonio is a driven physician and passionate expert in healthcare quality and infectious disease epidemiology who leads the HSR.health public health team. He works towards the shared goal of improving public health by reducing the global burden of disease, leveraging GeoAI approaches to develop solutions for the greatest health challenges of our time. Outside of work, his hobbies include reading sci-fi books, hiking, the English Premier League, and playing bass guitar.

Paul A Churchyard, CTO and Chief Geospatial Engineer for HSR.health, uses his broad technical skills and expertise to build the core infrastructure for the firm as well as its patented and proprietary GeoMD Platform. Additionally, he and the data science team incorporate geospatial analytics and AI/ML techniques into all health risk indices HSR.health produces. Outside of work, Paul is a self-taught DJ and loves snow.

Janosch Woschitz is a Senior Solutions Architect at AWS, specializing in geospatial AI/ML. With over 15 years of experience, he supports customers globally in leveraging AI and ML for innovative solutions that capitalize on geospatial data. His expertise spans machine learning, data engineering, and scalable distributed systems, augmented by a strong background in software engineering and industry expertise in complex domains such as autonomous driving.

Emmett Nelson is an Account Executive at AWS supporting Nonprofit Research customers across the Healthcare & Life Sciences, Earth / Environmental Sciences, and Education verticals. His primary focus is enabling use cases across analytics, AI/ML, high performance computing (HPC), genomics, and medical imaging. Emmett joined AWS in 2020 and is based in Austin, TX.
