Enable intelligent decision-making with Amazon SageMaker Canvas and Amazon QuickSight

Every company, regardless of its size, wants to deliver the best products and services to its customers. To achieve this, companies want to understand industry trends and customer behavior, and optimize internal processes and data analyses on a routine basis. This is a crucial component of a company’s success.

A very prominent part of the analyst role includes business metrics visualization (like sales revenue) and prediction of future events (like increase in demand) to make data-driven business decisions. To approach this first challenge, you can use Amazon QuickSight, a cloud-scale business intelligence (BI) service that provides easy-to-understand insights and gives decision-makers the opportunity to explore and interpret information in an interactive visual environment. For the second task, you can use Amazon SageMaker Canvas, a cloud service that expands access to machine learning (ML) by providing business analysts with a visual point-and-click interface that allows you to generate accurate ML predictions on your own.

When looking at these metrics, business analysts often identify patterns in customer behavior, in order to determine whether the company risks losing the customer. This problem is called customer churn, and ML models have a proven track record of predicting such customers with high accuracy (for an example, see Elula’s AI Solutions Help Banks Improve Customer Retention).

Building ML models can be a tricky process because it requires an expert team to manage the data preparation and ML model training. However, with Canvas, you can do that without any special knowledge and with zero lines of code. For more information, check out Predict customer churn with no-code machine learning using Amazon SageMaker Canvas.

In this post, we show you how to visualize the predictions generated from Canvas in a QuickSight dashboard, enabling intelligent decision-making via ML.

Overview of solution

In the post Predict customer churn with no-code machine learning using Amazon SageMaker Canvas, we assumed the role of a business analyst in the marketing department of a mobile phone operator, and we successfully created an ML model to identify customers with potential risk of churn. Thanks to the predictions generated by our model, we now want to make an analysis of a potential financial outcome to make data-driven business decisions about potential promotions for these clients and regions.

The architecture that will help us achieve this is shown in the following diagram.

The workflow steps are as follows:

  1. Upload a new dataset with the current customer population into Canvas.
  2. Run a batch prediction and download the results.
  3. Upload the files into QuickSight to create or update visualizations.

You can perform these steps in Canvas without writing a single line of code. For the full list of supported data sources, refer to Importing data in Amazon SageMaker Canvas.

Prerequisites

For this walkthrough, make sure that the following prerequisites are met:

Use the customer churn model

After you complete the prerequisites, you should have a model trained on historical data in Canvas, ready to be used with new customer data to predict customer churn, which you can then use in QuickSight.

  1. Create a new file churn-no-labels.csv by randomly selecting 1,500 lines from the original dataset churn.csv and removing the Churn? column.

We use this new dataset to generate predictions.
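If you prefer to prepare this file with a script instead of a spreadsheet, a minimal pandas sketch looks like the following; the file names mirror those in this post, and the random seed is an arbitrary choice.

```python
import pandas as pd

# churn.csv is the original dataset from the previous Canvas post
df = pd.read_csv("churn.csv")

# Randomly sample 1,500 rows and drop the target column
sample = df.sample(n=1500, random_state=42).drop(columns=["Churn?"])
sample.to_csv("churn-no-labels.csv", index=False)
```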

We complete the next steps in Canvas. You can open Canvas via the AWS Management Console, or via the SSO application provided by your cloud administrator. If you’re not sure how to access Canvas, refer to Getting started with using Amazon SageMaker Canvas.

  1. On the Canvas console, choose Datasets in the navigation pane.
  2. Choose Import.

  1. Choose Upload and choose the churn-no-labels.csv file that you created.
  2. Choose Import data.

The data import process time depends on the size of the file. In our case, it should be around 10 seconds. When it’s complete, we can see the dataset is in Ready status.

  1. To preview the first 100 rows of the dataset, choose the options menu (three dots) and choose Preview.

  1. Choose Models in the navigation pane, then choose the churn model you created as part of the prerequisites.

  1. On the Predict tab, choose Select dataset.

  1. Select the churn-no-labels.csv dataset, then choose Generate predictions.

Inference time depends on model complexity and dataset size; in our case, it takes around 10 seconds. When the job is finished, it changes its status to Ready and we can download the results.

  1. Choose the options menu (three dots), Download, and Download all values.

Optionally, we can take a quick look at the results by choosing Preview. The first two columns are the predictions from the model.

We have successfully used our model to predict churn risk for our current customer population. Now we’re ready to visualize business metrics based on our predictions.
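Before moving to QuickSight, you can optionally inspect the downloaded predictions with pandas. The file name below is a placeholder for whatever Canvas names your download; the Churn? and probability columns are the ones used later in this post.

```python
import pandas as pd

# Placeholder name for the predictions file downloaded from Canvas
preds = pd.read_csv("churn-predictions.csv")

# The first two columns hold the predicted label and its probability
print(preds.iloc[:, :2].head())

# Share of customers predicted to churn
print(preds["Churn?"].value_counts(normalize=True))
```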

Import data to QuickSight

As we discussed previously, business analysts require predictions to be visualized together with business metrics in order to make data-driven business decisions. To do that, we use QuickSight, which provides easy-to-understand insights and gives decision-makers the opportunity to explore and interpret information in an interactive visual environment. With QuickSight, we can build visualizations like graphs and charts in seconds with a simple drag-and-drop interface. In this post, we build several visualizations to better understand business risks and how we could manage them, such as where we should launch new marketing campaigns.

To get started, complete the following steps:

  1. On the QuickSight console, choose Datasets in the navigation pane.
  2. Choose New dataset.

QuickSight supports many data sources. In this post, we use a local file, the one we previously generated in Canvas, as our source data.

  1. Choose Upload a file.

  1. Choose the recently downloaded file with predictions.

QuickSight uploads and analyzes the file.

  1. Check that everything is as expected in the preview, then choose Next.

  1. Choose Visualize.

The data is now successfully imported and we’re ready to analyze it.

Create a dashboard with business metrics of churn predictions

It’s time to analyze our data and make a clear and easy-to-use dashboard that recaps all the information necessary for data-driven business decisions. This type of dashboard is an important tool in a business analyst’s arsenal.

The following is an example dashboard that can help identify and act on the risk of customer churn.

On this dashboard, we visualize several important business metrics:

  • Customers likely to churn – The left donut chart represents the number and percentage of customers with over a 50% risk of churning. This chart helps us quickly understand the size of a potential problem.
  • Potential revenue loss – The top middle donut chart represents the amount of revenue we stand to lose from customers with over a 50% risk of churning. This chart helps us quickly understand the size of the potential revenue loss from churn. It also shows that the share of potential revenue at risk is larger than the share of customers at risk, which means we could lose several above-average customers.
  • Potential revenue loss by state – The top right horizontal bar chart represents, for each state, the potential revenue loss compared with revenue from customers not at risk of churning. This visual can help us understand which states matter most from a marketing campaign perspective.
  • Details about customers at risk of churning – The bottom left table contains details about all our customers. This table could be helpful if we want to quickly look at the details of several customers with and without churn risk.

Customers likely to churn

We start by building a chart with customers at risk of churning.

  1. Under Fields list, choose the Churn? attribute.

QuickSight automatically builds a visualization.

Although the bar plot is a common visualization to understand data distribution, we prefer to use a donut chart. We can change this visual by changing its properties.

  1. Choose the donut chart icon under Visual types.
  2. Double-click the current name and change it to Customers likely to churn.

  1. To customize other visual effects (remove legend, add values, change font size), choose the pencil icon and make your changes.

As shown in the following screenshot, we increased the area of the donut, as well as added some extra information in the labels.

Potential revenue loss

Another important metric to consider when calculating the business impact of customer churn is potential revenue loss. This metric matters because it tells us whether the customers at risk of churning actually generate meaningful revenue. In the telecom industry, for example, we could have many inactive clients who have a high risk of churn but generate zero revenue. This chart can help us understand whether we’re in such a situation. To add this metric to our dashboard, we create a custom calculated field by providing the mathematical formula for computing potential revenue loss, then visualize it as another donut chart.

  1. On the Add menu, choose Add calculated field.

  1. Name the field Total charges.
  2. Enter the formula {Day Charge}+{Eve Charge}+{Intl Charge}+{Night Charge}.
  3. Choose Save.

  1. On the Add menu, choose Add visual.

  1. Under Visual types, choose the donut chart icon.
  2. Under Fields list, drag Churn? to Group/Color.
  3. Drag Total charges to Value.
  4. On the Value menu, choose Show as and choose Currency.
  5. Choose the pencil icon to customize other visual effects (remove legend, add values, change font size).

At this moment, our dashboard has two visualizations.

We can already observe that in total we could lose 18% of our customers (270), which equals 24% of revenue ($6,280). Let’s explore further by analyzing potential revenue loss at the state level.
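If you want to sanity-check these dashboard figures outside QuickSight, the same calculated field and churn-risk revenue share can be reproduced with pandas. This is only a sketch: the file name is a placeholder, and the label values in the Churn? column depend on your dataset.

```python
import pandas as pd

preds = pd.read_csv("churn-predictions.csv")  # placeholder name for the downloaded predictions

# Same formula as the QuickSight calculated field
preds["Total charges"] = (
    preds["Day Charge"] + preds["Eve Charge"] + preds["Intl Charge"] + preds["Night Charge"]
)

at_risk = preds["Churn?"] == "True."  # adjust to the label values in your data
print("Share of customers likely to churn:", at_risk.mean())
print("Share of revenue at risk:", preds.loc[at_risk, "Total charges"].sum() / preds["Total charges"].sum())
```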

Potential revenue loss by state

To visualize potential revenue loss by state, let’s add a horizontal bar graph.

  1. On the Add menu, choose Add visual.

  1. Under Visual types, choose the horizontal bar chart icon.
  2. Under Fields list, drag Churn? to Group/Color.
  3. Drag Total charges to Value.
  4. On the Value menu, choose Show as and Currency.
  5. Drag State to Y axis.
  6. Choose the pencil icon to customize other visual effects (remove legend, add values, change font size).

  1. We can also sort our new visual by choosing Total charges at the bottom and choosing Descending.

This visual could help us understand which state is the most important from a marketing campaign perspective. For example, in Hawaii, we could potentially lose half our revenue ($253,000) while in Washington, this value is less than 10% ($52,000). We can also see that in Arizona, we risk losing almost every customer.

Details about customers at risk of churning

Let’s build a table with details about customers at risk of churning.

  1. On the Add menu, choose Add visual.

  1. Under Visual types, choose the table icon.
  2. Under Fields list, drag Phone, State, Int’l Plan, Vmail Plan, Churn?, and Account Length to Group by.
  3. Drag probability to Value.
  4. On the Value menu, choose Show as and Percent.

Customize your dashboard

QuickSight offers several options to customize your dashboard, such as the following.

  1. To add a name, on the Add menu, choose Add title.

  1. Enter a title (for this post, we rename our dashboard Churn analysis).

  1. To resize your visuals, choose the bottom right corner of the chart and drag to the desired size.
  2. To move a visual, choose the top center of the chart and drag it to a new location.
  3. To change the theme, choose Themes in the navigation pane.
  4. Choose your new theme (for example, Midnight), and choose Apply.

Publish your dashboard

A dashboard is a read-only snapshot of an analysis that you can share with other QuickSight users for reporting purposes. Your dashboard preserves the configuration of the analysis at the time you publish it, including such things as filtering, parameters, controls, and sort order. The data used for the analysis isn’t captured as part of the dashboard. When you view the dashboard, it reflects the current data in the datasets used by the analysis.

To publish your dashboard, complete the following steps:

  1. On the Share menu, choose Publish dashboard.

  1. Enter a name for your dashboard.
  2. Choose Publish dashboard.

Congratulations, you have successfully created a churn analysis dashboard.

Update your dashboard with a new prediction

As the model evolves and we generate new data from the business, we might need to update this dashboard with new information. Complete the following steps:

  1. Create a new file churn-no-labels-updated.csv by randomly selecting another 1,500 lines from the original dataset churn.csv and removing the Churn? column.

We use this new dataset to generate new predictions.

  1. Repeat the steps from the Use the customer churn model section of this post to get predictions for the new dataset, and download the new file.
  2. On the QuickSight console, choose Datasets in the navigation pane.
  3. Choose the dataset we created.

  1. Choose Edit dataset.

  1. On the drop-down menu, choose Update file.

  1. Choose Upload file.

  1. Choose the recently downloaded file with the predictions.
  2. Review the preview, then choose Confirm file update.

After the “File updated successfully” message appears, we can see that the file name has also changed.

  1. Choose Save & publish.

  1. When the “Saved and published successfully” message appears, you can go back to the main menu by choosing the QuickSight logo in the upper left corner.

  1. Choose Dashboards in the navigation pane and choose the dashboard we created before.

You should see your dashboard with the updated values.

We have just updated our QuickSight dashboard with the most recent predictions from Canvas.

Clean up

To avoid future charges, log out from Canvas.

Conclusion

In this post, we used an ML model from Canvas to predict customers at risk of churning and built a dashboard with insightful visualizations to help us make data-driven business decisions. We did so without writing a single line of code thanks to user-friendly interfaces and clear visualizations. This enables business analysts to be agile in building ML models, and perform analyses and extract insights in complete autonomy from data science teams.

To learn more about using Canvas, see Build, Share, Deploy: how business analysts and data scientists achieve faster time-to-market using no-code ML and Amazon SageMaker Canvas. For more information about creating ML models with a no-code solution, see Announcing Amazon SageMaker Canvas – a Visual, No Code Machine Learning Capability for Business Analysts. To learn more about the latest QuickSight features and best practices, see AWS Big Data Blog.


About the Author

Aleksandr Patrushev is AI/ML Specialist Solutions Architect at AWS, based in Luxembourg. He is passionate about the cloud and machine learning, and the way they could change the world. Outside work, he enjoys hiking, sports, and spending time with his family.

Davide Gallitelli is a Specialist Solutions Architect for AI/ML in the EMEA region. He is based in Brussels and works closely with customers throughout Benelux. He has been a developer since he was very young, starting to code at the age of 7. He started learning AI/ML at university, and has fallen in love with it since then.

Read More

HARMAN to Deliver Immersive In-Vehicle Experience With NVIDIA DRIVE IX

Breakthroughs in centralized, high performance computing aren’t just opening up new functionality for autonomous driving, but for the in-vehicle experience as well.

With the introduction of NVIDIA DRIVE Thor, automakers can build unified AI compute platforms that combine advanced driver-assistance systems and in-vehicle infotainment. The centralized NVIDIA DRIVE architecture supports novel features in the vehicle, including the ability to support content across multiple displays.

HARMAN, a global leader in rich, connected in-vehicle solutions, will be working with the NVIDIA DRIVE platform to develop multi-domain infotainment solutions. HARMAN is using the DRIVE IX intelligent experience software stack to bring immersive cinematic experiences to every seat in the vehicle, including with individual sound zones for personalized audio.

U.S. drivers collectively whiled away 3.6 billion hours in their vehicles last year, according to a study from INRIX. With AI-assisted automated-driving systems, these hours can become valuable downtime.

Electric vehicle charging could also add to time spent in vehicles, as owners wait for their car to charge during longer trips. Current EVs take at least 30 minutes to charge — making it a prime opportunity to catch up on the latest video game release.

With the high performance of NVIDIA DRIVE and the open DRIVE IX platform, HARMAN can build and deploy multi-domain features that will turn personal vehicles into an entirely new living space.

Sit Back and Relax

HARMAN’s innovative technologies combine and orchestrate various in-vehicle domains, such as cockpit, sound, telematics and cloud services to create personalized, road-ready solutions that meet the demands of automakers and their customers.

With this curated approach, HARMAN delivers consumer experiences at automotive grade, including premium car audio, connectivity and visualization solutions that create meaningful experiences for consumers.

DRIVE IX enables HARMAN to make every screen — whether in the cockpit or the rear passenger seats — display high-resolution video content, as well as 3D audio, creating a virtual concert hall.

HARMAN’s long-term expertise in both the consumer and automotive categories uniquely positions it to bolster the use of vehicles as a living space.

With the open, flexible and high-performance DRIVE platform, HARMAN is poised to completely transform the vehicle experience, turning rush-hour dread into a thing of the past.

The post HARMAN to Deliver Immersive In-Vehicle Experience With NVIDIA DRIVE IX appeared first on NVIDIA Blog.

Read More

Introducing Whisper


We’ve trained and are open-sourcing a neural net called Whisper that approaches human level robustness and accuracy on English speech recognition.



Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. We show that the use of such a large and diverse dataset leads to improved robustness to accents, background noise and technical language. Moreover, it enables transcription in multiple languages, as well as translation from those languages into English. We are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing.


The Whisper architecture is a simple end-to-end approach, implemented as an encoder-decoder Transformer. Input audio is split into 30-second chunks, converted into a log-Mel spectrogram, and then passed into an encoder. A decoder is trained to predict the corresponding text caption, intermixed with special tokens that direct the single model to perform tasks such as language identification, phrase-level timestamps, multilingual speech transcription, and to-English speech translation.
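For readers who want to try the released models, here is a minimal sketch using the open-source whisper Python package; it assumes you have installed openai-whisper and have a local file named audio.mp3 (a placeholder).

```python
import whisper

model = whisper.load_model("base")

# Transcribe speech in its original language
result = model.transcribe("audio.mp3")
print(result["text"])

# Translate non-English speech into English
translated = model.transcribe("audio.mp3", task="translate")
print(translated["text"])
```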


Other existing approaches frequently use smaller, more closely paired audio-text training datasets, or use broad but unsupervised audio pretraining. Because Whisper was trained on a large and diverse dataset and was not fine-tuned to any specific one, it does not beat models that specialize in LibriSpeech performance, a famously competitive benchmark in speech recognition. However, when we measure Whisper’s zero-shot performance across many diverse datasets we find it is much more robust and makes 50% fewer errors than those models.

About a third of Whisper’s audio dataset is non-English, and it is alternately given the task of transcribing in the original language or translating to English. We find this approach is particularly effective at learning speech to text translation and outperforms the supervised SOTA on CoVoST2 to English translation zero-shot.


We hope Whisper’s high accuracy and ease of use will allow developers to add voice interfaces to a much wider set of applications. Check out the paper, model card, and code to learn more details and to try out Whisper.




OpenAI

Empowering Cambridge youth through data activism


For over 40 years, the Mayor’s Summer Youth Employment Program (MSYEP, or the Mayor’s Program) in Cambridge, Massachusetts, has been providing teenagers with their first work experience, but 2022 brought a new offering. Collaborating with MIT’s Personal Robots research group (PRG) and Responsible AI for Social Empowerment and Education (RAISE) this summer, MSYEP created a STEAM-focused learning site at the Institute. Eleven students joined the program to learn coding and programming skills through the lens of “Data Activism.”

MSYEP’s partnership with MIT provides an opportunity for Cambridge high schoolers to gain exposure to more pathways for their future careers and education. The Mayor’s Program aims to respect students’ time and show the value of their work, so participants are compensated with an hourly wage as they learn workforce skills at MSYEP worksites. In conjunction with two ongoing research studies at MIT, PRG and RAISE developed the six-week Data Activism curriculum to equip students with critical-thinking skills so they feel prepared to utilize data science to challenge social injustice and empower their community.

Rohan Kundargi, K-12 Community Outreach Administrator for MIT Office of Government and Community Relations (OGCR), says, “I see this as a model for a new type of partnership between MIT and Cambridge MSYEP. Specifically, an MIT research project that involves students from Cambridge getting paid to learn, research, and develop their own skills!”

Cross-Cambridge collaboration

Cambridge’s Office of Workforce Development initially contacted MIT OGCR about hosting a potential MSYEP worksite that taught Cambridge teens how to code. When Kundargi reached out to MIT pK-12 collaborators, MIT PRG’s graduate research assistant Raechel Walker proposed the Data Activism curriculum. Walker defines “data activism” as utilizing data, computing, and art to analyze how power operates in the world, challenge power, and empathize with people who are oppressed.

Walker says, “I wanted students to feel empowered to incorporate their own expertise, talents, and interests into every activity. In order for students to fully embrace their academic abilities, they must remain comfortable with bringing their full selves into data activism.”

As Kundargi and Walker recruited students for the Data Activism learning site, they wanted to make sure the cohort of students — the majority of whom are individuals of color — felt represented at MIT and felt they had the agency for their voice to be heard. “The pioneers in this field are people who look like them,” Walker says, speaking of well-known data activists Timnit Gebru, Rediet Abebe, and Joy Buolamwini.

When the program began this summer, some of the students were not aware of the ways data science and artificial intelligence exacerbate systemic oppression in society, or some of the tools currently being used to mitigate those societal harms. As a result, Walker says, the students wanted to learn more about discriminatory design in every aspect of life. They were also interested in creating responsible machine learning algorithms and AI fairness metrics.

A different side of STEAM

The development and execution of the Data Activism curriculum contributed to Walker’s and postdoc Xiaoxue Du’s respective research at PRG. Walker is studying AI education, specifically creating and teaching data activism curricula for minoritized communities. Du’s research explores processes, assessments, and curriculum design that prepares educators to use, adapt, and integrate AI literacy curricula. Additionally, her research targets how to leverage more opportunities for students with diverse learning needs.

The Data Activism curriculum utilizes a “libertatory computing” framework, a term Walker coined in her position paper with Professor Cynthia Breazeal, director of MIT RAISE, dean for digital learning, and head of PRG, and Eman Sherif, a then-undergraduate researcher from University of California at San Diego, titled “Liberty Computing for African American Students.” This framework ensures that students, especially minoritized students, acquire a sound racial identity, critical consciousness, collective obligation, liberation centered academic/achievement identity, as well as the activism skills to use computing to transform a multi-layered system of barriers in which racism persists. Walker says, “We encouraged students to demonstrate competency in every pillar because all of the pillars are interconnected and build upon each other.”

Walker developed a series of interactive coding and project-based activities that focused on understanding systemic racism, utilizing data science to analyze systemic oppression, data drawing, responsible machine learning, how racism can be embedded into AI, and different AI fairness metrics.

This was the students’ first time learning how to create data visualizations using the programming language Python and the data analysis tool Pandas. In one project meant to examine how different systems of oppression can affect different aspects of students’ own identities, students created datasets with data from their respective intersectional identities. Another activity highlighted African American achievements, where students analyzed two datasets about African American scientists, activists, artists, scholars, and athletes. Using the data visualizations, students then created zines about the African Americans who inspired them.
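As a rough illustration of the kind of visualization described here, the sketch below uses pandas and Matplotlib on a hypothetical scientists.csv file with name and field columns; it is not the students’ actual code.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical dataset of African American scientists: columns "name" and "field"
df = pd.read_csv("scientists.csv")

# Count how many people appear in each field and plot the distribution
df["field"].value_counts().plot(kind="barh", title="African American scientists by field")
plt.tight_layout()
plt.show()
```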

RAISE hired Olivia Dias, Sophia Brady, Lina Henriquez, and Zeynep Yalcin through the MIT Undergraduate Research Opportunity Program (UROP) and PRG hired freelancer Matt Taylor to work with Walker on developing the curriculum and designing interdisciplinary experience projects. Walker and the four undergraduate researchers constructed an intersectional data analysis activity about different examples of systemic oppression. PRG also hired three high school students to test activities and offer insights about making the curriculum engaging for program participants. Throughout the program, the Data Activism team taught students in small groups, continually asked students how to improve each activity, and structured each lesson based on the students’ interests. Walker says Dias, Brady, Henriquez, and Yalcin were invaluable to cultivating a supportive classroom environment and helping students complete their projects.

Student Nina says, “It’s opened my eyes to a different side of STEM. I didn’t know what ‘data’ meant before this program, or how intersectionality can affect AI and data.” Before MSYEP, Nina took Intro to Computer Science and AP Computer Science, but she has been coding since Girls Who Code first sparked her interest in middle school. “The community was really nice. I could talk with other girls. I saw there needs to be more women in STEM, especially in coding.” Now she’s interested in applying to colleges with strong computer science programs so she can pursue a coding-related career.

From MSYEP to the mayor’s office

Mayor Sumbul Siddiqui visited the Data Activism learning site on Aug. 9, accompanied by Breazeal. A graduate of MSYEP herself, Siddiqui says, “Through hands-on learning through computer programming, Cambridge high school students have the unique opportunity to see themselves as data scientists. Students were able to learn ways to combat discrimination that occurs through artificial intelligence.” In an Instagram post, Siddiqui also said, “I had a blast visiting the students and learning about their projects.”

Students worked on an activity that asked them to envision how data science might be used to support marginalized communities. They transformed their answers into block-printed T-shirt designs, carving pictures of their hopes into rubber block stamps. Some students focused on the importance of data privacy, like Jacob T., who drew a birdcage to represent data stored and locked away by third party apps. He says, “I want to open that cage and restore my data to myself and see what can be done with it.”

Many students wanted to see more representation in both the media they consume and across various professional fields. Nina talked about the importance of representation in media and how that could contribute to greater representation in the tech industry, while Kiki talked about encouraging more women to pursue STEM fields. Jesmin said, “I wanted to show that data science is accessible to everyone, no matter their origin or language you speak. I wrote ‘hello’ in Bangla, Arabic, and English, because I speak all three languages and they all resonate with me.”

“Overall, I hope the students continue to use their data activism skills to re-envision a society that supports marginalized groups,” says Walker. “Moreover, I hope they are empowered to become data scientists and understand how their race can be a positive part of their identity.”

Read More


Toward Supporting Quality Alt Text in Computing Publications

While researchers have examined alternative (alt) text for social media and news contexts, few have studied the status and challenges for authoring alt text of figures in computing-related publications. These figures are distinct, often conveying dense visual information, and may necessitate unique accessibility solutions. Accordingly, we explored how to support authors in creating alt text in computing publications—specifically in the field of human-computer interaction (HCI). We conducted two studies: (1) an analysis of 300 recently published figures at a general HCI conference (ACM CHI)…

Apple Machine Learning Research

Amazon SageMaker Autopilot is up to eight times faster with new ensemble training mode powered by AutoGluon

Amazon SageMaker Autopilot has added a new training mode that supports model ensembling powered by AutoGluon. Ensemble training mode in Autopilot trains several base models and combines their predictions using model stacking. For datasets less than 100 MB, ensemble training mode builds machine learning (ML) models with high accuracy quickly—up to eight times faster than hyperparameter optimization (HPO) training mode with 250 trials, and up to 5.8 times faster than HPO training mode with 100 trials. It supports a wide range of algorithms, including LightGBM, CatBoost, XGBoost, Random Forest, Extra Trees, linear models, and neural networks based on PyTorch and FastAI.

How AutoGluon builds ensemble models

AutoGluon-Tabular (AGT) is a popular open-source AutoML framework that trains highly accurate ML models on tabular datasets. Unlike existing AutoML frameworks, which primarily focus on model and hyperparameter selection, AGT succeeds by ensembling multiple models and stacking them in multiple layers. The default behavior of AGT can be summarized as follows: Given a dataset, AGT trains various base models ranging from off-the-shelf boosted trees to customized neural networks on the dataset. The predictions from the base models are used as features to build a stacking model, which learns the appropriate weight of each base model. With these learned weights, the stacking model then combines the base model’s predictions and returns the combined predictions as the final set of predictions.
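The same stacking behavior can be reproduced directly with the open-source AutoGluon library. The following is a minimal sketch, assuming local train.csv and test.csv files and a column named target to predict (placeholder names).

```python
# pip install autogluon.tabular
from autogluon.tabular import TabularDataset, TabularPredictor

# Placeholder file names; "target" is the column you want to predict
train_data = TabularDataset("train.csv")
test_data = TabularDataset("test.csv")

# Train base models and stack them into weighted ensembles
predictor = TabularPredictor(label="target").fit(train_data, presets="best_quality")

# Compare the individual base models and the stacked ensembles
print(predictor.leaderboard(test_data))
```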

How Autopilot’s ensemble training mode works

Different datasets have characteristics that are suitable for different algorithms. Given a dataset with unknown characteristics, it’s difficult to know beforehand which algorithms will work best on a dataset. With this in mind, data scientists using AGT often create multiple custom configurations with a subset of algorithms and parameters. They run these configurations on a given dataset to find the best configuration in terms of performance and inference latency.

Autopilot is a low-code ML product that automatically builds the best ML models for your data. In the new ensemble training mode, Autopilot selects an optimal set of AGT configurations and runs multiple trials to return the best model. These trials are run in parallel to evaluate if AGT’s performance can be further improved, in terms of objective metrics or inference latency.

Results observed using OpenML benchmarks

To evaluate the performance improvements, we used OpenML benchmark datasets with sizes ranging from 0.5 MB to 100 MB and ran 10 AGT trials with different combinations of algorithms and hyperparameter configurations. The tests compared ensemble training mode to HPO mode with 250 trials and HPO mode with 100 trials. The following table compares the overall Autopilot experiment runtime (in minutes) across the training modes for various dataset sizes.

Dataset Size | HPO Mode (250 trials) | HPO Mode (100 trials) | Ensemble Mode (10 trials) | Runtime Improvement over HPO 250 | Runtime Improvement over HPO 100
< 1 MB | 121.5 mins | 88.0 mins | 15.0 mins | 8.1x | 5.9x
1–10 MB | 136.1 mins | 76.5 mins | 25.8 mins | 5.3x | 3.0x
10–100 MB | 152.7 mins | 103.1 mins | 60.9 mins | 2.5x | 1.7x

To compare performance, we use accuracy for multiclass classification problems, the F1 score for binary classification problems, and R2 for regression problems. The gains in objective metrics are shown in the following tables. We observed that ensemble training mode performed better than HPO training mode (both 100 and 250 trials).
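For reference, the same metrics are available in scikit-learn; the following is a small illustrative sketch with toy values, not the benchmark data.

```python
from sklearn.metrics import accuracy_score, f1_score, r2_score

# Toy values for illustration only
print(accuracy_score([0, 1, 2, 1], [0, 1, 1, 1]))   # multiclass accuracy
print(f1_score([0, 1, 1, 0], [0, 1, 0, 0]))         # binary F1 score
print(r2_score([2.5, 0.0, 2.1], [2.4, 0.1, 2.0]))   # regression R2
```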

Note that the ensemble mode shows consistent improvement over HPO mode with 250 trials irrespective of dataset size and problem types.

The following table compares accuracy for multi-class classification problems (higher is better).

Dataset Size | HPO Mode (250 trials) | HPO Mode (100 trials) | Ensemble Mode (10 trials) | Percentage Improvement over HPO 250
< 1 MB | 0.759 | 0.761 | 0.771 | 1.46%
1–5 MB | 0.941 | 0.935 | 0.957 | 1.64%
5–10 MB | 0.639 | 0.633 | 0.671 | 4.92%
10–50 MB | 0.998 | 0.999 | 0.999 | 0.11%
51–100 MB | 0.853 | 0.852 | 0.875 | 2.56%

The following table compares F1 scores for binary classification problems (higher is better).

Dataset Size | HPO Mode (250 trials) | HPO Mode (100 trials) | Ensemble Mode (10 trials) | Percentage Improvement over HPO 250
< 1 MB | 0.801 | 0.807 | 0.826 | 3.14%
1–5 MB | 0.59 | 0.587 | 0.629 | 6.60%
5–10 MB | 0.886 | 0.889 | 0.898 | 1.32%
10–50 MB | 0.731 | 0.736 | 0.754 | 3.12%
51–100 MB | 0.503 | 0.493 | 0.541 | 7.58%

The following table compares R2 for regression problems (higher is better).

Dataset Size | HPO Mode (250 trials) | HPO Mode (100 trials) | Ensemble Mode (10 trials) | Percentage Improvement over HPO 250
< 1 MB | 0.717 | 0.718 | 0.716 | 0%
1–5 MB | 0.803 | 0.803 | 0.817 | 2%
5–10 MB | 0.590 | 0.586 | 0.614 | 4%
10–50 MB | 0.686 | 0.688 | 0.684 | 0%
51–100 MB | 0.623 | 0.626 | 0.631 | 1%

In the next sections, we show how to use the new ensemble training mode in Autopilot to analyze datasets and easily build high-quality ML models.

Dataset overview

We use the Titanic dataset to predict whether a given passenger survived. This is a binary classification problem. We focus on creating an Autopilot experiment using the new ensemble training mode and compare its F1 score and overall runtime with an Autopilot experiment using HPO training mode (100 trials).

Column Name | Description
Passengerid | Identification number
Survived | Survival
Pclass | Ticket class
Name | Passenger name
Sex | Sex
Age | Age in years
Sibsp | Number of siblings or spouses aboard the Titanic
Parch | Number of parents or children aboard the Titanic
Ticket | Ticket number
Fare | Passenger fare
Cabin | Cabin number
Embarked | Port of embarkation

The dataset has 890 rows and 12 columns. It contains demographic information about the passengers (age, sex, ticket class, and so on) and the Survived (yes/no) target column.
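A quick way to confirm the shape and class balance before uploading the file to Amazon S3 is with pandas; the local path below is a placeholder.

```python
import pandas as pd

df = pd.read_csv("titanic.csv")  # placeholder local path for the downloaded dataset

print(df.shape)                       # (rows, columns)
print(df["Survived"].value_counts())  # class balance of the target
```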

Prerequisites

Complete the following prerequisite steps:

  1. Ensure that you have an AWS account, secure access to log in to the account via the AWS Management Console, and AWS Identity and Access Management (IAM) permissions to use Amazon SageMaker and Amazon Simple Storage Service (Amazon S3) resources.
  2. Download the Titanic dataset and upload it to an S3 bucket in your account.
  3. Onboard to a SageMaker domain and access Amazon SageMaker Studio to use Autopilot. For instructions, refer to Onboard to Amazon SageMaker Domain. If you’re already using Studio, upgrade to the latest version to use the new ensemble training mode.

Create an Autopilot experiment with ensemble training mode

When the dataset is ready, you can initialize an Autopilot experiment in Studio. For full instructions, refer to Create an Amazon SageMaker Autopilot experiment. Create an Autopilot experiment by providing an experiment name and the data input, and specifying the target to predict in the Experiment and data details section. Optionally, you can specify the data split ratio and have the Amazon S3 output location created automatically.

For our use case, we provide an experiment name, input Amazon S3 location, and choose Survived as the target. We keep the auto split enabled and override the default output Amazon S3 location.

Next, we specify the training method in the Training method section. You can either let Autopilot select the training mode automatically using Auto based on the dataset size, or select the training mode manually for either ensembling or HPO. The details on each option are as follows:

  • Auto – Autopilot automatically chooses either ensembling or HPO mode based on your dataset size. If your dataset is larger than 100 MB, Autopilot chooses HPO, otherwise it chooses ensembling.
  • Ensembling – Autopilot uses AutoGluon’s ensembling technique to train several base models and combines their predictions using model stacking into an optimal predictive model.
  • Hyperparameter optimization – Autopilot finds the best version of a model by tuning hyperparameters using the Bayesian Optimization technique and running training jobs on your dataset. HPO selects the algorithms most relevant to your dataset and picks the best range of hyperparameters to tune the models.

For our use case, we select Ensembling as our training mode.

After this, we proceed to the Deployment and advanced settings section. Here, we deselect the Auto deploy option. Under Advanced settings, you can specify the type of ML problem that you want to solve. If nothing is provided, Autopilot automatically determines the model based on the data you provide. Because ours is a binary classification problem, we choose Binary classification as our problem type and F1 as our objective metric.

Finally, we review our selections and choose Create experiment.
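If you prefer to script the same experiment instead of using Studio, the CreateAutoMLJob API exposes the training mode through AutoMLJobConfig.Mode. The following boto3 sketch uses placeholder bucket, job, and role names and is meant as an outline rather than a drop-in command; check the API reference for the full set of parameters.

```python
import boto3

sm = boto3.client("sagemaker")

# Bucket, job name, and role ARN below are placeholders
sm.create_auto_ml_job(
    AutoMLJobName="titanic-ens",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://your-bucket/titanic/train/",
        }},
        "TargetAttributeName": "Survived",
    }],
    OutputDataConfig={"S3OutputPath": "s3://your-bucket/titanic/output/"},
    ProblemType="BinaryClassification",
    AutoMLJobObjective={"MetricName": "F1"},
    AutoMLJobConfig={"Mode": "ENSEMBLING"},
    RoleArn="arn:aws:iam::111122223333:role/YourSageMakerExecutionRole",
)
```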

At this point, it’s safe to leave Studio and return later to check on the result, which you can find on the Experiments menu.

The following screenshot shows the final results of our titanic-ens ensemble training mode Autopilot job.

You can see the multiple trials that were attempted by Autopilot in ensemble training mode. Each trial returns the best model from its pool of individual model runs and stacking ensemble model runs.

To explain this a little further, let’s assume Trial 1 considered all eight supported algorithms and used stacking level 2. It will internally create the individual models for each algorithm as well as the weighted ensemble models with stack Level 0, Level 1, and Level 2. However, the output of Trial 1 will be the best model from the pool of models created.

Similarly, let’s assume Trial 2 picked only tree-based boosting algorithms. In this case, Trial 2 will internally create three individual models, one for each algorithm, as well as the weighted ensemble models, and return the best model from its run.

The final model returned by a trial may or may not be a weighted ensemble model, but the majority of trials will most likely return their best weighted ensemble model. Finally, based on the selected objective metric, the best model among all 10 trials is identified.

In the preceding example, our best model was the one with the highest F1 score (our objective metric). Several other useful metrics, including accuracy, balanced accuracy, precision, and recall, are also shown. In our environment, the end-to-end runtime for this Autopilot experiment was 10 minutes.

Create an Autopilot experiment with HPO training mode

Now let’s perform all of the aforementioned steps to create a second Autopilot experiment with the HPO training method (default 100 trials). Apart from the training method selection, which is now Hyperparameter optimization, everything else stays the same. In HPO mode, you can specify the number of trials by setting Max candidates under Advanced settings for Runtime, but we recommend leaving this at the default. If you don’t provide a value for Max candidates, Autopilot runs 100 HPO trials. In our environment, the end-to-end runtime for this Autopilot experiment was 2 hours.

Runtime and performance metric comparison

We see that for our dataset (under 1 MB), not only did ensemble training mode run 12 times faster than HPO training mode (120 minutes to 10 minutes), but it also produced improved F1 scores and other performance metrics.

Training Mode | F1 Score | Accuracy | Balanced Accuracy | AUC | Precision | Recall | Log Loss | Runtime
Ensemble mode – WeightedEnsemble | 0.844 | 0.878 | 0.865 | 0.89 | 0.912 | 0.785 | 0.394 | 10 mins
HPO mode – XGBoost | 0.784 | 0.843 | 0.824 | 0.867 | 0.831 | 0.743 | 0.428 | 120 mins

Inference

Now that we have a winning model, we can either deploy it to an endpoint for real-time inference or use batch transform to make predictions on the unlabeled dataset we downloaded earlier.
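One way to take the real-time option programmatically is through the SageMaker Python SDK’s AutoML class. This is a hedged sketch that attaches to the job created earlier (names are placeholders); verify the exact signatures in the SDK documentation.

```python
import sagemaker
from sagemaker.automl.automl import AutoML

session = sagemaker.Session()

# Attach to the completed Autopilot job (job name is a placeholder)
automl = AutoML.attach(auto_ml_job_name="titanic-ens", sagemaker_session=session)

# Deploy the best candidate behind a real-time endpoint
predictor = automl.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="titanic-survival-endpoint",  # placeholder
)
```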

Summary

You can run your Autopilot experiments faster without any impact on performance with the new ensemble training mode for datasets smaller than 100 MB. To get started, create a SageMaker Autopilot experiment on the Studio console and select Ensembling as your training mode, or let Autopilot infer the training mode automatically based on the dataset size. You can refer to the CreateAutoMLJob API reference guide for API updates, and upgrade to the latest version of Studio to use the new ensemble training mode. For more information on this feature, see Model support, metrics, and validation with Amazon SageMaker Autopilot, and to learn more about Autopilot, visit the product page.


About the authors

Janisha Anand is a Senior Product Manager in the SageMaker Low/No Code ML team, which includes SageMaker Autopilot. She enjoys coffee, staying active, and spending time with her family.

Saket Sathe is a Senior Applied Scientist in the SageMaker Autopilot team. He is passionate about building the next generation of machine learning algorithms and systems. Aside from work, he loves to read, cook, slurp ramen, and play badminton.

Abhishek Singh is a Software Engineer for the Autopilot team in AWS. He has 8+ years experience as a software developer, and is passionate about building scalable software solutions that solve customer problems. In his free time, Abhishek likes to stay active by going on hikes or getting involved in pick up soccer games.

Vadim Omeltchenko is a Sr. AI/ML Solutions Architect who is passionate about helping AWS customers innovate in the cloud. His prior IT experience was predominantly on the ground.

Read More

Now You’re Speaking My Language: NVIDIA Riva Sets New Bar for Fully Customizable Speech AI

Whether for virtual assistants, transcriptions or contact centers, voice AI services are turning words and conversations into bits and bytes of business magic.

At GTC this week, NVIDIA announced new additions to NVIDIA Riva, a GPU-accelerated software development kit for building and deploying speech AI applications.

Riva’s pretrained models are now offered in seven languages, including French and Hindi. Additional languages on the horizon: Arabic, Italian, Japanese, Korean and Portuguese. Riva also brings improvements in accuracy for English, German, Mandarin, Russian and Spanish. Additionally, it adds capabilities like word-level confidence scores and speaker diarization — the process of identifying speakers in audio streams.

Riva is built to be fully customizable at every stage of the speech AI pipeline to help solve unique problems efficiently. Developers can also deploy it where they want their data to be: on premises, for hybrid multiclouds, at the edge or in embedded devices. It’s used by enterprises to bolster services, efficiency and competitive advantage.

While AI for voice services has been in high demand, development tools have lagged. More people are working and learning from home, shopping online and seeking remote customer support, which strains call centers and pushes voice applications to their limits. Customer service wait times have recently tripled as staffing shortages have hit call centers hard, according to a 2022 Bloomberg report.

Advances in speech AI offer the way forward. NVIDIA Riva enables companies to explore larger deep learning models and develop more nuanced voice systems. Speech AI applications built on Riva provide an accelerated path to better services, promising improved customer experiences and engagement.

Rising Demand for Voice AI Applications

The worldwide market for contact center software reached about $27 billion in 2021, a figure expected to nearly triple to $79 billion by 2029, according to Fortune Business Insights.

This increase is due to the benefits that customized voice applications offer businesses of any size, in almost every industry — from global enterprises, to original equipment manufacturers delivering speech AI-based systems and cloud services, to systems integrators and independent software vendors.

Riva SDK Accelerates AI Workflows 

NVIDIA Riva includes pretrained language models that can be used as is or fine-tuned using transfer learning from the NVIDIA TAO Toolkit, which allows for custom datasets in a no-code environment. Riva automated speech recognition (ASR) and text-to-speech (TTS) models can be optimized, exported and deployed as speech services.

Voice AI is making its way into ever more types of applications, such as customer support virtual assistants and chatbots, video conferencing systems, drive-thru convenience food orders, retail by phone, and media and entertainment. Global organizations have adopted Riva to drive voice AI efforts, including T-Mobile, Deloitte, HPE, Interactions, 1-800-Flowers.com, Quantiphi and Kore.ai.

  • T-Mobile adopted Riva for its T-Mobile Expert Assist — a custom-built call center application that uses AI to transcribe real-time customer conversations and recommend solutions — for 17,000 customer service agents. T-Mobile plans to deploy Riva worldwide soon.
  • Hewlett Packard Enterprise offers HPE ProLiant servers that include NVIDIA GPUs and NVIDIA Riva software in a system capable of developing and running challenging speech AI and natural language processing workloads that can easily turn audio into insights. HPE ProLiant systems and NVIDIA Riva form a world-class, full-stack solution for running financial services and other industry applications.

“To deliver the capabilities of NVIDIA Riva, HPE offers a Kubernetes-based NLP reference architecture based on HPE Ezmeral software,” said Scott Ramsay, vice president of HPE GreenLake solutions at HPE. “Delivered through the HPE GreenLake cloud platform, this system enables developers to accelerate the development and deployment of next-generation speech AI applications.”

  • Deloitte supports clients looking to deploy ASR and TTS use cases, such as for order-taking systems in some of the world’s largest quick-order restaurants. It’s also developing chatbot services for healthcare providers that will enable accurate and efficient transcriptions for patient questions and chat summarizations.

“Advances in natural language processing make it possible to design cost-efficient experiences that enable purposeful, simple and natural customer conversations,” said Christine Ahn, principal at Deloitte US. “Our clients are looking for a streamlined path to conversational AI deployment, and NVIDIA Riva supports that path.”

  • Interactions has integrated Riva with its Curo software platform to create seamless, personalized engagements for customers in a broad range of industries that include telecommunications, as well as for companies such as 1-800-Flowers.com, which has deployed a speech AI order-taking system.
  • Kore.ai is integrating Riva with its SmartAssist speech AI contact-center-as-a-service, which powers its BankAssist, HealthAssist, AgentAssist, HR Assist and IT Assist products. Proof of concepts with NVIDIA Riva are in progress.
  • Quantiphi is a solution-delivery partner that is developing closed-captioning solutions using Riva for customers in media and entertainment, including Fox News. It’s also developing digital avatars with Riva for telecommunications and other industries.

Complex Speech AI Pipelines, Easier Solutions

Speech AI pipelines can be complex and require coordination across multiple services. Microservices are required to run at scale with ASR models, natural language understanding, TTS and domain-specific apps. NVIDIA GPUs are ideal for acceleration of these types of specialized tasks.

Riva offers software libraries for building speech AI applications and includes GPU-optimized services for ASR and TTS that use the latest deep learning models. Developers can meld these multiple speech AI skills within their applications.
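As an example of what calling these services can look like, the sketch below uses the nvidia-riva-client Python package against a Riva server assumed to be running at localhost:50051. The class and method names reflect our understanding of that client; treat them as assumptions and confirm against the Riva documentation.

```python
# pip install nvidia-riva-client
import riva.client

auth = riva.client.Auth(uri="localhost:50051")  # assumed Riva server address
asr = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    language_code="en-US",
    enable_automatic_punctuation=True,
    enable_word_time_offsets=True,  # word-level timing information
)

with open("sample.wav", "rb") as f:  # placeholder audio file
    audio = f.read()

response = asr.offline_recognize(audio, config)
print(response.results[0].alternatives[0].transcript)
```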

Developers can easily access Riva and pretrained models through NVIDIA NGC, a hub for GPU-optimized AI software, models and Jupyter Notebook examples.

Support for Riva is available through NVIDIA AI Enterprise, a cloud-native suite of AI and data analytics software that’s optimized to enable any organization to use AI. It’s certified to deploy anywhere — from the enterprise data center to the public cloud — and includes global enterprise support to keep AI projects on track.

Try NVIDIA Riva with guided labs on ready-to-run infrastructure in NVIDIA LaunchPad.

The post Now You’re Speaking My Language: NVIDIA Riva Sets New Bar for Fully Customizable Speech AI appeared first on NVIDIA Blog.

Read More