India Enterprises Serve Over a Billion Local Language Speakers Using LLMs Built With NVIDIA AI

Namaste, vanakkam, sat sri akaal — these are just three forms of greeting in India, a country with 22 constitutionally recognized languages and over 1,500 more recorded by the country’s census. Around 10% of its residents speak English, the internet’s most common language.

As India, the world’s most populous country, forges ahead with rapid digitalization efforts, its enterprises and local startups are developing multilingual AI models that enable more Indians to interact with technology in their primary language. It’s a case study in sovereign AI — the development of domestic AI infrastructure that is built on local datasets and reflects a region’s specific dialects, cultures and practices.

These projects are building language models for Indic languages and English that can power customer service AI agents for businesses, rapidly translate content to broaden access to information, and enable services to more easily reach a diverse population of over 1.4 billion individuals.

To support initiatives like these, NVIDIA has released a small language model for Hindi, India’s most prevalent language with over half a billion speakers. Now available as an NVIDIA NIM microservice, the model, dubbed Nemotron-4-Mini-Hindi-4B, can be easily deployed on any NVIDIA GPU-accelerated system for optimized performance.
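
Because NIM microservices expose an OpenAI-compatible endpoint, a deployed Nemotron Hindi container can be queried with standard client libraries. Below is a minimal sketch assuming a locally running container on its default port; the base URL and model ID are assumptions to verify against the model card:

```python
# Query a locally deployed NIM microservice through its OpenAI-compatible
# API. Endpoint, port and model ID are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed default NIM port
    api_key="not-needed-for-local-nim",    # local containers typically skip auth
)

response = client.chat.completions.create(
    model="nvidia/nemotron-4-mini-hindi-4b-instruct",  # assumed model ID
    messages=[{"role": "user", "content": "भारत की राजधानी क्या है?"}],
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].message.content)
```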

Tech Mahindra, an Indian IT services and consulting company, is the first to use the Nemotron Hindi NIM microservice to develop an AI model called Indus 2.0, which is focused on Hindi and dozens of its dialects. Indus 2.0 harnesses Tech Mahindra’s high-quality fine-tuning data to further boost model accuracy, unlocking opportunities for clients in banking, education, healthcare and other industries to deliver localized services.

Tech Mahindra will showcase Indus 2.0 at the NVIDIA AI Summit, taking place Oct. 23-25 in Mumbai. The company also uses NVIDIA NeMo to develop its sovereign large language model (LLM) platform, TeNo.

NVIDIA NIM Makes AI Adoption for Hindi as Easy as Ek, Do, Teen

The Nemotron Hindi model has 4 billion parameters and is derived from Nemotron-4 15B, a 15-billion-parameter multilingual language model developed by NVIDIA. The model was pruned, distilled and trained with a combination of real-world Hindi data, synthetic Hindi data and an equal amount of English data using NVIDIA NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI.

The dataset was created with NVIDIA NeMo Curator, which improves generative AI model accuracy by processing high-quality multimodal data at scale for training and customization. NeMo Curator uses NVIDIA RAPIDS libraries to accelerate data processing pipelines on multi-node GPU systems, lowering processing time and total cost of ownership. It also provides pre-built pipelines and building blocks for synthetic data generation, data filtering, classification and deduplication to process high-quality data.
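
As a rough illustration of the kind of curation pass described above, the sketch below filters a JSONL corpus with NeMo Curator’s Python building blocks. The class names follow the nemo_curator documentation, but treat the paths, thresholds and exact arguments as assumptions rather than the pipeline NVIDIA actually ran:

```python
# Sketch of a NeMo Curator cleaning pass over Hindi web text.
# File paths and quality thresholds are placeholders.
from nemo_curator import ScoreFilter, Sequential
from nemo_curator.datasets import DocumentDataset
from nemo_curator.filters import NonAlphaNumericFilter, WordCountFilter

# JSONL shards, one document per line with a "text" field.
dataset = DocumentDataset.read_json("hindi_corpus/*.jsonl")

pipeline = Sequential([
    # Drop very short documents, which are usually navigation or boilerplate.
    ScoreFilter(WordCountFilter(min_words=50), text_field="text"),
    # Drop documents dominated by symbols or leftover markup.
    ScoreFilter(
        NonAlphaNumericFilter(max_non_alpha_numeric_to_text_ratio=0.25),
        text_field="text",
    ),
])

cleaned = pipeline(dataset)
cleaned.to_json("hindi_corpus_clean/")
```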

After fine-tuning with NeMo, the final model leads on multiple accuracy benchmarks for AI models with up to 8 billion parameters. Packaged as a NIM microservice, it can be easily harnessed to support use cases across industries such as education, retail and healthcare.

It’s available as part of the NVIDIA AI Enterprise software platform, which gives businesses access to additional resources, including technical support and enterprise-grade security, to streamline AI development for production environments.

Bevy of Businesses Serves Multilingual Population

Innovators, major enterprises and global systems integrators across India are building customized language models using NVIDIA NeMo.

Companies in the NVIDIA Inception program for cutting-edge startups are using NeMo to develop AI models for several Indic languages.

Sarvam AI offers enterprise customers speech-to-text, text-to-speech, translation and data parsing models. The company developed Sarvam 1, India’s first homegrown, multilingual LLM, which was trained from scratch on domestic AI infrastructure powered by NVIDIA H100 Tensor Core GPUs.

Sarvam 1 — developed using NVIDIA AI Enterprise software including NeMo Curator and NeMo Framework — supports English and 10 major Indian languages, including Bengali, Marathi, Tamil and Telugu.

Sarvam AI also uses NVIDIA NIM microservices, NVIDIA Riva for conversational AI, NVIDIA TensorRT-LLM software and NVIDIA Triton Inference Server to optimize and deploy conversational AI agents with sub-second latency.

Another Inception startup, Gnani.ai, built a multilingual speech-to-speech LLM that powers AI customer service assistants that handle around 10 million real-time voice interactions daily for over 150 banking, insurance and financial services companies across India and the U.S. The model supports 14 languages and was trained on over 14 million hours of conversational speech data using NVIDIA Hopper GPUs and NeMo Framework.

Gnani.ai uses TensorRT-LLM, Triton Inference Server and Riva NIM microservices to optimize its AI for virtual customer service assistants and speech analytics.

Large enterprises building LLMs with NeMo include:

  • Flipkart, a major Indian ecommerce company majority-owned by Walmart, is integrating NeMo Guardrails, an open-source toolkit that enables developers to add programmable guardrails to LLMs, to enhance the safety of its conversational AI systems (see the guardrails sketch after this list).
  • Krutrim, part of the Ola Group of businesses that includes one of India’s top ride-booking platforms, is developing a multilingual Indic foundation model using Mistral NeMo 12B, a state-of-the-art LLM developed by Mistral AI and NVIDIA.
  • Zoho Corporation, a global technology company based in Chennai, will use NVIDIA TensorRT-LLM and NVIDIA Triton Inference Server to optimize and deliver language models for its over 700,000 customers. The company will use NeMo running on NVIDIA Hopper GPUs to pretrain narrow, small, medium and large models from scratch for over 100 business applications.
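
To make the NeMo Guardrails item above concrete, here is a minimal sketch of how programmable rails wrap an LLM application. The Colang rules, YAML model configuration and refusal text are illustrative placeholders, not Flipkart’s actual configuration, and the example assumes an OpenAI API key is available to the underlying model:

```python
# Minimal NeMo Guardrails sketch: a topical rail that refuses off-topic
# requests before they reach the LLM. Rule content is illustrative.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

colang_content = """
define user ask off topic
  "Can you help me pick a stock to buy?"

define bot refuse off topic
  "I can only help with shopping-related questions."

define flow off topic
  user ask off topic
  bot refuse off topic
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

reply = rails.generate(
    messages=[{"role": "user", "content": "Which stock should I buy today?"}]
)
print(reply["content"])  # expected: the refusal defined above
```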

India’s top global systems integrators are also offering NVIDIA NeMo-accelerated solutions to their customers.

  • Infosys will work on specific tools and solutions using the NVIDIA AI stack. The company’s center of excellence is also developing AI-powered small language models that will be offered to customers as a service.
  • Tata Consultancy Services has developed AI solutions based on NVIDIA NIM Agent Blueprints for the telecommunications, retail, manufacturing, automotive and financial services industries. TCS’ offerings include NeMo-powered, domain-specific language models that can be customized to address customer queries and answer company-specific questions for employees across enterprise functions such as IT, HR and field operations.
  • Wipro is using NVIDIA AI Enterprise software including NIM Agent Blueprints and NeMo to help businesses easily develop custom conversational AI solutions such as digital humans to support customer service interactions.

Wipro and TCS also use NeMo Curator’s synthetic data generation pipelines to generate data in languages other than English to customize LLMs for their clients.

To learn more about NVIDIA’s collaboration with businesses and developers in India, watch the replay of company founder and CEO Jensen Huang’s fireside chat at the NVIDIA AI Summit.

Combining Machine Learning and Homomorphic Encryption in the Apple Ecosystem

At Apple, we believe privacy is a fundamental human right. Our work to protect user privacy is informed by a set of privacy principles, and one of those principles is to prioritize using on-device processing. By performing computations locally on a user’s device, we help minimize the amount of data that is shared with Apple or other entities. Of course, a user may request on-device experiences powered by machine learning (ML) that can be enriched by looking up global knowledge hosted on servers. To uphold our commitment to privacy while delivering these experiences, we have implemented a… (Apple Machine Learning Research)
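
The pairing described here, on-device ML plus server-side lookups the server cannot read, rests on homomorphic encryption: the server computes directly on ciphertexts. The toy sketch below uses the classic additively homomorphic Paillier scheme to make that property concrete; it is emphatically not the scheme, parameter sizes or protocol Apple uses, and the demo primes are far too small for real security:

```python
# Toy Paillier cryptosystem: the server can add encrypted values without
# ever decrypting them. Illustration only -- NOT Apple's scheme or parameters.
import math
import secrets

p, q = 104723, 104729          # demo primes; real keys use 1024+ bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # valid simplification because g = n + 1

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic property: E(a) * E(b) mod n^2 decrypts to a + b.
a, b = 42, 58
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b
print(decrypt(c_sum))  # 100
```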

Unlocking generative AI for enterprises: How SnapLogic powers their low-code Agent Creator using Amazon Bedrock

This post is cowritten with Greg Benson, Aaron Kesler and David Dellsperger from SnapLogic.

The landscape of enterprise application development is undergoing a seismic shift with the advent of generative AI. SnapLogic, a leader in generative integration and automation, has introduced the industry’s first low-code generative AI development platform, Agent Creator, designed to democratize AI capabilities across all organizational levels. Agent Creator is a low-code visual tool that empowers business users and application developers to create sophisticated large language model (LLM)-powered applications and agents without programming expertise.

This intuitive platform enables the rapid development of AI-powered solutions such as conversational interfaces, document summarization tools, and content generation apps through a drag-and-drop interface. By using SnapLogic’s library of more than 800 pre-built connectors and data transformation capabilities, users can seamlessly integrate various data sources and AI models, dramatically accelerating the development process compared to traditional coding methods. This innovative platform empowers employees, regardless of their coding skills, to create generative AI processes and applications through a low-code visual designer.

Pre-built templates tailored to various use cases are included, significantly enhancing both employee and customer experiences. Agent Creator is a versatile extension to the SnapLogic platform that is compatible with modern databases, APIs, and even legacy mainframe systems, fostering seamless integration across various data environments. Its low-code interface drastically reduces the time needed to develop generative AI applications.

Agent Creator

Creating enterprise-grade, LLM-powered applications and integrations that meet security, governance, and compliance requirements has traditionally demanded the expertise of programmers and data scientists. Not anymore! SnapLogic’s Agent Creator revolutionizes this landscape by empowering everyone to create generative AI–powered applications and automations without any coding. Enterprises can use SnapLogic’s Agent Creator to store their knowledge in vector databases and create powerful generative AI solutions that augment LLMs with relevant enterprise-specific knowledge, a framework known as Retrieval Augmented Generation (RAG); a minimal sketch of this pattern follows the list below. This capability accelerates business operations by providing a toolkit for users to create departmental chat assistants, add LLM-powered search to portals, automate processes involving documents, and much more. Additionally, this platform offers:

  • LLM-powered processes and apps in minutes – Agent Creator empowers enterprise users to create custom LLM-powered workflows without coding. Whether your HR department needs a Q&A workflow for employee benefits, your legal team needs a contract redlining solution, or your analysts need a research report analysis engine, Agent Creator provides the tools and flexibility to build it all.
  • Automate intelligent document processing (IDP) – Agent Creator can extract valuable data from invoices, purchase orders, resumes, insurance claims, loan applications, and other unstructured sources automatically. The IDP solution uses the power of LLMs to automate tedious document-centric processes, freeing up your team for higher-value work.
  • Boost productivity – Agent Creator empowers knowledge workers to automatically and reliably summarize reports and articles, quickly find answers, and extract valuable insights from unstructured data. Its low-code approach allows anyone to use the power of AI to automate tedious portions of their work, regardless of their technical expertise.
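
As referenced above, here is a minimal, self-contained sketch of the RAG pattern that Agent Creator assembles visually: embed enterprise snippets, retrieve the most relevant ones for a question, and prepend them to the model prompt. The feature-hashing embedding and the returned prompt are stand-ins; a production pipeline would use a real embedding model, a vector database and a hosted LLM:

```python
# Minimal RAG sketch. The toy embedding and in-memory index are placeholders
# for a real embedding model and vector database.
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    """Toy feature-hashing embedding; stands in for a real embedding model."""
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

docs = [
    "Employees accrue 20 vacation days per year.",
    "Expense reports are due by the 5th of each month.",
    "The VPN portal address is vpn.example.com.",
]
index = [(doc, embed(doc)) for doc in docs]   # in-memory "vector database"

def build_prompt(question: str, k: int = 2) -> str:
    q = embed(question)
    top = sorted(index, key=lambda item: -cosine(q, item[1]))[:k]
    context = "\n".join(doc for doc, _ in top)
    return f"Answer using only this context:\n{context}\n\nQ: {question}"

# In production the prompt would go to an LLM (e.g., via Amazon Bedrock).
print(build_prompt("How many vacation days do I get?"))
```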

The following demo shows Agent Creator in action.

To deliver these robust features, Agent Creator uses Amazon Bedrock, a foundational platform that provides managed infrastructure to use state-of-the-art foundation models (FMs). This eliminates the complexities of setting up and maintaining the underlying hardware and software so SnapLogic can focus on innovation and application development rather than infrastructure management.

What is Amazon Bedrock?

Amazon Bedrock is a fully managed service that provides access to high-performing FMs from leading AI startups and Amazon through a unified API, making it easier for enterprises to develop generative AI applications. Users can choose from a wide range of FMs to find the best fit for their use case. With Amazon Bedrock, organizations can experiment with and evaluate top models, customize them with their data using techniques like fine-tuning and RAG, and build intelligent agents that use enterprise systems and data sources. The serverless experience offered by Amazon Bedrock enables quick deployment, private customization, and secure integration of these models into applications without the need to manage underlying infrastructure. Key features include prompt experimentation, response augmentation with data sources, reasoning agents, task-specific model adaptation, and provisioned throughput for application efficiency. These capabilities and the unified API make Amazon Bedrock an ideal foundation for developing enterprise-grade AI applications.
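As a small illustration of that unified API, the following sketch uses the boto3 control-plane client to enumerate the FMs available to an account; the region choice is an assumption.

import boto3

# Sketch: list foundation models exposed through the unified Amazon Bedrock
# API; each entry can then be invoked with the same runtime call shape.
bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"][:5]:
    print(model["modelId"], "-", model["providerName"])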

By combining Amazon Bedrock’s high-performing FMs, secure customization options, and seamless integration features, SnapLogic’s Agent Creator delivers powerful, low-code AI solutions. This integration not only enhances Agent Creator’s ability to create and deploy sophisticated AI models quickly but also makes them scalable, secure, and efficient.

Why Agent Creator uses Amazon Bedrock

SnapLogic’s Agent Creator uses Amazon Bedrock to deliver a powerful, low-code generative AI development platform that meets the unique needs of its enterprise customers. By integrating Amazon Bedrock, Agent Creator benefits from several key advantages:

  • Access to top-tier FMs – Amazon Bedrock provides access to high-performing FMs from leading AI providers through a unified API. Through this access, Agent Creator offers enterprises the ability to experiment with and deploy sophisticated AI models without the complexity of managing the underlying infrastructure.
  • Seamless customization and integration – The serverless architecture of Amazon Bedrock frees up Agent Creator developers to focus on innovation and rapid development. It facilitates the seamless customization of FMs with enterprise-specific data using advanced techniques like prompt engineering and RAG so outputs are relevant and accurate.
  • Enhanced security and compliance – Security and compliance are paramount for enterprise AI applications. SnapLogic builds its platform on Amazon Bedrock, capitalizing on the proximity to data already stored in Amazon Web Services (AWS). Processing data closer to where it resides enables faster, more efficient operations, reduces latency, and helps meet stringent regulatory requirements. Because Amazon Bedrock offers robust features for these requirements, Agent Creator adheres to strict security protocols and governance standards, giving enterprises confidence in their generative AI deployments.
  • Accelerated development and deployment – With Amazon Bedrock, Agent Creator empowers users to quickly experiment with various FMs, accelerating the development cycle. The managed infrastructure streamlines the testing and deployment process, enabling rapid iteration and implementation of intelligent applications.
  • Scalability and performance – Generative AI applications built using Agent Creator are scalable and performant because of Amazon Bedrock. It can handle large volumes of data and interactions, which is crucial for enterprises requiring robust applications. Provisioned throughput options enable efficient model inference, promoting smooth operation even under heavy usage.

By harnessing the capabilities of Amazon Bedrock, SnapLogic’s Agent Creator delivers a comprehensive, low-code solution that allows enterprises to capitalize on the transformative potential of generative AI. This integration simplifies the development process while enhancing the capabilities, security, and scalability of AI applications, driving significant business value and innovation.

Solution approach

Agent Creator integrates Amazon Bedrock, Anthropic’s Claude, and Amazon OpenSearch Service vector databases to deliver a comprehensive, low-code visual interface for building generative AI solutions. At its core, Amazon Bedrock provides the foundational infrastructure for robust performance, security, and scalability for deploying machine learning (ML) models. This foundational layer manages the complexities of AI model deployment, allowing SnapLogic to offer a seamless user experience. By abstracting the complexities of generative AI development behind a user-friendly visual interface, Agent Creator gives enterprises the ability to use powerful AWS generative AI services without needing deep technical knowledge.

Control plane and data plane implementation

SnapLogic’s Agent Creator platform follows a decoupled architecture, separating the control plane and data plane for enhanced security and scalability.

Control plane

The control plane is responsible for managing and orchestrating the various components of the platform. It is hosted and managed by SnapLogic, so customers don’t have to worry about the underlying infrastructure and can focus on their core business requirements. Key components include the following:

  • Designer – A visual interface where users can design, build, and configure integrations and data flows
  • Manager – A centralized management console for monitoring, scheduling, and controlling the execution of integrations and data pipelines
  • Monitor – A comprehensive reporting and analytics dashboard that provides insights into the performance, usage, and health of the platform
  • API management (APIM) – A component that manages and secures the exposure of integrations and data services as APIs, providing seamless integration with external applications and systems

By separating the control plane from the data plane, SnapLogic offers a scalable and secure architecture so customers can use generative AI capabilities while maintaining control over their data within their own virtual private cloud (VPC) environment.

Data plane

The data plane is where the actual data processing and integration take place. To address customers’ requirements about data privacy and sovereignty, SnapLogic deploys the data plane within the customer’s VPC on AWS. This approach means that customer data never leaves their controlled environment, providing an extra layer of security and compliance. By using Amazon Bedrock, SnapLogic can invoke generative AI models directly from the customer’s VPC, enabling real-time processing and analysis of customer data without needing to move it outside the secure environment. The integration with Amazon Bedrock is achieved through the Amazon Bedrock InvokeModel APIs. SnapLogic’s data plane, running within the customer’s VPC, calls these APIs to invoke the desired generative AI models hosted on Amazon Bedrock.
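As a rough illustration of that call path, and not SnapLogic’s code, a data-plane process might invoke a model through the InvokeModel API as follows; the model ID is an example, and an interface VPC endpoint (AWS PrivateLink) for bedrock-runtime is assumed so the traffic stays inside the customer’s VPC.

import json
import boto3

# Minimal sketch of a data-plane call to the Bedrock InvokeModel API. With a
# PrivateLink interface endpoint for bedrock-runtime configured, the request
# resolves inside the customer's VPC. Model ID and prompt are illustrative.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize this support ticket: ..."}],
})
response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=body,
)
print(json.loads(response["body"].read())["content"][0]["text"])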

Functional components

The solution comprises the following functional components:

  • Vector Database Snap Pack – Manages the reading and writing of data to vector databases. This pack is crucial for maintaining the integrity and accessibility of the enterprise-specific knowledge stored in the OpenSearch vector database.
  • Chunker Snap – Segments large texts into manageable pieces. This functionality is important for processing large documents so the AI can handle and analyze text effectively.
  • Embedding Snap – Converts text segments into vectors. This step is vital for integrating enterprise-specific knowledge into AI prompts, enhancing the relevance and accuracy of AI responses.
  • LLM Snap Pack – Facilitates interactions with Claude and other language models so the AI can generate responses and perform tasks based on the processed and retrieved data.
  • Prompt Generator Snap – Enriches queries with the most relevant data so the AI prompts are contextually accurate and tailored to the specific needs of the enterprise.
  • Pre-Built Pipeline Patterns for indexing and retrieving – To streamline the deployment of intelligent applications, Agent Creator includes pre-built pipeline patterns. These patterns simplify common tasks such as indexing, retrieving data, and processing documents so AI-driven solutions can be deployed without the need for deep technical expertise.
  • Frontend Starter Kit – To simplify the deployment of user-facing applications, Agent Creator includes a Frontend Starter Kit. This kit provides pre-built components and templates for creating intuitive and responsive interfaces, so enterprises can quickly develop and deploy chat assistant UI applications that not only function well but also provide a seamless and engaging user experience.

Data flow and control flow

In the architecture of Agent Creator, the interaction between the Agent Creator platform, Amazon Bedrock, OpenSearch Service, and Anthropic’s Claude involves sophisticated and efficient management of data flow and control flow. By effectively managing both flows between Agent Creator and AWS services, SnapLogic provides a robust, secure, and efficient platform for developing and deploying enterprise-grade solutions. This architecture supports advanced integration functionalities and offers a seamless, user-friendly experience, making it a valuable tool for enterprise customers.

Data flow

Here is an example of this data flow for an Agent Creator pipeline that involves data ingestion, preprocessing, and vectorization using Chunker and Embedding Snaps. The resulting vectors are stored in OpenSearch Service databases for efficient retrieval and querying. When a query is initiated, relevant vectors are retrieved to augment the query with context-specific data, and the enriched query is processed by the LLM Snap Pack to generate responses.

The data flow follows these steps:

  1. Data ingestion and preprocessing – Enterprise data is ingested from various sources such as documents, databases, and APIs. The Chunker Snap segments large texts and documents into smaller, manageable chunks to make them compatible with downstream processing steps.
  2. Vectorization – The text chunks are passed to the Embedding Snap, which converts them into vector representations using embedding models. These vectors are numerical representations that capture the semantic meaning of the text. The resulting vectors are stored in OpenSearch Service vector databases, which manage and index these vectors for efficient retrieval and querying.
  3. Data retrieval and augmentation – When a query is initiated, the Vector Database Snap Pack retrieves relevant vectors from OpenSearch Service using similarity search algorithms to match the query with stored vectors. The retrieved vectors augment the initial query with context-specific enterprise data, enhancing its relevance.
  4. Prompt generation – The Prompt Generator Snap refines the augmented query so it’s well-formed and optimized for the language model.
  5. Interaction with LLMs – The final prompt is forwarded to the LLM Snap Pack, which interacts with Anthropic’s Claude and other integrated language models to generate a response. The response is postprocessed, if necessary, before delivery.
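Outside SnapLogic’s Snaps, the same five steps can be sketched directly against AWS services. The following minimal example is a sketch under stated assumptions: the OpenSearch host, index name, chunk size, and model IDs are illustrative, and the index is assumed to already have a knn_vector field mapping.

import json
import boto3
from opensearchpy import OpenSearch

# Hedged sketch of the data flow above using Bedrock and OpenSearch directly.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
client = OpenSearch(hosts=[{"host": "search-example.us-east-1.es.amazonaws.com",
                            "port": 443}], use_ssl=True)

def embed(text):
    # Vectorize text with an Amazon Titan embedding model via InvokeModel
    resp = bedrock.invoke_model(modelId="amazon.titan-embed-text-v1",
                                body=json.dumps({"inputText": text}))
    return json.loads(resp["body"].read())["embedding"]

# Steps 1-2: ingest and vectorize (naive fixed-size chunking for brevity)
document = open("handbook.txt").read()
chunks = [document[i:i + 1000] for i in range(0, len(document), 1000)]
for i, chunk in enumerate(chunks):
    client.index(index="enterprise-kb", id=str(i),
                 body={"text": chunk, "vector": embed(chunk)})

# Step 3: retrieve the most relevant chunks with a k-NN similarity search
query = "What is our parental leave policy?"
hits = client.search(index="enterprise-kb", body={
    "size": 3,
    "query": {"knn": {"vector": {"vector": embed(query), "k": 3}}},
})["hits"]["hits"]

# Steps 4-5: augment the prompt with retrieved context for the LLM
context = "\n".join(h["_source"]["text"] for h in hits)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` would then be sent to the LLM, as in the earlier InvokeModel sketch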

Control flow

The control flow in Agent Creator is orchestrated between the control plane and the data plane. The control plane hosts the user environment, stores configuration settings and user-created assets, and provides access to various components. The data plane executes pipelines, connecting to cloud-based or on-premises data endpoints, with the control plane orchestrating the workflow across interconnected snaps. Here is an example of this control flow for an Agent Creator pipeline.

The control flow follows these steps:

  1. Initiating requests – Users initiate requests using Agent Creator’s low-code visual interface, specifying tasks such as creating Q&A assistants or automating document processing. Pre-built UI components such as the Frontend Starter Kit capture user inputs and streamline the interaction process.
  2. Orchestrating pipelines – Agent Creator orchestrates workflows using interconnected snaps, each performing a specific function such as ingestion, chunking, vectorization, or querying. The architecture employs an event-driven model, where the completion of one snap triggers the next step in the workflow.
  3. Managing interactions with AWS services – Agent Creator communicates with AWS services, including Amazon Bedrock, OpenSearch Service, and Anthropic’s Claude in Amazon Bedrock, through secure API calls. The serverless infrastructure of Amazon Bedrock manages the execution of ML models, resulting in a scalable and reliable application.
  4. Observability – Robust mechanisms are in place for handling errors during data processing or model inference. Errors are logged and notifications are sent to system administrators for resolution. Continuous logging and monitoring provide transparency and facilitate troubleshooting. Logs are centrally stored and analyzed to maintain system integrity.
  5. Final output delivery – The generated AI responses are delivered to end user applications or interfaces, integrated into SnapLogic’s dashboards. User feedback is collected to continuously improve AI models and processing pipelines, enhancing overall system performance.

Use cases

You can use SnapLogic Agent Creator for many different use cases. The following sections illustrate just a few.

IDP on quarterly reports

A leading pharmaceutical data provider empowered their analysts by using Agent Creator and AutoIDP to automate data extraction on pharmaceutical drugs. By processing their portfolio of quarterly reports through LLMs, they could ask standardized questions to extract information that was previously gathered manually. This automation not only reduced errors but also saved significant time and resources, leading to a 35% reduction in costs and a centralized pool of reusable data assets, providing a single source of truth for their entire organization.

Automating market intelligence insights

A global telecommunications company used Agent Creator to process a multitude of RSS feeds, extracting only business-relevant information. This data was then integrated into Salesforce as a real-time feed of market insights. As the customer noted, “This automation allows us to filter and synthesize crucial data, delivering targeted, real-time insights to our sales teams, enhancing their productivity without the need for individual AI licenses.”

Agent Creator Amazon Bedrock roadmap

Development and improvement of Agent Creator are ongoing, with several enhancements released recently and more to come.

Recent releases

Extended support for more Amazon Bedrock capabilities was made available with the August 2024 release. Support was added for retrieving from and generating against Amazon Bedrock Knowledge Bases through snap orchestration, as well as for invoking Amazon Bedrock Agents. Continual enhancements for new models and additional authentication mechanisms have been released, including AWS Identity and Access Management (IAM) role authentication and cross-account IAM role authentication. All Agent Creator LLM Snaps have also been updated to support raw request payloads, adding the ability to specify entire conversations (for continued conversations) and prompts beyond just text.

Support for the Amazon Bedrock Converse API was released recently. With Converse API support, Agent Creator can support models beyond Amazon Titan and Anthropic’s Claude. This comes with added support for multimodal prompt capabilities, delivered through new Snaps that orchestrate the building of these more complex payloads.
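As a hedged sketch of what the Converse API pattern looks like in general (this is not SnapLogic’s Snap code; the model ID and image file are illustrative), one call shape serves any supported model, accepts full conversation history, and allows image content blocks for multimodal prompts:

import boto3

# Sketch of the Bedrock Converse API: one call shape across models, with a
# multimodal user turn. Model ID and image file are illustrative assumptions.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("chart.png", "rb") as f:
    image_bytes = f.read()

messages = [
    {"role": "user", "content": [
        {"text": "What does this chart show?"},
        {"image": {"format": "png", "source": {"bytes": image_bytes}}},
    ]},
]
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=messages,
)
reply = response["output"]["message"]
print(reply["content"][0]["text"])

# To continue the conversation, append the assistant reply and the next user
# turn, then call converse() again with the full message history.
messages += [reply, {"role": "user",
                     "content": [{"text": "Summarize that in one sentence."}]}]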

Conclusion

SnapLogic has revolutionized enterprise AI with Agent Creator, the industry’s first low-code generative AI development platform. By integrating advanced generative AI services such as Amazon Bedrock and OpenSearch Service vector databases and cutting-edge LLMs such as Anthropic’s Claude, SnapLogic empowers enterprise users, from product to sales to marketing, to create sophisticated generative AI–driven applications without deep technical expertise. This platform reduces dependency on specialized programmers and accelerates innovation by streamlining the generative AI development process with pre-built pipeline patterns and a Frontend Starter Kit.

Agent Creator offers robust performance, security, and scalability so enterprises can use powerful generative AI tools for competitive advantage. By pioneering this comprehensive approach, SnapLogic not only addresses current enterprise needs but also positions organizations to harness Amazon Bedrock for future advancements in generative AI technology, driving significant business value and operational efficiency for its enterprise customers.

To use Agent Creator effectively, schedule a demo of SnapLogic’s Agent Creator to learn how it can address your specific use cases. Identify potential pilot projects, such as creating departmental Q&A assistants, automating document processing, or putting an LLM to work for you behind the scenes. Prepare to store your enterprise knowledge in vector databases, which Agent Creator can use to augment LLMs with your specific information through RAG. Begin with a small project to demonstrate the value of Agent Creator, and use that success to build momentum for larger initiatives. To learn more about how to make best use of Amazon Bedrock, refer to the Amazon Bedrock Documentation.


About the authors

Asheesh Goja is a Principal Solutions Architect at AWS. Prior to AWS, Asheesh worked at prominent organizations such as Cisco and UPS, where he spearheaded initiatives to accelerate the adoption of several emerging technologies. His expertise spans ideation, co-design, incubation, and venture product development. Asheesh holds a wide portfolio of hardware and software patents, including a real-time C++ DSL, IoT hardware devices, and computer vision and edge AI prototypes. As an active contributor to the emerging fields of generative AI and edge AI, Asheesh shares his knowledge and insights through tech blogs and as a speaker at various industry conferences and forums.

Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including the NLP and computer vision domains. He helps customers achieve high-performance model inference on SageMaker.

Greg Benson is a Professor of Computer Science at the University of San Francisco and Chief Scientist at SnapLogic. He joined the USF Department of Computer Science in 1998 and has taught undergraduate and graduate courses including operating systems, computer architecture, programming languages, distributed systems, and introductory programming. Greg has published research in the areas of operating systems, parallel computing, and distributed systems. Since joining SnapLogic in 2010, Greg has helped design and implement several key platform features including cluster processing, big data processing, the cloud architecture, and machine learning. He currently is working on Generative AI for data integration.

Aaron Kesler is the Senior Product Manager for AI products and services at SnapLogic, where he applies over ten years of product management expertise to pioneer AI/ML product development and evangelize services across the organization. He is the author of the upcoming book “What’s Your Problem?”, aimed at guiding new product managers through the product management career. His entrepreneurial journey began with his college startup, STAK, which was later acquired by Carvertise, with Aaron contributing significantly to their recognition as Tech Startup of the Year 2015 in Delaware. Beyond his professional pursuits, Aaron finds joy in golfing with his father, exploring new cultures and foods on his travels, and practicing the ukulele.

David Dellsperger is a Senior Staff Software Engineer and Technical Lead of the Agent Creator product at SnapLogic. David has worked as a software engineer emphasizing machine learning and AI for over a decade, previously focusing on AI in healthcare and now on SnapLogic Agent Creator. David spends his time outside of work playing video games and spending quality time with his yellow lab, Sudo.

Read More

Next-generation learning experience using Amazon Bedrock and Anthropic’s Claude: Innovation from Classworks


This post is co-written with Jerry Henley, Hans Buchheim and Roy Gunter from Classworks.

Classworks is an online teacher and student platform that includes academic screening, progress monitoring, and specially designed instruction for reading and math for grades K–12. Classworks’s unique ability to ingest student assessment data from various sources, analyze it, and automatically deliver a customized learning progression for each student sets them apart. Although this evidence-based model has significantly impacted student growth, supporting diverse learning needs in a classroom of 25 students working independently remains challenging. Teachers often find themselves torn between assisting individual students and delivering group instruction, ultimately hindering the learning experience for all.

To address the challenges of personalized learning and teacher workload, Classworks introduces Wittly by Classworks, an AI-powered learning assistant built on Amazon Bedrock, a fully managed service that makes it straightforward to build generative AI applications.

Wittly’s innovative approach centers on two key aspects:

  • Harnessing Anthropic’s Claude in Amazon Bedrock for advanced AI capabilities – Wittly uses Amazon Bedrock to seamlessly integrate with Anthropic’s Claude 3.5 Sonnet, a state-of-the-art large language model (LLM). This powerful combination enables Wittly to provide tailored learning support and foster self-directed learning environments at scale.
  • Personalization and teacher empowerment – This comprises two objectives:
    • Personalized learning – Through AI-driven differentiated instruction, Wittly adapts to individual student needs, enhancing their learning experience.
    • Reduced teacher workload – By reducing the workload, Wittly allows educators to concentrate on high-impact student support, facilitating better educational outcomes.

In this post, we discuss how Classworks uses Amazon Bedrock and Anthropic’s Claude 3.5 Sonnet to deliver next-generation differentiated learning with Wittly.

Powering differentiated learning with Amazon Bedrock

The ability to deliver differentiated learning to a classroom of diverse learners is transformative. Engaging students with instruction tailored to their current learning skills accelerates mastery and fosters critical thinking and independent problem-solving. However, providing such personalized instruction to an entire classroom is labor-intensive and time-consuming for teachers.

Wittly uses generative AI to offer explanations of each skill at a student’s interest level in various ways. When students encounter challenging concepts, Wittly provides clear, concise guidance tailored to their learning style and language preferences, enabling them to grasp concepts at their own pace and overcome obstacles independently. With the scalable infrastructure of Amazon Bedrock, Wittly handles diverse classroom needs simultaneously, making personalized instruction a reality for every student.

Amazon Bedrock serves as the cornerstone of Wittly’s AI capabilities, offering several key advantages:

  • Single API access – Simplifies integration with Anthropic’s Claude foundation models (FMs), allowing for straightforward updates and potential expansion to other models in the future. This unified interface accelerates development cycles by reducing the complexity of working with multiple AI models. It also future-proofs Wittly’s AI infrastructure, enabling seamless adoption of new models and capabilities as they become available, without significant code changes.
  • Serverless architecture – Eliminates the need for infrastructure management, enabling Classworks to focus on educational content and user experience. This approach provides automatic scaling to handle varying loads, from individual student sessions to entire school districts accessing the platform simultaneously. It also optimizes costs by allocating resources based on actual usage rather than maintaining constant capacity. The reduced operational overhead allows Wittly’s team to dedicate more time and resources to enhancing the core educational features of the platform.

Combining cutting-edge AI technology with thoughtful implementation and robust safeguards, Wittly represents a significant leap forward in personalized digital learning assistance. The system’s architecture, powered by Amazon Bedrock and Anthropic’s Claude 3.5 Sonnet, enables Wittly to adapt to individual student needs while maintaining high standards of safety, privacy, and educational efficacy. By integrating these advanced technologies, Wittly not only enhances the learning experience but also makes sure it’s accessible, secure, and tailored to the unique requirements of every student.

Increasing teacher capacity and bandwidth

Meeting the diverse needs of students in a single classroom, particularly during intervention periods or in resource rooms, can be overwhelming. By differentiating instruction for students learning independently, Wittly saves valuable teacher time. Students can seek clarification and guidance from Wittly before asking for the teacher’s help, fostering a self-directed learning environment that eases the teacher’s burden.

This approach is particularly beneficial when a teacher delivers small group lessons while others learn independently. Knowing that interactive explanations are available to students learning each concept is a significant relief for teachers managing diverse ability levels in a classroom. By harnessing the powerful capabilities of Anthropic’s Claude 3.5 Sonnet, Wittly creates a more efficient, personalized learning ecosystem that benefits both students and teachers.

Solution overview

The following diagram illustrates the solution architecture.

 

The solution consists of the following key components:

  • Wittly interface – The frontend component where students interact with the learning assistant is designed to be intuitive and engaging.
  • Classworks API – This API manages the data exchange and serves as the central hub for communication between various system components.
  • Wittly AI assistant prompt – The AI receives a tailored prompt based on the student’s first name, grade level, learning objectives, and conversation history.
  • Student common misconception prompt – This prompt actively identifies potential misconceptions related to the current learning objective, enhancing the student experience.
  • Anthropic’s Claude on Amazon Bedrock – Amazon Bedrock orchestrates AI interactions, providing a fully managed service that simplifies the integration of the state-of-the-art Anthropic’s Claude models.

Monitoring the Wittly platform

In the rapidly evolving landscape of AI-powered education, robust monitoring isn’t only beneficial—it’s essential. Classworks recognizes this criticality and has developed a comprehensive monitoring strategy for the Wittly platform. This approach is pivotal in maintaining the highest standards of performance, optimizing resource allocation, and continually refining the user experience. More specifically, the Wittly platform monitors the following metrics:

  • Token usage – By tracking overall token consumption and visualizing usage patterns by feature and user type, we can plan resources efficiently and manage costs effectively.
  • Request volume – Monitoring API calls helps us detect unusual spikes and analyze usage patterns, enabling predictive scaling decisions and providing system reliability.
  • Response times – Measuring and analyzing latency, broken down by query complexity and user segment, allows us to identify and address performance bottlenecks promptly.
  • Costs – Implementing detailed cost tracking and modeling for various usage scenarios supports our budget management and pricing strategies, leading to sustainable growth.
  • Quality metrics – Logging and analyzing user feedback, along with correlating satisfaction metrics with model performance, guides our continuous improvement efforts.
  • Error tracking – Setting up alerts for critical errors and performing advanced error categorization and trend analysis helps us integrate seamlessly with our development workflow and maintain system integrity.
  • User engagement – Visualizing user journeys and feature adoption rates through monitoring feature usage informs our product development priorities, enhancing the overall user experience.
  • System health – By tracking overall system performance, we gain a holistic view of system dependencies, supporting proactive maintenance and maintaining a stable platform.

To achieve this, we use Amazon CloudWatch to capture key performance data, such as average latency and token counts. This information is then seamlessly integrated into our Grafana dashboard for real-time visualization and analysis. The following screenshot showcases our monitoring dashboard created using Grafana, which visually represents these critical metrics and provides actionable insights. Grafana is an open-source platform for monitoring and observability, enabling users to query, visualize, and understand their data through customizable dashboards.
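The following is an illustrative sketch, not Classworks’s production code, of how such data points can be published to CloudWatch; the namespace, metric names, and dimension values are assumptions, and a Grafana CloudWatch data source can then query them.

import boto3

# Hedged sketch: publish token usage and latency metrics to Amazon CloudWatch.
# Namespace, metric names, and dimensions are illustrative assumptions.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_data(
    Namespace="Wittly/Inference",  # hypothetical namespace
    MetricData=[
        {"MetricName": "TokensUsed", "Value": 842, "Unit": "Count",
         "Dimensions": [{"Name": "Feature", "Value": "ConceptExplanation"}]},
        {"MetricName": "ResponseLatency", "Value": 1.7, "Unit": "Seconds",
         "Dimensions": [{"Name": "Feature", "Value": "ConceptExplanation"}]},
    ],
)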

This comprehensive monitoring framework enables Classworks to deliver exceptional value to our users by optimizing AI-powered features and maintaining high performance standards. With CloudWatch for data collection and cutting-edge tools like Grafana for alerting and in-depth visualization and analysis, we can adapt and expand our monitoring capabilities in tandem with the growing complexity of our AI integration.

Engaging with Wittly: A student’s experience

As students embark on their Classworks activities, they are greeted by Wittly, their AI-powered learning assistant, integrated seamlessly into the Classworks instructional toolbar. When students encounter challenging concepts or need additional help, they can choose the Wittly icon to open an interactive chat window.

Unlike other AI chat-based systems that rely on open-ended questions, Wittly offers a set of pre-created AI response options. This guided approach makes sure conversations remain focused and relevant to the current activity. When Wittly provides explanations or poses questions, students can select from the provided responses, indicating their understanding or need for further clarification.

The student engagement workflow includes the following steps:

  1. Wittly is called when a student needs help with a specific activity in Classworks.
  2. Each Classworks activity focuses on a particular skill or concept, and we’ve tagged all activities with learning objectives for the specific activity.
  3. When a student accesses Wittly, we send key pieces of information, including the student’s first name, the learning objective of the activity they’re working on, and the language preference of the student.
  4. Wittly generates a personalized response to help the student. This typically includes a greeting using the student’s name, an explanation of the concept, an example related to the learning objective, and a prompt asking if the explanation helped the student understand the concept.

The following is a sample interaction, starting with the input sent to Wittly:

{
  "student_name": "Alex",
  "learning_objective": "Identify and use proper punctuation in compound sentences",
  "language": "English"
}

Wittly’s output is as follows:

"Hi Alex! Let's work on punctuating compound sentences. Remember to use a comma before coordinating conjunctions like 'and' or 'but'. For example: 'I love pizza, and I enjoy pasta.' Do you understand this? Please reply with 'thumbs up' or 'thumbs down'."

Wittly is designed to adapt to each student’s unique needs. It can communicate in both English and Spanish, and students can choose a voice they find engaging. For those who prefer auditory learning, Wittly reads its answers aloud while highlighting the corresponding text, making the learning experience both dynamic and accessible.

The structured interactions with Wittly are recorded, allowing teachers to monitor student progress and identify areas where additional support may be needed. This makes sure teachers remain actively involved in the learning process and that Wittly’s interactions are always appropriate and aligned with educational objectives.

With Wittly as their learning companion, students can delve into complex concepts in language arts, math, and science through guided, interactive exchanges. Wittly supports their learning journey, making their time in Classworks more engaging and personalized, all within a safe and controlled environment.

The following example showcases the interactive experience with Wittly in action, demonstrating how students engage with personalized learning through guided interactions.

Data privacy and safety considerations

In the era of AI-powered education, protecting student data and providing safe interactions are paramount. Classworks has implemented rigorous measures to uphold the highest standards of privacy and safety in Wittly’s design and operation.

Ethical AI foundation

Classworks employs a human-in-the-loop (HITL) model, combining AI technology with human expertise and insight. Wittly uses advanced AI algorithms, overseen and enhanced by the expertise of human educators and engineers, to generate instructional recommendations.

Student data protection

A core tenet in developing Wittly was achieving personalized learning without compromising student privacy. We don’t share any personally identifiable information with Wittly. Anthropic’s Claude LLM is trained on a dataset of anonymous data, not data from the Classworks platform, providing complete student privacy. Furthermore, when engaging with Wittly, students select from various pre-created responses to indicate whether the differentiated instruction was helpful or if they need further assistance. This approach eliminates the risk of inappropriate conversations, maintaining a safe learning environment.

Amazon Bedrock enhances this protection by encrypting data both in transit and at rest and by preventing the sharing of prompts with any third parties, including Anthropic. Additionally, Amazon Bedrock doesn’t train models with Classworks’s data, so all interactions remain secure and private.

Conclusion

Amazon Bedrock represents a pivotal advancement in AI technology, offering vast opportunities for innovation and efficiency in education. At Classworks, we’re not just adopting this technology, we’re pioneering its application to craft exceptional, personalized learning experiences. Our commitment extends beyond students to empowering educators with cutting-edge resources that elevate learning outcomes.

Based on Wittly’s capabilities, we estimate that teachers could potentially save 15–25 hours per month. This time savings might come from a reduced need for individual student support, decreased time spent on classroom management, and less after-hours support. These efficiency gains significantly enhance the learning environment, allowing teachers to focus more on high-impact, tailored educational experiences.

As AI continues to evolve, we’re committed to refining our policies and practices to uphold the highest standards of safety, quality, and efficacy in educational technology. By embracing Amazon Bedrock, we can make sure Classworks remains at the forefront of delivering safe, impactful, and meaningful educational experiences to students and educators alike.

To learn more about how generative AI and Amazon Bedrock can revolutionize your educational platform by delivering personalized learning experiences, enhancing teacher capacity, and enforcing data privacy, visit Amazon Bedrock. Discover how you can use advanced AI to create innovative applications, streamline development processes, and provide impactful data insights for your users.

To learn more about Classworks and our groundbreaking generative AI capabilities, visit our website.

This is a guest post from Classworks. Classworks is an award-winning K–12 special education and tiered intervention platform that uses advanced technology and comprehensive data to deliver superior personalized learning experiences. The comprehensive solution includes academic screeners, math and reading interventions, specially designed instruction, progress monitoring, and powerful data. Validated by the National Center on Intensive Intervention (NCII) and endorsed by The Council of Administrators of Special Education (CASE), Classworks partners with districts nationwide to deliver data-driven personalized learning to students where they are ready to learn.

 


About the Authors

Jerry Henley, VP of Technology at Curriculum Advantage, leads the product technical vision, platform services, and support for Classworks. With 18 years in EdTech, he oversees innovation, roadmaps, and AI integration, enhancing personalized learning experiences for students and educators.

 

Hans Buchheim, VP of Engineering at Curriculum Advantage, has spent 25 years developing Classworks. He leads software architecture decisions, mentors junior developers, and ensures the product evolves to meet educator needs.

 

Roy Gunter, DevOps Engineer at Curriculum Advantage, manages cloud infrastructure and automation for Classworks. He focuses on system reliability, troubleshooting, and performance optimization to deliver an excellent user experience.

 

Gowtham Shankar is a Solutions Architect at Amazon Web Services (AWS). He is passionate about working with customers to design and implement cloud-native architectures to address business challenges effectively. Gowtham actively engages in various open source projects, collaborating with the community to drive innovation.

 

Dr. Changsha Ma is an AI/ML Specialist at AWS. She is a technologist with a PhD in Computer Science, a master’s degree in Education Psychology, and years of experience in data science and independent consulting in AI/ML. She is passionate about researching methodological approaches for machine and human intelligence. Outside of work, she loves hiking, cooking, hunting for good food, and spending time with friends and family.

Read More