Riding the Rays: Sunswift Racing Shines in World Solar Challenge Race

In the world’s largest solar race car event of the year, the University of New South Wales Sunswift Racing team is having its day in the sun.

The World Solar Challenge, which began some 35 years ago, attracts academic teams from across the globe. This year’s event drew nearly 100 competitors.

The race runs nearly 1,900 miles over the course of about four days and pits challengers in a battle not for speed but for greatest energy efficiency.

UNSW Sydney won the energy efficiency competition and crossed the finish line first, taking the Cruiser Cup with its Sunswift 7 vehicle, which uses the NVIDIA Jetson Xavier NX module for energy optimization. It was also the only competitor to race with four people on board and a remote mission control team.

“It’s a completely different proposition to say we can use the least amount of energy and arrive in Adelaide before anybody else, but crossing the line first is just about bragging rights,” said Richard Hopkins, project manager at Sunswift and a UNSW professor. Hopkins previously managed Formula 1 race teams in the U.K.

Race organizers bill the event, which cuts across the entire Australian continent on public roads — from Darwin in the north to Adelaide in the south — as the “world’s greatest innovation and engineering challenge contributing to a more sustainable mobility future.” It’s also become a launchpad for students pursuing career paths in the electric vehicle industry.

Like many of the competitors, UNSW is coming back after a three-year hiatus from the race due to the COVID-19 pandemic, making this year’s competition highly anticipated.

“Every single team member needs to understand what they’re doing and what their role is on the team and perform at the very best during those five-and-a-half days,” said Hopkins. “It is exhausting.”

All In on Energy Efficiency  

The race allows participants to start with a fully charged battery and to recharge when the vehicles stop for the night at two locations. The rest of the energy used, some 90%, comes from the sun via the vehicles’ solar panels.

UNSW’s seventh-generation Sunswift 7 runs algorithms to optimize for energy efficiency, essentially shutting down all nonessential computing to maximize battery life.

The solar electric vehicle relies on NVIDIA Jetson AI to give it an edge across its roughly 100 automotive monitoring and power management systems.

It can also factor in whether it should drive faster or slower based on weather forecasts. For instance, the car will urge the driver to go faster if it’s going to rain later in the day when conditions would force the car to slow down.

The Sunswift 7 vehicle was designed mostly to drive in a straight line from Darwin to Adelaide, and the objective is to use the least amount of power beyond that mission, said Hopkins.

“Sunswift 7 late last year was featured in the Guinness Book of World Records for being the fastest electric vehicle for over 1,000 kilometers on a single charge of battery,” he said.

Jetson-Based Racers for Learning

The UNSW team created nearly 60 design iterations to improve on the aerodynamics of the vehicle. They used computational fluid dynamics modeling and ran simulations to analyze each version.

“We didn’t ever put the car through a physical wind tunnel,” said Hopkins.

The technical team has been working on a model to determine what speed the vehicle should be driven at for maximum energy conservation. “They’re working on taking in as many parameters as you can, given it’s really hard to get good driving data,” said Josh Bramley, technology manager at Sunswift Racing.
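
The post doesn’t detail that model, but a purely hypothetical sketch of the speed-versus-energy trade-off such a model has to weigh might look like the following, with every constant (mass, drag area, rolling resistance, solar input) invented for illustration rather than taken from Sunswift 7:

# Toy illustration of the cruise-speed trade-off, not Sunswift's actual model.
# All constants below are hypothetical placeholders.

def battery_draw_wh_per_km(speed_kmh, solar_w, mass_kg=500.0, crr=0.008,
                           cda_m2=0.10, air_density=1.2):
    """Net battery energy (Wh) drawn per km at a steady cruise speed."""
    v = speed_kmh / 3.6                              # m/s
    rolling_w = crr * mass_kg * 9.81 * v             # rolling-resistance power
    aero_w = 0.5 * air_density * cda_m2 * v ** 3     # aerodynamic drag power
    net_w = rolling_w + aero_w - solar_w             # solar input offsets the load
    return net_w / speed_kmh                         # Wh per km from the battery

# Compare candidate speeds under sunny vs. overcast forecasts.
for speed in (70, 90, 110):
    sunny = battery_draw_wh_per_km(speed, solar_w=1000)
    overcast = battery_draw_wh_per_km(speed, solar_w=300)
    print(f"{speed} km/h: {sunny:+.1f} Wh/km sunny, {overcast:+.1f} Wh/km overcast")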

Sunswift 7 is running on the Robot Operating System (ROS) suite of software and relies on its NVIDIA Jetson module to process all the input from the sensors for analytics, which can be monitored by the remote pit crew back on campus at UNSW.
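
As a rough sketch of that pattern, and not Sunswift’s actual software, a ROS 2 node on the Jetson might subscribe to a few sensor topics and republish a compact summary for the remote pit crew; the topic names and message types below are assumptions:

# Hypothetical ROS 2 telemetry aggregator. Topic names and message types are
# illustrative assumptions, not Sunswift's real interfaces.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32, String


class TelemetryAggregator(Node):
    def __init__(self):
        super().__init__("telemetry_aggregator")
        self.latest = {}
        # Subscribe to hypothetical sensor topics
        self.create_subscription(Float32, "/solar/array_current", self._make_cb("solar_a"), 10)
        self.create_subscription(Float32, "/battery/voltage", self._make_cb("batt_v"), 10)
        self.create_subscription(Float32, "/wheels/speed_kmh", self._make_cb("speed_kmh"), 10)
        # Publish a 1 Hz summary that mission control can monitor remotely
        self.pub = self.create_publisher(String, "/telemetry/summary", 10)
        self.create_timer(1.0, self._publish_summary)

    def _make_cb(self, key):
        def cb(msg):
            self.latest[key] = msg.data
        return cb

    def _publish_summary(self):
        self.pub.publish(String(data=str(self.latest)))


def main():
    rclpy.init()
    rclpy.spin(TelemetryAggregator())


if __name__ == "__main__":
    main()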

Jetson is used for all the control systems on the car, so everything from the accelerator pedal, wheel sensors, solar current sensors and more is processed on it, producing data to analyze for ways AI might help, said Bramley. The next version of the vehicle is expected to pack more AI, he added.

“A lot of the AI and computer vision will be coming for Sunswift 8 in the next solar challenge,” said Bramley.

More than 100 students are getting course credit for the Sunswift Racing team work, and many are interested in pursuing careers in electric vehicles, said Hopkins.

Past World Solar Challenge contestants have gone on to work at Tesla, SpaceX and Zipline.

Talk about a bright future.

Learn more about the NVIDIA Jetson platform for edge AI and robotics.

Schneider Electric leverages Retrieval Augmented LLMs on SageMaker to ensure real-time updates in their ERP systems

This post was co-written with Anthony Medeiros, Manager of Solutions Engineering and Architecture for North America Artificial Intelligence, and Blake Santschi, Business Intelligence Manager, from Schneider Electric. Additional Schneider Electric experts include Jesse Miller, Somik Chowdhury, Shaswat Babhulgaonkar, David Watkins, Mark Carlson and Barbara Sleczkowski. 

Enterprise Resource Planning (ERP) systems are used by companies to manage several business functions such as accounting, sales or order management in one system. In particular, they are routinely used to store information related to customer accounts. Different organizations within a company might use different ERP systems and merging them is a complex technical challenge at scale which requires domain-specific knowledge.

Schneider Electric is a leader in digital transformation of energy management and industrial automation. To best serve their customers’ needs, Schneider Electric needs to keep track of the links between related customers’ accounts in their ERP systems. As their customer base grows, new customers are added daily, and their account teams have to manually sort through these new customers and link them to the proper parent entity.

The linking decision is based on the most recent information available publicly on the Internet or in the media, and might be affected by recent acquisitions, market news or divisional restructuring. An example of account linking would be to identify the relationship between Amazon and its subsidiary, Whole Foods Market.

Schneider Electric is deploying large language models for their capabilities in answering questions in various knowledge-specific domains, but an LLM’s knowledge is limited by the date it was trained. The team addressed that challenge by using a retrieval-augmented, open-source large language model available on Amazon SageMaker JumpStart to process large amounts of external knowledge and surface corporate or public relationships among ERP records.

In early 2023, when Schneider Electric decided to automate part of its accounts linking process using artificial intelligence (AI), the company partnered with the AWS Machine Learning Solutions Lab (MLSL). With MLSL’s expertise in ML consulting and execution, Schneider Electric was able to develop an AI architecture that would reduce the manual effort in their linking workflows, and deliver faster data access to their downstream analytics teams.

Generative AI

Generative AI and large language models (LLMs) are transforming the way business organizations are able to solve traditionally complex challenges related to natural language processing and understanding. Some of the benefits offered by LLMs include the ability to comprehend large portions of text and answer related questions by producing human-like responses. AWS makes it easy for customers to experiment with and productionize LLM workloads by making many options available via Amazon SageMaker JumpStart, Amazon Bedrock, and Amazon Titan.

External Knowledge Acquisition

LLMs are known for their ability to compress human knowledge and have demonstrated remarkable capabilities in answering questions in various knowledge-specific domains, but their knowledge is limited by the date the model was trained. We address that information cutoff by coupling the LLM with a Google Search API to deliver a powerful Retrieval Augmented Generation (RAG) solution that addresses Schneider Electric’s challenges. The RAG solution is able to process large amounts of external knowledge pulled from the Google search and surface corporate or public relationships among ERP records.

See the following example:

Question: Who is the parent company of One Medical?
Google query: “One Medical parent company” → information → LLM
Answer: One Medical, a subsidiary of Amazon…

The preceding example (taken from the Schneider Electric customer database) concerns an acquisition that happened in February 2023 and thus would not be caught by the LLM alone due to knowledge cutoffs. Augmenting the LLM with Google search provides access to the most up-to-date information.

Flan-T5 model

In this project, we used the Flan-T5-XXL model from the Flan-T5 family of models.

The Flan-T5 models are instruction-tuned and therefore capable of performing various zero-shot NLP tasks. Our downstream task did not require a vast amount of world knowledge, but rather good question-answering performance given a context of texts provided through search results, and therefore the 11-billion-parameter Flan-T5-XXL model performed well.

JumpStart provides convenient deployment of this model family through Amazon SageMaker Studio and the SageMaker SDK. This includes Flan-T5 Small, Flan-T5 Base, Flan-T5 Large, Flan-T5 XL, and Flan-T5 XXL. Furthermore, JumpStart provides a few versions of Flan-T5 XXL at different levels of quantization. We deployed Flan-T5-XXL to an endpoint for inference using Amazon SageMaker Studio JumpStart.

Path to Flan-T5 SageMaker JumpStart
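
A minimal sketch of that deployment path with the SageMaker Python SDK might look like the following; the model ID follows the JumpStart catalog naming, while the instance type and payload format are assumptions:

# Sketch: deploying Flan-T5 XXL from SageMaker JumpStart with the SageMaker SDK.
# Instance type and request payload are assumptions based on the JumpStart Flan-T5 schema.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-text2text-flan-t5-xxl")
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.12xlarge")

# Sanity-check the endpoint before wiring it into LangChain
response = predictor.predict({"text_inputs": "Who is the parent company of Whole Foods Market?"})
print(response)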

Retrieval Augmented LLM with LangChain

LangChain is a popular and fast-growing framework that allows the development of applications powered by LLMs. It is based on the concept of chains, which are combinations of different components designed to improve the functionality of LLMs for a given task. For instance, it allows us to customize prompts and integrate LLMs with different tools like external search engines or data sources. In our use case, we used the Google Serper component to search the web and deployed the Flan-T5-XXL model available on Amazon SageMaker Studio JumpStart. LangChain performs the overall orchestration and allows the search result pages to be fed into the Flan-T5-XXL instance.

The Retrieval-Augmented Generation (RAG) approach consists of two steps:

  1. Retrieval of relevant text chunks from external sources.
  2. Augmentation of the prompt given to the LLM with those chunks as context.

For Schneider Electric’s use case, the RAG proceeds as follows:

  1. The given company name is combined with a question like “Who is the parent company of X?” (where X is the given company) and passed to a Google query using the Serper API.
  2. The extracted information is combined with the prompt and original question and passed to the LLM for an answer.

The following diagram illustrates this process.

RAG Workflow

Use the following code to create an endpoint:

# Spin FLAN-T5-XXL Sagemaker Endpoint
llm = SagemakerEndpoint(...)
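
The endpoint configuration is elided above; a sketch of what it might look like with LangChain’s content-handler interface follows, assuming the JumpStart Flan-T5 request/response format and a hypothetical endpoint name:

# Sketch of the elided endpoint setup, assuming the JumpStart Flan-T5 payload format
# ("text_inputs" in, "generated_texts" out). The endpoint name is hypothetical.
import json

from langchain.llms.sagemaker_endpoint import SagemakerEndpoint, LLMContentHandler


class FlanT5ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt, model_kwargs):
        return json.dumps({"text_inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output):
        return json.loads(output.read().decode("utf-8"))["generated_texts"][0]


llm = SagemakerEndpoint(
    endpoint_name="flan-t5-xxl-endpoint",   # hypothetical endpoint name
    region_name="us-east-1",
    model_kwargs={"temperature": 1e-10, "max_length": 200},
    content_handler=FlanT5ContentHandler(),
)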

Instantiate the search tool:

from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper

# Requires the SERPER_API_KEY environment variable to be set
search = GoogleSerperAPIWrapper()
search_tool = Tool(
	name="Search",
	func=search.run,
	description="useful for when you need to ask with search",
	verbose=False)

In the following code, we chain together the retrieval and augmentation components:

my_template = """
Answer the following question using the information. n
Question : {question}? n
Information : {search_result} n
Answer: """
prompt_template = PromptTemplate(
	input_variables=["question", 'search_result'],
	template=my_template)
question_chain = LLMChain(
	llm=llm,
	prompt=prompt_template,
	output_key="answer")

def search_and_reply_company(company):
	# Retrieval
	search_result = search_tool.run(f"{company} parent company")
	# Augmentation
	output = question_chain({
		"question":f"Who is the parent company of {company}?",
		"search_result": search_result})
	return output["answer"]

search_and_reply_company("Whole Foods Market")
"Amazon"

Prompt Engineering

The combination of the context and the question is called the prompt. We noticed that the blanket prompt we used (variations around asking for the parent company) performed well for most public sectors (domains) but didn’t generalize well to education or healthcare since the notion of parent company is not meaningful there. For education, we used “X” while for healthcare we used “Y”.

To enable this domain-specific prompt selection, we first had to identify the domain a given account belongs to. For this, we also used RAG: as a first step, we posed the multiple-choice question “What is the domain of {account}?”, and based on the answer we inquired about the parent of the account using the relevant prompt as a second step. See the following code:

my_template_options = """
Answer the following question using the information. n
Question :  {question}? n
Information : {search_result} n
Options :n {options} n
Answer:
"""

prompt_template_options = PromptTemplate(
	input_variables=["question", "search_result", "options"],
	template=my_template_options)
question_chain = LLMChain(
	llm=llm,
	prompt=prompt_template_options,
	output_key="answer")
	
my_options = """
- healthcare
- education
- oil and gas
- banking
- pharma
- other domain """

def search_and_reply_domain(company):
	search_result = search_tool.run(f"{company} ")
	output = question_chain({
		"question":f"What is the domain of {company}?",
		"search_result": search_result,
		"options":my_options})
	return output["answer"]

search_and_reply_domain("Exxon Mobil")
"oil and gas"

The sector-specific prompts boosted overall performance from 55% to 71% accuracy. Overall, the effort and time invested in developing effective prompts appear to significantly improve the quality of the LLM responses.

RAG with tabular data (SEC-10k)

SEC 10-K filings, filed annually by publicly traded companies, are another reliable source of information on subsidiaries and subdivisions. These filings are available directly on SEC EDGAR or through the CorpWatch API.

We assume the information is given in tabular format. Below is a pseudo csv dataset that mimics the original format of the SEC-10K dataset. It is possible to merge multiple csv data sources into a combined pandas dataframe:

# A pseudo dataset similar by schema to the CorpWatch API dataset
df.head()

index  relation_id  source_cw_id  target_cw_id  parent  subsidiary
1      90           22569         37            AMAZON  WHOLE FOODS MARKET
873    1467         22569         781           AMAZON  TWITCH
899    1505         22569         821           AMAZON  ZAPPOS
900    1506         22569         821           AMAZON  ONE MEDICAL
901    1507         22569         821           AMAZON  WOOT!
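
The merge step itself might look like this minimal sketch, with hypothetical file names:

# Minimal sketch of merging several CSV extracts into one dataframe.
# File names are hypothetical placeholders.
import glob
import pandas as pd

frames = [pd.read_csv(path) for path in glob.glob("sec10k_relations_*.csv")]
df = pd.concat(frames, ignore_index=True).drop_duplicates()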

LangChain provides an abstraction layer for pandas through create_pandas_dataframe_agent. There are two key advantages to using LangChain/LLMs for this task:

  1. Once spun up, it allows a downstream consumer to interact with the dataset in natural language rather than code
  2. It is more robust to misspellings and different ways of naming accounts.

We spin up the endpoint as before and create the agent:

# Create a pandas dataframe agent
from langchain.agents import create_pandas_dataframe_agent

agent = create_pandas_dataframe_agent(llm, df, verbose=True)

In the following code, we query for the parent/subsidiary relationship and the agent translates the query into pandas language:

# Example 1
query = "Who is the parent of WHOLE FOODS MARKET?"
agent.run(query)

#### output
> Entering new AgentExecutor chain...
Thought: I need to find the row with WHOLE FOODS MARKET in the subsidiary column
Action: python_repl_ast
Action Input: df[df['subsidiary'] == 'WHOLE FOODS MARKET']
Observation:
source_cw_id	target_cw_id	parent		subsidiary
22569			37				AMAZON		WHOLE FOODS MARKET
Thought: I now know the final answer
Final Answer: AMAZON
> Finished chain.
# Example 2
query = "Who are the subsidiaries of Amazon?"
agent.run(query)
#### output
> Entering new AgentExecutor chain...
Thought: I need to find the row with source_cw_id of 22569
Action: python_repl_ast
Action Input: df[df['source_cw_id'] == 22569]
...
Thought: I now know the final answer
Final Answer: The subsidiaries of Amazon are Whole Foods Market, Twitch, Zappos, One Medical, Woot!...
> Finished chain.
'The subsidiaries of Amazon are Whole Foods Market, Twitch, Zappos, One Medical, Woot!.'

Conclusion

In this post, we detailed how we used building blocks from LangChain to augment an LLM with search capabilities, in order to uncover relationships between Schneider Electric’s customer accounts. We extended the initial pipeline to a two-step process with domain identification before using a domain specific prompt for higher accuracy.

In addition to the Google Search query, datasets that detail corporate structures, such as SEC 10-K filings, can be used to further augment the LLM with trustworthy information. The Schneider Electric team will also be able to extend and design their own prompts, mimicking the way they classify some public sector accounts, further improving the accuracy of the pipeline. These capabilities will enable Schneider Electric to maintain up-to-date and accurate organizational structures of their customers, and unlock the ability to do analytics on top of this data.


About the Authors

Anthony Medeiros is a Manager of Solutions Engineering and Architecture at Schneider Electric. He specializes in delivering high-value AI/ML initiatives to many business functions within North America. With 17 years of experience at Schneider Electric, he brings a wealth of industry knowledge and technical expertise to the team.

Blake Santschi is a Business Intelligence Manager at Schneider Electric, leading an analytics team focused on supporting the Sales organization through data-driven insights.

Joshua Levy is a Senior Applied Science Manager in the Amazon Machine Learning Solutions Lab, where he helps customers design and build AI/ML solutions to solve key business problems.

Kosta Belz is a Senior Applied Scientist with the AWS MLSL, focusing on generative AI and document processing. He is passionate about building applications using knowledge graphs and NLP. He has around 10 years of experience in building data and AI solutions to create value for customers and enterprises.

Aude Genevay is an Applied Scientist in the Amazon GenAI Incubator, where she helps customers solve key business problems through ML and AI. She previously was a researcher in theoretical ML and enjoys applying her knowledge to deliver state-of-the-art solutions to customers.

Md Sirajus Salekin is an Applied Scientist at AWS Machine Learning Solution Lab. He helps AWS customers to accelerate their business by building AI/ML solutions. His research interests are multimodal machine learning, generative AI, and ML applications in healthcare.

Zichen Wang, PhD, is a Senior Applied Scientist in AWS. With several years of research experience in developing ML and statistical methods using biological and medical data, he works with customers across various verticals to solve their ML problems.

Anton Gridin is a Principal Solutions Architect supporting Global Industrial Accounts, based out of New York City. He has more than 15 years of experience building secure applications and leading engineering teams.

DLSS 3.5 With Ray Reconstruction Now Available in NVIDIA Omniverse

The highly anticipated NVIDIA DLSS 3.5 update, including Ray Reconstruction for NVIDIA Omniverse — a platform for connecting and building custom 3D tools and apps — is now available.

RTX Video Super Resolution (VSR) will be available with tomorrow’s NVIDIA Studio Driver release — which also supports the DLSS 3.5 update in Omniverse and is free for RTX GPU owners. The version 1.5 update delivers greater overall graphical fidelity, upscaling for native videos and support for GeForce RTX 20 Series GPUs.

NVIDIA Creative Director and visual effects producer Sabour Amirazodi returns In the NVIDIA Studio to share his Halloween-themed project: a full projection mapping show on his house, featuring haunting songs, frightful animation, spooky props and more.

Creators can join the #SeasonalArtChallenge by submitting harvest- and fall-themed pieces through November.

The latest Halloween-themed Studio Standouts video features ghouls, creepy monsters, haunted hospitals and dimly lit homes, and is not for the faint of heart.

Remarkable Ray Reconstruction

NVIDIA DLSS 3.5 — featuring Ray Reconstruction — enhances ray-traced image quality on GeForce RTX GPUs by replacing hand-tuned denoisers with an NVIDIA supercomputer-trained AI network that generates higher-quality pixels in between sampled rays.

Previewing content in the viewport, even with high-end hardware, can sometimes offer less than ideal image quality, as traditional denoisers require hand-tuning for every scene.

With DLSS 3.5, the AI neural network recognizes a wide variety of scenes, producing high-quality preview images and drastically reducing time spent rendering scenes.

NVIDIA Omniverse and the USD Composer app — featuring the Omniverse RTX Renderer — specialize in real-time preview modes, offering ray-tracing inference and higher-quality previews while building and iterating.

The feature can be enabled by opening “Render Settings” under “Ray Tracing,” opening the “Direct Lighting” tab and ensuring “New Denoiser (experimental)” is turned on.

The ‘Haunted Sanctuary’ Returns

Sabour Amirazodi’s “home-made” installation, Haunted Sanctuary, has become an annual tradition, much to the delight of his neighbors.

Crowds form to watch the spectacular Halloween light show.

Amirazodi begins by staging props, such as pumpkins and skeletons, around his house.

Physical props add to the spooky atmosphere.

Then he carefully positions his projectors — building protective casings to keep them both safe and blended into the scene.

Amirazodi custom builds, paints and welds his projector cases to match the Halloween-themed decor.

“In the last few years, I’ve rendered 32,862 frames of 5K animation out of the Octane Render Engine. The loop has now become 21 minutes long, and the musical show is another 28 minutes!” — Sabour Amirazodi

Building a virtual scene onto a physical object requires projection mapping, so Amirazodi used NVIDIA GPU-accelerated MadMapper software and its structured light-scan feature to map custom visuals onto his house. He achieved this by connecting a DSLR camera to his mobile workstation, which was powered by an NVIDIA RTX A5000 GPU.

He used the camera to shoot a series of lines and capture photos, then translated them into an image from the projector’s point of view on which to base a 3D model. Basic camera-matching tools in Cinema 4D helped recreate the scene. Afterward, Amirazodi applied various mapping and perspective-correction edits.

Projection mapping requires matching the virtual world with real-world specifications, done in Cinema 4D.

Next, Amirazodi animated and rigged the characters. GPU acceleration in the viewport enabled smooth interactivity with complex 3D models.

“I like having a choice between several third-party NVIDIA GPU-accelerated 3D renderers, such as V-Ray, OctaneRender and Redshift in Cinema 4D,” noted Amirazodi.

“I switched to NVIDIA graphics cards in 2017. GPUs are the only way to go for serious creators.” — Sabour Amirazodi

Amirazodi then spent hours on his RTX 6000 workstation creating and rendering out all the animations, assembling them in Adobe After Effects and compositing them on the scanned canvas in MadMapper. There, he crafted individual scenes to render out as chunks and assembled them in Adobe Premiere Pro. Remarkably, he repeated this workflow for every projector.

Once satisfied with the sequences, Amirazodi encoded everything using Adobe Media Encoder and loaded them onto BrightSign digital players — all networked to run the show synchronously.

Amirazodi used the advantages of GPU acceleration to streamline his workflow — saving him countless hours. “After Effects has numerous plug-ins that are GPU-accelerated — plus, Adobe Premiere Pro and Media Encoder use the new dual encoders found in the Ada generation of NVIDIA RTX 6000 GPUs, cutting my export times in half,” he said.

Smooth timeline movement in Adobe Premiere Pro assisted by the NVIDIA RTX A6000 GPU.

Amirazodi’s careful efforts are all in the Halloween spirit — creating a hauntingly memorable experience for his community.

“The hard work and long nights all become worth it when I see the smile on my kids’ faces and all the joy it brings to the entire neighborhood,” he reflected.

NVIDIA Creative Director Sabour Amirazodi.

Discover more of Amirazodi’s work on IMDb.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 
