
Edge AI: Simplified

Edge AI Studio is used to create advanced models for visual defect detection equipment


As engineers look to integrate artificial intelligence (AI) into their work, clarity is needed about edge AI and its role in embedded product design. Edge artificial intelligence, or edge AI, is still new ground for designers, and many engineers are looking to build their knowledge, skills, and resources around this new technology.

Texas Instruments is leading the way in this area and has provided the answers engineers are looking for in the Edge AI Simplified webinar. The presentation provides insight into AI and deep learning and explains the difference between edge AI and cloud-based AI. A manufacturing test case, a visual defect detection system, illustrates how engineers can easily begin using Edge AI Studio to solve their own engineering problems with this advanced technology.

What is Edge AI?

Artificial intelligence is steadily influencing the workflow of engineers. Many algorithms and paradigms fall under the artificial intelligence umbrella: computing systems that exhibit the capability of 'making decisions' autonomously. Many of these are used to process sizable amounts of data, and this processing is typically done in the cloud. As shown in the table below, edge AI differs from cloud AI in several important aspects.

Comparing Edge AI to Cloud AI

Edge AI: Processes data locally on the device, which lowers latency and keeps data on-site.

Cloud AI: Sends data to remote servers for processing, which adds latency and moves data off the device.

As illustrated, edge AI provides faster, more private data processing.

Both edge AI and cloud AI have advantages, but privacy can be a priority, for example during new product development and testing. Nevertheless, edge AI applications are built on the same artificial intelligence and deep learning foundations that cloud AI applies to larger data sets.

Artificial Intelligence and Deep Learning

Image and voice recognition are among the most popular technologies used today across many applications, from voice assistants to automated optical inspection.

Due to the diversity and size of data sets for these applications, artificial intelligence and deep learning systems are often deployed for data processing. The products designed for these applications rely on specific models for pattern recognition and primarily implement one of three methodologies.

Conventional Computer Vision:

This method has been in use for several decades but is limited in application: the algorithm must be programmed specifically for each use case, so it is narrow in scope and has a long development time.
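A minimal sketch of the conventional computer vision approach might look like the following. The rule, its threshold values, and the toy images are all illustrative assumptions, not part of TI's materials; the point is that every decision criterion must be hand-coded for one specific product and lighting setup.

```python
# Hypothetical hand-written inspection rule: flag a part as defective when
# too many pixels of its grayscale image fall below a brightness threshold.
# Both thresholds are tuned manually, which is why conventional computer
# vision has long development times and narrow reuse.

def defect_score(image, dark_threshold=50):
    """Fraction of pixels darker than dark_threshold (0-255 grayscale)."""
    pixels = [p for row in image for p in row]
    dark = sum(1 for p in pixels if p < dark_threshold)
    return dark / len(pixels)

def is_defective(image, max_dark_fraction=0.10):
    # Rule programmed specifically for this one application.
    return defect_score(image) > max_dark_fraction

good_part = [[200, 210], [190, 205]]   # uniformly bright surface
cracked   = [[200, 10], [15, 205]]     # dark pixels where a crack scatters light

print(is_defective(good_part))  # False
print(is_defective(cracked))    # True
```

Changing the product, camera, or lighting would mean re-deriving both thresholds by hand.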

Classic Machine Learning:

This next step up from conventional computer vision no longer requires application-specific instructions in the algorithm. Instead, the algorithms are trainable, making them reusable across similar applications. This self-learning ability cuts development time but is more computationally intensive.
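To illustrate the difference, the sketch below replaces the hand-tuned rule with a parameter fitted from labeled examples. The midpoint "decision stump" and the toy data are assumptions standing in for a real classifier; the takeaway is that retraining on new data replaces reprogramming.

```python
# Hypothetical classic-machine-learning sketch: the decision threshold is
# learned from labeled good/defective examples instead of being hand-coded.

def defect_score(image, dark_threshold=50):
    """Fraction of pixels darker than dark_threshold (0-255 grayscale)."""
    pixels = [p for row in image for p in row]
    return sum(1 for p in pixels if p < dark_threshold) / len(pixels)

def fit_threshold(good_images, bad_images):
    """Fit a decision boundary midway between the two labeled classes."""
    max_good = max(defect_score(img) for img in good_images)
    min_bad = min(defect_score(img) for img in bad_images)
    return (max_good + min_bad) / 2

good = [[[200, 210], [190, 205]], [[180, 60], [210, 200]]]   # labeled good
bad  = [[[200, 10], [15, 205]], [[5, 12], [20, 200]]]        # labeled defective

threshold = fit_threshold(good, bad)
print(defect_score(bad[0]) > threshold)  # classified as defective: True
```

Moving to a new product only requires new labeled images, not a new algorithm.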

Deep Learning:

Deep learning is the current state of the art in AI model building. It requires large data sets and complex training algorithms, making it much more computationally intensive. In return, it produces models that are more generalized, adapt easily to change, and have a shorter development time.

To implement these methodologies most effectively, embedded system engineers need new ways to quickly and effectively create these models. Texas Instruments has designed Edge AI Studio for this purpose.

Using Edge AI Studio to Simplify Model Development

Edge AI Studio is a collection of tools that enables development, benchmarking, and deployment of AI applications in the cloud. It includes a model composer that allows users to easily gather data, specify and train their selected model, and implement the model for their problem. The model creation process is described in detail in the webinar.

Edge AI Studio model development 

Once trained, the models enable the system to detect visible defects such as irregular product shapes, broken parts, cracks, and other manufacturing flaws. Reference designs are one of the best ways to jumpstart a development project. To help you hit the ground running on computer vision projects, TI has developed a series of vision processors and reference designs scaled for various implementations.

Computer Vision Development Kits

AM62A: 5 MP at 60 fps, a 2 tera-operations-per-second (TOPS) AI accelerator, a quad-core 64-bit Arm® Cortex®-A53 microprocessor, a single-core Arm Cortex-R5F, and H.264/H.265 video encode/decode.

AM68X: 480 MP/s, an 8-TOPS AI accelerator, two 64-bit Arm® Cortex®-A72 CPUs, and H.264/H.265 video encode/decode.

AM69X: 1440 MP/s, a 32-TOPS AI accelerator, eight 64-bit Arm® Cortex®-A72 cores, and H.264/H.265 video encode/decode.

Edge AI Studio includes fixed models but also allows designers to create custom models by varying parameters such as the classes within the data sets, model types, learning rates, batch sizes, and more. These and other important aspects of quickly leveraging edge AI are presented by Dr. Qutaiba Saleh in the webinar.
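The kinds of parameters mentioned above can be pictured as a training configuration. This is a hedged sketch only: the field names, the `mobilenet_v2` model family, and the defaults are illustrative placeholders, not Edge AI Studio's actual API.

```python
# Hypothetical training configuration illustrating the parameters a designer
# might vary when customizing a model: classes, model type, learning rate,
# and batch size. Names and defaults are illustrative, not TI's API.

from dataclasses import dataclass, field

@dataclass
class TrainingConfig:
    classes: list = field(default_factory=lambda: ["good", "defective"])
    model_type: str = "mobilenet_v2"   # example model family
    learning_rate: float = 1e-3
    batch_size: int = 32
    epochs: int = 10

def describe(cfg: TrainingConfig) -> str:
    """Summarize the key hyperparameters of a configuration."""
    return (f"{cfg.model_type}: {len(cfg.classes)} classes, "
            f"lr={cfg.learning_rate}, batch={cfg.batch_size}")

# Vary only the parameters under study; everything else keeps its default.
cfg = TrainingConfig(learning_rate=5e-4, batch_size=16)
print(describe(cfg))  # mobilenet_v2: 2 classes, lr=0.0005, batch=16
```

Sweeping such a configuration over several learning rates or batch sizes is how a designer would compare candidate models before deployment.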

Dr. Qutaiba Saleh, a system applications engineer with TI, is an experienced researcher in deep learning, image processing, and computer vision and their implementation in many fields, including product design and biological systems.

If you’re looking for CAD models for common components or design guidance for quickly implementing emerging technologies like edge AI, Ultra Librarian helps by compiling all your sourcing and CAD information in one place.

Working with Ultra Librarian sets your team up for success, ensuring streamlined and error-free design, production, and sourcing. Register today for free.