
Brainchip looks at transfer and incremental learning - which is more efficient?

www.newelectronics.co.uk, Aug. 25, 2021 – 

AI processor specialist Brainchip has examined whether transfer learning or incremental learning is more efficient when training neural networks to perform AI/ML tasks.

The massive computing resources required to train neural networks for AI/ML tasks have driven interest in these two forms of learning.

In transfer learning, applicable knowledge established in a previously trained AI model is "imported" and used as the basis of a new model. After taking this shortcut of starting from a pretrained model, such as one trained on an open-source image or NLP dataset, new objects can be added to customise the result for the particular scenario.
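The shortcut described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration (not from the article, and not Brainchip's implementation): a frozen stand-in "pretrained" feature extractor is reused as-is, and only a small new classification head is trained on task-specific data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" backbone: a fixed random projection standing in
# for a feature extractor trained elsewhere (e.g. on a large image dataset).
W_pretrained = rng.normal(size=(8, 4))  # frozen weights: never updated below


def extract_features(x):
    """Frozen backbone: maps raw inputs to learned feature vectors."""
    return np.tanh(x @ W_pretrained)


# Toy task-specific data for the new objects/classes being added.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)  # simple binary label for illustration

# Transfer learning step: train only a small new head on frozen features.
feats = extract_features(X)  # computed once; the backbone is not retrained
w_head = np.zeros(4)
b_head = 0.0
lr = 0.5

for _ in range(200):
    logits = feats @ w_head + b_head
    p = 1.0 / (1.0 + np.exp(-logits))       # sigmoid
    grad = p - y                            # d(log-loss)/d(logits)
    w_head -= lr * (feats.T @ grad) / len(y)  # update head weights only
    b_head -= lr * grad.mean()

acc = (((feats @ w_head + b_head) > 0) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Because the backbone stays frozen, only the tiny head is optimised, which is what makes the shortcut cheap relative to training the whole network from scratch.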

Accuracy is the approach's primary weakness. Fine-tuning a pretrained model still requires large amounts of task-specific data to adjust the weights for the new task. And because it involves working with the layers of the pretrained model to find where they add value for the new one, it may also demand more specialised machine-learning skills, tools, and service vendors.

When used for edge AI applications, transfer learning involves sending data to the cloud for retraining, incurring privacy and security risks. Once a new model is trained, any time there is new information to learn, the entire training process needs to be repeated. This is a frequent challenge in edge AI, where devices must constantly adapt to changes in the field.

