Adding a conclusion section

This commit is contained in:
Vijay Janapa Reddi
2023-09-19 18:29:46 -04:00
parent a47e6b082f
commit ff3e24ce0b


@@ -212,4 +212,12 @@ As we entered the 2020s, the field witnessed the advent of tiny Machine Learning
### 2023 and Beyond: Towards a Ubiquitous Embedded AI Era
As we progress further into this decade, we anticipate a transformative phase in which embedded AI and TinyML evolve from noteworthy innovations into a pervasive force integral to our technological infrastructure, ushering in an era of ubiquitous embedded AI. The horizon of embedded AI is vast, potentially bringing a future in which the boundaries between artificial intelligence and everyday functionality become increasingly blurred, fostering a new era of innovation and efficiency.
## Conclusion
In this chapter, we provided an overview of the emerging landscape of embedded machine learning, spanning Cloud ML, Edge ML, and TinyML approaches. Cloud-based machine learning enables powerful and accurate models by leveraging the vast compute resources of cloud platforms. However, Cloud ML incurs latency, connectivity, privacy, and cost limitations for many embedded use cases. Edge ML addresses these constraints by deploying ML inference directly on edge devices, providing lower latency and reduced connectivity needs. TinyML miniaturizes ML models to run directly on microcontrollers and other highly resource-constrained devices, enabling a new class of intelligent applications.
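One common technique behind the model miniaturization described above is post-training quantization, in which float32 parameters are mapped to int8 for a roughly 4x memory reduction. The sketch below is illustrative only, assuming a simple symmetric per-tensor scheme; the helper names (`quantize_int8`, `dequantize`) are hypothetical, not from any particular TinyML framework.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float32 weights to int8 (a sketch)."""
    scale = np.max(np.abs(weights)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is a quarter the size of float32 storage
print(q.nbytes, w.nbytes)  # → 4 16
# reconstruction error is bounded by one quantization step
print(np.max(np.abs(w - w_hat)) <= scale)  # → True
```

Production frameworks add per-channel scales, zero points, and quantization-aware training, but the storage/accuracy tradeoff shown here is the core idea that lets models fit in microcontroller memory budgets.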
Each approach has tradeoffs in model complexity, latency, privacy, connectivity requirements, and hardware cost. Cloud ML enables the most sophisticated models, while Edge ML and TinyML simplify models to meet real-time and hardware constraints. Over time, we expect these approaches to converge, with cloud pre-training enabling sophisticated edge and tiny ML execution. Federated learning and on-device learning will also allow embedded devices to improve their models by learning from real-world data over time.
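The federated learning mentioned above typically aggregates client updates without sharing raw data; the canonical aggregation step is federated averaging (FedAvg), which weights each client's parameters by its local dataset size. Below is a minimal sketch of that one step, assuming parameters are flat NumPy arrays; the function name `fed_avg` is our own, not a library API.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One round of federated averaging: combine client model parameters,
    weighting each client by the number of local training examples."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# two hypothetical devices with different amounts of local data
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
sizes = [10, 30]  # device 2 contributes 3x as many examples

global_weights = fed_avg(clients, sizes)
print(global_weights)  # → [2.5 3.5]
```

Real deployments repeat this round many times, interleaved with local training on each device, and often add compression and secure aggregation so that the server never sees individual updates.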
The landscape of embedded ML is rapidly evolving to enable intelligent applications across a spectrum of devices and use cases. This chapter provided a snapshot of current embedded ML approaches and their capabilities. As algorithms, hardware, and connectivity continue improving, embedded devices of all scales will become even more capable, unlocking transformative new applications of artificial intelligence.