In recent years, enterprise AI initiatives have made great strides in solving the challenges of training massive, distributed computational models. Data wrangling, experiment management, GPU resource allocation: these challenges have spawned an ever-growing market of new tools and attracted considerable investment. Despite all this, most enterprises still struggle to turn AI into products.
In our survey of more than 200 data scientists and MLOps/IT professionals, a majority reported that their AI/ML models still aren't making it to production. AI/ML teams are now under pressure to optimize and manage AI inference workloads in production and deliver a return on investment.
In this webinar, we will walk through the distinct characteristics of each stage in the ML lifecycle and its computational requirements. We'll discuss solutions for improving throughput and reducing latency, and finally, we'll show how one organization built an efficient inference platform on top of Kubernetes to support its scaling AI initiatives.
“Rapid AI development is what this is all about for us. What Run:AI helps us do is to move from a company doing pure research, to a company with results in production.”
Siddharth Sharma, Sr. Research Engineer, Wayve
About the speaker
Dr. Ronen Dar, Co-founder and CTO at Run:AI
As Run:AI CTO, Dr. Ronen Dar manages the research and product roadmap for Run:AI, a start-up he co-founded in 2018. Previously, Ronen worked at Bell Labs, at Anobit Technologies (acquired by Apple), and in the algorithms research group in Intel's mobile wireless division. He is the co-author of many patents in the fields of communications, coding, and signal processing. Ronen received his B.Sc., M.Sc., and Ph.D. degrees from Tel Aviv University.
On-demand coming soon