Tales from Production – Debugging LLMs and GenAI Apps on VMware Tanzu Platform

GenAI is all the rage, but do you have real-world experience supporting the platform or infrastructure needed to run intelligent applications in production? Join this session for real-world insights and stories on running GenAI applications at scale on VMware Tanzu® Platform. Learn tips and tricks on model selection, AI governance committees, context window and response-time issues, the differences between popular large language model (LLM) inference engines such as Ollama and vLLM, and more. This session will leave you with valuable knowledge on how to confidently take your intelligent applications into production on Tanzu Platform and VMware Cloud Foundation®-based infrastructure.


Broadcom Social Media Advocacy
